Bill Franks December 10, 2024

Collected at: https://datafloq.com/read/artificial-intelligence-concerns-predictions-for-2025/

As 2024 comes to an end, I find myself worrying about several aspects of the rapid progress we’ve made with AI this year. 2024 will certainly go down in history as a year of massive advancement in AI capabilities, and many of these advancements have been impressive and impactful. At the same time, I’m convinced that we’re glossing over some substantive issues as we race forward with AI.

I’ll cover these 5 topics:

  1. LLMs: What’s Popular, Not What’s Accurate
  2. The Risks Of AI Agents
  3. Autonomous AI-Powered Devices
  4. AI Surveillance
  5. AI Capabilities Outpacing Ethical Assessment

1) LLMs: What’s Popular, Not What’s Accurate

A lot of attention has been given to Retrieval Augmented Generation (RAG), and its many variants, as a way to help LLMs do a better job of sticking to facts instead of hallucinating and making up answers. I agree that RAG-style approaches have a lot of potential and are helpful. However, they aren’t foolproof or complete solutions. One reason is that, even with RAG, a model is still constrained by the “facts” fed into it.
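To make the point concrete, here is a minimal sketch of the retrieval step behind RAG. It uses simple word overlap as a stand-in for a real embedding-based retriever, and the document list and query are entirely hypothetical. Notice that the generated prompt can only be as accurate as the documents it retrieves; nothing in the mechanism checks whether those “facts” are actually true.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query.

    A toy scoring function; real RAG systems use embedding similarity.
    """
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(query: str, documents: list[str]) -> str:
    """Ground the LLM's answer in retrieved context.

    The answer inherits whatever errors the context contains.
    """
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


# Hypothetical corpus -- if a retrieved document were wrong,
# the model would faithfully repeat the error.
docs = [
    "The 2024 model release improved reasoning benchmarks.",
    "RAG retrieves supporting documents before generation.",
    "Unrelated note about quarterly sales figures.",
]
print(build_prompt("How does RAG use documents?", docs))
```

The sketch shows why RAG narrows, but does not close, the gap between “popular” and “true”: the retriever ranks documents by relevance, not by correctness.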

LLMs are not truly good at generating answers that are true. Rather, they are good at generating answers that reflect the most common or popular answers in their training data. Ideally, the difference in practice between “true” and “popular” would be small. However, I think we all know that many “facts” from the internet are not, in fact, factual! Worse, there are many examples of what “experts” declare to be true later being found not to be true.

Thus, I think we will continue to struggle to get public AI models to produce accurate information given that the models are only as good as their imperfect input data. With that said, privately tuned, targeted models fed with carefully curated input data will do better. Additionally, I believe that in 2025, most people will continue to vastly overestimate how much they can trust answers they get from AI, which will continue to cause problems. 

2) The Risks Of AI Agents

AI Agents are all the rage today. I see AI agents as the latest evolution of analytically driven process automation capabilities. Many of the agents I’ve read about have the potential to be highly valuable and we’ll certainly continue to see more AI agents rapidly roll out. My concerns stem from the level of control we’re turning over to some of these agents and the risks of errors that will inevitably occur.

For example, some agents take full or partial control of your computer to search the web, draft emails or documents, and even make purchases. This is all great when it works well, but without strong guardrails it is easy to foresee costly errors occurring. Having a flight automatically booked is great, but what about when a glitch books you on 10 flights in a matter of seconds or minutes before you realize what’s happening? While I’m excited about the potential for AI agents, I also expect that in 2025 there will be some public, embarrassing, and even entertaining examples of agents gone haywire. 

3) Autonomous AI-Powered Devices

The combination of AI and robotics is also making leaps and bounds. It seems like every few weeks I see another video of a robot that has been trained to do some impressive tasks far more quickly, and with far less human input, than ever before. The creepy dog-like robots of a few years ago can now run up and down mountainous terrain that even humans struggle with.

Automated robot or device control is one area where we desperately need to exercise caution. This is especially true as it relates to weaponized robots or drones. We are moving perilously close to having fully armed, highly capable robots and drones being widely deployed. A primary pushback on these concerns is that such equipment still requires human oversight and command. However, it is a very small leap from human oversight to full autonomy. With the pace of advancement today, I won’t be surprised if some limit-pushing brings this topic to the forefront in 2025 – especially with all the conflicts breaking out across the globe.

4) AI Surveillance 

I hope that most readers have at least some discomfort with the level of corporate and governmental data collection and surveillance happening today. Using AI tools such as facial recognition and voice recognition, we are at risk of soon having virtually no privacy anywhere. While our movements and activities can already be tracked through our phones, it requires access to the telecom databases to get at the information. Unfortunately, we are rapidly nearing a time where any person or business with a standard camera will be able to quickly identify who is at the door or in the store using cheap, publicly available facial recognition models.

With our images and voices spread out across social media and the internet, there is plenty of data for a generic, publicly available AI model to identify people in real time. Most people aren’t comfortable with the thought of governments tracking our every move. But is it much better to have a network of neighbors and businesses tracking us? The issue of when biometric identification is acceptable – and when it is not – is going to remain hot in 2025.

5) AI Capabilities Outpacing Ethical Assessment

I’ve been very interested in the ethical implications of AI, and analytics more generally, for quite some time. The capabilities we have developed have outpaced our ability to thoughtfully assess their ethical implications and to develop widely agreed-upon guidelines as those capabilities evolve. AI has progressed so quickly that today’s AI capabilities are further ahead of ethical assessment and consideration than ever before.

The good news is that both the general public and those developing AI technologies are more aware of ethics than in the past. The public now demands at least a veneer of ethical assessment to accompany new AI-based tools and products. However, there is a lot of catch-up to do, and I anticipate 2025 will see more examples of very public debate over how to ethically deploy and utilize newly evolved AI capabilities.

I’d love to hear your opinions on my points and/or the additional points you’d like to add to the conversation.

Originally posted in the Analytics Matters newsletter on LinkedIn
