Eric Gonzalez July 28, 2024
Collected at: https://datafloq.com/read/ai-impact-data-jobs-change-industry/
Chess legend Garry Kasparov, the first chess grandmaster to lose to artificial intelligence (AI), has been vocal about the worth of what he calls “centaurs”: human-machine partnerships, which he believes are superior not just to humans but to pure machine teams. As Kasparov puts it, “Human intellect and creativity, paired with powerful tools, is the winning combination. It always has been.” The promise of AI today is that centaurs may become a productive part of data jobs, increasing efficiency and productivity and unleashing new tasks and products. The question is: just what is the impact of AI, specifically generative AI (genAI), on data jobs? We are already seeing widespread adoption. Gartner’s reporting shows that most data and analytics (D&A) functions are either already using genAI or planning to do so, with just 7% of respondents having no such plans:
Source: Gartner
The Uses of GenAI
Last year, Marc Zao-Sanders and his firm, filtered.com, studied the uses of generative AI and produced the chart you will find at the end of this essay. Briefly, they found that uses of AI fell into six categories, with the following shares of use:
| The Uses of GenAI | Share of Use |
| --- | --- |
| Content Creation & Editing | 23% |
| Technical Assistance & Troubleshooting | 21% |
| Personal & Professional Support | 17% |
| Learning & Education | 15% |
| Creativity & Recreation | 13% |
| Research, Analysis & Decision Making | 10% |
Source: Harvard Business Review
In terms of data jobs, according to Gravitas Data Recruitment, the biggest uses appear to be troubleshooting, writing Excel formulas, improving code, fixing bugs, generating code, rubber-duck debugging, data entry, data manipulation, translating code between languages, suggesting code libraries, sampling data, and spotting anomalies.
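To make the “spotting anomalies” use case concrete, here is a minimal sketch of the kind of snippet a data worker might ask genAI to draft: a simple z-score check over a numeric column. The function name, threshold, and sample readings are all invented for illustration; real pipelines would use more robust methods.

```python
from statistics import mean, stdev

def find_anomalies(values, threshold=2.0):
    """Return (index, value) pairs whose z-score exceeds the threshold."""
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []  # all values identical; nothing can be anomalous
    return [(i, v) for i, v in enumerate(values)
            if abs(v - mu) / sigma > threshold]

readings = [10.1, 9.8, 10.3, 10.0, 9.9, 55.0, 10.2]
print(find_anomalies(readings))  # -> [(5, 55.0)]
```

This is exactly the shape of task the article describes: tedious to write from scratch, trivial to verify once generated.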
One person interviewed on this topic said, “I have to write a lot of .vb and Excel formulas to reconcile data from less technical people. ChatGPT helps 45-minute tasks take about three to five minutes.” This is the promise of genAI: taking complex tasks that would otherwise take a long time and doing them quickly. There is also the promise of removing what the anthropologist David Graeber called “bullsh*t jobs”: jobs that seem to add no value and are tiresome, boring, and repetitive. Repetitive data entry, for instance, is something AI can do now. Ideally, this means that data jobs will, in the future, involve more exercise of human creativity, better planning and strategic thinking, and less tedium.
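The reconciliation task the interviewee describes can be sketched in a few lines: match records from two hand-maintained sources by a shared key, then report what is missing or disagrees. The invoice IDs and amounts below are invented purely for illustration.

```python
# Two hypothetical record sets keyed by invoice ID, as might come from
# two less-technical teams' spreadsheets.
source_a = {"INV-001": 120.00, "INV-002": 75.50, "INV-003": 19.99}
source_b = {"INV-001": 120.00, "INV-002": 75.05, "INV-004": 42.00}

def reconcile(a, b):
    """Return keys missing from either side and keys whose values disagree."""
    only_in_a = sorted(a.keys() - b.keys())
    only_in_b = sorted(b.keys() - a.keys())
    mismatched = sorted(k for k in a.keys() & b.keys() if a[k] != b[k])
    return only_in_a, only_in_b, mismatched

print(reconcile(source_a, source_b))
# -> (['INV-003'], ['INV-004'], ['INV-002'])
```

Boilerplate like this is precisely where the interviewee's 45-minutes-to-5-minutes speedup comes from: the logic is simple but fiddly to get right by hand.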
Across the board, the most interesting thing about genAI is that the single biggest use case is idea generation. This is surprising given that genAI is mechanistic and “merely” predicts the most probable next sequence of words, images, or sounds, as the mathematician Stephen Wolfram explained in a piece on ChatGPT. This is a very clear move toward Kasparov’s idea of centaurs: people are not just using genAI to produce output; they are using it as a partner.
In data analysis, Bernard Marr, in a piece for Forbes, explained that AI is “transforming traditional roles by automating the routine processing of large datasets”, which is shifting the focus from “basic data handling to more strategic decision-making”. This enables teams to be more ambitious and to ask questions that may have been too challenging to ask before.
Gartner specifically asked data experts about their use of genAI, and found that the largest use case was data exploration, which chimes with Zao-Sanders’ work:
Source: Gartner
The Limits of GenAI
The hype cycle is clear: generative AI will transform the nature of work. Yet research by Goldman Sachs has found that, despite enormous investments in generative AI, there is little to show for it. In their report, Daron Acemoglu, Institute Professor at MIT, argues that it will be cost-effective to automate only about 25% of AI-exposed tasks in the next decade, for a real-world impact on just 5% of all tasks. Even though many will argue that AI costs will decline, he is skeptical that this will occur as quickly or as steeply as with previous inventions. He also argues that it is not a “law of nature” that technologies lead to new tasks and products. Goldman Sachs’ Head of Global Equity Research, Jim Covello, believes that AI is still not able to solve complex problems, and that previous disruptive technologies provided low-cost solutions that displaced high-cost ones. Given the challenges in securing inputs such as GPU chips and energy, there may never be enough competition to reduce prices.
Perhaps the biggest criticism of genAI from an output perspective comes from researchers Michael Townsen Hicks, James Humphries, and Joe Slater, whose viral paper argues that ChatGPT’s output is “bullsh*t”. Bullsh*t here is, believe it or not, a technical term, one they argue is more accurate than “hallucinations”:
“Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs.”
Because genAI is indifferent to truth, it cannot be relied upon to tell it. This problem is largely contained in data jobs, however, because genAI is very good at highly structured tasks whose outputs can be checked, so it is not surprising that research finds data jobs to be among the biggest beneficiaries of genAI.
Appendix:
Source: Harvard Business Review