Earlier this year, India released its annual Economic Survey. Interestingly, the 2024-25 Economic Survey has a chapter titled ‘Labour in the AI Era: Crisis or Catalyst’. The chapter takes realistic stock of AI adoption trends and forecasts. It concludes that “estimates about the magnitude of labor market impacts (by AI) may be well above what might actually materialize.” Given the nascent stage of AI development and deployment, the Survey refrains from deterministically predicting the impact of AI on the labor market.
However, the Survey poses an important question worth considering: “What were the problems in the world that demanded AI as the answer?” In other words, is AI a solution in search of a problem? This question is to be read in light of India’s unemployment crisis. The International Labour Organization’s India Employment Report 2024 revealed that the proportion of unemployed youth who are educated nearly doubled, from 35.2% in 2000 to 65.7% in 2022. The trend of AI adoption raises alarms about automating jobs, especially white-collar jobs. In October 2024, it was reported that the Indian fintech company PhonePe had laid off 60% of its customer support staff over the previous five years as part of a shift to AI-powered solutions.
I wonder how the Indian public will react to having the possibility of climbing out of poverty stolen from them by rich fucks and their computers.
It concludes that “estimates about the magnitude of labor market impacts (by AI) may be well above what might actually materialize.”
I can believe that in the short term. Especially if someone is raising money for Product X, they have a strong incentive to say “oh, yeah, we can totally have a product that’s a drop-in replacement for Job Y in 2-3 years”.
So, they’re highlighting something like this:
A 2024 study by the Indian Institute of Management, Ahmedabad, on labor force perception of AI (“IIMA Study”) states that 68% of the surveyed white-collar employees expect AI to partially or fully automate their jobs in the next five years.
I think it is fair to say that people are very probably over-predicting the generalized capabilities of existing systems based on seeing those systems work well in very limited roles. They are probably also under-predicting the hurdles we will crash into that we don’t yet know about.
But I am much more skeptical about people underestimating impact in the long term. Those systems are probably going to be considerably more sophisticated and may work rather differently than the current generative AI things. Think about how transformative industrialization was, when we moved to having machines fueled by fossil fuels doing much of what had previously been manual labor done by humans. The vast majority of things that people were doing pre-industrialization aren’t done by people anymore.
https://en.wikipedia.org/wiki/History_of_agriculture_in_the_United_States
In Colonial America, agriculture was the primary livelihood for 90% of the population
https://www.agriculturelore.com/what-percentage-of-americans-work-in-agriculture/
The number of Americans employed in agriculture has been declining for many years. In 1900, 41% of the workforce was employed in agriculture. In 2012, that number had fallen to just 1%.
Basically, the jobs that 90% of the population had were in some way replaced.
That being said, I also think that if you have AI that can do human-level tasks across the board, it’s going to change society a great deal. The things to think about are probably broader than just employment; I’d be thinking about things like major shifts in how society is structured, or dramatic changes in the military balance of power. Hell, take the earlier example: if you were talking to someone in 1776 about how the US would change by the time it reached 2025, and they got tunnel vision and focused on the fact that about 90% of jobs would be replaced in that period, you’d probably say that that’s a relatively small facet of the changes that happened. The way people live, what they do, how society is structured, all that, is quite different from the way it had been for the preceding ~12k years, under the structures that human society had developed since agriculture was introduced.
I’d agree that in the short term, AI is overhyped and in the long term, who really knows.
One thing I’ve always found funny, though, is that if we have AIs that can replace programmers, then don’t we also, by definition, have AIs that can create AIs? Isn’t that literally the start of the “singularity”, where every office worker is out of a job within a week and labourers last only long enough for our AI overlords to sort out robot bodies?
One thing I’ve always found funny, though, is that if we have AIs that can replace programmers, then don’t we also, by definition, have AIs that can create AIs?
Well, first, I wouldn’t say that existing generative AIs can replace a programmer (or even do that great a job of assisting one and increasing productivity). I do think there’s a potentially unexplored role for an LLM-based “grammar checker” for code, which may be a larger win in catching the kinds of problems that would normally require a human doing debugging work.
But, okay, set that aside – let’s say we imagine that we have an AI in 2025 that can serve as a drop-in replacement for a programmer, that can translate plain-English instructions into a computer program as well as a programmer could. That still doesn’t get us to the technological singularity, because the singularity probably also involves doing a lot of research work. You can find plenty of programmers who can write software…but so far, none of them have made a self-improving AGI. :-)
I agree with you, it was more of a commentary on “what would happen if we had AGI tomorrow”.
We’ve been 3 months away from AGI for a few years now, and it’s debatable whether we’ll ever get there with LLMs. Looking into the results of AI tests and benchmarks shows that they are heavily gamed (to be fair, all benchmarks are gamed). With AI, though, there’s so much money involved that it’s ridiculous.
Fortunately, it looks like reality is slowly coming back. Microsoft’s CEO said something like “AI solutions are not addressing customer problems.” Maybe I’m in a bubble, but I feel like people overall are starting to cool on AI and the constant hype cycle.
Interesting article. It reaches beyond the recent AI trend into slightly more traditional automation, and it also covers some good concepts on job categorization in general.
The ultimate point seems to be that AI-driven job automation is just a subtle evolution of the general white-collar job automation that has been happening for 30+ years. And that makes a lot of sense, although generative AI is expanding that automation into areas beyond office work and some manufacturing roles.
The book referenced here, “Bullshit Jobs” by David Graeber, is a great (if maddening) read; I would highly recommend it. This article references the “system” for weeding out “bullshit jobs” that Graeber builds up in that book.