In our previous article about why 95% of AI projects fail, we highlighted how MIT, McKinsey, and Goldman Sachs all point to the same thing: enthusiasm for AI often turns into disappointment. Companies invest enormous sums without seeing results.
But there's another important dimension to understand about AI's impact on the labor market. This isn't about companies' internal AI projects, but about the broader narrative that Silicon Valley companies and venture capitalists are pushing: that AI will replace massive numbers of jobs, and quickly.
New research from Yale University's Budget Lab paints a very different picture.
Since ChatGPT launched in November 2022, public surveys have shown widespread concern about AI-driven job losses. But Yale researchers' analysis of 33 months of labor market data since ChatGPT's launch shows something surprising:
"The broader labor market has not experienced noticeable disruption since ChatGPT's launch 33 months ago, undermining fears that AI automation is currently eroding demand for cognitive labor in the economy."
Let that sink in. Despite nearly three years of intense AI hype and massive investments in AI infrastructure, the researchers see no significant impact on the labor market.
Yale Budget Lab used several methods to analyze AI's impact on the labor market:
Researchers compared how quickly the occupational mix (the distribution of workers across occupations) has changed since ChatGPT's launch with how quickly it changed during previous technological shifts:
Results: Occupational composition is changing somewhat faster now than before, but the difference is marginal – about 1 percentage point higher than during the internet breakthrough. And most importantly: the trend started in 2021, before ChatGPT, suggesting the change is not AI-driven.
Sectors with the highest exposure to generative AI – Information, Financial Activities, and Professional Services – have seen larger changes in occupational mix. But even here, data shows the changes started before ChatGPT's launch.
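To make "changes in occupational mix" concrete, here is a minimal sketch in Python of one common way to measure such a shift: a dissimilarity index that adds up how much each occupation's share of total employment has changed between two points in time. The occupations and numbers below are hypothetical, and the metric is an illustration of the general idea, not necessarily the exact measure the Yale Budget Lab uses.

```python
import pandas as pd

def dissimilarity_index(shares_start: pd.Series, shares_end: pd.Series) -> float:
    """Half the sum of absolute changes in occupational employment shares.

    A value of 0.02 means roughly 2% of workers would need to switch
    occupations for the starting mix to match the ending mix.
    """
    # Align the two periods on occupation; occupations missing in one period count as 0.
    shares_start, shares_end = shares_start.align(shares_end, fill_value=0.0)
    return (shares_end - shares_start).abs().sum() / 2

# Hypothetical employment shares by occupation (each period sums to 1).
mix_2022 = pd.Series({"software": 0.04, "admin": 0.10, "sales": 0.12, "other": 0.74})
mix_2025 = pd.Series({"software": 0.05, "admin": 0.08, "sales": 0.12, "other": 0.75})

print(f"Occupational mix shifted by {dissimilarity_index(mix_2022, mix_2025):.1%}")
```

Comparing an index like this over rolling windows before and after November 2022 is the kind of calculation behind statements such as "about 1 percentage point higher than during the internet breakthrough."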
Yale researchers analyzed both OpenAI's "exposure" data (theoretical impact) and Anthropic's actual usage data from Claude. This is where it gets interesting:
The share of employees in occupations with high AI exposure has been stable at around 18% since ChatGPT's launch. There has been no increase in unemployment in these occupations.
But when they compared OpenAI's exposure data with Anthropic's actual usage data, they found limited correlation. In other words: just because an occupation could theoretically be affected by AI doesn't mean AI is actually being used there.
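To illustrate what "limited correlation" looks like in practice, here is a small sketch that pairs a theoretical exposure score with an observed usage share for a handful of occupations and computes the correlation between the two. The occupation names and figures are invented for the example; this is not the researchers' data or code, only the shape of the comparison.

```python
import pandas as pd

# Hypothetical occupation-level scores: how exposed each occupation is in theory
# versus how much AI usage is actually observed there (both on a 0-1 scale).
data = pd.DataFrame(
    {
        "theoretical_exposure": [0.9, 0.8, 0.7, 0.6, 0.3],
        "observed_usage":       [0.2, 0.7, 0.1, 0.4, 0.5],
    },
    index=["legal", "software", "finance", "admin", "logistics"],
)

# A weak correlation between the two columns illustrates the point above:
# high theoretical exposure does not guarantee that AI is actually used there.
correlation = data["theoretical_exposure"].corr(data["observed_usage"])
print(f"Correlation between exposure and usage: {correlation:.2f}")
```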
Researchers examined whether recently unemployed people came from occupations with high AI exposure. The answer was no. Regardless of unemployment duration, there was no connection to AI exposure or AI usage.
A study by Brynjolfsson et al. had previously indicated that AI might be affecting employment of recent graduates (ages 20-24). Yale researchers found a small increase in the divergence between the occupational choices of younger and older college-educated workers, but emphasize this could be due to a softening labor market, which always hits younger workers harder – not necessarily AI.
Yale researchers are clear that their analysis is not predictive. They continue monitoring these trends monthly. But they point out something important:
"Historically, widespread technological disruption in workplaces tends to occur over decades, rather than months or years. Computers didn't become common in offices until nearly a decade after their public launch, and it took even longer before they transformed office workflows."
This should be familiar if you've worked with organizational change. Technology is rarely the bottleneck – it's the organization's ability to absorb the change that determines the outcome.
Yale researchers are also transparent about their data limitations:
OpenAI's "exposure" data: a theoretical estimate of which occupations could be affected by AI, not a measure of how AI is actually used.
Anthropic's usage data: actual usage, but only from a single product, Claude, so it captures just one slice of real-world AI use.
What's needed: Comprehensive usage data from all leading AI companies at individual and company levels, including API usage. Yale researchers urge all major AI labs to follow Anthropic's example and share transparent usage data.
Yale research confirms two important lessons for the labor market:
Tech CEOs and venture capitalists have strong incentives to exaggerate AI's transformative power – they run an industry built on these narratives. Goldman Sachs' own research shows that "AI adoption remains modest in most industries" while infrastructure companies like Nvidia reap record profits.
The question Goldman Sachs raises is relevant: Will this enormous investment ever pay off, or are we witnessing the buildup of a new tech bubble? And despite the lack of impact so far, investments continue: over $1 trillion is planned for coming years according to Goldman Sachs.
Just as our previous article showed that 95% of AI projects fail due to lack of structure, Yale research shows that even at the macro level, change is slower than the hype suggests.
This is good news for you as a business leader.
You don't need to rush headlong into AI out of fear of being "left behind." You have time to do it right. And "right" means the same thing as with all other organizational changes:
Clear leadership. Structure. Processes. Follow-up.
Just as building a management system requires structure, so does AI adoption. Based on both MIT's research (from our previous article) and Yale's labor market study, it's clear what works:
Not "we need an AI strategy." Not "our competitors are using AI." Start with a specific pain point where AI can actually make a difference.
Yale research shows that occupations where AI is actually used (programmers, certain administrative processes) are limited. MIT research shows that automating administrative processes often delivers better ROI than sales and marketing tools where most companies invest.
McKinsey research (which we cited in our previous article) shows that clear KPIs for AI solutions have the greatest impact on results. This isn't unique to AI – it's fundamental management.
If you can't measure the effect, you can't lead the change. Read our guide on how to get started with goal management to establish a foundation.
Yale research shows that even "high-risk occupations" (high AI exposure) aren't affected yet. But that doesn't mean the risks aren't real – they just take longer to materialize.
Goldman Sachs points to another risk: investing enormous sums in technology that doesn't deliver ROI. This is a business risk, not a technology risk.
A structured risk management process helps you identify both: the operational risks AI introduces into your roles and processes, and the business risk of investments that never deliver ROI.
McKinsey research shows that active leadership oversight of AI governance is a key success factor. This cannot be delegated to the IT department or a "digital transformation officer."
AI transformation (when it actually happens) will affect processes, people, and business models. It requires leadership from the top, just as management system work requires leadership commitment.
MIT research showed that companies buying AI solutions from specialized vendors succeed 67% of the time, while those building their own solutions succeed only about a third as often.
The implication of this research is that the future belongs to specialized AI tools that solve specific business problems, not generic solutions.
One example is AmpliFlow, where we've integrated AI selectively into specific parts of the platform – not as a generic feature, but to solve concrete problems in document management and process work.
This means:
Yale Budget Lab's research on the labor market provides a sobering counterpoint to the Silicon Valley narrative. After nearly three years of intense AI hype, we see:
This doesn't mean AI is unimportant. It means you have time to do it right.
And "right" isn't new. It's the same principles that apply to all successful organizational change:
As we concluded in our previous article: the MIT study's authors are clear that "testing AI is easy, but making money from it is hard."
Yale research adds: It also takes time. Much more time than Silicon Valley hype makes it sound.
For you as a business leader, this is good news. You don't need to panic-implement AI. You can take a structured approach, based on real needs and measurable results.
Just like with all other business development.
Sources: