For centuries, each wave of technological progress nudged humans away from manual toil and toward “higher” work. Machines freed us from repetitive tasks so we could focus on judgement, creativity, strategy. But now, with the rise of AI, that boundary is blurring. The very cognitive layers we thought were off-limits for machines are coming under pressure. The question is: if machines can think, where do humans go next?
The long arc: automation moving upward
To see where we might head, it helps to trace how automation has already shifted human labour:
- In the industrial era, machines replaced muscle. Fewer people were needed in fields or factories; more in management, design, coordination.
- In the information age, software and robotics began eroding repetitive office tasks. Think data entry, bookkeeping, standardised reports.
- Now, AI (especially “cognitive” and “generative” AI) is targeting tasks that require analysis, synthesis, even (some degree of) “judgment.”
This isn’t just theoretical. The World Economic Forum’s Future of Jobs Report 2025 highlights that AI and information processing technologies are among the most disruptive forces to come. It estimates that many occupations will see substantial change – not necessarily total elimination, but a shifting composition of what tasks humans do and what tasks machines take on.
In the UK specifically, the Institute for Government / Institute for Future Skills report The Impact of AI on the UK Jobs and Training suggests that somewhere between 10% and 30% of jobs are automatable in principle. Meanwhile, KPMG’s “Generative AI and the UK labour market” report adds nuance: around 2.5% of tasks could be handled by generative AI outright, while about 40% of jobs would see some impact.
So tasks that used to be “safe” because they required thought are gradually being redefined.
The current threat: mind work under siege
What’s different now is that AI isn’t just mimicking rote logic. It’s getting good at pattern recognition, language, abstraction. That means some traditionally human domains are no longer immune:
- Routine cognitive work like summarising documents, drafting standard contracts, even generating reports is increasingly in scope.
- Entry-level knowledge roles are particularly exposed. A recent Harvard Business Review article titled The Perils of Using AI to Replace Entry-Level Jobs argues that early-career workers may find fewer footholds in fields where machines can already replicate many tasks.
- In the UK, an IPPR study suggests that in a “first wave” of AI adoption, 11% of tasks across the economy are already exposed; if adoption deepens, that figure could climb to 59% in a second wave. This reaches beyond clerical jobs into what were once seen as higher cognitive roles.
- The Institute Global’s “Impact of AI on the Labour Market” paper estimates that UK firms adopting AI fully could “save” nearly a quarter of private-sector workforce time, the equivalent output of some 6 million workers.
These aren’t minor tweaks. They’re signals that the walls protecting what’s “human work” are eroding.
But humans still bring what machines can’t (yet)
If thought becomes automatable, can we still stake out territory? I believe yes – though how wide that territory is will depend on how we redefine “human contribution.” Some qualities are harder (though not impossible) for machines to replicate:
- Empathy, ethics, human judgment in messy contexts: Legal disputes, leadership, conflict resolution, counselling – these often require nuance, moral weighting, context, even vulnerability.
- Creativity untethered from data: AI can remix, recombine, but truly novel leaps, radical reframing, or deeply personal narratives are harder for machines to originate.
- Social intelligence, influence, trust: The ability to lead a team, understand unspoken dynamics, persuade, mentor – these involve deep human connection.
- Meta-thinking and reflection: Thinking about thinking, questioning frameworks, reframing entire models – these may remain more human territory (though we can’t be naive).
- Oversight, arbitration, boundary-setting: AI systems will need supervision, governance, ethical constraints, error checking. That calls for human oversight roles.
In fact, a recent academic paper, “Complement or Substitute? How AI Increases the Demand for Human Skills,” found that while AI does substitute some tasks, the demand for complementary human skills (like digital literacy, teamwork, resilience) is rising – sometimes by more than the substitution effect.
So the future might not be “humans or machines” but “humans in different modes of work.”
The risk: displacement, division, identity crisis
This transition is going to be messy. Some of the dangers we need to acknowledge:
- Widening inequality: Those who can adapt (learning, flexibility, cross-disciplinary skills) may thrive; others may be left behind.
- Loss of early entry points: If entry roles vanish, career progression funnels could narrow.
- Skill mismatch: As tasks shift rapidly, training and reskilling may lag demand.
- Psychological dislocation: If people feel their labour is undervalued or replaced, there’s risk of alienation, identity crisis.
- Overtrust in algorithmic fairness: If hiring, performance review, promotion decisions are handed to opaque AI systems, bias or flaws may propagate invisibly.
History suggests that disruptive transitions tend to widen gaps before new stability emerges. We need to guard against that.
A few scenarios (and what we should watch for)
Let me sketch out possible futures and what we should keep an eye on:
| Scenario | What Work Looks Like | Role of Humans |
|---|---|---|
| Augmented future | AI handles low- to mid-level tasks; humans focus on higher judgement, oversight, creativity | Humans steer, refine, govern, imagine new problems |
| Blended teams | Hybrid systems: humans and AI working together in the loop | Humans set strategic goals, interpret edge cases, inject values |
| Segregated roles | Many “cognitive” tasks fully automated; humans limited to niche areas like ethics and social roles | Humans in “last mile” or unique, high-touch domains |
| Ubiquitous automation | AI increasingly autonomous across domains, even in meta-decisions | Humans as supervisors, regulators, philosophical architects, or in non-work roles (leisure, care, arts) |
If I were advising clients or candidates, here’s what I’d urge them to watch for:
- The pace of AI adoption (will it happen steadily or in leaps?)
- Which tasks (not jobs) are most vulnerable in their domain
- Signals from leadership and investment – where are firms putting R&D, recruitment, training money
- Policy and regulation – these could slow or channel the transition
- Labour market shifts – what new roles are emerging, in oversight, governance, hybrid human-machine work
What to do now (for candidates, clients, recruiters)
Because this isn’t a distant problem – it’s unfolding now – there’s practical agency in how we respond.
- For candidates: Don’t just deepen legal or domain expertise; layer in transversal skills – ethics, systems thinking, digital fluency, narrative, leadership. Seek roles that combine human and machine work, not ones easily replaced by machines.
- For clients/firms: When hiring, think beyond “machine-resistant” roles. Design roles that optimise human+AI collaboration. Invest in reskilling, redesigning workflows.
- For recruiters and agencies: Update your frameworks for evaluation. What used to be a “plus” (tech curiosity, adaptability) may become a baseline. Help clients see the value in human skills that AI cannot easily replicate.
Also, encourage a mindset shift: value transformation over displacement. Rather than asking “which roles vanish?”, ask “how can humans lead the transition to new forms of value?”
Final thoughts: humans are not redundant (yet), but the frontier is shifting
This moment feels different. It’s not just about machines doing heavy or routine tasks. It’s about machines encroaching on thought, analysis, even creativity. That changes the rules of engagement.
Yet I’m not convinced humans are being written out. What’s more likely is that we’ll be asked to re-imagine what “work” means – less about performing tasks and more about curating, governing, imagining, caring.
It’s messy, scary, open-ended. But also potentially liberating. If we lean into the ambiguity, we might get to design roles that feel more human, not less.