The Future of Work Isn’t Autonomous. It’s Agency-Centred: Stanford's Latest Research
- Yoshi Soornack
- Jun 28
- 3 min read

A shift is happening in how we think about AI at work. The question is no longer just what can be automated, but how people want it automated. A new Stanford study introduces a much-needed correction to the automation discourse: worker preference. Rather than assuming full AI takeover is the endgame, it shows that most people want partnership, not replacement.
This is a shift from capability-led futures to agency-led ones.
The Study
Conducted by Stanford HAI, the research evaluated over 800 tasks across 100+ occupations, asking two questions:
- Do workers want this task automated?
- Do AI experts believe it can be?
It also introduced a Human Agency Scale (H1–H5), measuring the degree of human involvement workers believe should be preserved, from full automation (H1) to full human control (H5). Most picked the middle: H3 – shared control.
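The scale can be sketched as a simple ordered mapping. Note this is an illustration only: the article describes just H1, H3, and H5, so the H2 and H4 wordings below are interpolated assumptions, not the study’s own labels.

```python
# Illustrative sketch of the Human Agency Scale (H1-H5).
# Only H1, H3, and H5 are described in the article; the H2 and H4
# descriptions here are interpolated assumptions.
HUMAN_AGENCY_SCALE = {
    "H1": "Full automation: the AI handles the task entirely",
    "H2": "AI leads, with occasional human input (assumed wording)",
    "H3": "Shared control: an equal human-AI partnership",
    "H4": "Human leads, with the AI assisting (assumed wording)",
    "H5": "Full human control: the AI plays no essential role",
}

# Most surveyed workers picked the midpoint:
preferred = "H3"
print(HUMAN_AGENCY_SCALE[preferred])
```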
This is key: in the face of growing AI capability, humans are choosing co-agency over surrender.
Key Findings
Not all automation is welcome
- 46% of tasks show a positive worker desire for automation.
- These cluster mostly around data entry, scheduling, and transactional processes.
- The remaining 54% are scored neutral or negative, not because they can’t be automated, but because workers don’t want them to be.
The gap between capability and desire is more important than either in isolation.
Automation zones: where to build, where to pause
The researchers mapped tasks into four zones:
- Green-light zone: high desire + high capability.
- Red-light zone: low desire + high capability.
- R&D zone: high desire, but low current capability.
- Low-priority zone: low desire, low capability.
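The four zones amount to a 2×2 classification over worker desire and technical capability. A minimal sketch, assuming scores normalised to 0–1 and an illustrative 0.5 threshold (the study’s actual cutoffs differ):

```python
# Hedged sketch: mapping a task's worker-desire and expert-capability
# scores to one of the study's four zones. The score names and the 0.5
# threshold are illustrative assumptions, not the paper's methodology.

def zone(desire: float, capability: float, threshold: float = 0.5) -> str:
    """Classify a task given desire and capability scores in [0, 1]."""
    high_desire = desire >= threshold
    high_capability = capability >= threshold
    if high_desire and high_capability:
        return "Green-light"   # build here
    if not high_desire and high_capability:
        return "Red-light"     # feasible, but unwanted
    if high_desire and not high_capability:
        return "R&D"           # wanted, but not yet feasible
    return "Low-priority"      # neither wanted nor feasible

print(zone(0.8, 0.9))  # Green-light
```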
Critically, many current startups are building in the wrong zones. 41% of companies in Y Combinator’s AI cohort focus on tasks workers don’t actually want automated.
Just because something can be done doesn’t mean it should be.
Skill demand is rebalancing
The study’s skill mapping reveals a deeper shift:
- Skills like “Data Processing” are sliding in both required human agency and relevance.
- Human-centred capabilities such as mentoring, conflict resolution, and collaboration are climbing.
If AI systems take over low-agency, repeatable work, the tasks that remain in a job become increasingly relational, interpretive, and dynamic. Future job descriptions won’t just shift in what they do, but in how they do it.

This isn’t just about preference; it hints at a revaluation of skills. Traditional labour markets reward skills that are quantifiable and easily benchmarked, like data manipulation or analysis. But this study suggests that what’s hard to automate becomes what’s valuable.
What this Means
Rethink automation narratives
AI isn’t replacing work. It’s reshaping who holds control over which parts of work.
If we treat people as passive passengers, we’ll get brittle systems and pushback. But if we embed human agency into the loop, we get tools that stick, because they align with values, not just speed.
Reframe product roadmaps
Teams building agentic AI need to target the Green-light and R&D zones, and stop pouring money into automating things people resist. The future isn’t about AI autonomy. It’s about symmetry between human needs and system design.
Final Word
The researchers are releasing WORKBank, a live dataset tracking task-level automation sentiment and feasibility. Over time, this could become a policy and product compass. But it starts with one simple provocation:
Do we want AI to do it, or just help?
That question may shape the next decade of work more than any benchmark or LLM leaderboard.


