Project risk management is entering a structural transition. For years, AI has been framed as an analytical assistant — a faster spreadsheet, a smarter forecasting engine, a probabilistic co-pilot. That framing is no longer sufficient. AI systems are now crossing a threshold: from augmenting analysis to participating in coordination and interpretation, and even holding decision authority.

This shift forces us to confront a question we have largely avoided: Who has agency when humans and AI work together?

The Frontier Is Shifting Rapidly

A recent study by METR (Model Evaluation & Threat Research) illustrates how quickly model capability is expanding. METR reports that the length of tasks frontier models can complete has been doubling roughly every seven months: models have moved from solving tasks measured in minutes to tasks measured in hours. In practical terms, that means models are handling increasingly complex, multi-step problems that previously required sustained expert attention.

This is not simply about speed. It means that for certain classes of problems, it now takes human experts longer to solve them than it takes the most advanced models. The relative cognitive advantage is no longer stable.
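The compounding effect of that trend is easy to underestimate. The sketch below projects task time-horizon growth under an assumed exponential trend with a seven-month doubling period, consistent with the figure METR reports; the one-hour starting horizon is a hypothetical round number for illustration, not a data point from the study.

```python
# Illustrative projection of AI task time-horizon growth.
# ASSUMPTIONS: exponential growth with a ~7-month doubling period
# (the trend METR reports); a hypothetical 1-hour starting horizon.

def projected_horizon_hours(start_hours: float,
                            months: float,
                            doubling_months: float = 7.0) -> float:
    """Task horizon after `months`, doubling every `doubling_months`."""
    return start_hours * 2 ** (months / doubling_months)

if __name__ == "__main__":
    for months in (0, 7, 14, 28):
        hours = projected_horizon_hours(1.0, months)
        print(f"after {months:2d} months: ~{hours:.0f} hour(s)")
```

Under these assumptions, a one-hour horizon becomes roughly a two-working-week horizon in under three years — which is why the comparative terrain between human and machine cognition is unstable rather than fixed.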

Project risk management is now facing a disruption — but without a robust research infrastructure guiding the transition. Organisations are beginning to deploy AI-enabled systems in planning, estimation, scenario modelling, and portfolio decisions. Yet we lack evidence-based frameworks for redesigning professional roles, authority structures, and accountability mechanisms around these systems.

The Real Issue: How Work Must Be Restacked

Our existing standards — PMI, PRINCE2, AACE — assume relatively stable human roles, human-centred judgement, and linear accountability chains. They do not yet address distributed cognition between human and machine agents.

The real issue is not whether AI can produce a better Monte Carlo simulation or draft a better risk register. The real issue is how work must be restacked.

Sangeet Paul Choudary, in Reshuffle, argues that AI does not merely augment existing roles. It redistributes cognitive tasks between humans and machines. Jobs that were once bundled together are unbundled and rebundled differently. Cognitive labour is decomposed into components — pattern recognition, synthesis, prioritisation, communication — and recomposed across human and machine actors.

In project risk management, this could mean:

- Machines taking on pattern recognition and synthesis: scanning schedules, contracts, and historical project data to surface candidate risks.
- Humans retaining prioritisation and communication: deciding which risks matter and owning the conversation with stakeholders.
- Accountability remaining with named human roles, even as the underlying analysis is delegated.

But even this division is unstable. As models improve, the boundary shifts again. Agency becomes fluid.

Capability Is Scaling Faster Than Governance

Max Tegmark has warned that increasingly capable AI systems challenge traditional assumptions about control and autonomy. Mustafa Suleyman, in The Coming Wave, argues that the speed and diffusion of these systems will outpace regulatory and institutional adaptation. Whether one agrees with their degree of concern or not, the core observation is valid: capability is scaling faster than governance.

From a practitioner's vantage point, this is visible daily. AI is making inroads in cost modelling, forecasting, document analysis, and decision support. Teams are experimenting informally. Authority structures are adjusting implicitly, not explicitly.

Yet we have almost no empirical research examining:

- How professional roles should be redesigned around AI-enabled analysis.
- How authority and accountability shift when models participate in decisions.
- How trust is calibrated when machine outputs outpace human verification.

Two Extremes to Avoid

Without rigorous research, we risk two extremes: we either over-trust AI and abdicate responsibility, or we under-utilise it and cling to legacy structures that reduce competitiveness. The objective is neither resistance nor blind adoption. It is redesign.

We need frameworks that treat AI not as a tool, nor as an autonomous actor, but as a participant in distributed cognitive systems. That requires clarifying where human agency must remain non-transferable — ethical judgement, accountability, value alignment — and where machine agency can be deliberately expanded.

The Question We Must Answer Deliberately

The METR data is not just a technical benchmark. It is a signal. As models extend their problem-solving horizon, the comparative terrain between human and machine cognition changes. The question is no longer: Can AI support project risk analysis and controls?

The question is: How do we restack work so that humans and AI cooperate effectively in project management — without eroding responsibility, clarity, or trust?

If we do not answer that deliberately, the reorganisation will happen anyway — by default, not by design.

References
Choudary, S. P. (2025). Reshuffle: Who Wins When AI Restacks the Knowledge Economy.
Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence.
Suleyman, M., & Bhaskar, M. (2023). The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma.
METR (2025). Measuring AI Ability to Complete Long Tasks.