Teams adopt AI tools. They move faster. Reports get written in minutes. Risk registers get populated in an afternoon. Everyone feels productive. And then something goes wrong.
A risk that was flagged but never truly understood. A cost estimate that looked rigorous but wasn't. A decision that felt data-driven but was essentially delegated to a chatbot.
I wrote in a recent post that the AI model doesn't matter much — the skills do. I want to push that argument further, because a piece of research I came across this week makes the stakes clearer than I've seen them stated before.
What's happening to professionals who rely on AI too heavily
Cognitive neuroscientist Dr. Sahar Yousef from UC Berkeley has been studying what AI dependency actually does to human thinking. Her findings are not comfortable reading. Students who outsource their work to AI are not just losing marks; they are losing the cognitive capacity to think independently. The brain, like a muscle, atrophies when it stops being used. Every time AI handles a task a person should struggle through, it removes a repetition that would have built capability.
For project risk management, this is not a peripheral concern. It is a professional survival issue.
The equation that actually works
Process + Tools + AI = Augmented Results.
Let me be direct about what I observe with practitioners. The people who get genuine value from AI in project work are those who arrive at the AI conversation already knowing something. They've thought about the problem. They've sketched the structure of the analysis. They have a point of view, however rough. AI then amplifies that thinking — it runs scenarios faster, surfaces references they'd miss, structures outputs they'd otherwise spend hours drafting.
The people who get poor results treat AI as the starting point. They type the question, accept the output, and move on. What they produce is generic, unvalidated, and, most dangerously, convincing on the surface.
A beautifully formatted Monte Carlo narrative written by an LLM that has no idea about your project's specific conditions is worse than no analysis at all. It creates the illusion of rigour.
The framework that I teach, and that underpins my forthcoming book, is simple: Process + Tools + AI = Augmented Results. Not AI alone. Not AI instead of process. AI within a structured workflow where human judgement does the work that matters — defining distributions, challenging assumptions, interpreting outputs in context, and owning the decision.
Think of it this way. A skilled QRA (quantitative risk analysis) analyst using software platforms with good AI support can now do what previously required a full team. But only if they actually understand what a P80 cost estimate means, why correlation matters in schedule risk, and when to push back on a model result that looks too clean. Remove that understanding and you just have faster nonsense.
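To make that concrete: the P80 is the simulated total that 80% of iterations come in at or below. Here is a minimal sketch in Python with NumPy, using illustrative numbers rather than any real project or platform, of why correlation between cost items moves the P80 even when the P50 barely shifts.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Two cost items with uncertainty around their base estimates ($M, illustrative).
mean = np.array([10.0, 8.0])
sd = np.array([2.0, 1.6])

def simulate_total(rho):
    """Draw the two items as correlated normals and sum them per iteration."""
    cov = np.array([[sd[0]**2,            rho * sd[0] * sd[1]],
                    [rho * sd[0] * sd[1], sd[1]**2]])
    draws = rng.multivariate_normal(mean, cov, size=n)
    return draws.sum(axis=1)

for rho in (0.0, 0.7):
    total = simulate_total(rho)
    p50, p80 = np.percentile(total, [50, 80])
    print(f"rho={rho:.1f}: P50={p50:.1f}  P80={p80:.1f}")

# Independent items let overruns on one offset underruns on the other.
# Positive correlation removes that diversification, so the spread widens
# and the P80 rises while the P50 stays put. If you can't explain that,
# you can't challenge the number the tool prints.
```

Real models use richer distributions and correlation structures, but the failure mode is the same: accept the output without understanding it, and the P80 is just a number.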
The passenger problem
The research I mentioned describes a distinction between AI "drivers" and "passengers." Drivers manage AI — they prompt deliberately, verify outputs, and own the decision. Passengers defer — they copy, paste, and ship. The visible symptom of a passenger is someone who is suddenly calm and ahead of schedule when they used to scramble. That's not efficiency. That's offloading.
In project risk management, accountability for a risk assessment doesn't move to the AI when the assessment turns out to be wrong. It stays with the practitioner who signed off on it. The PMO Director who approved the risk register owns it. The Head of Risk who presented the cost confidence interval to the board owns it. AI doesn't attend the debrief when the project overruns.
The question isn't whether AI changes how project risk work gets done — it clearly does. The question is whether practitioners maintain their agency, their professional accountability, and their strategic value in that new configuration, or whether they allow themselves to become a conduit between a client and an algorithm.
What correct AI use actually looks like
Struggle before you prompt. Before you open the AI tool, spend time with the problem. Write the key risks you'd expect to see. Sketch the structure of the analysis. Decide what the output needs to answer. Then use AI to go further, faster.
Validate everything that matters. AI-generated risk registers, cost estimates, and scenario narratives all require the same rigour you'd apply to any other source. Cross-check distributions against data you trust (a sketch of this check follows these four steps). Challenge assumptions. Ask yourself this simple question: would I stake my professional reputation on this output?
Design your workflow first. AI is most powerful when embedded in a repeatable process — identify, structure, quantify, challenge, communicate. It is least useful as a random productivity hack applied to isolated tasks.
Own the decision. Every risk insight, every probabilistic estimate, every communication to a client or board should pass through your professional judgement before it leaves your hands.
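Here is what the cross-check in "Validate everything that matters" can look like in practice. A minimal sketch, again in Python; the AI-proposed distribution and the historical figures are hypothetical placeholders, not real data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical: an AI assistant proposed Triangular(min=20, mode=30, max=45)
# days for a task. Before using it, compare against your own history.
historical_days = np.array([22, 28, 31, 35, 29, 41, 33, 27, 38, 30])  # illustrative

proposed = rng.triangular(left=20, mode=30, right=45, size=100_000)

for q in (10, 50, 90):
    print(f"P{q}: proposed={np.percentile(proposed, q):.1f}, "
          f"history={np.percentile(historical_days, q):.1f}")

# If the proposed P90 sits well below what your projects actually
# experienced, the distribution is optimistic and needs challenging,
# however confidently the AI presented it.
```

Ten minutes of this kind of checking is the difference between a driver and a passenger.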
The bottom line
AI will not make you a better risk practitioner. Your skills will. AI will make a skilled practitioner dramatically more effective: able to process more information, model more scenarios, and communicate more clearly. That is the real value proposition, and it is significant.
But the skill comes first. The process comes first. AI amplifies what is already there. If what's there is shallow, AI amplifies the shallowness — just faster and with better formatting.
The research now makes it explicit: outsource the thinking, and you lose the capacity to think.
That is a risk no register will capture until it's too late.