ChatGPT has become a household name so quickly that it already feels like a generic word. The way people say "Google it" or "pass me the Hoover," many now say "ask ChatGPT" even when they mean "use an AI tool." That kind of cultural dominance is real — but in project terms it can be misleading.

A brand can lead the conversation without leading the market for long. And in AI, leadership is unusually fragile. For project risk management, that fragility is a gift, not a threat. It pushes us back to the only thing that has ever mattered: decision quality under uncertainty.

Because if there is one mistake organisations keep repeating, it is this: they confuse choosing a tool with building a capability. They treat the model as the asset. In practice, the asset is the skill to work with AI — safely, fluently, and repeatedly — inside real workflows.

Don't Bet Your Risk Posture on a Moving Target

Today it might be OpenAI. Tomorrow it might be Google, Anthropic, Amazon, or someone we do not even know yet. The point is not which name you prefer; the point is that switching between tools is often easy, and the underlying capabilities are trending toward commodity status. Competition will make systems better, faster, and cheaper. It will also make the "best model" a constantly moving target.

That is why focusing on the model is a poor risk strategy. The dependency you should worry about is not "Model A versus Model B." The dependency is high adoption with low AI literacy. When teams use AI outputs as if they were reliable facts — without clear prompts, without validation, without knowing where errors hide — you do not have a technology advantage. You have a new risk pathway.

The risk is not the tool. The risk is the workflow.

Bubbles Are Loud. Infrastructure Is Quiet.

Every transformative technology arrives wearing two costumes: infrastructure and speculation. Railways reshaped economies — but they also produced a mania. Fortunes were made and lost, newspapers ran daily drama, and then the bubble burst. The rails remained, and someone figured out how to make the system work.

The internet did the same. The dot-com bubble inflated, collapsed, and left wreckage across balance sheets. The internet remained — and then quietly changed the world.

AI is following the same arc. The economic theatre will be noisy: valuations, rivalries, IPO rumours, doom narratives, breathless announcements. But for most project professionals it is a distraction. Projects do not succeed because a team picked the winning vendor in an AI horse race. Projects succeed because teams learn how to use whatever tools exist to reduce uncertainty, surface assumptions, and make better decisions. When the dust settles, it will settle on capability.

Stop Trying to Outrun AI. Learn to Weave with It.

There is an anxious story circulating in every knowledge industry: "Upskill NOW so AI doesn't take your job." It sounds sensible, but it is the wrong centre of gravity. If your work is knowledge work — analysis, synthesis, drafting, reporting, pattern spotting — you will not consistently outperform AI at the raw production of those outputs.

Organisations do not need perfect automation to reorganise work. They need sufficient automation to shift the economics of attention. So the question becomes less defensive and more architectural: How do you combine what you know with what AI can scale?

This is where Sangeet Paul Choudary's "re-stacking" idea becomes useful. AI does not simply replace tasks. It rearranges them. Work that used to live inside one role gets unbundled and redistributed — some to tools, some to new specialised roles, some to redesigned decision points.

Project risk management is already being re-stacked in front of us. Risk identification no longer needs to rely only on what gets said in a workshop. AI can trawl through meeting minutes, claims logs, scope changes, and lessons-learned repositories and surface candidate risks that humans might miss. Risk analysis begins to shift too — not away from probabilistic thinking, but away from labour-intensive mechanics. The craft moves upward: from "can we build the model?" to "can we define the right distributions, constraints, and scenarios — and can we challenge them properly?"
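That upward move — from building the machinery to defining the inputs — can be made concrete. The following is a minimal Monte Carlo sketch of quantitative cost-risk analysis: the work packages, three-point estimates, and iteration count are all illustrative numbers invented for this example, and a real analysis would use calibrated estimates and modelled correlations between packages.

```python
import random

random.seed(42)

# Hypothetical three-point cost estimates (optimistic, most likely,
# pessimistic) for a few work packages. The craft the article describes
# lives here: choosing and challenging these distributions, not in the
# simulation loop itself.
work_packages = {
    "design":        (80, 100, 150),
    "construction":  (400, 500, 800),
    "commissioning": (50, 70, 120),
}

def simulate_total_cost(n_iterations=10_000):
    """Sample each package from a triangular distribution and sum,
    returning the simulated project totals."""
    totals = []
    for _ in range(n_iterations):
        total = sum(
            # random.triangular takes (low, high, mode) in that order
            random.triangular(low, high, mode)
            for (low, mode, high) in work_packages.values()
        )
        totals.append(total)
    return totals

totals = sorted(simulate_total_cost())
p50 = totals[len(totals) // 2]
p80 = totals[int(len(totals) * 0.8)]
print(f"P50 ~ {p50:.0f}, P80 ~ {p80:.0f}")
```

The interesting questions sit outside the code: are the pessimistic values pessimistic enough, which packages move together, and which scenarios the distributions fail to capture.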

What Risk Leaders Should Build Now

If you want to treat this as a project risk problem — because it is — then the priority is not model selection. It is capability design. You want a team that can work with AI like professionals, not AI tourists. A team that can specify, validate, and govern.

That capability has a few recognisable traits:

Prompt discipline: Treating prompts as specifications, not casual requests. Clear definitions. Clear constraints. Clear assumptions. Clear acceptance criteria.

Validation: Cross-checking against sources, domain knowledge, and alternative methods. Not because AI is "bad," but because uncertainty demands triangulation.

Workflow integration: Creating repeatable chains such as extract → structure → challenge → quantify → decide → document. Not one-off experiments, but habits.

Governance: Clear decision rights, traceability, and a paper trail that can survive scrutiny. The goal is not only to be right; it is to be defensible when reality contests your assumptions.

The Durable Advantage

In a year, today's model leaderboard will probably look different. In five years, people will barely remember which release was "best" in which month. But the organisations that build AI skill into the muscles of project work — quietly, methodically, without fetishising brands — will still have something tangible: a decision system that learns faster than uncertainty grows.

So let the market argue about who is winning. Let the bubble do what bubbles do. In project risk management, the winning move is simpler: stop chasing models, and start building skill.