Project risk management has long relied on quantitative models whose internal workings are understood by a limited number of specialists. The growing adoption of AI-driven and learning algorithms marks a deeper shift: from complex but often explainable tools to black-boxed systems whose reasoning is opaque even to those who deploy them.
Drawing on the work of Faraj, Pachidi, and Sayegh (2018), this article examines how black-boxed performance, comprehensive digitisation, anticipatory quantification, and hidden politics may reshape project risk practices. The argument is simple but consequential: as learning algorithms become embedded in processes such as cost forecasting, risk prioritisation, and portfolio decisions, they can subtly alter authority, accountability, and professional judgment.
Project Risk Management and Model Opacity
Project risk management has always involved a degree of opacity. Monte Carlo simulations, correlation matrices, and portfolio optimisation models are rarely transparent to senior decision-makers. Training courses routinely invoke GIGO (Garbage In, Garbage Out) as a reminder that model outputs are only as sound as their inputs. This opacity has traditionally been acceptable because the underlying assumptions could be reasonably explained, reviewed, and challenged when required.
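To illustrate what "reviewable opacity" means in practice, here is a minimal Monte Carlo cost simulation. The work packages and their three-point estimates are hypothetical, but the key property is visible: every assumption is an explicit line that a reviewer can inspect and challenge, even if the simulation mechanics are unfamiliar.

```python
import random
import statistics

# Hypothetical three-point estimates per work package: (low, likely, high) in $M.
# Each assumption is explicit and can be reviewed or challenged line by line.
WORK_PACKAGES = {
    "civil_works": (10.0, 12.0, 18.0),
    "equipment":   (8.0,  9.0,  14.0),
    "integration": (3.0,  4.0,  7.0),
}

def simulate_total_cost(n_runs=10_000, seed=42):
    """Monte Carlo total-cost simulation using triangular distributions."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_runs):
        total = sum(rng.triangular(low, high, mode)
                    for low, mode, high in WORK_PACKAGES.values())
        totals.append(total)
    return totals

totals = simulate_total_cost()
print(f"mean: {statistics.mean(totals):.1f}  stdev: {statistics.stdev(totals):.1f}")
```

A learning algorithm trained on historical project data offers no equivalent of this listing: there is no table of assumptions to put in front of a review board.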
Learning algorithms change this balance. Unlike traditional probabilistic models, they do not rely on stable, predefined rules. Their outputs emerge from evolving statistical relationships trained on large datasets. As a result, even when a model performs well, the reasoning behind a specific output may be inaccessible. Faraj et al. describe this effect as black-boxed performance: systems that produce authoritative results without offering a clear account of how those results were reached.
In project environments, where risk models may inform funding approvals, contract strategies, and investment sequencing, this form of opacity can have practical consequences.
From Decision Support to Decision Authority
Risk models rarely operate as neutral inputs. In practice, they authorise decisions. A P80 cost estimate can unlock capital. A downside percentile can delay or cancel a project. A portfolio optimisation output can exclude initiatives entirely.
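The P80 gate can be sketched in a few lines. This is a simplified illustration, not a prescribed governance rule: the budget figure, confidence level, and cost distribution are all assumed for the example. The point is how mechanically a percentile output converts into a binding decision.

```python
import random

def percentile(samples, p):
    """Empirical percentile (nearest-rank method) of a list of samples."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

def funding_decision(cost_samples, budget, confidence=80):
    """Approve funding only if the P80 cost estimate sits within the budget."""
    p_cost = percentile(cost_samples, confidence)
    return ("approve" if p_cost <= budget else "hold", p_cost)

# Assumed cost distribution in $M: triangular(low=90, high=160, mode=110).
rng = random.Random(1)
samples = [rng.triangular(90, 160, 110) for _ in range(10_000)]
decision, p80 = funding_decision(samples, budget=130)
print(decision, round(p80, 1))
```

Here the mean cost (~120) is comfortably under budget, yet the P80 exceeds it, so capital is withheld. The choice of the 80th percentile as the gate is itself a judgment embedded in the process, not a statistical necessity.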
When such decisions are supported by black-boxed systems, authority gradually shifts from professional judgment to algorithmic output. The issue is not that the model is necessarily wrong, but that its conclusions become difficult to contest. As Faraj et al. note, black-boxed performance creates accountability gaps: when outcomes diverge from forecasts, it is unclear whether responsibility lies with the data, the model, the assumptions, or the decision-makers who relied on the output.
Comprehensive Digitisation and Selective Visibility
Learning algorithms depend on large, comprehensive datasets. This encourages project organisations to digitise everything that can be measured: historical costs, schedules, risk registers, contractor performance indicators, and financial outcomes. Faraj et al. describe this as comprehensive digitisation, with an important caveat: algorithms are sensitive to what data is available, not necessarily to what matters most.
In project risk management, this creates a structural bias. Quantifiable risks become highly visible and influential, while contextual, political, or organisational uncertainties remain underrepresented or excluded altogether. The result is not simply incomplete models, but a narrowing of managerial attention. AI-enabled risk systems may produce increasingly refined forecasts while systematically overlooking the factors that most often derail complex projects.
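The selective-visibility effect is easy to reproduce. In this hypothetical risk register, two risks carry numeric probability and impact while two are purely contextual; a naive scoring pipeline ranks only what it can score, and the rest silently disappear from the ranked view.

```python
# Hypothetical risk register: quantified risks carry probability and impact;
# contextual risks carry only a narrative note.
register = [
    {"risk": "steel price escalation", "prob": 0.4, "impact": 5.0},
    {"risk": "design rework",          "prob": 0.3, "impact": 3.0},
    {"risk": "stakeholder opposition", "note": "political; not quantified"},
    {"risk": "key sponsor departure",  "note": "organisational; not quantified"},
]

# The scoring pipeline ranks only the risks it can score ...
scored = sorted(
    (r for r in register if "prob" in r),
    key=lambda r: r["prob"] * r["impact"],
    reverse=True,
)
# ... while the unquantified risks vanish from the ranked view entirely.
dropped = [r["risk"] for r in register if "prob" not in r]

print([r["risk"] for r in scored])  # what the dashboard shows
print(dropped)                      # what the model never saw
```

Nothing in the code is wrong in a technical sense, which is precisely the problem: the narrowing of attention happens through an unremarkable filter, not an obvious error.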
Anticipatory Quantification in Project Decisions
A defining feature of learning algorithms is anticipatory quantification: the production of predictions about future behaviour rather than evaluations of past performance. In project risk management, this logic is familiar, but learning algorithms intensify it by enabling continuous ranking, scoring, and optimisation.
Projects are increasingly assessed not on what they are, but on what they are predicted to become. As Faraj et al. observe, this can lead to individuals and organisations being judged on propensity rather than action. For risk professionals, this creates a tension. Predictive insights can support better planning, but they can also displace deliberation. Decisions become framed as statistically inevitable, reducing space for challenge, contextual interpretation, or alternative courses of action.
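A minimal sketch of judgment by propensity, with assumed project data: the watchlist below is ordered entirely by a model's predicted overrun probability, so a project that is on budget today can top the list while a project currently overrunning drops down it.

```python
# Hypothetical portfolio: each project has an actual status today and a
# model-predicted probability of future cost overrun (both assumed inputs).
projects = [
    {"name": "A", "on_budget_today": True,  "pred_overrun": 0.65},
    {"name": "B", "on_budget_today": False, "pred_overrun": 0.20},
    {"name": "C", "on_budget_today": True,  "pred_overrun": 0.10},
]

# Judged on propensity: the ranking is driven by the prediction alone,
# not by how each project is actually performing.
watchlist = sorted(projects, key=lambda p: p["pred_overrun"], reverse=True)
print([p["name"] for p in watchlist])  # A leads despite being on budget today
```

Whether that ordering is useful foresight or premature judgment depends on deliberation that the ranking itself does not provide.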
The Hidden Politics of Risk Models
A further implication highlighted by Faraj et al. is that algorithms are never neutral. Design choices, data selection, variable weighting, and optimisation objectives embed values and priorities into technical systems. These hidden politics are rarely explicit, yet they shape outcomes in material ways.
In project risk management, such politics may appear in the prioritisation of cost certainty over long-term value, schedule adherence over adaptability, or financial risk over social or strategic considerations. When these preferences are encoded in black-boxed systems, they become harder to surface and debate. The black box does not remove judgment from decision-making; it relocates it into design choices that are often invisible to users.
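The hidden politics of weighting can be made visible with a toy example. The criteria, scores, and weight sets below are all assumed; what matters is that two defensible weightings of the same inputs produce opposite rankings, and in a black-boxed system only one of them is ever seen.

```python
# Two candidate projects scored on hypothetical criteria (0-1, higher is better).
candidates = {
    "fast_build": {"cost_certainty": 0.9, "schedule": 0.8, "strategic_value": 0.3},
    "flexible":   {"cost_certainty": 0.4, "schedule": 0.5, "strategic_value": 0.9},
}

def rank(weights):
    """Weighted-sum ranking; the weights encode what the designer values."""
    score = lambda c: sum(weights[k] * c[k] for k in weights)
    return sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)

# A cost-and-schedule-first weighting versus a strategic-value weighting.
print(rank({"cost_certainty": 0.5, "schedule": 0.4, "strategic_value": 0.1}))
print(rank({"cost_certainty": 0.2, "schedule": 0.2, "strategic_value": 0.6}))
```

Neither weight set is "the right one"; each is a value judgment. Encoded inside a model, that judgment is made once, by the designer, and rarely revisited by users.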
Working Responsibly with Black Boxes
The growing presence of learning algorithms in project risk management does not call for rejection of advanced analytics. It calls for more deliberate use. Professionals must recognise that black-boxed systems reshape how risk is defined, prioritised, and acted upon.
This requires clarity about the role assigned to algorithms: whether they are exploratory tools, decision aids, or de facto decision-makers. It also requires explicit recognition of what these systems do not capture, and of the organisational and political assumptions embedded within them.
Final Reflection
Project risk management has always balanced uncertainty, judgment, and analysis. What is new is the rise of systems that promise superior prediction while obscuring their own logic. In the age of the black box, the most significant risk may not be forecast error, but the gradual erosion of professional scrutiny in favour of outputs that cannot be meaningfully questioned.
Faraj, S., Pachidi, S., & Sayegh, K. (2018). Working and organizing in the age of the learning algorithm. Information and Organization, 28(1), 62–70.