Early in a transaction, before the elicitation workshops have happened and before any consultant report has landed on your desk, you know two things about a particular variable: a floor and a ceiling. You have no view on the shape. You have no reason to believe the middle is more likely than the edges. You have a range, and nothing else.
Some modellers, at that point, reach for a triangular distribution and invent a mode. Others reach for PERT and invent a most-likely value. Both of those choices add information the model does not actually have. The triangular concentrates probability mass at a point the modeller picked, not at a point the world gave them. PERT does the same, more smoothly. The numbers that come out of the simulation inherit that invented central tendency, and the reader of the model has no way to tell which part of the answer came from the data and which part came from a guess about shape.
There is a distribution that says nothing beyond the bounds. It is called RiskUniform. Every value between the minimum and the maximum is equally likely. No peak, no tail, no hidden assumption about where the mass concentrates. This article explains what it does, when it is the right choice, where it is the wrong choice, and how to use it in a worked case study on the timing of a regulatory approval.
What RiskUniform does
RiskUniform takes two parameters: a minimum and a maximum. Every value between them is drawn with equal probability density. Outside those bounds, the probability is zero.
// Example — regulatory approval arriving between Day 60 and Day 180:
=RiskUniform(60, 180)
The expected value is the midpoint. The variance is a fixed function of the range, equal to the range squared divided by twelve. The distribution is symmetric. Those three facts determine everything else. There are no extra parameters to tune and no shape to pick.
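Those two facts are easy to confirm numerically. The sketch below uses numpy's uniform draw as a stand-in for RiskUniform(60, 180) — numpy is my assumption here, since @RISK itself runs inside Excel — and checks that the sample mean sits at the midpoint and the sample variance at range squared over twelve.

```python
# Monte Carlo check of the mean and variance facts, using numpy's
# uniform generator as a stand-in for RiskUniform(60, 180).
import numpy as np

rng = np.random.default_rng(seed=42)
lo, hi = 60.0, 180.0
draws = rng.uniform(lo, hi, size=1_000_000)

expected_mean = (lo + hi) / 2        # midpoint: 120
expected_var = (hi - lo) ** 2 / 12   # range squared / 12: 1200

print(round(draws.mean(), 1), round(draws.var(), 1))
```

With a million draws the sample mean lands within a fraction of a day of 120 and the sample variance within about one percent of 1200.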
When RiskUniform is the right choice
Three situations call for it. The first is genuine ignorance between bounds. You know a quantity must lie between two values — from physics, from a contract, from a statutory window — and you have no further information. A uniform is the neutral position. It adds nothing the data has not told you.
The second is an early-stage placeholder. Before the elicitation is scheduled, before the consultant report is commissioned, you still need a model that runs. A uniform lets you build the downstream structure, test the logic and sense-check the cash flow without pretending you know more than you do. When the percentile data arrives, you replace the RiskUniform call with a RiskCumul or an alternate-parameter distribution, and everything downstream continues to work.
The third is when the underlying mechanism genuinely produces flat draws. Examples are rare but do exist — the position of a manufacturing defect along a batch of identical units processed at constant speed, or the time of day a retail transaction lands in a uniformly sampled logging window. If the physics of the process says every value in the range is equally likely, a uniform is not a placeholder; it is the correct model.
RiskUniform versus triangular and PERT
Both RiskTriang and RiskPert ask for a most-likely value. That is exactly the parameter you are refusing to invent when you use RiskUniform. A triangular with minimum, most-likely equal to the midpoint and maximum is sometimes used as a "neutral triangular". It is not neutral. The triangular still concentrates mass at the centre and tapers toward the edges. Compared to a uniform over the same range, the triangular under-samples the extremes and over-samples the middle. If your downstream calculation is non-linear in the input — a leverage ratio, an IRR near a threshold, an option payoff — that mis-weighting of the tails moves the answer.
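The tail mis-weighting is easy to quantify. The sketch below compares a uniform over 60 to 180 with a "neutral" symmetric triangular over the same range, using numpy stand-ins for RiskUniform and RiskTriang; the ten-percent cutoff is illustrative.

```python
# How much a symmetric triangular under-samples the tails compared
# with a uniform over the same 60-180 range.
import numpy as np

rng = np.random.default_rng(seed=1)
n = 1_000_000
uni = rng.uniform(60, 180, n)
tri = rng.triangular(60, 120, 180, n)  # min, mode at midpoint, max

cutoff = 180 - 0.1 * (180 - 60)        # top 10% of the range: day 168
p_uni = (uni > cutoff).mean()          # 0.10 in the limit
p_tri = (tri > cutoff).mean()          # 2 * 0.1^2 = 0.02 in the limit

print(round(p_uni, 3), round(p_tri, 3))
```

The triangular puts roughly a fifth as much probability in the top tenth of the range as the uniform does. If a threshold in the downstream model sits in that tail, the choice of shape alone changes the answer by a factor of five.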
The rule of thumb is simple: if you have a view on where the probability is concentrated, use a triangular or a PERT and write that view down. If you do not, use RiskUniform and let the bounds do the work. Don't use a triangular to pretend you have information you don't.
"A uniform distribution is the most honest placeholder you have. It says: here are the bounds, and nothing else. The answer it gives is the answer the data actually supports."
Case study: timing of a grid-connection approval
A 200 MW solar farm in the regulatory queue
Setting. A 200 MW solar project has submitted its grid-connection application to the system operator. The operator has a statutory obligation to issue a determination between Day 60 and Day 180 from the submission date. Project revenue cannot begin until approval is granted and commissioning (a fixed 30-day process) is complete.
Commercial structure. A 15-year PPA at £85/MWh covers the full project life. Annual energy production is 350 GWh. Project CAPEX is £200M. Operating costs are £2M fixed per year plus £5/MWh variable.
Question. What is the distribution of Year-1 revenue, and what is the impact on net project NPV, under the regulator's statutory window? No backlog data is available from the operator; the analyst has only the statutory bounds.
The analyst has two bounds and nothing in between. No data on the regulator's backlog, no historical timing study, no information to prefer the middle of the window over the edges. RiskUniform over (60, 180) days is the honest encoding of what is known. If the project team later obtains backlog data — for example, that the regulator is currently running at Day 140 on similar applications — the distribution can be replaced with a RiskCumul anchored on that central value, and the downstream model continues to run.
Building the model
The workbook follows the same layout as the previous article. The first sheet is a README. The second sheet holds all inputs, one variable per row, with an ID, a name, a value, a unit and a plain-English description. The third sheet is the simulation engine and opens with a variable glossary so the reader knows what every symbol represents before reading a single formula. A cross-industry sheet shows how the same RiskUniform pattern applies in oil and gas well integrity, in construction permitting, and in insurance claim timing. A final sheet holds the instructions.
The core formula is one line:
// MODEL sheet cell D50 (the primary output, net NPV):
=RiskOutput("Net_NPV")+(-D26+D47+D48)
RiskName("Approval_Day") gives the cell a readable label, so every @RISK report (the Inputs pane, the histogram, the tornado) refers to it by that name rather than by its cell address. It sits inside the RiskUniform call as an extra argument, alongside the min and the max: =RiskUniform(60, 180, RiskName("Approval_Day")). RiskOutput("Net_NPV") is added to the downstream NPV cell so that @RISK saves the full simulated NPV distribution for reporting at the end of the run.
Everything else is arithmetic. The revenue-start day equals the approval day plus the commissioning period. The Year-1 revenue fraction equals the number of operating days remaining in Year 1 divided by 365, capped at zero if commissioning overruns the calendar year. Year-2 through Year-15 revenue is a full 350 GWh at £85/MWh. The net cash flows are discounted at 7 percent and CAPEX is subtracted to give the primary output, net project NPV.
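That arithmetic can be sketched end-to-end outside the workbook. The numpy version below follows the article's inputs (60 to 180 day window, 30-day commissioning, 350 GWh at £85/MWh, £2M fixed plus £5/MWh costs, £200M CAPEX, 7 percent discount rate); treating the Year-1 fixed cost as prorated along with revenue is my simplifying assumption, not the workbook's stated treatment.

```python
# A numpy sketch of the model: a uniform approval day feeds a linear
# Year-1 revenue fraction and a 15-year discounted cash flow.
import numpy as np

rng = np.random.default_rng(seed=7)
n = 100_000
approval_day = rng.uniform(60, 180, n)          # the RiskUniform input
revenue_start = approval_day + 30               # plus commissioning
yr1_fraction = np.maximum(0, (365 - revenue_start) / 365)

energy_mwh = 350_000                            # 350 GWh per year
net_annual = energy_mwh * (85 - 5) / 1e6 - 2.0  # GBP millions: 26.0

r, years = 0.07, 15
cash = np.tile(net_annual / (1 + r) ** np.arange(1, years + 1), (n, 1))
cash[:, 0] *= yr1_fraction                      # prorate Year 1 only
npv = cash.sum(axis=1) - 200.0                  # subtract CAPEX, in GBP M

print(round(npv.mean(), 1), round(npv.std(), 1))
```

Because every input other than the approval day is fixed, the entire spread of the NPV distribution comes from the uniform draw, which is exactly what the tornado check in the next section should confirm.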
Verifying the output
Three things to verify. First, the simulated distribution of the approval-day variable should be flat between 60 and 180 and zero elsewhere. Browse the @RISK histogram on the input cell (C6) and confirm it is rectangular, not bell-shaped. Any visible peak signals either a typo or the wrong function.
Second, assess the relationship between the approval day and the Year-1 revenue. Because the arithmetic is linear, the Year-1 revenue distribution should be a scaled-and-shifted uniform — still flat, but over a smaller range and a lower mean. If you see a triangular or a bell-shaped curve on Year-1 revenue, a downstream formula is introducing non-linearity that you did not intend.
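This flatness check can itself be rehearsed in a few lines. The sketch below pushes uniform draws through the linear Year-1 fraction formula and confirms the transformed histogram is still flat; numpy stands in for the @RISK simulation.

```python
# A linear function of a uniform draw is itself uniform: bin the
# transformed sample and confirm the histogram is flat.
import numpy as np

rng = np.random.default_rng(seed=3)
approval_day = rng.uniform(60, 180, 200_000)
yr1_fraction = (365 - (approval_day + 30)) / 365  # linear in the input

counts, _ = np.histogram(yr1_fraction, bins=10)
flatness = counts.max() / counts.min()            # near 1.0 when flat

print(round(flatness, 2))
```

A max-to-min bin ratio well above one on the real Year-1 revenue output is the numerical version of the visual check: somewhere downstream, a formula has bent the straight line.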
Third, read the tornado on the net NPV output. The approval day should be the dominant variance contributor because every other input in this model is deterministic. If another variable shows up above the approval day on the tornado, check that you have not accidentally left a RiskPert or RiskCumul on a cost or price input that was meant to be fixed for this case study.
When RiskUniform is wrong
A uniform overstates uncertainty the moment you have any information about shape. If the regulator publishes backlog statistics showing a median at Day 135 and a 90th percentile at Day 170, you now know the distribution is not flat — it is skewed toward the upper half of the window. Keeping RiskUniform at that point is no longer honest.
The tell is straightforward. If a subject matter expert can articulate, in a single sentence, a reason why one part of the range is more likely than another, RiskUniform is the wrong function. Move to RiskCumul if the expert speaks in percentiles, to RiskPert if the expert speaks in min / most-likely / max, or to an alternate-parameter distribution if a known shape is appropriate.
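The RiskCumul swap can be sketched as inverse-CDF sampling over the expert's percentiles. The anchor points below reuse the hypothetical backlog figures from the paragraph above (median at Day 135, 90th percentile at Day 170, statutory bounds at 60 and 180); they are illustrative, not real regulator data, and numpy's interpolation stands in for RiskCumul.

```python
# A minimal numpy analogue of RiskCumul: sample by interpolating the
# piecewise-linear inverse CDF through the elicited percentiles.
import numpy as np

rng = np.random.default_rng(seed=11)

probs = np.array([0.0, 0.50, 0.90, 1.0])      # cumulative probabilities
days = np.array([60.0, 135.0, 170.0, 180.0])  # matching approval days

u = rng.uniform(0, 1, 100_000)
draws = np.interp(u, probs, days)             # inverse-CDF sampling

print(round(np.median(draws), 1))             # sits near Day 135
```

The draw is still bounded by the statutory window, but the mass now sits where the backlog data says it sits, which is exactly the information the uniform was refusing to invent.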
Translating to other sectors
The cross-industry sheet in the workbook reads across four columns — renewables, oil and gas, construction, and insurance — showing how the same RiskUniform pattern applies in each. In renewables the variable is approval day. In oil and gas it is the depth of a corrosion defect along a pipeline segment between two inspection locations. In construction it is the day a permit is issued within a statutory window. In insurance it is the timing of a reported claim within a coverage year before the actuarial curve is fitted. The same two-parameter function, the same neutral-prior logic, the same downstream structure.
When the data gives you bounds and nothing else, use the function that says bounds and nothing else. RiskUniform is the shortest, most honest way to encode "I do not know where in this range it lands, but I know it lands in this range." Replace it the moment the data tells you more.