How (In)accurate Are Demand Forecasts in Public Works Projects? The Case of Transportation
This article presents results from the first statistically significant study
of traffic forecasts in transportation infrastructure projects. The sample used
is the largest of its kind, covering 210 projects in 14 nations worth US$59
billion. The study shows with very high statistical significance that
forecasters generally do a poor job of estimating the demand for transportation
infrastructure projects. The result is substantial downside financial and
economic risks. Such risks are typically ignored or downplayed by planners and
decision makers, to the detriment of social and economic welfare. For nine out
of ten rail projects passenger forecasts are overestimated; average
overestimation is 106 percent. This results in large benefit shortfalls for
rail projects. For half of all road projects the difference between actual and
forecasted traffic is more than plus/minus 20 percent. Forecasts have not
become more accurate over the 30-year period studied. If techniques and skills
for arriving at accurate demand forecasts have improved over time, as often
claimed by forecasters, this does not show in the data. The causes of
inaccuracy in forecasts are different for rail and road projects, with
political causes playing a larger role for rail than for road. The cure is
transparency, accountability, and new forecasting methods. The challenge is to
change the governance structures for forecasting and project development. The
article shows how planners may help achieve this.
Comment: arXiv admin note: text overlap with arXiv:1302.2544, arXiv:1303.6571, arXiv:1302.364
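The abstract reports overestimation figures (e.g. average rail overestimation of 106 percent) without stating the formula. Assuming the conventional definition, overestimation = (forecast − actual) / actual × 100, a minimal sketch with hypothetical numbers:

```python
def overestimation_pct(forecast, actual):
    """Percentage by which forecast demand exceeds actual demand,
    assuming the conventional (forecast - actual) / actual definition."""
    return (forecast - actual) / actual * 100

# Hypothetical rail project: 2.06m passengers forecast, 1.0m actual
print(round(overestimation_pct(2.06, 1.0), 1))  # -> 106.0
```

Note that the baseline here is actual demand, so a forecast of double the actual traffic counts as 100 percent overestimation.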
Five things you should know about cost overrun
This paper gives an overview of good and bad practice for understanding and curbing cost overrun in large capital investment projects, with a critique of Love and Ahiaga-Dagbui (2018) as point of departure. Good practice entails:
(a) Consistent definition and measurement of overrun; in contrast to mixing inconsistent baselines, price levels, etc.
(b) Data collection that includes all valid and reliable data; as opposed to including idiosyncratically sampled data, data with removed outliers, non-valid data from consultancies, etc.
(c) Recognition that cost overrun is systemically fat-tailed; in contrast to understanding overrun in terms of error and randomness.
(d) Acknowledgment that the root cause of cost overrun is behavioral bias; in contrast to explanations in terms of scope changes, complexity, etc.
(e) De-biasing cost estimates with reference class forecasting or similar methods based in behavioral science; as opposed to conventional methods of estimation, with their century-long track record of inaccuracy and systemic bias.
Bad practice is characterized by violating at least one of these five points. Love and Ahiaga-Dagbui violate all five. In so doing, they produce an exceptionally useful and comprehensive catalog of the many pitfalls that exist, and must be avoided, for properly understanding and curbing cost overrun.
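The abstract names reference class forecasting in point (e) without describing the mechanics. A common form takes the distribution of percentage overruns in a reference class of completed, similar projects and uplifts a new base estimate by the overrun at a chosen acceptable-risk quantile. The sketch below is an illustration under those assumptions, not the authors' procedure; the function name, the risk quantile, and the sample overruns are all hypothetical:

```python
import math

def rcf_uplift(base_estimate, reference_overruns, risk_quantile=0.8):
    """De-bias a cost estimate via a simple reference-class uplift:
    raise the base estimate by the fractional overrun that
    `risk_quantile` of reference projects stayed at or below
    (empirical quantile, nearest-rank method)."""
    overruns = sorted(reference_overruns)
    k = max(0, math.ceil(risk_quantile * len(overruns)) - 1)
    return base_estimate * (1 + overruns[k])

# Hypothetical reference class: fractional overruns of 10 past projects
past_overruns = [0.05, 0.10, 0.12, 0.20, 0.25, 0.30, 0.45, 0.60, 0.90, 1.50]
print(rcf_uplift(100.0, past_overruns, 0.8))  # ~160.0: 80% of past projects overran by <= 60%
```

Choosing a high quantile reflects the fat-tailed overrun distributions mentioned in point (c): an average-based uplift would understate tail risk.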
Once the shovel hits the ground: Evaluating the management of complex implementation processes of public-private partnership infrastructure projects with qualitative comparative analysis
Much attention is being paid to the planning of public-private partnership (PPP) infrastructure projects. The subsequent implementation phase, when the contract has been signed and the project "starts rolling", has received less attention. However, sound agreements and good intentions in project planning can easily fail in project implementation. Implementing PPP infrastructure projects is complex, but what does this complexity entail? How are projects managed, and how do public and private partners cooperate in implementation? What are effective management strategies to achieve satisfactory outcomes? This is the first set of questions addressed in this thesis. Importantly, the complexity of PPP infrastructure development imposes requirements on the evaluation methods that can be applied for studying these questions. Evaluation methods that ignore complexity do not create a realistic understanding of PPP implementation processes, with the consequence that evaluations tell us little about what works and what does not, in which contexts, and why. This hampers learning from evaluations. What are the requirements for a complexity-informed evaluation method? And how does qualitative comparative analysis (QCA) meet these requirements? This is the second set of questions addressed in this thesis.
Executive session "Ladeinfrastruktur" (charging infrastructure): Summary
In this session we look at charging infrastructure in a version 2.0, and we will not be discussing whether to establish charging infrastructure. We will hear about and discuss what we have learned so far and which dilemmas and challenges still lie ahead of us, seen from different perspectives: what should we focus on together so that we reach the goal?