Megafragility

Do industrial megaprojects over-run because they are badly managed, or because they are big?

It’s mainly because they are big.

In brief:

There is increasing evidence that the substantial performance problems of large-scale energy megaprojects are mostly structural and fundamental, rather than the result of management practice and a lack of learning.

Recent work by academic and industry analysts has shown that when it comes to megaprojects, economies of big are different from economies of scale. In fact, the larger and more single-sized a project is, the more fragile it may become (fragility being how badly it suffers in response to disorder), and the more prone to major cost and schedule over-runs. Substantial historical data indicate the problem has not changed in the last 50-60 years and seems inherent in any single-sized megaproject design.

Megaprojects expose themselves to various dimensions of complexity, and to the non-linear behaviour of errors and uncertainties, which always add a negative impact to cost and schedule. Average cost over-runs are 40%, and delays over 2 years, with 20% of megaprojects over-running by 100%. With energy industry annual expenditure estimated at $400-500bn pa, potentially $80-100bn pa of invested capital is wasted via very large project delivery. Megaprojects are often big single designs, but big is not scalable; it turns out to be fragile, and prone to breaking as errors magnify and render investments irrecoverable. They are big, but megafragile.

Energy companies need to stop treating bigness and complexity as advantages and as a primary route to growth; instead they should impose greater stress testing to screen megaproject proposals, and look to alternative broken-down designs that are smaller, less complex and more genuinely scalable. A heavy reliance on megaprojects will continue to reduce returns on capital, consume cashflow, and delay the delivery of energy to customers and stakeholders. The recent cancellation of over $400bn of megaprojects by the industry may be an opportunity to recycle this high-risk capital into more effective, smaller-scale and less complex alternatives.

Technological energy solutions such as shale, solar and wind, after decades of development, are smaller, less complex and genuinely scalable, and they should play a larger part in company and energy industry portfolios to create more diverse, dependable, efficient and responsive energy sources.

Megafragility

Two propositions.

If very large projects are prone to major over-runs irrespective of specific project factors, then companies wishing to use them to drive business growth need to understand the risks in detail and factor them into better estimates. That may entail, as we have discussed, the filtering out of very large projects in general.

Note – we’ll use a slightly modified definition of an energy megaproject from the norm here: greater than $2bn investment and activity in at least three countries (normally it’s just $1bn, which is rather broad).

If, however, even the largest projects undertaken can be managed effectively and to schedule, because of learnings from past failures and improved management and contractor performance processes, then we can expect a new phase of major project delivery to be far more effective than the previous era, which was universally poor and ineffective, to the extent that megaproject over-run has been labelled an iron law.

Let’s label these two broad positions the structural and the managerial schools of thought.

The structural school believes there are broad underlying economic forces that govern the performance of very large projects, leading to the “law” that they often, if not always, over-run and play havoc with even meticulous plans and experienced teams. These forces are inherent and will appear in any big enough project; unless they are understood and factored into cost and schedule estimates, projects that ignore or overlook them are doomed to fail, yet again. Good management practice can alleviate these forces, or make companies relatively competitive, but it cannot overcome the fundamental complexity issues, and estimates of cost and schedule will always fall short, often badly.

The managerial school notes and admits the poor project performance of previous years in, for example, the infrastructure and energy industries, but asserts that this is due only to a failure to implement good established practice: inadequate front-end engineering; inexperienced, disorganised or understaffed teams; a lack of standardization in equipment and design; and a failure to adopt the latest agreed management or engineering processes and standards. In addition, poor contractor selection processes and weak general industry capability, the legacy of previous disinvestment cycles, are cited as further factors that need to be addressed.

Let me set out a stall here then to prompt the debate between the schools.

The structuralist school has all the numbers on its side, most of them stretching back 50 years or more. Very large projects across many different sectors and countries have over-run, often dramatically, and much more so than smaller ones. Structuralists are now developing more theories as to why (which we’ll discuss), but the data are well in their favour. No matter what management effort, experience or learning has been applied, large projects blow through cost and schedule estimates. The school is favoured by externals – academics and some industry analysts.

The managerialist school has all the literature on its side. Practically every industry publication and textbook propounds the view that better project management principles and supplier management processes will eventually lift megaproject performance out of its current state of chronic underperformance. The school’s adherents are the majority of industry participants, who note the empirical evidence that many projects made errors they did not have to make, and so expect improvements in future.

This post will favour the structuralists. For three specific reasons:

  1. Most industry internal analysis ignores the structural issues of megaproject delivery, even though the data on widespread poor performance keep mounting up. If the industry wishes to improve, it has to take an external view and confront the data – remember, it’s always difficult to read the label from inside the bottle.
  2. The structural theories and insights have developed over time and participants are now proposing ways in which to avoid the megaproject sinkhole of capital. These should be known much more widely and factored into future project assessments.
  3. Both the fossil fuel and renewable industries are going through periods of change and disruption and need to adapt quickly with better project management techniques. For the oil industry, the latest wave of project cancellation and staff resource reductions will mean they cannot afford to get big projects wrong again going forward – resources and techniques will be in even shorter supply next time round. For renewable firms, the pace of growth and ambition will cause their average project size to grow quickly, exposing them rapidly to megaproject risks.

The post also favours the structuralist view more generally because, when the majority of the industry is led by managerial ideas, project teams and their suppliers and contractors tend to become the focus of blame when things go awry.

This is not just inaccurate and unfair, it also plays down the importance of the project selection process in the first place, and blunts any future learning. And it prepares the industry for yet another era of major project under-performance, which it may not be able to afford next time.

A lot is at stake

Megaprojects are the only way in which large non-OPEC oil and gas companies believe they can replace declining reserves and production. The industry has sunk over $200bn per year into these projects over the last decade as a result. But the project outcomes have been poor, the impact on balance sheets awful, and production has hardly improved.

For projects undertaken, depending on the analysis, some 50-80% have over-run by more than 40%, with a significant proportion over-running by 100% or more. For those abandoned or postponed, impairment charges of over $30bn were taken in 2015 alone, and over $400bn worth of future production has been put at risk by mass cancellations. In 2015 returns on capital for major oil companies were in low single digits, as cashflow moved steeply negative.

In addition, non-OPEC oil production outside shale has been flat over the past decade. The megaproject model has delivered flat growth and poor financial performance, and now contributes to wider economic stress by threatening oil and gas production growth, and hence risking price spikes and energy shortages.

Simply put, it is not a sustainable business model when oil prices are at historic average levels of $45-55/bbl.

So – why are megaprojects so prone to megafailure, and what is to be done to get at the root?

As always, the painting of a spectrum of approaches means there is a middle ground – a hybrid between the two camps. That would be a start. Today, the managerialist school dominates megaproject thinking.

The industry therefore has reams of internal publications devoted to what makes a great project, and I leave the reader to explore them.

Here, we head off in the external direction.

Bias, Black Swans, Big is Fragile, Big is Blunder

If there is a structural school of megaproject thinking, who represents it? Labels are useful, but they can be confusing. What I mean by this body of thought is writers who believe there are deep underlying principles we have to be aware of as we enter any major project – no matter how unique, local or specific the issues seem to be. These issues are broad enough that they transcend such matters as choice of supplier, level of front-end design or organizational model.

I have noted four who figure prominently in the field.

  • Bias – Tversky and Kahneman first brought to light the ideas of optimism bias, loss aversion and the outside view, which feature prominently in recent analyses of megaproject performance. They held that fundamental features of human reasoning assert themselves in many activities – in megaprojects most prominently as the optimism bias that leads to the planning fallacy, which propels us into taking on projects with a false sense that we know all the risks, while disregarding the outcomes of similar ventures.
  • Black Swans – Nassim Taleb is hugely famous for the phrase and book The Black Swan, which has now been well used in many situations. But his work also includes important concepts such as tunnelling (avoiding external facts) and non-linear risks. His later work Antifragile is also important, as we’ll see. Taleb in fact goes one step further than Tversky and Kahneman regarding the planning fallacy in projects: it’s not a psychological problem, but a mathematical feature of large projects themselves.
  • Big is Fragile – Professor Bent Flyvbjerg has written extensively on megaprojects, and was one of the first to point out their almost universal poor performance – he labelled it a law because the data supporting it are so robust. Flyvbjerg is on record suggesting that most megaprojects fail because many of the actors involved underplay the risks and complexity – either through limited knowledge or deliberate cooking of the numbers; strategic misrepresentation is the euphemism, but we know what he means. Lately, Flyvbjerg along with colleagues has also taken on board a Taleb concept, fragility: how a project suffers when exposed to disorder. Megaproject fragility arises from their being too big and unscalable, and so exposed to a whole range of uncertainties. Their key insight: bigness and scalability are two very different things.
  • Big is Blunder – Benjamin Sovacool: Professor Sovacool, a US policy analyst on energy, has written several major works on megaproject delivery. His view is that energy megaprojects have barely been researched at all, having been assumed too complex to analyse. He attempts to fill this gap with case studies and an overarching set of propositions that point to the failure modes of all megaprojects.

The skeptical reader is encouraged to go to the references I have provided in the book-shelf section below, or summaries online.

Here, I want to try a brief synthesis of key findings to test the large empirical literature on projects.

I have also attempted to focus on what I think are fresh areas of insight rather than reinforcing accepted wisdom (we will take Ed Merrow’s Industrial Megaprojects as a repository of the latter).

Given this, there are three key areas that distinguish megaprojects from merely conventional ones:

  • They are Complex, and complexity is different from complicated
  • They are non-linear, rather than linear
  • Big is not the same as Scale, and Big is Fragile.

They are Complex – and complexity is different from complicated

Complicated projects have a large number of linked elements that require meticulous planning and organisation, dedicated IT systems and procedures to ensure they are delivered effectively. Risks are various but manageable, and planning tends to be linear, based on time, with inputs then outputs, overseen by governance controls at various stage gates. Think aircraft engines or aircraft manufacture.

Megaprojects are not like this.

They are complex systems – inherently unpredictable with emergent issues and problems that arise from the vast array of activities their scope covers. Political, geographic, governance and investment issues compete with complicated technology to create unique blends of matters that emerge and couple with each other, and which no risk management system can sufficiently cover. As the scope spreads outward – to access economies of scale, which we’ll review also – megaprojects expose themselves to regulatory, jurisdictional, long time-frame and multiple stakeholder issues which can combine in a huge variety of ways to create novel risks and (negative) impacts. As Sovacool puts it:

“Despite these (megaproject) failures, industry analysts continue to attribute these problems to failed management strategies rather than structural deficiencies in the economics of large-scale projects. Perversely, megaproject failures are presented as justifications for investment in additional projects. After all, how better to prove that the project would have been successful had it only been managed correctly than to address underlying, systemic deficiencies.”

The data on megaproject failure indicate that the present methods of linear stage-gate controls, governance structures and risk management are insufficient to control megaprojects. Sovacool’s and Taleb’s assessment is that, given their bigness, it is unsafe to assume existing risk and planning processes can cope with them. Fixing one element often causes others to fail. Management should plan for failure, assume events not yet thought of will overcome the project, and factor that into the assessment of project viability.

This leads to the numbers.

They are Non-linear rather than linear

As noted, there is plenty of literature showing that megaprojects perform poorly. Most – up to 90% or more – fail, with a majority costing 40-50% or more above budget.

Flyvbjerg and his main co-author, Atif Ansar, go much further with the data.

Their review of dam, rail and road projects in Big is Fragile reveals more dimensions to this failure.

Ansar and Flyvbjerg use the example of 245 dams, because they are archetypal megaprojects: bespoke, site-specific, requiring huge inputs (e.g. labour, land, materials) and producing huge outputs, with most of their life-cycle cash spent up-front, long build times and fixed locations. They are also complex – with high interdependencies among all the parts, the organisations and the environment they interact with. Their results are therefore generalizable to most energy megaventures (with less standardised designs even more exposed to the findings).

Importantly, they are also indivisible and discrete (a 95% complete dam is as valueless as a dam not built at all). Keep this in mind when we compare oil and gas megaprojects with technology-based energy projects based on solar and wind.

They present two key statistical findings:

  • Megaprojects are always subject to a long-tail outcome: this is the non-linear effect, promoted by Taleb and confirmed in the data. Taleb puts it succinctly: “There is an asymmetry in the way errors hit you…it is inherent to the non-linear structure of projects. On a timeline going left to right, errors add to the right end, not to the left. If uncertainty were linear we would observe some projects completed extremely early. But this is not the case.” This has a critical outcome – the statistical distribution of megaproject performance has a long tail, i.e. a relatively high probability of huge runaway costs. Although 75% of dams over-ran, and half did so by 40% or more, in 20% of dams the costs doubled, and in 10% (the P90 outcome) costs tripled. If cost outcomes followed a more typical linear or Gaussian distribution, the P90 outcome would typically be a 20-30% over-run, not 300% (a minimal numerical sketch of this effect follows this list).
  • Over-run doesn’t improve over time: the poor performance of megaprojects dates back to the 1950s and is not improving. Either learning is not occurring, or, more likely, it cannot occur because of “systemic deficiencies” in megaprojects – or, as Sovacool suggests in one of his key propositions, because project sponsors continually oversell their virtues. Optimism bias is rooted in the assumption that key risks can be identified, possibly by reviewing other projects in the past. But that does not seem to happen: planning and estimates continue to ignore the complexity and non-linearity of megaprojects and to overlook the major uncertainties they are exposed to.
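To make the long-tail point concrete, here is a minimal numerical sketch (mine, not the authors’; the distribution parameters are illustrative assumptions chosen only to echo the dam statistics above) contrasting a thin-tailed and a fat-tailed model of cost over-run and comparing their P90 outcomes.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # number of simulated projects

# Thin-tailed model: over-runs scatter modestly around the estimate.
# Parameters are illustrative assumptions, not fitted to the dam data.
thin_overrun = rng.normal(loc=0.15, scale=0.10, size=n)

# Fat-tailed model: errors compound multiplicatively, so the final/estimated
# cost ratio is modelled as lognormal (again, illustrative parameters only).
fat_overrun = rng.lognormal(mean=0.2, sigma=0.6, size=n) - 1.0

for name, overruns in [("thin-tailed (Gaussian)", thin_overrun),
                       ("fat-tailed (lognormal)", fat_overrun)]:
    p90 = np.percentile(overruns, 90)      # the 1-in-10 worst outcome
    doubled = np.mean(overruns >= 1.0)     # share of projects at least doubling in cost
    print(f"{name:24s}  P90 over-run ~ {p90:4.0%}   share with cost >= 2x ~ {doubled:5.1%}")
```

Under these assumed parameters the fat-tailed model reproduces the qualitative pattern in the data – roughly one project in five doubling in cost, and a P90 outcome far beyond anything a Gaussian model would predict – which is why a “typical” contingency buffer is no protection against the tail.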

Big is not the same as Scale, and Big is Fragile

In one of the sharpest insights on megaprojects so far, Ansar and Flyvbjerg challenge the notion of economies of scale, and show clearly how they are often illusory for megaprojects.

They do this by forcing themselves and us to think clearly about what we mean by big and scale, and then linking them to the concept of fragility.

Big is not a simple term: Ansar and Flyvbjerg consider size, timescale, complexity, demand, inputs, customers, finance and so on – the most typical energy-related definition being units of output produced.

Scale is not simple either: in practice it means the ability to scale up to a larger (more economic) size. But scalability should also cover the ability to move down – easily – as well as up in scale. That is not often the case for big energy plants. There is a degree of slack or ullage in energy plants, but their flexibility to scale down or up effortlessly is very low. They are fixed, big entities.

Fragile – fragility as defined by Taleb is how a “thing or system suffers” when it encounters disorder. It is typically irreversible, and increases with size, there being more elements and sub-elements to break down and damage the whole system when it is very large.

Linking them together, Ansar and Flyvbjerg note that “fragility arises when big is forced into doing what was best left to scalable”.

Ultimately, they state,

“Oversizing a system increases its complexity disproportionately due to the greater number of permutations of interactions now possible amongst more sub-components, and this leads to fragility. Despite their Goliath appearance, big capital investments break easily (deliver negative net present value) when faced with disturbances. A greater propensity to fragility is intrinsic to big investments.”

Megaprojects are therefore complex and inherently unpredictable; non-linear, leading to potentially vast over-runs; and fragile and unscalable, inflexible to changes in demand and vulnerable to errors which magnify, break the whole system and ruin investments.
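A minimal way to see the “permutations of interactions” in the Ansar and Flyvbjerg quote above (my sketch, not theirs): if a project has n interacting sub-components, the number of possible pairwise interfaces alone grows as n(n-1)/2, i.e. quadratically, while capacity typically grows only roughly linearly with component count.

```python
# Illustrative only: pairwise interfaces among sub-components grow quadratically
# with component count, while nameplate capacity grows only roughly linearly.
def pairwise_interfaces(n_components: int) -> int:
    """Number of distinct component-to-component interfaces (n choose 2)."""
    return n_components * (n_components - 1) // 2

for n in (10, 50, 100, 500):
    print(f"{n:4d} sub-components -> {pairwise_interfaces(n):8,d} potential pairwise interfaces")
```

Each additional interface is another place where an error can arise and couple with others, which is the sense in which oversizing increases complexity disproportionately.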

Economies of Scale are not the same as Economies of Big

A key insight from this theory, if correct, is the prediction that economies of scale for large energy projects are unlikely to materialize, due to complexity effects – negative errors wiping out and over-running any assumed benefits of bigness and large “scale”.

This seems to be borne out in recent megaprojects. A study by the Oxford Institute for Energy Studies shows the exponential increase in LNG project costs as they have grown larger in scale. At the start of the century LNG projects were estimated at a development cost of around $500-800/million tonnes per annum (mtpa) of production capacity, which allowed economic investments at prevailing long-term sales contract rates. This was based on projects of a typical size of 1-3mtpa.

During the 2000s, supported by high oil prices, larger individual plant sizes of 4-8mtpa were designed and combined into megaprojects, assumed to gain from the economies of Big.

The chart below shows that the precise opposite occurred – as train sizes (and project scopes) grew, overall unit costs actually increased quickly, from $500-800/mtpa to over $2000/mtpa estimated in 2014, and to over $4000/mtpa today based on the latest out-turns of the Gorgon project – $60bn for a mammoth 15mtpa triple-train facility.

[Chart: LNG unit development costs over time – Oxford Institute for Energy Studies]
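As a quick arithmetic check on the unit-cost figures quoted above (a back-of-envelope sketch using only the numbers in the text, with the quoted “$/mtpa” interpreted as dollars per tonne of annual capacity):

```python
# Back-of-envelope check of the LNG unit-cost escalation quoted in the text.
gorgon_capex_usd = 60e9      # ~$60bn out-turn quoted for Gorgon
gorgon_capacity_tpa = 15e6   # 15 mtpa across three trains

unit_cost = gorgon_capex_usd / gorgon_capacity_tpa
print(f"Gorgon: ~${unit_cost:,.0f} per tonne of annual capacity")           # ~$4,000
print("Early-2000s projects: ~$500-800 per tonne of annual capacity")
print(f"Escalation: roughly {unit_cost / 800:.0f}x to {unit_cost / 500:.0f}x")  # ~5x to 8x
```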

Gorgon is a classic megaproject: very big, with an international scope led by US and Australian teams, and three vast interconnected production plants. Many analysts will claim that local Australian cost inflation and the logistical difficulties of Barrow Island are unique factors that make Gorgon a “one-off”. These contributed, but they are second order.

The fact that Gorgon’s vast scope gathered in the complexities of Barrow Island and Australian labour issues is exactly the kind of complexity and uncertainty that Ansar and Flyvbjerg, Taleb, Sovacool et al. point to as the cause of megaprojects running out of control. Designing Big increases uncertainties, errors and unforeseen interactions.

Gorgon was megafragile by design – because all Big Projects are far more prone to cost (and schedule) over-runs, with complexity, non-linearity and fragility wiping out the estimated benefits of “scale”, however well defined from a purely draft engineering perspective.

These factors will always be at play as projects grow in size beyond a certain range.

A stylized generic version of the above chart can be sketched as follows:

As projects grow toward big, are located internationally and are designed around a single, discrete indivisible investment, they expose themselves to the complex, non-linear and fragility issues we have reviewed. These will tend to grow disproportionately quickly, overcoming potential size economies, cause major over-runs, and render investments irrecoverable.

I have also tried here to indicate that megaprojects are not binary – as projects grow in size, they do not always expose themselves to the complexity and fragility issues discussed.

As they transition from modest to larger size, some factors can mitigate their negative exposures – for example, geographic location in a single country, or an inherently scalable rather than single-size nature.

This means that large-scale investment projects in, for example, the US or China with sophisticated local supply chains, or Middle East countries with huge energy resources and imported capability, can deliver effective investments. The recent effective delivery of mega-scale projects in the US (e.g. Sabine Pass LNG) and major oil field extensions in KSA and Iraq is testament to this. Recent upscaling of commercial-size wind and solar project investments may also avoid the megaproject downsides due to inherent scalability and standardised technology availability.

Implications

As international energy firms continue to use large scale projects as a major growth option even in a lower oil price future, they need to consider the major risks they are undertaking with these massive investment “static bets”.

On an annual basis, the oil and gas industry invests in the region of $400-500bn of capital. Over the past 10-15 years most of this has gone into maintaining existing production and investing in megaprojects to attempt production growth. Sovacool estimates that over 40% of major infrastructure projects cost over $10bn, and a large majority over $1bn, showing the huge reliance on megaprojects for energy and oil and gas industry development.

Ansar and Flyvbjerg’s data indicate that, as a conservative estimate, 50% of the investments overspent against original plans by at least 40%, with 20% more than 100% over budget, and that schedule slippage averaged over 2 years. This means that on an annual basis the use of megaprojects for oil and gas development potentially wastes $80-100bn, before late schedules are taken into account.
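A back-of-envelope reconstruction of that $80-100bn figure (a sketch only; the simplifying assumption is that roughly half of the annual spend over-runs by about 40% on average, in line with the figures above):

```python
# Rough reconstruction of the annual "wasted capital" estimate quoted above.
# Simplifying assumption: ~half of annual megaproject spend over-runs by ~40%.
share_overrunning = 0.5   # ~50% of investments overspend materially...
average_overrun = 0.4     # ...by around 40% against original plans

for annual_capex_bn in (400, 500):
    wasted_bn = annual_capex_bn * share_overrunning * average_overrun
    print(f"${annual_capex_bn}bn annual spend -> roughly ${wasted_bn:.0f}bn of over-run per year")
# -> ~$80bn and ~$100bn, before the cost of the ~2-year average schedule slip is counted
```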

For vital energy needs this is a “very late coming”, and a very ineffective tool.

Much has been said about the postponement of megaprojects leading to oil price spikes in the near future. It’s clear that continued reliance on megaprojects to such a degree creates this dynamic in any event, with their great uncertainty, lateness, and fragility built around a single design concept and static large-scale developments.

Three questions are therefore important to review here.

Are megaprojects set to continue to such a scale?

Short answer: yes. And probably irrespective of oil price.

Yes, because despite the overwhelming evidence of their high risks, inefficiency and fragility, most oil companies will expect to overcome these deficiencies with their very next project, having learnt from the previous failures. The dominant school of thought is the managerial one, which assumes all prior failures are management and process errors that can be eliminated this time. It is a procyclical industry that needs to develop resources to maintain production, or accept decline.

That seven decades of analysis show megaproject out-turns continuing to fail at the same rate will not be seen as a fundamental reason to reassess or revise current investment plans. In a way, there is no alternative. That should give pause for thought to all investors and stakeholders in oil and gas energy development.

Sovacool cites seven reasons why megaprojects are popular: they range from presumed scale economies, through dealing with regulation, an aging workforce, competition with national oil companies and a seduction by standardization. All of them reinforce the desire to grow larger across all energy forms, fuel-based and technology-based. So yes, they are set to continue.

And irrespective of oil price because, as price-takers and for the reasons above, most oil and gas companies will need to restart the project engine soon, before idleness causes it to lose its capability. If prices respond, this will be quick. If they do not, lower break-even levels and lower input costs will likely be cited as major improvements, along with standardization and innovation.

This may indeed be the case. But the fundamental issues of big, scale and fragility will remain for all large-scale complex projects.

The recent lull in capital investment should be seen as an opportunity to recalibrate our expectations and use of megaprojects for energy delivery.

If Ansar and Flyvbjerg, Sovacool and the industry studies from EY and McKinsey are right, then the latest $400bn cancellation of marginal megaprojects may have saved the industry, stakeholders and customers about $150-200bn in wasted capital, and several years in excess delays for energy fulfillment.

Recycling that surplus capital into less complex, more scalable, faster and more robust projects for energy delivery may be more of an opportunity than a threat to the industry’s future project delivery plans.

What can oil and gas firms do to improve megaprojects, and what can other energy providers learn?

The authors have provided some ideas, and they have disagreed with one another: this is not a simple fix, as you would expect. I have attempted a brief summary here. We’ll explore some of these further in future posts; for now, a high-level review aimed at decision-makers at all levels in oil firms (leadership is everywhere):

  • Where possible – pursue smaller, more flexible, scalable, faster, simpler alternatives. Not every mega-project can or should be cancelled or recycled, even if the likely outcome is very uncertain. Strategic considerations such as country entry or technological proof may offset the downsides. But often, very often, they will not. Alternative scale and complexity options should be sought, to avoid up-front costs, improve learning cycles, reduce complexity and costs. If the portfolio needs to be weighted toward megaprojects, the risk-adjusted capital value will decline significantly. Use surplus capital that was heading the way of a static megaproject bet to redesign on this basis.
  • Change megaproject philosophy from mega-engineering to efficiency: be skeptical of “bigger is always better” – on a risk-adjusted basis, as projects grow in size they decline rapidly in value. If this is the only way to achieve volume or production goals, factor the likely outcomes into the portfolio analysis, and look for alternative routes. Engineers are fantastic innovators and problem solvers – but the “above ground risks”, stakeholder uncertainties and so on that engineering solutions draw in will overcome the draft “scale benefits”. A tougher commercial mindset is required to counterbalance the engineering proposals.
  • Introduce more extreme stress tests: Exxon already do this, they claim, having assessed portfolio robustness at $40/bbl even when oil was over $100/bbl. Their impairment costs in the past two years have been negligible, which may indicate the effectiveness of this method. Extreme tests should also include assessments of what happens if a project runs away to a cost increase of 100% or more. Remember, the data show 20% of all megaprojects end up in this zone (P80 in industry vernacular). A minimal sketch of such a screen follows this list.
  • Adding padding is no solution: just adding a 50% buffer to project estimates does not solve the issue. It likely cancels all projects, or if not, causes project management to use up the contingency inefficiently. The stress tests should be used to filter the portfolio and force redesigns, and not just pad estimates.
  • Genuinely challenge the design selection: many projects alight on a single concept early (sometimes with several variants which are really just versions of the same idea), and supposedly objective assessments soon transform into multi-appendixed rationalisations. Use the early selection discussion to challenge more fundamental assumptions around size, partner involvement, major uncertainties, and other sources of complexity introduced via bigness and international locations.
  • Avoid the mind-set of complexity being an advantage. It’s true that only a few companies can confidently manage complexity – hence its control would seem advantageous. But maybe not significantly so in the oil industry, as the effort will always be high, yet the price of complex-to-build molecules is in the long run the same as that of simple-to-supply ones (hard oil vs NOC oil). Don’t embrace or admire complexity; avoid it.
  • Avoid razor-thin margins: many investment cases get eroded over time as more assumptions are challenged. Recall Taleb’s caution that all errors and uncertainties add to the right (foreign exchange benefits are essentially a flip of a coin; we are discussing local errors here). If the megaproject is based on a long series of things going right and on sunny assumptions, and margins are getting thinner, step back and reassess with fresh stress scenarios.
  • Pursue standardization at the project level, not the initiative level: standardization delivered via smaller-scale repeatable units can start to unlock the benefits of learning curves and technology improvements – Sovacool is particularly insightful in this area, highlighting research that indicates the learning curve over time for large projects can go negative, i.e. costs increase, not decrease. Standardizing a megaproject at a sub-process level, e.g. equipment or a planning process, will not address the bigger fundamental issues of complexity, and one should avoid being seduced by it. Citing standardized methods as a way of accelerating and improving megaprojects is a key element in their continued failure to deliver: limited standardization efforts do not address the complexity that emerges in megaprojects, but they do allow management to believe they have acted enough to overcome it – the data say otherwise.
  • Don’t put all the effort into second-order issues: supply chain costs and contractors will not deliver vast savings or reduce complexity. Denser contracts or contract strategies will add complexity and create supplier reactions. Always reduce costs where possible through supplier negotiations and demand management, but be realistic about the magnitude versus the effort. Even a dramatic 40% saving across the board on all major project equipment translates to only about 6-8% overall (equipment being roughly 15-20% of total project cost on these numbers), which will easily be lost as complexity increases in contract governance.
  • Focus on early macro-governance, not later micro-governance: as soon as projects over-run, new layers of management oversight are installed, often triggered by smaller and smaller margins of overspend. Many megaprojects have gone through this cycle, not realising that such an effort, whilst seeming to take action, actually creates more complexity and delay, as more elements and features of the project are reviewed and reshaped, leading to knock-on effects. As noted, the key challenges should be around scale, scope and complexity up front, with stress testing and hard challenges at that point. Letting projects pass, and then trying to stem the outcomes via more execute-phase governance, may not improve the outcomes, and has been shown to move things the other way as stakeholders impose various conflicting views.
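To make the stress-testing and screening ideas above concrete, here is a minimal sketch of such a screen (all figures – the $20bn project, the 10% discount rate, the price and over-run scenarios – are illustrative assumptions, not data from any named company or project):

```python
# Illustrative megaproject screen: does the investment still clear the hurdle
# under the fat-tail over-run and low-price outcomes the historical data point to?
def npv(cashflows, rate=0.10):
    """Net present value of yearly cashflows ($bn) at the given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

capex_profile = [-5.0] * 4   # $20bn spread evenly over a 4-year build (illustrative)
base_revenue = 3.5           # $bn/yr at the base price case, for 20 producing years

def project_npv(capex_multiplier, revenue_multiplier, delay_years=0):
    spend = [c * capex_multiplier for c in capex_profile]
    idle = [0.0] * delay_years                        # schedule slip: no revenue yet
    income = [base_revenue * revenue_multiplier] * 20
    return npv(spend + idle + income)

scenarios = {
    "base case":                          (1.0, 1.0, 0),
    "typical over-run (+40%, +2 years)":  (1.4, 1.0, 2),
    "tail over-run (+100%, +3 years)":    (2.0, 1.0, 3),
    "tail over-run at a low price case":  (2.0, 0.6, 3),
}
for name, (capex_x, rev_x, delay) in scenarios.items():
    print(f"{name:36s} NPV ~ {project_npv(capex_x, rev_x, delay):6.1f} $bn")
```

The point of such a screen is filtering rather than precision: if a proposal only survives when none of the historically common tail outcomes occur, the test has done its job and the concept should be redesigned or rejected rather than padded.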

What are the wider industry issues of megaproject performance?

  • Megaprojects are high risk for energy diversity: any energy megaproject that is big, single-sized and structurally complex is likely to over-run cost and schedule substantially. As long as oil and gas, or any other energy sector, e.g. nuclear, uses this as a primary route of development, energy from these sources will be a “late coming” relative to current energy needs. Projects which are less complex and scalable, from alternative sources such as modular LNG, shale, wind or solar, provide technological diversity to any energy portfolio. Recent examples from e.g. Germany indicate that even for relatively large windfarm projects, cost and schedule out-turns have been much more effective, with 0-20% over-runs the typical range. Shale break-even costs have dropped by 50% over the past 5 years.
  • Scalable technology costs decrease with time, megaproject extraction costs increase: international oil and gas extraction follows the general rule that extraction costs increase over time, as the largest, easiest-to-mine fields are utilized first, leaving complex and marginal ones for later, i.e. now. Any energy source based on a technological solution, e.g. solar or wind, or shale oil and gas, follows the exact opposite route, with unit costs decreasing rapidly through time via learning-curve effects and rapid gains in technology via continuous improvement. This is a fundamental feature of scalable, technological energy, not a bug. Any wider energy policy needs to recognize that whatever the costs of technological forms of energy are today, they are relentlessly decreasing, in the range of 20% every 2-4 years, and becoming increasingly commercially mainstream.
  • There is now increasing choice in (apolitical) energy provision: the growing capability of scalable, simpler technology-based energy such as wind and solar provides a new source of energy for power and heat. This is an apolitical development, as the merits of these newer technologies should rely far less on subsidy than previously, with lower carbon emissions as an outcome, not as the primary driver. This is one of the reasons they are being adopted rapidly both in Texas and in Beijing. They also require long-term, multi-skilled workforces and so present attractive long-term employment sectors. The energy reliability and local control offered by these forms of energy, as opposed to the higher risks of remote single-sized megaprojects, also need to be considered.

Conclusion

There is increasing evidence that the substantial performance problems of large-scale energy megaprojects are mostly structural and fundamental, rather than the result of management practice and a lack of learning.

Recent work by academic and industry analysts has shown that when it comes to megaprojects, economies of big are different from economies of scale. In fact, the larger and more single-sized a project is, the more fragile it may become (fragility being how badly it suffers in response to disorder), and the more prone to major cost and schedule over-runs. Substantial historical data indicate the problem has not changed in the last 50-60 years and seems inherent in any single-sized megaproject design.

Megaprojects expose themselves to various dimensions of complexity, and to the non-linear behaviour of errors and uncertainties, which always add a negative impact to cost and schedule. Average cost over-runs are 40%, and delays over 2 years, with 20% of megaprojects over-running by 100%. With energy industry annual expenditure estimated at $400-500bn pa, potentially $80-100bn pa of invested capital is wasted in very large projects. Megaprojects are often big single designs, but big is not scalable; it turns out to be fragile, and prone to breaking as errors magnify and render investments irrecoverable. They are big, but megafragile.

Energy companies need to stop treating bigness and complexity as advantages and as a primary route to growth; instead they should impose greater stress testing to screen megaproject proposals, and look to alternative broken-down designs that are smaller, less complex and more genuinely scalable. A heavy reliance on megaprojects will continue to reduce returns on capital, consume cashflow, and delay the delivery of energy to customers and stakeholders. The recent cancellation of over $400bn of megaprojects by the industry may be an opportunity to recycle this high-risk capital into more effective, smaller-scale and less complex alternatives.

Technological energy solutions such as shale, gas, solar and wind, after decades of development, are smaller, less complex and genuinely scalable, and they should play a larger part in company and energy industry portfolios to create more diverse, dependable, efficient and responsive energy sources.

BOOKSHELF

Amos Tversky and Daniel Kahneman: Judgment under Uncertainty: Heuristics and Biases, Science, Vol. 185, No. 4157 (27 September 1974), pp. 1124-1131

Daniel Kahneman: Thinking, Fast and Slow, 2011 (Allen Lane)

Nassim Taleb: Fooled by Randomness, 2001; The Black Swan, 2007; Antifragile, 2012

Bent Flyvbjerg: Megaprojects and Risk, 2003

Atif Ansar, Bent Flyvbjerg, Alexander Budzier and Daniel Lunn: Big is Fragile: An Attempt at Theorizing Scale, in the forthcoming Oxford Handbook of Megaproject Management, 2016 (available as a pdf at arXiv.org)

Benjamin Sovacool: The Governance of Energy Megaprojects, 2013