Exploring Hydrocarbon Depletion
Page added on March 29, 2017
Although the insistence that “peak oil” was imminent has largely faded from public view, it remains a valuable illustration of how poorly developed theories can nonetheless capture the imagination of the public, including many who should know better. So what were the theories and methods employed to support peak oil, arguments that a library of articles and books repeated to create a false narrative (and, undoubtedly, a ‘97% consensus’)?
The original claim underlying peak oil was that resource scarcity would cause oil production to decline in the near future and that nothing could be done to alter that trajectory. Two retired oil geologists—Colin Campbell and Jean Laherrère—justified this idea by making their estimates of recoverable resources using a private database of oil field sizes fitted to the so-called Hubbert curve, a bell curve said to represent production for a region.
Their theory was that since production followed a bell curve, fitting production data for a country or region to a curve would demonstrate the entire trajectory of supply and yield an estimate of the total resource. Also, once half the resource was produced, production would decline; and conversely, if production was declining, then the peak had been reached and half the resource produced.
Actually, though, production in a region rarely follows a bell curve nor do regions necessarily experience a single peak. As a result, this method repeatedly predicted premature peaks for many countries and for the world itself.
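The failure of the half-way rule is easy to demonstrate with a toy profile. The numbers below are illustrative, not data from any real region: a production series that rises quickly and declines slowly has its peak long before half the resource has been produced.

```python
import math

# Hypothetical, skewed production profile: a fast rise and a long,
# slow decline -- the shape many real regions actually exhibit.
years = range(60)
production = [t * math.exp(-t / 10) for t in years]

peak_year = max(years, key=lambda t: production[t])
cum_at_peak = sum(production[: peak_year + 1])
total = sum(production)

print(peak_year)            # the peak arrives early in the series
print(cum_at_peak / total)  # far less than half the resource produced
```

For this profile the peak falls at year 10, with only about a quarter of the total produced; the symmetric bell curve, by contrast, forces exactly half.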
Laherrère attempted to reinforce his claims by the use of so-called creaming curves, ordering discoveries by date to show how their sizes decline over time; the asymptote of the curve would then represent the total resource. This method is employed by conventional petroleum geologists, but with this understanding: It works only for a given basin, not a combination of them; it cannot predict the discovery of new basins; and it requires stable estimates of field size.
The peak-oil theorists ignored the first argument, and insisted both that no new basins remained to be discovered and that their field-size data was stable. (However, they elsewhere chided economists for not recognizing that field-size data was often revised upwards.)
This shortcoming was compounded by the insistence that the results were robust, which they were not: for the Middle East, for example, creaming curves yielded an estimate that was revised upwards three times. It was simply asserted that the final estimate was correct and the earlier ones were not, without recognizing the implication that the method did not yield a stable estimate but one that evolved over time.
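The instability is easy to reproduce. In the sketch below, a hyperbolic creaming curve (an assumed functional form, with made-up numbers) is fitted by ordinary least squares; the asymptote is recovered nicely on well-behaved data, but the arrival of a hypothetical new basin forces the “ultimate” estimate upward, just as happened with the Middle East figures.

```python
def fit_ultimate(cum):
    """Fit C(n) = U*n/(n+b) via the linearization 1/C = 1/U + (b/U)/n
    and return the estimated asymptote (ultimate resource) U."""
    xs = [1.0 / n for n in range(1, len(cum) + 1)]
    ys = [1.0 / c for c in cum]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx  # = 1/U
    return 1.0 / intercept

# Synthetic creaming curve with a true asymptote of 100 units.
U, b = 100.0, 20.0
cum = [U * n / (n + b) for n in range(1, 31)]
print(fit_ultimate(cum))  # recovers ~100 on well-behaved data

# A hypothetical new basin adds volumes the old asymptote never anticipated.
cum2 = cum + [cum[-1] + 30.0 + U * n / (n + b) for n in range(1, 16)]
print(fit_ultimate(cum2))  # the "ultimate" estimate is revised upward
```

The method extrapolates past discoveries; it cannot, by construction, see a basin that has not yet been drilled.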
Another freshman mistake was to rely on graphs of cumulative data, specifically cumulative discoveries and cumulative production, which Laherrère noted seem to resemble each other. One of the first things taught in freshman statistics is that cumulative graphs conceal variation: next year’s GDP may change substantially compared to this year’s, but on a graph of a century’s cumulative GDP the difference is invisible.
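A quick sketch with invented numbers shows the effect: a steadily declining series and a noisy one look nothing alike year by year, yet their cumulative curves are almost perfectly correlated.

```python
import random

random.seed(1)  # hypothetical data, fixed for reproducibility
declining = [150.0 - 2.0 * t for t in range(50)]          # smooth decline
noisy = [100.0 * random.uniform(0.5, 1.5) for _ in range(50)]  # erratic

def cumulative(xs):
    out, total = [], 0.0
    for x in xs:
        total += x
        out.append(total)
    return out

def corr(a, b):
    """Pearson correlation of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

print(corr(declining, noisy))                          # annual: weak link
print(corr(cumulative(declining), cumulative(noisy)))  # cumulative: near 1
```

Any two positive series, summed over enough years, produce smoothly rising curves that “resemble each other”; the resemblance is an artifact of the accumulation, not evidence of a relationship.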
However, Laherrère in particular believed he had created a ‘model’ with which he could predict a country’s production from its cumulative discovery trend, although his own graphs of individual discoveries in a country made it clear that they were highly variable and related poorly to subsequent production trends.
What these methods had in common was that they amounted to curve-fitting: extrapolating discovery and production trends (and sometimes not accurately). Because some of the proponents were geologists, they claimed that the work was “scientific” and derided their opponents as economists, even though many petroleum engineers and geologists disagreed with their work.
Kjell Aleklett, who took over the leadership of the Association for the Study of Peak Oil despite having little experience in the analysis of resources, insists that his work is “natural science” even though there is no real scientific content: he and his colleagues observe trends and assume they are determined by physical factors.
Which is obviously wrong, given that the supposedly scientific behavior is often violated. As mentioned, few countries exhibit a bell-curve-shaped production trend, and many fields said to follow a mathematically precise behavior later violate it. Laherrère noted that production from the Forties field had followed a declining trend for years, suggesting that the field’s total resource could be estimated by extrapolating that trend to its intersection with the x-axis.
The addition of gas-lift caused production to differ from the trend briefly, but then the trend resumed to his great delight—proving, he insisted, that geology determined the profile of a field’s production.
Nonsense. Since he published his graphs, the Forties oil production trend has changed, going flat instead of declining for roughly ten years, with an increase in the field’s proved reserves of 150 million barrels. Production patterns are determined by the geology and chemistry of the deposit, plus the engineering decisions on how to produce it, plus the fiscal regime in place. The latter two can change, as was the case with the Forties field and many others. New investment regularly adds reserves to mature fields, and the trade press is full of articles describing such additions.
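The extrapolation Laherrère performed can be sketched in a few lines, with illustrative numbers rather than actual Forties data: fit the declining trend, extend it to the x-axis, and call the remaining triangle the resource. Any change in the decline rate, from gas lift, new investment, or a new fiscal regime, invalidates both numbers.

```python
# Hypothetical linear decline: 500 units/year falling by 40 units/year.
years = list(range(10))
prod = [500.0 - 40.0 * t for t in years]

slope = (prod[-1] - prod[0]) / (years[-1] - years[0])   # decline rate
x_intercept = years[-1] + prod[-1] / -slope             # year output hits zero
remaining = 0.5 * prod[-1] * (x_intercept - years[-1])  # triangle under trend

print(x_intercept)  # 12.5: the extrapolated "end" of the field
print(remaining)    # 245.0: the implied remaining resource
```

The moment the decline flattens, as it did at Forties for roughly a decade, both the intercept and the “remaining resource” calculated from it become meaningless.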
More Peak Oil Fallacies
A certain amount of circular, reality-defying logic was also employed in peak-oil theory. Aside from the bizarre suggestion that only geology affected supply, not politics or economics, the insistence that estimates of field sizes did not change and that technology could not increase the recoverable portion of oil was nonsensical from the beginning. Recovery rates have been growing gradually over time, and numerous new methods and inventions have greatly increased the amount of oil that can be extracted.
But for the creaming-curve method to work, this could not be so, and in response peak-oil advocates like Jean Laherrère claimed that overall field-size increases occurred only in the United States, owing to its industry’s reliance on a more restrictive and conservative definition of reserves. Yet various other sources all noted field-size increases in other international settings. And when asked about new technologies, peak-oil theorists claimed that they only increased production rates, not recovery. Again, all the evidence is to the contrary.
Lastly, in a move that should puzzle the typical high school math student, Princeton geologist Kenneth S. Deffeyes developed the “Hubbert Linearization” method. This involves graphing annual production divided by cumulative production on the y-axis against cumulative wells, production, or time on the x-axis.
Naturally, the result is a declining curve because the y-axis denominator, cumulative production, is growing over time. After a century, you get a curve that looks like it’s heading towards zero, an inevitable result that Deffeyes used to predict world oil production would peak in November 2005.
Obviously, the very structure of this “equation” means that any data series will yield the same results, whether it’s global oil production, U.S. GDP, or sales of Hostess Twinkies. All that is being demonstrated is that the annual level becomes small compared to the total historical production as time passes.
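The Twinkies point is easy to verify. In the sketch below, a perfectly flat annual series, with no peak and no decline anywhere in it, still produces the “declining toward zero” plot that the method treats as evidence of exhaustion.

```python
# Flat, never-declining annual "sales": no peak exists, yet the ratio of
# annual to cumulative output (the linearization's y-axis) falls anyway,
# simply because the denominator grows every year.
annual = [100.0] * 100
ratios, cum = [], 0.0
for p in annual:
    cum += p
    ratios.append(p / cum)  # annual / cumulative

print(ratios[0])   # 1.0 in year one
print(ratios[-1])  # 0.01 a century later: "approaching zero" regardless
```

The curve heads toward zero for any positive series whatsoever; it says nothing about whether production is about to peak.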
Numerous articles about neo-Malthusian theorists, including Paul Ehrlich’s The Population Bomb and the Club of Rome’s The Limits to Growth, point to the failure of these predictions without clearly explaining why they failed, leaving many to argue that the theory was sound and the error was only in the calculation of the date of peak production.
Such a rationalization is hardly new: Sixteenth century London astrologers predicted the date when the Thames would flood and destroy London; when it failed to do so, they pacified the angry crowd by assuring them that “…by an error (a very slight one) of a little figure, they had fixed the date of this awful inundation a whole century too early. The stars were right after all, and they, erring mortals, were wrong.” 
In other words, the model was correct; a bad piece of data was to blame.
The same is true with neo-Malthusians. Colin Campbell first predicted that world oil production would peak in 1989, a date he repeatedly delayed without admitting to error. Ehrlich has never admitted that the conceptual model underlying his prediction of looming mass starvation was simply wrong, and The Limits to Growth authors, revisiting their 1972 work thirty years later, insisted that the great increase in oil supply simply meant the world was that much closer to the end.
A realist academic would say: No, your model was underspecified and thus yielded incorrect conclusions.
Decision-makers are rarely able to analyze claims and research in any depth, owing to constraints on their time. But it is truly bizarre that such superficial work, based on simplistic and obviously flawed theories and math, should rate such lengthy attention. It behooves us all to pay more attention to the details.