Friday, February 29, 2008

Malcolm Greaves’ THAI toe-to-heel air injection heats up

It is rare to see a professor come out and cheerlead a new technology, even if he is the father of the process. In this case it seems a well-earned right. THAI, or toe-to-heel air injection, is proving to be far better than anticipated, making the projected improvements much more probable.

This article was published a couple of months ago, and this past week we have also seen the THAI story told on the national news. I have been keeping a watching brief on the pilot test from before it got funded. This is actually a remarkable discovery, and its real success in the field is wonderful. Personal experience kept me cautious until I saw the early production results. We can all now throw that caution to the wind.

I believe that this will also access huge reserves of conventional oil that are now classified as dead oil. Of course, a lot of those fields will have to be dried out. However, any sandstone-based reservoir that is reasonably thick should be exploitable. You will notice in particular that they are quoting an amazing 70% to 80% recovery and in-well catalytic upgrading to as high as 26 API. Of course, they are now getting a bit over the top. However, I can certainly anticipate resources for which these levels may be possible.

In fact, I suspect that a lot of oil companies will be rather sorry that they ever used water floods at all. For the layman, natural flow will deliver up to perhaps 40% of the oil in place; the truth is that this is more like 30 to 35%. Water floods will sweep out maybe another 15%. This means that, generally, half of the original oil remains behind. And it is rather unlikely that a wet formation can be made to work with this method, although I reserve the right to be surprised.
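To see how those fractions stack up, here is a back-of-envelope sketch in Python; the field size is hypothetical and the percentages are the ones quoted above:

```python
# Back-of-envelope recovery arithmetic using the fractions quoted above.
oil_in_place = 100_000_000          # barrels; a hypothetical field

primary = 0.32 * oil_in_place       # natural flow: call it ~32%
waterflood = 0.15 * oil_in_place    # water flood sweeps out another ~15%
remaining = oil_in_place - primary - waterflood

print(f"left behind: {remaining / oil_in_place:.0%}")  # 53% -- about half
```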

The power of this method comes from the fact that the air is under pressure, permitting the development of a 600-degree burn zone. This is hot enough to encourage reforming of the oil, to say nothing of its liquefaction. On top of that, the combustion products are CO2, CO and steam (H2O), as well as entrained nitrogen. All these gases except the H2O dissolve into the oil itself, helping to reduce its viscosity. These gases also dissolve into the water, helping to break the oil free of the sand itself.

The only escape for all this heat is with the production fluid itself or through a very slow leakage into the surrounding non-porous sediments. The same is true of the production gases, which will tend to penetrate the formation ahead of the burn front, speeding the process up.

I expect that it will be possible to set up a 100-well burn front within the formation that will obviate any need for pillars or untreated zones between burns. It also seems that once the burn front gets a fair distance down the formation, it will be good practice to place additional injection wells at the burn front and seal the older wells.

In fact, there is little reason not to attempt to treat one hundred percent of the formation with a closely managed burn front that is moved slowly along, with additional production and injection wells placed as needed.

I am particularly encouraged by the experiments now starting that use 3D seismic mapping to follow the location of the burn front. If this works, then it will be possible to almost micromanage the system.

The real payoff with this system is that it uses drilling industry resources, which are sufficient and fully in place in Alberta, to swiftly add a million new barrels of production every year. Each well pair will pump out 1,000 barrels per day. With air injection and a full sand-handling system, such wells are not cheap, but they are not unusual, and they will certainly meet the industry standard of a three-year payback with absolutely no discovery risk.
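Reading "a million new barrels every year" as a million barrels per day of capacity added annually, the drilling arithmetic is simple; the netback figure below is my own placeholder assumption, not from the article:

```python
# Well pairs needed to add 1,000,000 bbl/day of capacity each year,
# at the quoted 1,000 bbl/day per THAI well pair.
pairs_per_year = 1_000_000 // 1_000
print(pairs_per_year)                  # 1,000 new well pairs per year

# Gross margin available for the three-year payback, per pair,
# at an assumed $20/bbl netback (a placeholder, not a quoted figure):
margin_3yr = 1_000 * 365 * 3 * 20
print(f"${margin_3yr:,}")              # $21,900,000 per well pair
```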

Since this all works best on oil that likely lies below the mining zone, we have likely added all of Canada’s oil sands to the world’s oil inventory. I believe that this will easily exceed one trillion barrels, though up to now measurement has never been much of a priority. I also remember once seeing a map in which the tar sands were shown extending far north along the Mackenzie Valley. I think everyone just gave up once they found a trillion. I suspect accurate measurement just became important again.

Toe-to-Heel Air Injection (THAI™) System

Published Thu, 2007-11-29 16:08 Energy

A new method developed in Britain over the past 17 years for extracting oil is now at the forefront of plans to exploit a massive heavy oilfield in Canada.

Duvernay Petroleum is to use the revolutionary Toe-to-Heel Air Injection (THAI™) system developed at the University of Bath at its site at Peace River in Alberta, Canada.

Unlike conventional light oil, heavy oil is very viscous, like syrup, or even solid in its natural state underground, making it very difficult to extract. But heavy oil reserves that could keep the planet’s oil-dependent economy going for a hundred years lie beneath the surface in many countries, especially in Canada.

Although heavy oil extraction has steadily increased over the last ten years, the processes used are very intensive consumers of energy, especially natural gas, and of water. But the THAI™ system is more efficient, and this, and the increasing cost of conventional light oil, could lead to the widespread exploitation of heavy oil.

“The world needs to switch to cleaner ways of using energy such as fuel cells,” said Professor Malcolm Greaves, who developed the THAI™ process.

“But we are decades away from creating a full-blown hydrogen economy, and until then we need oil and gas to run our economies.

“Conventional light oil such as that in the North Sea or Saudi Arabia is running out and getting more expensive to extract.

“That’s why the pressure is on to find an efficient way of extracting heavy oil.”

THAI™ uses a system where air is injected into the oil deposit down a vertical well and is ignited. The heat generated in the reservoir reduces the viscosity of the heavy oil, allowing it to drain into a second, horizontal well from where it rises to the surface.

THAI™ is very efficient, recovering about 70 to 80 per cent of the oil, compared to only 10 to 40 per cent using other technologies.

Duvernay Petroleum’s heavy oil field in Peace River contains 100 million barrels and this will be a first test of THAI™ on heavy oil, for which THAI™ was originally developed. Duvernay Petroleum has signed a contract with the Canadian firm Petrobank, which owns THAI™, to use the process.

The THAI™ process was first used by Petrobank at its Christina Lake site in the Athabasca Oil Sands, Canada, in June 2006 in a pilot operation which is currently producing 3,000 barrels of oil a day. This was on deposits of bitumen - similar to the surface coating of roads - rather than heavy oil.

Petrobank is applying for permission to expand this to 10,000 barrels a day though there is a potential for this to rise to 100,000.

The 50,000 acre site owned by Petrobank contains an estimated 2.6 billion barrels of bitumen. The Athabasca Oil Sands region is the single largest petroleum deposit on earth, bigger than that of Saudi Arabia.

Professor Greaves, of the University’s Department of Chemical Engineering, said: “When the Canadian engineers at the Christina Lake site turned on the new system, in three separate sections, it worked amazingly well and oil is being produced at twice the amount that they thought could be extracted.

“It’s been quite a struggle to get the invention from an idea to a prototype and into use, over the last 17 years. For most of the time people weren’t very interested because heavy oil was so much more difficult and expensive to produce than conventional light oil.

“But with light oil now hitting around 100 dollars a barrel, it’s economic to think of using heavy oil, especially since THAI™ can produce oil for less than 10 dollars a barrel.

“We’ve seen this project go from something that many people said would not work into something we can have confidence in, all in the space of the last 18 months.”

Professor Greaves, who was previously Assistant Professor at the University of Saskatchewan in Canada, and who also worked with Shell and ICI in the UK, is looking at making THAI™ even more efficient using a catalyst add-on process called CAPRI™.

This process was also developed by Professor Greaves’ team at Bath and is intended to turn heavy oil into light while still in the reservoir underground. The CAPRI™ research has recently been awarded funding of £800,000 from the Engineering and Physical Sciences Research Council, including £60,000 from Petrobank. The project collaborators are Dr Sean Rigby, from the Department of Chemical Engineering at Bath, and Dr Joe Wood of the University of Birmingham.

Source: University of Bath

Thursday, February 28, 2008

Noah's Flood circa 6000 BCE

One outcome of yesterday’s posting on the Laurentide collapse is that it becomes feasible to address the historicity of the legend of Noah. Eight thousand years, or circa 6000 BCE, is short enough to allow successful oral transmission of major events in human history. These types of oral transmission are extremely valuable as a conforming data point, often leading to much harder data. Many such tales have driven real discovery. Who would even look for massive flood evidence in the first place without inspiration from Noah? That such evidence turned out to be confounding more often than not does not take away from the hard evidence gathered.

The abrupt release of water and the resultant rise of the sea level was very quick. The rise is estimated to be 45 feet, though that is a bit too precise; it could just as easily have been twice as large. It was also followed by a slower and steadier rise as the balance of the ice melted out. The main event certainly played out in weeks, however.

It certainly explains one anomalous feature of the legend rather nicely. Noah built a boat; it floated for forty days and then was grounded. A rising sea that was slowly driving inland over a plain or delta would most likely give this experience. This is not true of a normal river flood.

Maybe Noah was the local mathematician-scholar who got word that the great ice dam was ready to break, did the basic calculations, and acted on them.

It also explains the global distribution of this flood motif: everybody who had access to a coast was affected. This is another important point. The majority of the global population has always been concentrated on the coast, often exploiting the sea itself. Remember the Pacific Northwest, where huge Stone Age populations built up based on seafood. The inland was difficult and only modestly used, and it could never support any real population, even today. Nice to look at, though.

We also know that prior to the climate stabilizing, the temperate interior was suitable only for small hunter-gatherer bands with an attendant low population. They would have penetrated every valley on Earth, but they lived a circumscribed lifeway. At best, they had begun cattle husbandry, which really lends itself to that type of small-band society.

Another direct result of a rapid rise of the sea would be the actual destruction of a large portion of the coastal population, if only through the loss of their livelihoods. Shell beds would be lost for a generation, and these were a major staple since the fisheries were very seasonal. This means that the repopulation of the coasts would have come from the inland and highland populations, conforming to the archeological and other evidence.

We could now run amok and make all sorts of other suggestions. However, a real global coastal flood confirms the one salient global fact: there was a global flood. There were also eyewitnesses who were seriously impressed. We now know when it happened. The rest of the legend can be safely dismissed as embroidery. Great story, though.

Some of this embroidery could also be material folded in from unrelated events, such as forty days of rain. That is not impossible by itself, perhaps in the aftermath of a volcano. The tale of the dove is so clearly part and parcel of the religious symbolic tool kit as to need no further comment.

During the four to six thousand years that it took to melt the northern Ice Cap, the sea was persistently rising by, on average, about half an inch per year. This is barely faster than geological motion and totally unnoticeable. Thus the only event that could ever have been noticed was the release of Lake Agassiz into the Atlantic Ocean. It is right to call it Noah’s Flood.
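That half-inch figure is easy to check against the roughly 300-foot total rise mentioned in the next post; a one-line sanity check, taking the melt span as about 6,000 years:

```python
# ~300 feet of post-glacial sea level rise spread over ~6,000 years
print(300 * 12 / 6_000, "inches per year")   # 0.6 -- about half an inch
```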

Wednesday, February 27, 2008

Laurentide Collapse

I am posting this news story because it establishes a couple of dates rather more closely than before. Readers can go back to my postings of July last year to understand my ideas on the Pleistocene nonconformity. This has also been published in Viewzone (Google it).

The reduction of the Laurentide ice sheet was the final act in the total collapse of the Northern Ice Cap. The first act was the swift collapse of the Scandinavian sheet. What this makes very clear is that the final collapse was the escape of a huge amount of pent-up lake water a mere 8,200 years ago.

The drop in global temperatures may have lasted a couple of hundred years, but the reduction of the balance of the ice would then have been steady and uneventful. This means that the global climate finally stabilized only 7,800 years ago. This all coincides with the rise of agricultural man in the Northern Hemisphere. Mind you, conditions had been improving during the previous 5,000 years as the climate regime of the Holocene established itself.

This also means that the sea would have swiftly risen a total of 45 feet, almost certainly driving coastal populations out of the fertile deltas in particular. Keep in mind that the loss of the Northern Ice Cap over the preceding 5,000 years had already driven all populations off the continental shelf itself. We know that the total rise in sea levels was around 300 feet.

I find this late date for the collapse of the Laurentide intriguing. The climate remained warm through to about 3,000 years ago, ending with the demise of the Bronze Age.

In any event, the apparent settling of the temperate zones appears to have happened hot on the heels of any climatic improvement. It is as if we were ready and waiting to go. Cattle culture in particular was established in England as early as 9,000 years ago. Obviously the Gulf Stream was hard at work.

I cannot emphasize strongly enough how utterly recent the rise of man in the temperate climes is. In the meantime, the possible antiquity of man in the tropics has not even been truly investigated, if it can be. Did some form of agricultural man arise, say, thirty thousand years ago? The sea has covered the traces of maritime man in those same waters.

How it happened: The catastrophic flood that cooled the Earth

PARIS (AFP) — Canadian geologists say they can shed light on how a vast lake, trapped under the ice sheet that once smothered much of North America, drained into the sea, an event that cooled Earth's climate for hundreds of years.

During the last ice age, the Laurentide Ice Sheet once covered most of Canada and parts of the northern United States with a frozen crust that in some places was three kilometres (two miles) thick.

As the temperature gradually rose some 10,000 years ago, the ice receded, gouging out the hollows that would be called the Great Lakes.

Beneath the ice's thinning surface, an extraordinary mass of water built up -- the glacial lake Agassiz-Ojibway, a body so vast that it covered parts of Manitoba, Saskatchewan, North Dakota, Ontario and Minnesota.

And then, around 8,200 years ago, Agassiz-Ojibway massively drained, sending a flow of water into the Hudson Strait and into the Labrador Sea that was 15 times greater than the present discharge of the Amazon River.

By some estimates, sea levels rose 14 metres (45 feet) as a result.

How the great flood was unleashed has been a matter of debate.

Some experts suggest an ice dam was smashed down, or the gushing water spewed out over the top of the icy lid.

Quebec researchers Patrick Lajeunesse and Guillaume Saint-Onge believe, though, that the outburst happened under the ice sheet, rather than above it or through it.

In a study appearing on Sunday in the journal Nature Geoscience, the pair describe how they criss-crossed Hudson Bay on a research vessel, using sonar to scan more than 10,500 kilometres (6,000 miles) to get a picture of the bay floor.

In the south of the bay, they found lines of deep waves in the sandy bed, stretching more than 900 kilometres (562 miles) in length and some 1.7 metres (5.5 feet) deep.

These are signs that the bay's floor, protected by the mighty lid of ice, was swept by a mighty current many years ago but has been still ever since, they say.

In the west of the bay, they found curious marks in the shape of parabolas twisting around to the northeast.

The arcs were chiselled as much as three metres (10 feet) into the sea bed and found at depths of between 80 and 205 metres (260 and 666 feet).

The duo believe that this part of the bay had icebergs that were swept by the massive current.

The bergs' jagged tips were trapped in the sea bed and acted like a pivot. As the icebergs swung around, other protruding tips ripped arc-like tracks on the bay floor.

Also presented as evidence are deep submarine channels and deposits of red sediment that stretch from land west of Hudson Bay right across the northwestern floor of the bay itself -- both point to a current that swept all before it.

"Laurentide ice was lifted buoyantly, enabling the flood to traverse southern Hudson Bay under the ice sheet," the study suggests.

Previous work suggests the flood was so huge that it affected climate around the world.

The influx of freshwater into the North Atlantic reduced ocean salinity so much that this braked the transport of heat flowing from the tropics to temperate regions.

Temperatures dropped by more than three degrees Celsius (5.4 degrees Fahrenheit) in Western Europe for 200-400 years -- a mini-Ice Age in itself.

Tuesday, February 26, 2008

Lorne Gunter on the Cold Winter

This was posted today by Lorne Gunter in the National Post. The real shock to me is the admission that our climatic models were deliberately corrected for an X factor, and that this factor has essentially acted as a smokescreen for the CO2 hypothesis. Instead of saying that we have a previously unexplained climatic anomaly that we do not understand, the issue was allowed to get completely out of hand.

Now we have located natural physical phenomena that conform to the data and wipe out the need for any linkage between CO2 and global warming.

Otherwise, the forty-year wind and ocean cycle fits the data very nicely, and the sunspot delay is still rather minor; the new cycle will hopefully kick in shortly.

Snow cover over North America and much of Siberia, Mongolia and China is greater than at any time since 1966.

The U.S. National Climatic Data Center (NCDC) reported that many American cities and towns suffered record cold temperatures in January and early February. According to the NCDC, the average temperature in January "was -0.3 F cooler than the 1901-2000 (20th century) average."

How spoiled are we? Roughly half of the entries that go into any average must fall below it. This is a long-overdue cold winter.

China is surviving its most brutal winter in a century. Temperatures in the normally balmy south were so low for so long that some middle-sized cities went days and even weeks without electricity because once power lines had toppled it was too cold or too icy to repair them.

There have been so many snow and ice storms in Ontario and Quebec in the past two months that the real estate market has felt the pinch as home buyers have stayed home rather than venturing out looking for new houses.

In just the first two weeks of February, Toronto received 70 cm of snow, smashing the record of 66.6 cm for the entire month set back in the pre-SUV, pre-Kyoto, pre-carbon footprint days of 1950.

And remember the Arctic sea ice? The ice we were told so hysterically last fall had melted to its "lowest levels on record"? Never mind that those records only date back as far as 1972, and that there is anthropological and geological evidence of much greater melts in the past.

The fact that either source has to be called on is a pretty good indication that the last such melt was centuries ago, perhaps coincident with the Little Ice Age.

The ice is back - and it will also be back every winter so long as the earth maintains its tilt.

Gilles Langis, a senior forecaster with the Canadian Ice Service in Ottawa, says the Arctic winter has been so severe the ice has not only recovered, it is actually 10 to 20 cm thicker in many places than at this time last year.

This is not a problem per se unless we have a repeat next year. The added thickness conforms to a normal winter freeze up. 2007 was exceptional in every way. A string of normal winters will certainly rebuild the perennial sea ice.

OK, so one winter does not a climate make. It would be premature to claim an Ice Age is looming just because we have had one of our most brutal winters in decades.

But if environmentalists and environment reporters can run around shrieking about the manmade destruction of the natural order every time a robin shows up on Georgian Bay two weeks early, then it is at least fair game to use this winter's weather stories to wonder whether the alarmists are being a tad premature.

And it's not just anecdotal evidence that is piling up against the climate-change dogma.

According to Robert Toggweiler of the Geophysical Fluid Dynamics Laboratory at Princeton University and Joellen Russell, assistant professor of biogeochemical dynamics at the University of Arizona -- two prominent climate modellers -- the computer models that show polar ice-melt cooling the oceans, stopping the circulation of warm equatorial water to northern latitudes and triggering another Ice Age (a la the movie The Day After Tomorrow) are all wrong.

"We missed what was right in front of our eyes," says Prof. Russell. It's not ice melt but rather wind circulation that drives ocean currents northward from the tropics. Climate models until now have not properly accounted for the wind's effects on ocean circulation, so researchers have compensated by over-emphasizing the role of manmade warming on polar ice melt.

But when Profs. Toggweiler and Russell rejigged their model to include the 40-year cycle of winds away from the equator (then back towards it again), the role of ocean currents bringing warm southern waters to the north was obvious in the current Arctic warming.

How in hell a climate model could be generated that failed to recognize the linkage between wind and water movement utterly escapes me. Perhaps by the same modeling experts who swore by a linear model for sea ice loss that happily projected an end game one hundred years from now. We are now discovering what a piece of modeling rubbish these folks have been relying on. So far we have three fudge factors having a non-linear handshake. I sure hope they do not blame the mathematicians they never hired.

Last month, Oleg Sorokhtin, a fellow of the Russian Academy of Natural Sciences, shrugged off manmade climate change as "a drop in the bucket." Showing that solar activity has entered an inactive phase, Prof. Sorokhtin advised people to "stock up on fur coats."

He is not alone. Kenneth Tapping of our own National Research Council, who oversees a giant radio telescope focused on the sun, is convinced we are in for a long period of severely cold weather if sunspot activity does not pick up soon.

The last time the sun was this inactive, Earth suffered the Little Ice Age that lasted about five centuries and ended in 1850. Crops failed through killer frosts and drought. Famine, plague and war were widespread. Harbours froze, so did rivers, and trade ceased.

How the devil can we say that? Our knowledge of sunspot inactivity was utterly non-existent for most of the time period described. We have attempted to piece a bit of it together using a proxy or two, but our knowledge is no better than conjecture. Current low activity is cyclical, and we are now supposedly slightly late in the renewal of activity. This is not a phenomenon known to operate on a precise time clock.

It's way too early to claim the same is about to happen again, but then it's way too early for the hysteria of the global warmers, too.

The real question that needs to be answered is where we are in this forty-year wind cycle that looks a lot like the forty-year hurricane cycle. And is the changeover precipitous? It is not too early to know this.

Monday, February 25, 2008

Pending Climate Change

Without question, the summer of 2007 in the high Arctic shocked the scientific community. Glibly telling each other that the slow, visible erosion of the sea ice had decades to play out, they were presented with a fait accompli of massive loss in just one season. It also came without any particular warning, because the surface climate was much as had been previously experienced.

Now that they have had a chance to re-cook their numbers, they are all realizing that sea ice will possibly be gone from the Arctic in summer any time toward the end of the next five years. This means during their watch.

Of course, there are a few commentators who grasp at the fact that current winter ice has once again reached its normal fullest extent. As my readers know, this is not meaningful. It is all about the major loss of perennial ice throughout the Arctic, and no single season by itself can seriously change that.

Now that we are seeing the onset of the end game, I want to remind readers that this is all about a very slight increase in the available heat in the Northern Hemisphere over the past forty or so years. We are talking about as little as half a degree. However, that half degree is sufficient to clear the Arctic Ocean of sea ice in the summer.

Once we are past this ice-clearing period, after perhaps 2012, we can expect a climate regime similar to the medieval warm period. I am expecting Northern Europe to warm by several degrees, with positive effects on the growing season. We may even see the Greenland permafrost disappear and perhaps the return of dairy farming there.

In other words, there are a lot of microclimates that will go through a protracted adjustment. Most of these adjustments will actually be beneficial.

Of course the CO2 enthusiasts have a lot to say, but it is mostly about how ending CO2 production will actually matter. I remain extremely skeptical on that one, and am more concerned that this particular idea will lead to truly wrongheaded public policies that we will wear for decades. It also promises to consume resources badly.

There is plenty of evidence at hand telling us that we are dealing with a very natural cycle that has not been disturbed for a long time. In fact, a very good question to ask is what could lower global temperatures a couple of degrees, as happened to trigger the little ice age. The cooling cause is vastly more important than the warming.

I posit that, left undisturbed, the climate of the Northern Hemisphere will remain at least as warm as at present. We have the precedent of long warm periods in both the medieval warm period, lasting a couple of hundred years, and the Bronze Age, lasting at least a thousand years. These periods were remarkably stable from what we can determine. This was certainly because the Arctic did clear every summer, and even the coldest single winter had little impact.

Yet a sudden drop in global temperatures of at least a couple of degrees, whose effect lasted for, say, ten years, would reestablish the perennial sea ice, which would then take hundreds of years to overcome.

I have also posited several causation ideas for this cooling. But I must admit that the one type of event really able to do the trick so abruptly is a prolonged major volcanic event. We remember the impact of Krakatau in 1883 on the global climate.

I do not think that the time frames properly match up, or maybe we are just wrong; but if the Thera blast ended the Northern European Bronze Age, it did it by dropping the temperature there by several degrees, precipitating a large movement of peoples into the Mediterranean, perhaps adding to related clans already in place. Recall that the geography of the Iliad maps the Baltic, not the Aegean, and that these sea peoples immediately populated coastal areas throughout the Mediterranean.

That merely leaves one remaining question. What went bang and ended the medieval warm period? I should also mention that it need not have been a single event.

I would like a global mapping of volcanic events for the past 10,000 years against the tree ring record which is also a fair proxy for climate. A lot of information still needs to be collected there.

Friday, February 22, 2008

Hunter-Prey on the Oceans

Global public enthusiasm for the cause of global warming has certainly kept the pot boiling. It has also shown that it is possible to get a global consensus in support of public policy that represents a common benefit. For instance, my own province has seen fit to impose a fairly well-thought-out carbon tax that appears to be well accepted. This was possible only because of the overwhelming consensus in place.

There is one other major ongoing global crisis that we dearly need to address on an emergency basis. I am speaking of the unstoppable fishing pressure being put on our global fish stocks. I have already shown my readers how the application of a simple hunter-prey model to the issue of Arctic sea ice very nicely predicts the catastrophic collapse we are now witnessing.

The same effect has wiped out fishery after fishery and now threatens the tuna fishery. And our mathematically illiterate participants will babble platitudes to the last guppy to protect their right to the fishing status quo. Yet managed global practice could maximize global production on a sustainable basis.

It is a great tragedy that the Grand Banks cod fishery is extinct. It could have been averted, but every player fought tooth and nail to bring in that last boatload. Now they can wait a thousand years in the hope that a recovery is possible. There are still idiots who want to sail out at every rumor of a slight increase in stocks.

The principal lesson that can be drawn from a hunter-prey model is that the take from hunting must absolutely be brought down below the level of theoretical maintenance. In fact, if the stock is not at an optimal level in the first place, then the take must be much lower still.

An optimal level is one that does not overload the carrying capacity of the environment itself. This flaw was demonstrated by the caribou herds of Northern Quebec in particular. They would eventually reach the point of eating themselves out of house and home, resulting in their decimation.

Simple herd management could easily prevent that while producing a large annual harvest of surplus males.
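A minimal sketch of that harvesting lesson, assuming simple logistic stock growth with a constant annual take; the growth rate and carrying capacity here are illustrative numbers, not fisheries data:

```python
# Logistic stock under a constant harvest: N' = r*N*(1 - N/K) - take.
# The largest sustainable surplus is r*K/4; any take above it collapses
# the stock no matter how healthy it starts out.
def simulate(take, r=0.3, K=1.0, N=0.5, years=60):
    for _ in range(years):
        N = max(0.0, N + r * N * (1 - N / K) - take)
        if N == 0.0:
            return "collapsed"
    return f"stable at {N:.2f} of carrying capacity"

print(simulate(take=0.05))   # below r*K/4 = 0.075: settles near 0.79
print(simulate(take=0.09))   # above the surplus: collapses within decades
```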

The best thing that ever happened to the salmon fishery was the onset of fish farming. A lot of the wild fishery has been curtailed in the process, while market demand is now being satisfied. Something like that is working its way into the tuna industry. I personally would like to see all wild fisheries ended where farm replacement is possible. This would allow a recovery process. Of course, the farmed fish business is also evolving into an efficient and ecologically neutral system, though problems have of course appeared.

The only proper harvesting strategy is one that sees full annual recovery. Any other approach will start a depletion cycle that is driven by the insistent demand of the harvesters to maintain a constant take.

That constant take is exactly what becomes impossible once you even briefly allow the stock to drop slightly below the twelve-month recovery point.

With fish there are plenty of variations of harvesting strategy that affect final outcomes and will need to be studied over time. My favorite is the establishment of sanctuary zones that naturally push the excess from a healthy core stock out into the fishing zone. This can be applied to a range of species. An example would be the broad reestablishment of the sea otter on the west coast (and perhaps the east coast, where it was never native), allowing the rebuilding of huge kelp forests that act as fish nurseries and havens.

Remember that stock recovery works on the basis of compound interest in its natural state, until the stock outruns its resource base. This is often described in terms of the time required to double the size of the stock. If that time frame is, say, ten years, then reducing the stock by even a small amount is trouble.
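To make the compound-interest point concrete (a sketch using the ten-year doubling time from the example above):

```python
import math

# A ten-year doubling time implies a ~7% annual growth rate, so the
# sustainable take is only about 7% of the standing stock per year.
r = math.log(2) / 10
print(f"{r:.1%} per year")    # 6.9%

# Recovery is just as slow: after a 50% collapse, even a total ban
# needs one full doubling time -- ten years -- to restore the stock.
```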

This also means that recovery from a collapse becomes extremely problematic. There used to be millions of bison on the Great Plains until we hunted them to near extinction. It has taken the better part of fifty years of effort to restore the herd to around 500,000. We could now easily use a herd of 10,000,000, since we have discovered that they are a better deal out on the open prairie. This will still take a lot of time, but not nearly as much as the first 500,000. At least the full resources of human ingenuity and capital are working on it.

We have yet to figure out how to do this for the poor cod. Yet I have witnessed the work done in the salmon farm business. I knew the guys who pioneered the first facilities, and I know how unpromising those beginnings were. Today the industry is financially stable and highly productive. We already have salmon available for the price of chicken, which was the industry's initial promise. It could easily expand tenfold if the market were able to support it.

In fact, the farmed salmon industry surely surpasses the greatest years of the old wild fisheries. And the recovery of that old wild fishery starts with restoration of the spawning rivers. Sewage and fisheries do not coexist.

It will require transnational organizations tasked with ocean management and given the authority to enforce agreements similar to the Law of the Sea. A good start could be combining satellite ship identification with aggressive ship arrest protocols. Non-compliance would be sufficient to define piracy.

Thursday, February 21, 2008

Biological Oxygen

We take the various components of our biosphere very much for granted. We are creatures of air and water and carbon and damn little else. Our cycle of life uses sunlight to split carbon dioxide into biologically usable carbon and oxygen. Put that way it all seems so simple that we delude ourselves into thinking we understand it all.

Each core ingredient has its own natural sink, the most obvious being the ocean. If we dug up every poison on earth and threw it into the ocean, I suspect that the ocean could handle it far quicker than we could ever imagine. My only reservation comes when some genius creates a new molecule that is both toxic and biologically inert. I prefer not to have to wait for geological processes to cure the problem.

The other big sink is the carbon sink, and no, that is not the biosphere. I mean that stack of rock, tens of miles thick, that we call the crust. It is loaded with carbon throughout, and at its deepest the carbon is in elemental form. In the sediments it is held as both coal and hydrocarbons where conditions are right. Far more is held in the form of carbonates, in which silicas and the like have combined with good old CO2 to form materials able to withstand the conditions of the deep crust.

The carbon in the biosphere pales in comparison to this carbon sink. This should also give you a glimpse of just how far removed the crustal rock is from a primordial rock environment like that of Mars or Venus. Water and carbon are an incredibly powerful mortar and pestle when animated by life.

The last great sink is of course the atmosphere. It is primarily inert nitrogen plus the oxygen that is critical to the living biosphere, which itself produces the oxygen. The drivers of these components are still poorly understood as far as I have been able to determine, or at least poorly explained.

In practice, the biomass pumps surplus oxygen into the atmosphere, successfully supporting a seventeen-plus percent level. I suspect this figure reflects areas of high human population in which some level of oxygen depletion occurs. A more likely figure of 21% should prevail outside of and upwind of human activity. Of course, burning 85 million barrels of oil every day must cause depletion somewhere.

The measure of our lack of knowledge hits the wall when we discuss the oxygen content of biologically active water. The literature maintains that all oxygen is supplied as dissolved oxygen from the atmosphere, yet the maximum oxygen that can be carried in water is about seven parts per million. It stretches credulity to think that this can do the job. This is one of those conundrums that was postponed for future resolution a long time ago, and that future never came.

In the final analysis the literature and text books merely became quiet on this subject. How many other problems have been so treated? Remember, had the problem been explained by simple analysis, you would have been taught it in high school.

I have come to the conclusion that biological oxygen is instead carried by an H3O3 ring molecule, which can reach a stable one to two percent concentration level. This flattish molecule naturally forms needle crystals and is stable over all relevant temperature ranges. Its molecular weight coincides with the unexplained background noise level of a spectroscope and has thus eluded direct discovery. The natural angles of the ring are also within the range of error for water.
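For what it is worth, here is the arithmetic behind that conjecture, taking the proposed H3O3 ring entirely at face value; none of this is established chemistry:

```python
# Molar mass of the conjectured H3O3 ring and the oxygen a 1% solution
# would carry, versus the ~7 ppm of ordinary dissolved O2 cited above.
m_H, m_O = 1.008, 15.999
h3o3 = 3 * m_H + 3 * m_O              # ~51 g/mol
o_fraction = 3 * m_O / h3o3           # ~94% oxygen by mass

ppm_at_1pct = 0.01 * o_fraction * 1e6
print(f"{h3o3:.1f} g/mol; ~{ppm_at_1pct:,.0f} ppm oxygen vs ~7 ppm dissolved O2")
```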

I have had access to samples of this biologically produced solution and have used it to restore deficient blood oxygen levels and to sharply reduce burn damage. A glass of it allowed me to increase my breath-holding capacity by a good fifty percent, as it has for many others. Of course, the folks who produced the sample are clueless as to its likely genesis.

Thus, if I am correct, biological oxygen is produced in situ by specialized bacteria as a waste product. Some of this will also escape into the atmosphere. I do not know if such bacteria need to use chlorophyll, but I rather doubt it. After all, we know that chlorophyll was a late arrival to the primordial sea. Oxygen production and transport were necessary for life to utilize the available feedstocks. It is just that little of it was needed in the atmosphere.

It may even be possible to breathe this stuff if concentrated.

In any event, it is a good bet that the real oxygen sink is in the water rather than the atmosphere and that they are in some form of equilibrium with each other.

These simple sinks set the parameters for our biosphere, and our worst excesses are likely never to do more than inconvenience it - and ourselves rather more.

Wednesday, February 20, 2008

Bacteria as Carbon King

I am picking up on a comment made earlier this week that the carbon content of bacteria and other microorganisms is similar in magnitude to the plant content. More simply put, woodland holding thirty tons of plant carbon will hold an additional thirty tons of bacterial carbon. In one way this explains a lot.

The collapse of soils after the advent of the plow was never quite explained by the erosion model. And the argument that soil growth itself was imperceptible, taking centuries to achieve, seemed a little extreme.

It is much more satisfactory to have soil act as an agency of the plant life, quickly moving to fill an available niche that, once filled, stabilizes. Bacteria would be a large part of such a process.

It still takes decades to fully establish a new soil under this protocol, and the process is still very hard to discern. What is different is that we are no longer calling only on the medium of plant destruction to form our soils. Aeration caused by plant root systems and the accumulation of the remains of single-celled organisms contribute as much.

I think that we all sort of knew this but had to be reminded of the actual magnitudes involved.

This raises yet another subject. Single-celled organisms able to utilize methane and sulphides are found deep within the rocks of the earth. Even rocks formed by the cooling of molten magma develop extensive fracturing quite capable of holding bacteria. Sandstones have porosities approaching thirty percent. In any event, most have no problem holding oil and gas.

This represents a bacteria-holding capacity that hugely exceeds the thin coating of soil on top of miles of rock. I do not think anyone has ever tried to put a number on any of this. I only note that the actual amount of biological carbon in existence may in fact be hugely underestimated.
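As a first stab at putting a number on it (every input below is a rough assumption for scale, not a measurement), compare the pore space in a few kilometres of habitable rock against a metre of topsoil:

```python
# Pore volume per square metre of surface: rock column vs. topsoil.
habitable_depth = 3_000    # m of rock cool enough for bacteria (assumed)
rock_porosity = 0.05       # averaged over tight rock and porous sandstone
soil_depth = 1.0           # m of topsoil
soil_porosity = 0.5        # typical loose soil

rock_pores = habitable_depth * rock_porosity    # 150 m3 per m2
soil_pores = soil_depth * soil_porosity         # 0.5 m3 per m2
print(f"{rock_pores / soil_pores:.0f}x the habitat volume of the soil")
```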

The deep stuff may have little impact on surface conditions but I don’t think that we should ignore it.

This brings me back to the subject of terra preta. It is clear that the presence of finely divided elemental carbon in the soil contributes in some manner to the creation and maintenance of fertile soils. The same is true of zeolites. This suggests that the interaction between bacteria and elemental carbon in the soil is instrumental in the manufacture of high-quality soils and their preservation.

Recall that normal tropical soils are able to maintain only a semblance of fertility in the face of heavy rainfall. With nothing else changing, we have terra preta maintaining top fertility year after year with no help except the return of crop wastes. The question is: what is the mechanism that retains nutrients in this soil?

I have been quick to point to the super-acid nature of the carbon but have had little to add regarding the means. It would make sense for the bacteria to interact with such carbon to retain the nutrients. How this could happen is not known to me.

However, remembering that the biosphere is an ecology of single-celled organisms first is an important initial step toward avoiding a misleading focus on the more visible plants.

Tuesday, February 19, 2008

One hundred million pre-Columbian Indian population

I came across this item on terra preta that adds to our knowledge while describing its importance.

The first fact that jumps out is the estimate of the soil’s distribution and scope. The fertility of this soil means that we can deduce an associated population base of possibly as high as one hundred million. Certainly a magnitude of twenty to thirty million is very conservative. I base this conclusion on the fact that a family can operate a five-acre farm of corn and cassava, producing sufficient staple foodstuffs to support itself.
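The arithmetic behind that range, using the area estimate quoted later in this post (terra preta covering perhaps 10 percent of Amazonia, an area the size of France) and an assumed family size of five:

```python
# Population implied by five-acre family farms on the terra preta area.
area_acres = 136_000_000    # France is ~551,000 km2, about 136 million acres
farm_acres = 5              # one family farm, per the text
family_size = 5             # persons per family -- an assumption

ceiling = area_acres / farm_acres * family_size
print(f"{ceiling:,.0f} people if fully farmed")     # ~136 million
# Even at 20% utilization that is ~27 million -- the 'conservative' figure.
```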

This high population density, not unlike that of the rice fields of South Asia, was terribly vulnerable to contagious diseases, and a swift collapse would have brought the human wolves down on top of the survivors. Our own experience with the effect of such onslaughts, and more importantly their swift repetition, informs us of the reasons for the total disappearance of the indigenous populations.

The only survivors would have been those who took to the forest in small, separated bands and avoided contact. I think that there was some remnant survival, although I am at a loss to explain the lack of informants with access to the underlying knowledge.

The second fact is the idea of leaving a layer of the terra preta soil so that it might restore itself. Nice thought, but there is no way that the char will be replicated by biological action. However, it is quite plausible that the rapid addition of new material to the soil could be stabilized by the remaining char through the normal mixing caused by biological action. The percentage of char would decline, ultimately to a minimum level.

This is an issue well worth researching, since it tells us that the creation of terra preta will get a huge hand from Mother Nature. My suggestion of creating biochar seed hills is more beneficial than it at first appears. A concentrated hill occupies only twenty-five percent of the space available, yet biological mixing will possibly build up the hills and cause the zone of influence to expand rather than contract, as one would at first expect.

It is also noteworthy that the suggestion was made of something called 'slash and char', whatever that could be. It is clear, however, that the source material was the crop residues, as I have been stating in previous postings. Thinking that Stone Age-equipped man was going to hack down trees to make biochar is ludicrous. Girdling and killing trees, and burning them down to get started, are things he could do. After that he relied on his crops for feedstock.

I have already posted on the viability of using corn stover to make an earthen field kiln that produces biochar in abundance. This article simply adds detail to the picture.

It is becoming clearer that the real population of the Americas was far greater than we have ever guessed. Corn culture certainly supported huge populations, including large expanding populations in the Mississippi Valley. Their collapse began a lot earlier than 1492. European visitors were able to make one-way trips long before Columbus figured out how to return. And a boatload of your typical common cold and flu would actually decimate any population concentration, as could still happen in isolated communities.

It would not surprise me if legends of the Indians rising up and destroying visiting traders before 1492 were driven by the obvious relationship between strangers and fatal disease.

Anyway, enjoy this article, which quotes extensively from Charles C. Mann's article in the Atlantic Monthly.

___________________________________________________

Amazonian Terra Preta

Once in a while you run across something that challenges just about everything you thought you knew. "Terra preta" (Portuguese for "black earth") is the name given to anomalous deposits of deep, rich soil found in large pockets of land throughout the Amazon. The region was once thought to be covered entirely by thin, fragile soil that would immediately desertify if the trees were removed, but it now turns out there are significant sections of Amazonia where this terra preta is abundant. But the biggest mystery is this: the Amazon's best soil, terra preta, may have been deliberately created by Native Americans.

As put forth in 2002 in a lengthy article in the Atlantic Monthly entitled "1491" by Charles C. Mann, there is a growing body of evidence that the indigenous population of the Americas in pre-Columbian times was far greater than is typically estimated.

In Mann's report several thought-provoking bits of evidence are presented: the great masses of passenger pigeons that filled the skies and the great herds of bison that dominated the endless prairies in the 18th century were not always there - if they had always been there, we would see their bones in far greater abundance at archeological sites. Instead they were "outbreak species," whose numbers mushroomed in the wake of the human demographic collapse. Read the article for more arguments supporting this new theory - which basically says the impact of European disease on Native American populations was far, far greater than previously conjectured, and in fact abruptly destroyed a network of complex urban civilizations numbering well over 100 million people.

The presence of Amazonian terra preta is another piece of evidence allegedly supporting this theory, because the placement of these deposits of charcoal-rich black earth is not explained without human intervention. The theory holds that this black earth was created by a process called "slash and char," something very distinct from slash and burn. In this process the seasonal crop residue was not burned but charred and turned into the earth. Doing this sequestered most of the carbon in the crop residue and created an extremely hospitable amendment to the otherwise thin and fragile soil - something that in turn nurtured beneficial microorganisms that broke down the poor native soil and transformed it into extraordinarily rich humus. Read this from "1491":

“Landscape” in this case is meant exactly—Amazonian Indians literally created the ground beneath their feet. According to William I. Woods, a soil geographer at Southern Illinois University, ecologists’ claims about terrible Amazonian land were based on very little data. In the late 1990s Woods and others began careful measurements in the lower Amazon. They indeed found lots of inhospitable terrain. But they also discovered swaths of terra preta—rich, fertile “black earth” that anthropologists increasingly believe was created by human beings.

Terra preta, Woods guesses, covers at least 10 percent of Amazonia, an area the size of France. It has amazing properties, he says. Tropical rain doesn’t leach nutrients from terra preta fields; instead the soil, so to speak, fights back. Not far from Painted Rock Cave is a 300-acre area with a two-foot layer of terra preta quarried by locals for potting soil. The bottom third of the layer is never removed, workers there explain, because over time it will re-create the original soil layer in its initial thickness. The reason, scientists suspect, is that terra preta is generated by a special suite of microorganisms that resists depletion. Apparently at some threshold level … dark earth attains the capacity to perpetuate—even regenerate itself—thus behaving more like a living ’super’-organism than an inert material.

In as yet unpublished research the archaeologists Eduardo Neves, of the University of São Paulo; Michael Heckenberger, of the University of Florida; and their colleagues examined terra preta in the upper Xingu, a huge southern tributary of the Amazon. Not all Xingu cultures left behind this living earth, they discovered. But the ones that did generated it rapidly—suggesting to Woods that terra preta was created deliberately. In a process reminiscent of dropping microorganism-rich starter into plain dough to create sourdough bread, Amazonian peoples, he believes, inoculated bad soil with a transforming bacterial charge. Not every group of Indians there did this, but quite a few did, and over an extended period of time.”

If rich topsoil was literally engineered by humans on this scale, this is an encouraging possibility for addressing today's challenges of depleted soils and desertification. Organizations have sprung up to study the potential of employing similar techniques today, creating what is now referred to as biochar (or agrichar), such as the International Biochar Initiative. And the notion that Native Americans manipulated and nurtured the ecosystems of the Amazon over 500 years ago also challenges today's conventional definitions of what is pristine - indeed, by taking away one of our most reliable archetypes of living without a footprint, it perhaps shakes the whole idea of pristine wilderness to its roots. And needless to say, if carbon sequestration is truly an imperative for our species, creating biochar could hold more potential - and side benefits - than virtually any other scheme.

Monday, February 18, 2008

New Sea Ice Consensus Emerging

I admit that I reasonably assumed that with winter upon us there was little going on in the world of sea ice discussion. Boy, was I wrong. The quoted news story reflects the huge jolt last summer gave the experts. The wind shift has also not let up, and we can expect a repeat of last year's sea-clearing conditions. And they have figured out just how much the perennial ice has already collapsed. I even saw an astounding admission that the reporting regulatory agency imposed a linear-model-only protocol on its scientists for reporting purposes.

If we assume merely that the perennial sea ice is being attacked by a constant heat surplus each and every year, the result is a very nonlinear collapse of the ice after a long weakening period. I had no difficulty predicting a rapid collapse once it became clear that we had lost 60% of the perennial sea ice over a forty-year span for which we had only the end points.
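A minimal sketch of why a constant heat surplus produces exactly that nonlinear collapse (a toy model, not a climate simulation): spread a fixed annual melt volume over whatever ice survives, and the melt depth per unit of remaining ice grows as the pack shrinks.

```python
import random

# Toy model: 1,000 ice cells of varied thickness; a constant heat surplus
# melts a fixed total volume per year, spread over the surviving area.
random.seed(1)
cells = [random.uniform(1.0, 5.0) for _ in range(1000)]   # thickness in m
melt_volume = 150.0    # fixed annual melt (thickness-units * cells)

for year in range(1, 41):
    area = sum(1 for t in cells if t > 0)
    if area == 0:
        print(year, "ice gone")
        break
    depth = melt_volume / area          # melt deepens as the pack shrinks
    cells = [max(0.0, t - depth) for t in cells]
    if year % 5 == 0:
        print(year, f"{area / 10:.0f}% cover")
# Output: cover holds near 100% for years, then crashes in the final few.
```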

We have now lost most of the perennial sea ice, and the winds are shifting the remainder out of the Arctic.

In the meantime, we are having a very good freeze this winter, and the maximum extent has already been reached at a level larger than last year - no surprise - and it has extended, by Greenland in particular, to a point not reached in fifteen years. Again, this is new ice that should disappear next summer.

What is particularly clear is the confusion and surprise being experienced by the experts. A lot of this has been brought on by the very unusual reordering of the wind system, which is accelerating the clearing process.

Even the obvious fact of a cold winter coming right on the heels of an unexpectedly warm summer is not yet explained. What are the true cause-and-effect relationships, or are we reading way too much into this?

In any event, this cold winter should slow the onset of spring this year. We need to watch by how much, and what effect this has on the summer sea ice melt. My own expectation is that we will have a neutral year for the annual sea ice, but if the ongoing removal of perennial sea ice by wind action continues, then the real result will still be negative.

As the news report makes clear, this process looks irreversible short of a multiyear stretch of below-average cold. My own sense is that this year, cold as it was, merely hit the average. Thirty years ago, it would have been considered a fairly good winter.

The ongoing direct removal of perennial sea ice means that the next warm summer or two will clear the Arctic of sea ice. I still think that it will likely occur around 2013, but conditions have hugely accelerated, and the experts are now establishing a new time frame coincident with that date.

The only question I have is whether this rapid collapse is driven by the surplus heat that has been entering the Arctic for the past few decades, or alternatively by the recent onset of a very new wind system. In either case we have leapt ahead of any accepted projections and are now accepting the probability of clear seas within five years. And believe me, even a linear projection today gives us five years.

(ANCHORAGE, Alaska) — Arctic sea ice next summer may shrink below the record low last year, according to a University of Washington climatologist. Ignatius Rigor spoke Monday at the Alaska Forum on the Environment and said global warming combined with natural cyclical changes likely will continue to push ice into the North Atlantic Ocean.

The last remnants of thick, old sea ice are dispersing and the unusual weather cycles that contributed to sea ice loss last year are continuing, he said.

"The buoys are streaming out," Rigor said, referring to the markers used to monitor the flushing of ice into the North Atlantic.

A similar pattern preceded sea ice loss last summer and was not expected to continue so strongly.

Scientists are watching Arctic sea ice closely, trying to sort out the effects of global warming and natural cyclical changes.

Formal projections of sea ice loss will not be made for another month or so, but all indications are that ice loss will equal or exceed last year's "unless the winds turn around," Rigor said.

New ice now covering the polar seas is not like older, thicker sea ice that once covered the region in winter, Rigor said. In 1989, 80 percent of the ice in the Arctic was at least 10 years old, he said. Today, only about 3 percent of the ice is that old.

New ice melts more quickly, and then open water absorbs more sunlight, warming the seas and making the fall freeze-up come even later, he said.

"Have we passed the tipping point?" he said. "It's hard to see how the system may come back."

The prospect of a mostly ice-free Arctic could mean a boom in shipping through the Bering Strait, several speakers said, but is bad news for polar bears and other animals.

Polar bears prefer ice over the shallow continental shelf north of Alaska because it supports a rich food chain, said Steve Amstrup, a leading polar bear biologist with the U.S. Geological Survey. With melting last summer, some Alaska bears were on ice as much as 600 miles north of Barrow, far from their preferred habitat, Amstrup said.

Amstrup was lead federal biologist in studies released last year depicting the Alaska bear as likely to disappear by 2050 because of global warming. A decision by the Department of the Interior on whether to list the polar bear as "threatened" under the Endangered Species Act was due in January but has been postponed.

The state of Alaska, among others, opposes the listing, arguing the forecasts of declining sea ice are too speculative.

Scientists said Monday that the forecasts were, if anything, too cautious. None foresaw the shrinkage of 2007.

"Five of the 10 studies we used projected more sea ice at mid-century than we had this summer," Amstrup said.

The shrinkage is related to higher temperatures, scientists said, but also to shifts in a weather pattern known as the Arctic oscillation. When the Arctic oscillation is in a "high" cycle, as it has been recently, more ice is pushed past Greenland into the North Atlantic, Rigor said.

Climate models have linked a higher Arctic oscillation to increases in greenhouse gases, but that relationship is the subject of much study, Rigor said.

"All these changes are very consistent with a climate system trying to cool itself off from greenhouse gases," Rigor said.

Thursday, February 14, 2008

Roy Spencer on the CO2 anthropogenic assumption

I am posting this long article by Roy Spencer, who calls into question the received wisdom linking increasing CO2 to human activity. That linkage is commonly taken as the natural outcome of our burning of fossil fuels, and it seems simple common sense. I will also admit that I have been uncritically inclined to make the same obvious logical argument.

It is entirely conceivable that the CO2 cycle is vastly more robust than has ever been contemplated, and that, as argued here, its variation is driven primarily by ocean-related processes we do not yet properly understand. What follows is a well-thought-out review of the data that certainly opens the door on this issue.

I personally want there to be a direct, simple linkage between CO2 levels and the burning of fossil fuels; that is my own bias. It would be a blessing to know that the earth has no difficulty soaking up all that CO2, and probably even less difficulty supplying it, should we decide to sequester huge amounts as a matter of good husbandry.

I do not know how well the charts will survive the posting process, but the text is self-explanatory. I pulled this material from another commentator, whose introductory comments are also apropos.

UPDATED: Roy Spencer on how Oceans are Driving CO2

25 01 2008

NOTE: Earlier today I posted a paper from Joe D'Aleo on how he has found strong correlations between the ocean's multidecadal oscillations, the PDO and AMO, and surface temperature, while finding no strong correlation between CO2 and surface temperatures. See that article here:

Warming Trend: PDO And Solar Correlate Better Than CO2

Now, within hours of that, Roy Spencer of the National Space Science and Technology Center at the University of Alabama in Huntsville has sent me and others this paper, in which he postulates that the ocean may be the main driver of CO2.

In the flurry of emails that followed, Joe D'Aleo provided a graph of CO2 variations sorted by El Nino/La Nina/volcanic event years, which is relevant to the discussion. Additionally, for my lay readers, a graph of CO2 solubility in water versus temperature is also relevant; both are shown below:

[Figures: Joe D'Aleo's graph of CO2 ppm changes by El Nino/La Nina/volcanic year (daleo-co2-ppmchange.png); CO2 solubility in water versus temperature (co2-h2o_solubility.png)]

Additionally, I’d like to point out that former California State Climatologist Jim Goodridge posted a short essay on this blog, Atmospheric Carbon Dioxide Variation, that postulated something similar.

UPDATE: This from Roy on Monday 1/28/08; see the new post on the C13/C12 ratio here.

I want to (1) clarify the major point of my post, and (2) report some new (C13/C12 isotope) results:

1. The interannual relationship between SST and dCO2/dt is more than enough to explain the long-term increase in CO2 since 1958. I'm not claiming that ALL of the Mauna Loa increase is natural…some of it HAS to be anthropogenic…but this evidence suggests that SST-related effects could be a big part of the CO2 increase.

2. NEW RESULTS: I’ve been analyzing the C13/C12 ratio data from Mauna Loa. Just as others have found, the decrease in that ratio with time (over the 1990-2005 period anyway) is almost exactly what is expected from the depleted C13 source of fossil fuels. But guess what? If you detrend the data, then the annual cycle and interannual variability shows the EXACT SAME SIGNATURE. So, how can decreasing C13/C12 ratio be the signal of HUMAN emissions, when the NATURAL emissions have the same signal???

-Roy
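For readers who want to see mechanically what that detrending test looks like, here is a minimal sketch using synthetic monthly series. The numbers are invented for illustration, and constructed so the two signatures match, mirroring the result Roy reports; the real test would of course use the observed Mauna Loa records.

```python
import numpy as np

# Synthetic monthly stand-ins for the 1990-2005 Mauna Loa records
# (invented values; the real test uses the observed series).
months = np.arange(16 * 12)
co2 = 354 + 0.15 * months + 3.0 * np.sin(2 * np.pi * months / 12)
d13c = -7.8 - 0.0025 * months - 0.05 * np.sin(2 * np.pi * months / 12)

def detrend(y, x):
    """Subtract the best-fit straight line from y."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Isotopic signature of the long-term trend, in permil per ppm of CO2.
trend_sig = np.polyfit(months, d13c, 1)[0] / np.polyfit(months, co2, 1)[0]

# Signature of the detrended (seasonal plus interannual) variability.
resid_sig = np.polyfit(detrend(co2, months), detrend(d13c, months), 1)[0]

print(f"trend signature:    {trend_sig:.4f} permil per ppm")
print(f"residual signature: {resid_sig:.4f} permil per ppm")
# If the two signatures match, a falling C13/C12 ratio cannot by itself
# distinguish fossil-fuel CO2 from the natural flux.
```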

Here is Roy Spencer’s essay, without any editing or commentary:


Atmospheric CO2 Increases:

Could the Ocean, Rather Than Mankind, Be the Reason?

by

Roy W. Spencer

1/25/2008

This is probably the most provocative hypothesis I have ever (and will ever) advance: The long-term increases in carbon dioxide concentration that have been observed at Mauna Loa since 1958 could be driven more by the ocean than by mankind's burning of fossil fuels.

Most, if not all, experts in the global carbon cycle will at this point think I am totally off my rocker. Not being an expert in the global carbon cycle, I am admittedly sticking my neck out here. But, at a minimum, the results I will show make for a fascinating story - even if my hypothesis is wrong. While the evidence I will show is admittedly empirical, I believe that a physically based case can be made to support it.

But first, some acknowledgements. Even though I have been playing with the CO2 and global temperature data for about a year, it was the persistent queries from a Canadian engineer, Allan MacRae, that made me recently revisit this issue in more detail. The writings of Tom V. Segalstad, a Norwegian geochemist, were also a source of information and ideas about the carbon cycle.

First, let’s start with what everyone knows: that atmospheric carbon dioxide concentrations, and global-averaged surface temperature, have risen since the Mauna Loa CO2 record began. These are illustrated in the next two figures.

[Fig. 1: Atmospheric CO2 concentration at Mauna Loa since 1958 (spencer-012508-fig1.png)]

[Fig. 2: Global-averaged surface temperature over the same period (spencer-012508-fig2.png)]

Both are on the increase, an empirical observation that is qualitatively consistent with the “consensus” view that increasing anthropogenic CO2 emissions are causing the warming. Note also that they both have a “bend” in them that looks similar, which might also lead one to speculate that there is a physical connection between them.

Now, let’s ask: “What is the empirical evidence that CO2 is driving surface temperature, and not the other way around?” If we ask that question, then we are no longer trying to explain the change in temperature with time (a heat budget issue), but instead we are dealing with what is causing the change in CO2 concentration with time (a carbon budget issue). The distinction is important. In mathematical terms, we need to analyze the sources and sinks contributing to dCO2/dt, not dT/dt.

So, let us look at the yearly CO2 input into the atmosphere based upon the Mauna Loa record, that is, the change in CO2 concentration with time (Fig. 3).

[Fig. 3: Yearly CO2 input into the atmosphere derived from the Mauna Loa record, in mmtC/yr, compared with human emissions (spencer-012508-fig3.png)]

Here I have expressed the Mauna Loa CO2 concentration changes in million metric tons of carbon (mmtC) per year so that they can be compared to the human emissions, also shown in the graph.
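As a rough sketch of the unit conversion involved (my own illustration, not Spencer's calculation): one ppm of atmospheric CO2 corresponds to roughly 2,130 million metric tons of carbon, a commonly cited factor.

```python
import numpy as np

MMTC_PER_PPM = 2130  # approximate mmt of carbon per ppm of atmospheric CO2

# Illustrative yearly-mean CO2 concentrations in ppm (stand-in values,
# not the actual Mauna Loa record).
ppm = np.array([369.5, 371.0, 373.1, 375.4, 377.4, 379.7, 381.8])

dco2_dt_ppm = np.diff(ppm)                 # ppm change per year
dco2_dt_mmtc = dco2_dt_ppm * MMTC_PER_PPM  # apparent "emission" in mmtC/yr

print(dco2_dt_mmtc)  # roughly 3,200-4,900 mmtC/yr for these stand-in values
```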

Now, compare the surface temperature variations in Fig. 2 with the Mauna Loa-derived carbon emissions in Fig. 3. They look pretty similar, don’t they? In fact, the CO2 changes look a lot more like the temperature changes than the human emissions do. The large interannual fluctuations in Mauna Loa-derived CO2 “emissions” roughly coincide with El Nino and La Nina events, which are also periods of globally-averaged warmth and coolness, respectively. I’ll address the lag between them soon.

Of some additional interest is the 1992 event. In that case, the eruption of Mt. Pinatubo caused the surface cooling, and it coincides with a dip in the CO2 change rate at Mauna Loa.

These results raise the question: are surface temperature variations a surrogate for changes in CO2 sources and/or sinks?

First, let's look at the strength of the trends in temperature and CO2-inferred "emissions". If we compare the slopes of the regression lines in Figs. 2 and 3, we get an increase of about 4,300 mmt of carbon at Mauna Loa for every degree C of surface warming. Please remember that ratio (4,300 mmtC/deg. C), because we are now going to look at the same relationship for the interannual variability seen in Figs. 2 and 3.

In Fig. 4 I have detrended the time series in Figs. 2 and 3, and plotted the residuals against each other. We see that the interannual temperature-versus-Mauna Loa-inferred emissions relationship has a regression slope of about 5,100 mmtC/deg. C.

There is little evidence of any time lag between the two time series, give or take a couple of months.

[Fig. 4: Detrended temperature residuals plotted against detrended Mauna Loa-inferred emission residuals; the regression slope is about 5,100 mmtC/deg. C (spencer-012508-fig4.png)]

So, what does this all show? A comparison of the two slope relationships (5,100 mmtC/deg. C for interannual variability, versus 4,300 mmtC/deg. C for the trends) shows, at least empirically, that whatever mechanism is causing El Nino and La Nina to modulate CO2 concentrations in the atmosphere is more than strong enough to explain the long-term increase in CO2 concentration at Mauna Loa. So, at least based upon this empirical evidence, invoking mankind's CO2 emissions is not even necessary. (I will address how this might happen physically, below.)
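A minimal sketch of this slope comparison, using stand-in series built for illustration (the real calculation would use the observed temperature anomalies and Mauna Loa-derived emissions); the detrend-and-regress step is the same one described for Fig. 4.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in annual series, 1959-2007, built so the interannual coupling
# is about 5,100 mmtC/deg. C (invented values for illustration).
years = np.arange(1959, 2008)
n = years - years[0]
temp = 0.012 * n + 0.1 * rng.standard_normal(len(n))   # deg. C anomaly
emis = (52.0 * n                                       # long-term rise
        + 5100 * (temp - 0.012 * n)                    # interannual coupling
        + 100 * rng.standard_normal(len(n)))           # other noise, mmtC/yr

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

def detrended(y):
    return y - np.polyval(np.polyfit(years, y, 1), years)

# Trend-vs-trend ratio, analogous to comparing Figs. 2 and 3.
trend_slope = slope(years, emis) / slope(years, temp)

# Interannual slope from the detrended residuals, analogous to Fig. 4.
interannual_slope = slope(detrended(temp), detrended(emis))

print(f"trend slope:       {trend_slope:6.0f} mmtC/deg. C")
print(f"interannual slope: {interannual_slope:6.0f} mmtC/deg. C")
# If the interannual slope is at least as large as the trend slope, the
# interannual mechanism is strong enough to account for the long-term rise.
```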

In fact, if we look at several different temperature averaging areas (global, N.H. land, N.H. ocean, N.H. land + ocean, and S.H. ocean), the highest correlation occurs for the Southern Hemisphere ocean, with a larger regression slope of 7,100 mmtC/deg. C. This suggests that the oceans, rather than land, could be the main driver of the interannual fluctuations in CO2 emissions that are being picked up at Mauna Loa, especially the Southern Ocean.

Now, here’s where I’m really going to stick my neck out — into the mysterious discipline of the global carbon cycle. My postulated physical explanation will involve both fast and slow processes of exchange of CO2 between the atmosphere and the surface.

The evidence for rapid exchange of CO2 between the ocean and atmosphere comes from the fact that current carbon cycle flux estimates show that the annual CO2 exchange between surface and atmosphere amounts to 20% to 30% of the total amount in the atmosphere. This means that most of the carbon in the atmosphere is recycled through the surface every five years or so. From Segalstad’s writings, the rate of exchange could even be faster than this. For instance, how do we know what the turbulent fluxes in and out of the wind-driven ocean are? How would one measure such a thing locally, let alone globally?
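The "five years or so" follows from simple arithmetic. A back-of-envelope sketch, using round, commonly cited figures (the burden number is approximate, not taken from the essay):

```python
# Residence time = atmospheric carbon burden / gross annual exchange flux.
burden_gtc = 750          # carbon in the atmosphere, GtC (approximate)
exchange_fraction = 0.25  # 20-30% of the burden exchanged each year

gross_flux = burden_gtc * exchange_fraction  # ~190 GtC/yr
turnover_years = 1 / exchange_fraction       # ~4 years

print(f"gross flux ~{gross_flux:.0f} GtC/yr, turnover ~{turnover_years:.0f} years")
```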

Now, this globally averaged situation is made up of some regions emitting more CO2 than they absorb, and some regions absorbing more than they emit. What if there is a region where there has been a long-term change in the net carbon flux that is at least as big as the human source?

After all, the human source represents only 3% (or less) of the size of the natural fluxes in and out of the surface. This means that we would need to know the natural upward and downward fluxes to much better than 3% to say that humans are responsible for the current upward trend in atmospheric CO2. Are measurements of the global carbon fluxes much better than 3% in accuracy? I doubt it.

So, one possibility would be a long-term change in the El Nino / La Nina cycle, which would include fluctuations in the ocean upwelling areas off the west coasts of the continents. Since these areas represent semi-direct connections to deep-ocean carbon storage, this could be one possible source of the extra carbon (or, maybe I should say a decreasing sink for atmospheric carbon?).

Let's say the oceans are producing an extra 1 unit of CO2, mankind is producing 1 unit, and nature is absorbing an extra 1.5 units. Then we get the situation we have today, with CO2 rising at about 50% of the rate of human emissions.
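Spelling out the arithmetic of that scenario (my own worked example of the units above):

```python
# The illustrative budget above, in arbitrary units of CO2 per year.
ocean_source = 1.0   # hypothesized extra natural (ocean) emission
human_source = 1.0   # anthropogenic emission
natural_sink = 1.5   # extra natural absorption

net_rise = ocean_source + human_source - natural_sink  # 0.5 units
fraction_of_human = net_rise / human_source            # 0.5, i.e. 50%

print(f"CO2 rises at {fraction_of_human:.0%} of the human emission rate")
```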

If nothing else, Fig. 3 illustrates how large the natural interannual changes in CO2 are compared to the human emissions. In Fig. 5 we see that the yearly-average CO2 increase at Mauna Loa ends up being anywhere from 0% to 130% of the human source.

It seems to me that this is proof that natural net flux imbalances are at least as big as the human source.

[Fig. 5: Yearly-average CO2 increase at Mauna Loa expressed as a percentage of human emissions, ranging from 0% to 130% (spencer-012508-fig5.png)]

Could the long-term increase in El Nino conditions observed in recent decades (and whatever change in the carbon budget of the ocean that entails) be more responsible for increasing CO2 concentrations than mankind? At this point, I think that question is a valid one.