Friday, July 31, 2009

The Problem of Future Food Price

The civil unrest that arose in a number of countries last year finds its origins in the sharp increases in food prices (approximately 75% in the U.S. since 2000).1 It should be a concern that this unrest can be viewed as just a fraction of what could occur in the future if steps are not taken to solve the problem of increased food costs and food shortages. The economic slowdown reduced the severity of the price increases, but did not end them, instead simply pushing their consequences off of the front page. The growing concern is that the threat of food shortages and increased costs is trend driven, not event driven like the temporary price spikes of the past: 6 of the last 9 years have seen consumption rates exceed production rates.2 In fact, at the beginning of the 2008 harvest season world carryover stocks stood at only 62 days, near a record low.2

Solving this problem is imperative from a humanitarian standpoint, an economic standpoint and a national security standpoint. Four separate elements play a role in the continuing escalation of food prices: climate change, increased oil and fertilizer prices, land use division and changes in dietary demands. It can be argued that a fifth sub-cause, market manipulation, also plays a smaller role. Each of these elements contributes both to short-term spikes and to the longer-term trend of food price increases, and each has its own cause and solution. The first element of influence, climate change, will not be addressed here because it is a problem unto itself and needs to be discussed in such a context. However, it must be noted that crop failures due to changes in weather patterns have already occurred on a consistent basis and continue to occur in Australia and other typically high-yield producing countries, and it should be a concern that these past ‘supplying’ nations will eventually become future ‘demanding’ nations.1

The second element of influence, the dramatic rise in oil prices, is unfortunate because it has a very simple cause but a complex solution. The primary cause can be traced to the greater industrialization of China and India, which used 3.13x and 2.25x more oil in 2006 than in 1996, versus only a 1.22x increase in use in the United States over the same time period.3 Make no mistake, the United States still consumes the most oil of any individual country, but the increased use in China and India places even greater demands on oil supplies, and there is no reason to expect a significant decline in this demand in the future. Following one of the oldest and truest economic rules, when demand for a good increases, so does its price.

In addition to oil prices, another portion of the second element of influence is an increase in fertilizer prices. Currently it is difficult to establish a direct relationship between a rise in oil price and a rise in fertilizer price. Although fertilizer production is typically carried out through the Haber-Bosch process, which requires a considerable amount of energy, that energy is almost exclusively derived from natural gas, not oil (approximately 33,500 cubic feet of natural gas is required to produce 1 ton of anhydrous ammonia fertilizer). Despite the inconclusive tie-in to fertilizer prices, higher oil prices do increase gasoline and diesel prices, which raise planting and harvesting costs as well as transportation costs, a significant component of the overall cost of food.
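To put the natural gas figure in perspective, here is a minimal back-of-the-envelope sketch of the feedstock cost of ammonia; the natural gas prices used are assumptions for illustration only, not figures cited above:

```python
# Rough feedstock cost of anhydrous ammonia as a function of natural gas price.
# The 33,500 cubic feet per ton figure comes from the text; the gas prices
# below are hypothetical and only illustrate the sensitivity.
GAS_PER_TON_AMMONIA = 33_500  # cubic feet of natural gas per ton of ammonia

for gas_price_per_mcf in (4.0, 8.0, 12.0):  # $ per thousand cubic feet (assumed)
    feedstock_cost = GAS_PER_TON_AMMONIA / 1_000 * gas_price_per_mcf
    print(f"Gas at ${gas_price_per_mcf:.2f}/Mcf -> "
          f"~${feedstock_cost:.0f} of gas per ton of ammonia")
```

The point of the sketch is simply that fertilizer cost tracks natural gas prices far more directly than it tracks oil prices.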

Therefore, to survive or continue making a profit, farmers and agro-businesses need to increase their selling prices, which in turn forces retailers like supermarkets to increase prices, shifting the final burden to consumers. If a price increase is not a viable strategy (buyers refuse to pay), then the food will not be produced, reducing the total amount of food in the available market, which will also influence market price. The resultant increase in transportation costs is especially hard on exporters and food aid-based charities. In the case of the charities, the inability to purchase and/or transport food means that less food can be delivered to developing nations, which may lead to shortages there. Note that these increased costs have greater influence in regions that utilize fertilizer and heavy machinery in agriculture.

The problem with the increase in oil demand, and possibly later in fertilizer demand depending on any agricultural strategy changes in Africa, is that there is no easy solution; one cannot simply request that certain nations curb their demand for oil, thus stifling economic growth. Also, although there is a fierce debate regarding when the influences of ‘peak oil’ will actually begin to take effect, there is no debate that oil is a limited resource and eventually demand will outpace supply. Therefore, oil alternatives need to be developed. Unfortunately those alternatives are still in their infancy and, despite their potential, are years away from being able to absorb a significant percentage of the transport and/or energy demands currently fulfilled by oil, so relief from high oil prices is not just around the corner.

For example, for years now biofuel has been the heir apparent substitute for oil in the transport sector. However, despite years of subsidies and research, food-derived biofuels have proven to have significant potential to be more detrimental than beneficial, both to the environment and to food stocks.4,5,6 Cellulosic biofuels have had some recent problems with fraud and overestimation of their ability to produce significant quantities of fuel.7 Algae-based biofuels are still just getting started and have almost no real productive capacity at this point. Overall, non-food derived biofuels have currently attained an average production rate of approximately 39-40 million gallons a year.7 That production rate accounts for only approximately 0.00307% of the total world consumption of oil in 2006.3 It will be a long time before non-food derived biofuels are able to make a significant contribution to the world transportation/energy market. Adding insult to injury, the political structure of the United States seems to refuse to act strongly on the available information pertaining to the net detrimental effects of producing biofuels from food-based derivatives, continuing to fund subsidies for the corn ethanol industry as well as provide purchase quotas.8
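As a sanity check on that percentage, a small back-of-the-envelope calculation; the 2006 world oil consumption figure of roughly 85 million barrels per day is an assumption roughly consistent with EIA estimates, not a number taken from the text:

```python
# Back-of-the-envelope check of the ~0.00307% figure for non-food biofuels.
# World oil consumption for 2006 is assumed at ~85 million barrels/day,
# which is roughly in line with EIA estimates; 42 gallons per barrel.
biofuel_gallons_per_year = 40e6                     # ~39-40 million gallons/year
world_oil_gallons_per_year = 85e6 * 42 * 365        # ~1.3 trillion gallons/year

share = biofuel_gallons_per_year / world_oil_gallons_per_year * 100
print(f"Non-food biofuel share of world oil use: {share:.5f}%")  # ~0.003%
```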

The original idea for biofuels was to replace a non-renewable resource with a renewable resource; however, in the zeal to christen a new fuel and energy option, especially those derived from food stock staples such as corn and soybean, the inherent problems with biofuels were ignored and have become the third element of influence. Not only does biofuel production from food sources lead to improper deforestation through land clearance in an effort to make more land available for cultivation, it also reduces the prospective food supply. Thus, instead of providing the necessary foundation for reducing oil use, this strategy increases pollution by releasing the carbon stored in the newly cut-down trees and removes food from the supply chain, leading to price increases.

This is not to say that biofuels cannot be useful, but only if they are derived from the proper source material. For example, Brazil and other nations have already demonstrated that high quality biofuels can be derived from sugar cane and beets, crops that are not utilized as staple food crops.9 In addition, algae should be able to supply an ample amount of biofuel at some point in the future. So the solution to the influence of biofuels as a price modifier of common food staples would be to change the base derivative of most currently produced biofuels to more diverse sources. A powerful first step would be to end all subsidies supporting the synthesis of biofuels derived from food sources such as corn, soybean, wheat, etc. and instead redirect these funds toward improving the technology behind biofuels derived from sugar cane/beets, algae and cellulosic material.

The fourth element of influence, dietary change, is similar to the cost of oil in that the origins of the problem are easy to understand, but generating a solution is difficult. In the past most third-world societies, including China and India, did not have thriving middle classes; instead they were divided into rich and poor classes with much of the population on the poor side. Being poor, most individuals could only afford certain food staples, mainly corn, rice and other low-priced grains, and a once-in-a-while meat dish. With increasing earning power in these developing nations, especially China and India, a new middle class has formed that can afford more expensive food items, including high quality meat products. With more people able to partake in meat consumption, the demand to supply that meat increases as well. Therein lies the source of the problem. Siphoning off grain products from the supply chain for use as feed for cattle and other meat-providing animals is problematic as well as controversial.

There is debate over how many pounds of grain are actually required to produce 1 pound of meat. Most of the discussion surrounds the production of beef, with the common contention that it takes 16 pounds of grain to produce 1 pound of beef. However, it is unlikely that this number is correct because some to most (depending on who you talk to) of the feed consumed by livestock consists of items that would not be consumed by humans in the first place. In contrast to the 16-pound estimate, it is not surprising that the pro-meat lobby proposes that this number is as low as 0.3-2 pounds of grain per pound of beef. The correct estimate is probably somewhere in the middle, 5-10 pounds of grain per pound of future beef, but it is probably unreasonable to try to generate a single average number because of all of the variables involved in meat production. That said, despite the meat lobby’s contention, it is extremely likely that it requires more than 1 pound of grain to produce 1 pound of beef. Another issue that most do not address is that there is little research on feed requirements for chickens and pigs, which also place additional stress on food stocks, although such stress is smaller than that of cattle.
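To illustrate why the exact conversion ratio matters so much, here is a minimal sketch of the implied grain requirement for an assumed amount of additional beef demand; the 1 million ton beef figure is purely hypothetical:

```python
# Implied grain demand for a hypothetical 1 million tons of additional beef,
# under the conversion ratios discussed above. The beef quantity is an
# assumption for illustration only; the ratios are the ones debated in the text.
additional_beef_tons = 1_000_000

for grain_per_beef in (0.3, 2, 5, 10, 16):  # pounds of grain per pound of beef
    grain_tons = additional_beef_tons * grain_per_beef
    print(f"At {grain_per_beef:>4} lb grain/lb beef: "
          f"~{grain_tons/1e6:.1f} million tons of grain")
```

Even modest differences in the assumed ratio translate into millions of tons of grain diverted from, or left in, the human supply chain.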

There is no easy solution to this problem because, similar to oil, it is unreasonable to forcibly select and/or limit food options for a group of people. Instead the best possible solution may be to develop alternatives to these foods that do not require the same resource expenditure. For example, a new formula for a product like Spam that rivals certain meats in taste and could be produced without utilizing grain stocks would reduce the demand for cattle and other animals, reducing the strain on supply lines. Another option has been to produce certain meat products in vitro, but right now the prospects for cost-effective and significant quantities of in vitro meat do not look promising, although this may change in the future.10,11 Overall this problem will require adaptation to neutralize the detriments associated with increased demand instead of relying on command-and-control procedures.

The potential fifth element of influence, the practice of price speculation in the global commodity markets, is exceptionally sad because it is the most correctable and easily controlled element of influence. It is interesting, although not surprising, that individuals would have the gall to allow people to starve in order to make money, especially when most of those involved already have a significant amount. The subsidies and tariffs that riddle global trade in agricultural commodities are frequently unregulated, too complex and self-serving. Although there would be a lot of crying from ‘free-market capitalists’, it seems imperative to establish a special set of rules for food-based commodity markets in times of dramatic price increases to lessen the probability of future famines, particularly those driven by a lack of access to food rather than a genuine lack of food. Note that speculation should not be viewed as a main cause of price increases, but as a meaningful influencing factor.

Despite all of the information presented above, there is a lingering question. The elements that influence food prices seem to have only limited influence on the agricultural infrastructure of the developing world; instead these elements have a much more significant influence on food prices in the developed world. In fact, four of the five influencing elements (if market speculation is included) do not appear to have direct influence on prices in the developing world.

When looking at the increase in oil, fertilizer and transportation prices, none of those increases should have a meaningful direct impact on local markets in various African or Caribbean countries. Most of the developing nations in these regions are not prominent volume exporters (most exportation is high-value, low-volume cash crops), nor is their climate so variable that food must be transported over long distances between regions. Instead, in these countries most of the food is produced and consumed locally. Therefore, excessive transport costs are somewhat immaterial to local farmers in these countries. Fertilizer is also in short supply, so fertilizer price fluctuations have almost no influence on price. For example, China uses 1,296 pounds of fertilizer per hectare of cultivated land versus only 15.9 pounds per hectare in Kenya, and Kenya is one of the more prevalent users of fertilizer for crop production.12

The situation is similar when considering the influence of food-derived biofuels in developing nations. Due to insignificant oil consumption in most of these countries when compared against the consumption rates of more developed countries, there is limited motivation to convert some of the planted crop from food consumption to biofuel synthesis. Therefore, a very small percentage, if any, of the food grown in these countries is utilized domestically in the synthesis of biofuel.

To suggest that food prices in the developing world are significantly influenced by changes in dietary consumption seems questionable in that most of the people who are starving in the world today live in the developing world. Also, despite globalization, there has not been enough economic development in most of the developing world in recent decades for a large number of individuals to make new dietary choices. Therefore, with the exception of China and India, it is highly improbable that greater demand for meat products within the developing world has driven any considerable price increase there. Finally, commodity markets are so underdeveloped in most developing nations that it is extremely unlikely that they would have any significant influence. Overall, the only one of the above five factors that could have a direct influence on the price of food in a developing country is global warming (reduction of water tables, shorter and/or more inconsistent growing seasons, greater insect infestations, etc.).

Given the limited influence of the above price-influencing elements, there are two possible explanations for the price increases seen in 2008 affecting the developing world. First, the price/cost ratio in developing countries is much higher than it is in developed countries. Such an explanation could be valid in the sense that developed countries typically have more safeguards, both political and market-based (more competition), to guard against radical price increases even when production costs increase. Second, the price of food in developing nations is influenced by these factors indirectly through their influence on prices in developed nations. However, such a circumstance could only come about in a significant way if the level of food aid donated to these countries represented a meaningful percentage of the eventual supply. In order to better identify which of these explanations has greater influence, if any, it is imperative to address the role and structure of food aid, which will be discussed in the next post.

----------------------------------------------
1. Caldwell, Jake. “Food Price Crisis 101.” Center for American Progress. May 1, 2008.
http://www.americanprogress.org/issues/2008/05/food_crisis.html/print.html.

2. Brown, Lester. “Could Food Shortages Bring Down Civilization?” Scientific American Magazine. May 2009.

3. “International Energy Outlook 2009.” Table A5. World Liquids Consumption by Region, Reference Case, 1990-2030. Energy Information Administration. May 2009.

4. Crutzen, P.J., et al. “N2O release from agro-biofuel production negates global warming reduction by replacing fossil fuels.” Atmos. Chem. Phys. Discuss. 2007. 7: 11191–11205.

5. Fargione, Joseph, et al. “Land Clearing and the Biofuel Carbon Debt.” Science. Feb 2008. 319(5867): pp. 1235-1238.

6. Runge, Ford, and Senauer, Benjamin. “How Biofuels Could Starve the Poor.” Foreign Affairs. May/June 2007. http://www.foreignaffairs.com/print/62609

7. Borrell, Brendan. “Biofuel Fraud Case Could Leave the EPA Running on Fumes.” Scientific American Magazine. July 2009. http://www.scientificamerican.com/article.cfm?id=cello-biofuel-fraud-case

8. Clayton, Mark. “High Gas Prices and Politics Push Companies toward the ‘Holy Grail’ of Biofuel: Cellulosic Ethanol.” Christian Science Monitor. June 4, 2008. http://features.csmonitor.com/environment/2008/06/04/the-race-for-nonfood-biofuel/

9. Biofuels for Transport: Global Potential and Implications for Sustainable Agriculture and Energy in the 21st Century. Worldwatch Institute. 2007. pp 8-16. ISBN 978-1-84407-422-8

10. Edelman, P.D., et al. “In Vitro Cultured Meat Production.” Tissue Engineering. May/June 2005. 11(5-6): 659-662.

11. Bittman, Mark. “Rethinking the Meat-Guzzler.” New York Times. January 27, 2008. http://www.nytimes.com/2008/01/27/weekinreview/27bittman.html.

12. Vitousek, P.M., et al. “Nutrient Imbalances in Agricultural Development.” Science. June 2009. 324(5934): pp. 1519-1520.

Monday, July 20, 2009

The Future of the United Nations

The United Nations was created out of the ashes of the League of Nations, an organization originally established to perpetuate world peace by creating a global forum where conflict could be mediated with words instead of weapons. Unfortunately, at the present time, with regard to this goal the United Nations is little more than an organization that receives a large amount of disdain and ridicule. Soon legitimate consideration and discussion, instead of just passing commentary, concerning the future of the United Nations and its role in the world will be required. Of the issues that would encompass such a discussion, two important questions take center stage. First, has the United Nations actually met its designated purpose? Second, if the United Nations has not lived up to its expectations, what steps need to be taken to change the status quo?

Addressing the first question, it is important to properly define the role of the United Nations as envisioned by both its founders and its present-day actors. The overarching goal of the United Nations was to prevent the onset of a military conflict equal to or greater in scale than World War I or World War II, and so far no conflict has occurred that can be characterized as World War III. However, the proliferation of nuclear weapons could be given more credit as a deterrent against World War III than the United Nations. In addition, some would argue that the real purpose of the United Nations was to reduce the number of significant conflicts or atrocities in the world, not simply prevent the occurrence of large-scale war. If this is the case then the United Nations has performed poorly, but the cause of this failure stems from a lack of engagement, not a lack of success after engagement.

Fortunately the reason for the failure of the United Nations is rather simple to identify; unfortunately it is very difficult to solve because it is ingrained in the very foundation of the United Nations. The failure derives from too much power concentrated among diverse sources. Getting nations with diverse cultural, economic and ideological viewpoints to agree is remarkably difficult. Such a difficulty is only enhanced when any one of five specific nations has the ability to submarine any legitimate binding action which it may perceive as a threat to its own interests. Think of it this way: imagine that for any piece of legislation to pass through the House of Representatives both the House Majority and House Minority leaders, along with their respective whips, would have to unanimously vote in favor of the legislation. Take a wild guess how many pieces of non-mandatory legislation would pass. Such is the situation that exists in the United Nations. No wonder it is such a ‘shock’ that the United Nations does not accomplish more with regard to its world peace initiative.

Unfortunately, from the beginning the United Nations seemed doomed as a meaningful entity in shaping global events and acting as a global peacekeeper. The negotiations that created the United Nations steadily moved from the idyllic goal of uniting the world against any single group that elected to cause trouble to the disjointed sphere-of-influence power structure seen now. One of the reasons for this transition was the role of the Soviet Union. After World War II the United States and the Soviet Union emerged as the two dominant power bases in the world. However, both sides were wary of one another and of their corresponding roles in the United Nations. This was especially true for the Soviet Union, which believed that the United Nations would simply be a pawn of the United States and the West, utilized as an avenue to destabilize the Soviet Union through indirect means. Given the feelings regarding communism at the time, such a concern was warranted. Therefore, the Soviet Union demanded the ability to negate any decisions made by the United Nations that it did not agree to, thereby eliminating the possibility of the United Nations being used to destabilize the Soviet Union or hinder its global activities. The lack of Soviet participation in the United Nations would have stripped it of any legitimacy, thus the request was honored. Clearly the United States could not allow the Soviet Union to be the only powerbroker in the United Nations, and nationalistic fervor from its allies, Britain and France, despite their waning influence on the global stage, led to their demand for a seat at the proverbial table. This mindset resulted in the creation of the Security Council and the veto. It is ironic that the very thing that was created to make sure the United Nations would have some level of significance precludes it from having genuine significance.

In past years it was ideology, largely stemming from the debate between capitalism and communism, that limited the influence of the United Nations. In the present, that economic philosophy is no longer the political football it once was, and now simple economics handcuff the United Nations. The expansive national economic interests certain countries have in ‘trouble’ countries push the United Nations toward inaction in an effort to protect those interests. This complicated network of linked economic relationships derails the peer-pressure tactics of the United Nations, in which every other country in the world threatens some form of economic or military consequence against the offending country.

In addition, the United Nations can exemplify the worst in international cooperation due to the tiered system generated by the existence of the Security Council. Member nations with no access to the powers of the Security Council are akin to little children who have no authority to defy the wishes of their parents and little to no influence on the decision-making process. It is true that there are 10 rotating seats on the Security Council, but none of those seats have veto power. In some respects the creation of the United Nations can be related back to the Constitution of the United States and the debate between large-population states/commonwealths and small-population states/commonwealths regarding the structure of the federal government.

Obviously, in that debate the representatives of the large-population states wanted a unicameral legislative branch under the rationale that states with larger populations should have the most say in the federal government because their residents would be most affected by federal policy. Of course the representatives of the small-population states objected to a unicameral legislature because, and rightly so, they felt that such a system would confine their states to a 'second-class' standing instead of being on equal footing with the larger-population states. Such a system would also be difficult to change because few individuals would move from a position of greater power to one of less power, generating a high probability that large states would always remain relatively large and small states would always remain relatively small. The founding fathers realized the small-population states were essential to the formation of the United States, so a bicameral legislature was created to accommodate the wishes of both the large-population states and the small-population states.

In the formation of the United Nations the input and opinions of the smaller, less powerful countries were not considered important to the functionality of the United Nations, so no system was established that allowed the smaller countries to voice their opinions where they would be legitimately considered. One could argue that the ‘one country, one vote’ system in the General Assembly is the fairest possible, but because nothing the General Assembly does, outside of monetary issues, is binding, its level of fairness is moot. Basically the real decisions in the United Nations are made by the Security Council and a few other more powerful non-Security Council members, and everyone else is just window-dressing for the illusion of fair play.

Both of the aforementioned problems are compounded by the fact that the United Nations relies on its member nations for everything; it has no permanent stand-alone status in the global community. The United Nations does not have its own standing army, but instead is reliant on the combined forces volunteered by certain members to neutralize conflict. The United Nations has limited economic power or influence because it does not participate in the global economy outside of its charitable arm, so economic sanctions and influence are again dependent on member nations. For the most part the more powerful countries simply do whatever they want regardless of the 'official' position of the United Nations. In their eyes, if the United Nations approves of their action(s), that is great, but if it does not, whatever; the opinion of the United Nations will not stop them. The hypocrisy of this attitude among the stronger countries is that they expect the weaker countries to fall in line with the 'official' position of the United Nations even when they themselves do not.

These two major problems with the United Nations need to be addressed if the United Nations is going to evolve and be relevant in the coming decades as a means to end a majority of significant conflict, not just World War-sized conflict. The problem of inaction due to the Security Council veto could be addressed by removing the significance of a single veto from United Nations edicts. Simply change the policy of the United Nations so that if an issue receives only a single veto from a permanent Security Council member, the issue still becomes the official position of the United Nations and a binding resolution from the Security Council. If a certain position receives two or more vetoes then the veto is upheld. Installing this updated system would require that at least two powerful nations object before an action or position of the United Nations can be blocked; thus its ability to act cannot be handcuffed by the special interests of a single country. Unfortunately the probability of the nations making up the Security Council accepting this change is small because it enables the very thing that these nations most feared when establishing the United Nations: the global community acting against or opposed to the wishes of that particular country.

Any opposition would be odd, though, for with the dilapidated state of the United Nations regarding military and strategic issues in the modern era, one wonders why the Security Council nations would be bothered by this change. As previously indicated, whether or not the United Nations favors a given nation's action is treated as a passing thought by the leaders of the nation in question, not as a significant factor that would prevent action. Perhaps the lack of respect given to the United Nations stems directly from the power of the single veto; remove that power and the United Nations becomes a much more significant force.

Addressing the second concern requires more thought, tact and negotiation. Regarding imbalance, little significant change has occurred in narrowing the divide between the strong countries and the weak countries, with only a few notable exceptions, despite the advent of globalization and advances in technology over the last half century. Therefore, it would be hard to argue that nations with no significant economic or military influence on the world should have equal input regarding the actions of the United Nations.

For instance, the United Nations system can be related to the following example. Suppose there are 3 individuals, A, B and C, deciding between various solutions to a problem. Person A is responsible for supplying 85% of the physical and financial resources for the solution while persons B and C supply only 7.5% each; however, each person has one vote when deciding on the actual solution. This situation does not appear to be fair: each individual gets an equal say in the decision-making process, but person A has to contribute the bulk of the resources to execute the decided-upon solution regardless of whether or not he agrees with it. Although the resource expenditure distribution should not matter if the optimal solution is attained, typically the complexities of the decision-making process when many different parties are involved make it difficult to uncover that ideal solution, especially when countries tend to look after their own interests. One rationale for reform could suggest that the stronger countries have been screwing over the weaker countries for decades, so it is time for the stronger countries to ‘take one for the global team’, but it is unlikely that such reasoning would be positively received.

Also there is the question of what is to be done about nations that once had significant influence, but no longer do. Clearly it is difficult for a given country to acknowledge that its influence and relevance on the world stage is no longer significant. For instance, besides the fact that it possesses nuclear armaments, what real influence does France still have on the world? Once the era of oil and fossil fuels ends, what influence will Russia have? Would it be appropriate to replace these nations on the Security Council with upstart nations such as Brazil and India? Should veto power on the Security Council simply be reserved for the 5 or 6 nations with the highest GDPs? During the time that Kofi Annan was U.N. Secretary-General, one of the issues he tried to address was this difference between 1945 influence levels and present-day influence levels, but little came of it.

The best option may be to invoke the representative nature of the House of Representatives. Each nation in the United Nations would be allotted a specific number of votes pursuant to its relative GDP. Under such a dynamic the more powerful and wealthy nations would still have a significant amount of power, but the smaller nations would not be powerless in the decision-making process. Adjustments in vote allotment based on GDP position could be made every five years. However, under such a system the influence of the Security Council would have to change, a change that is currently undetermined.
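As an illustration of how such a GDP-weighted allotment might work, here is a minimal sketch; the country labels, GDP figures and the total pool of 1,000 votes are all hypothetical assumptions for illustration, not proposals from the text:

```python
# Hypothetical GDP-weighted vote allotment. GDP figures and the total vote
# pool are illustrative assumptions only.
gdp_by_nation = {"Nation A": 14_000, "Nation B": 4_900, "Nation C": 2_700,
                 "Nation D": 1_300, "Nation E": 100}   # GDP in billions (assumed)
TOTAL_VOTES = 1_000

world_gdp = sum(gdp_by_nation.values())
votes = {nation: max(1, round(TOTAL_VOTES * gdp / world_gdp))
         for nation, gdp in gdp_by_nation.items()}     # every member keeps >= 1 vote

for nation, v in votes.items():
    print(f"{nation}: {v} votes")
```

Guaranteeing at least one vote per member preserves some voice for the smallest economies, echoing the bicameral compromise described above.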

Overall it appears that if the United Nations is going to make a legitimate effort at curbing all forms of global violence, its member states need to move beyond the reliable staple of short-term cost-benefit analysis. Most conflicts that are not mediated by the United Nations are regional conflicts with little benefit to outside interests. Therefore, intervention in these conflicts takes on more of a humanitarian role instead of a strategic or economic one. For example, outside of express reasons of morality, United Nations involvement in Darfur appears, on a pure cost-benefit basis, to incur greater detriment than benefit.

If the above philosophy switch is too daunting, if member nations continue to rely on cost-benefit analysis, then perhaps, like so many things, the role of the United Nations must simply evolve to maintain relevance. With all that has been said about the performance of the United Nations on security issues, it does a fine job acting as a mediator or middleman for the distribution and operation of humanitarian programs that might not otherwise exist. Outside of the Oil-for-Food program, which was corrupted by a small number of individuals, one is hard-pressed to find valid criticism of United Nations-led humanitarian and charitable efforts. Therefore, should the future of the United Nations simply be that of a humanitarian entity and not one that involves itself in the broader disputes of the world stage? The answer to that question would be largely influenced by the resources and time saved by eliminating the brunt of the major military engagements in which the United Nations participates. If the savings are large then it is worth looking into diverting some or most of those resources to the humanitarian division; if the savings are small then transferring those resources will make little difference in enhancing humanitarian effectiveness.

The first step to United Nations reform and renewed legitimacy is to define its present-day role, then, based on that role, define the elements that need to be enhanced, redistributed or eliminated to better achieve it.

Wednesday, July 15, 2009

Emission Adherence in 2020 and 2030 under the American Clean Energy and Security Act (ACES)

Revisiting the previous investigation of the ACES energy gap, it was concluded that although the modeling and analysis itself was correct, an unrealistically high value was selected for the anticipated electricity demand, which generated a final conclusion that did not explore the entire range of growth possibilities. Therefore, it was important to conduct a second investigation using a greater range of anticipated electricity demands to generate more accurate expectations regarding renewable energy and efficiency requirements. This secondary investigation also adds an additional level of complexity by tracking emission and electricity generation expectations from 2020 to 2030 in addition to other more specific elements.

Recall from the previous analysis that the electricity generated from various sources in the United States is shown in the table below.1



[Table: U.S. electricity generation by source for 2006 and 2007 (all values in MW-h); solar includes both thermal and photovoltaic.]

Also recall from the previous analysis that it is logical to expect that a significant majority of the emission cuts in the United States will come from the transportation sector and the energy generation/use sector. The energy sector accounted for 53.6% of total emissions in 2007 and 64.8% of CO2 emissions (3,902.3 million tons), and the transportation sector accounted for another 27.66% of total emissions and 33.45% of CO2 emissions (2,014.4 million tons).2,3 Overall it would be difficult to expect significant cuts from the agricultural sector, not only because it is responsible for a lower percentage of emissions (most of these emissions being other GHGs, not CO2), but because the emissions associated with the agricultural sector are more difficult to control than those in the transportation or energy sectors and most do not fall under any initial phase of the ACES. In addition, since the first energy gap post, reductions in agricultural emissions have become even less probable, especially leading up to 2020, due to certain concessions given to the agricultural sector to ensure passage of the ACES in the House.

General Analysis Assumptions –

The ACES is passed by the House and Senate as is (a 17% reduction from 2005 emission levels by 2020 and a 42% reduction by 2030)

The reason for this assumption is that the analysis must have an emission reduction target and the one provided by the ACES makes the most logical sense to use because it currently has the highest probability of actually becoming reality.

By a given target year, carbon emissions will be reduced to 100% of the cap.

What is the point of even conducting the analysis if the emission cap is not successful at reducing emissions? On the other hand, it is probably unrealistic to expect a significant emission reduction beyond the cap.

Economic considerations are ignored.

Initially one might view this assumption as unrealistic and irresponsible, but the purpose of this analysis is to identify possible solutions for bridging the energy gap while meeting the emission cap, not to investigate the most economically efficient solutions. In addition it is difficult to make cost estimates for certain energy sectors over a decade into the future due to changing technology and demands.

One of the biggest problems with the ACES discussion is the economic distraction. So many people are debating the total cost of meeting the caps that they seem to forget the critical questions: can the caps actually be attained in the first place and, if so, what are the necessary expectations to do so? Economics are meaningless if the goal is not attainable, because improper decisions will be made.

No offset considerations were included in this analysis.

The goal of this analysis was to develop a strategy in which one would have a level of rational confidence regarding the energy requirements for both the successful attainment of the emission cap and the bridging of the energy gap. Offsets cannot be regarded as genuine emission reductions 100% of the time (in fact no one can really define a genuine percentage for offsets, although a range from 33 to 67% has been thrown around). Clearly, due to this lack of certainty, inclusion of offsets would be counter-productive to a real analysis concerning the energy gap. Would the inclusion of offsets lessen the required growth for all other energy suppliers? It is highly probable that they would; however, it is difficult to determine an accurate assessment due to the lack of a defined percentage, or even an estimate of how many offsets will be purchased from now until 2020 or 2030; therefore it is not rational to include them in the analysis.

Emission reductions from the electricity generation sector will primarily involve reducing the amount of coal burned.

Coal is commonly regarded as the ‘dirtiest’ form of energy. For every 1 MW-h of coal-generated electricity approximately 1 ton of CO2 is released into the atmosphere, whereas natural gas and petroleum release only 0.4-0.5 and 0.75 tons of CO2 per MW-h of energy produced.4 Therefore, an optimized emission reduction scheme would remove the highest polluting entities first. Also, petroleum only produces approximately 1.5% of the electricity in the United States, thus any petroleum cuts would be meager anyway.

Any changes in atmospheric methane, sulfur dioxide and nitrogen oxides (NOx) concentrations are insignificant.

This assumption is probably not very accurate because realistically it is highly probable that concentrations of methane and various nitrogen oxides will increase, but the additional reduction requirements they would impose are not easily estimated and could skew the analysis. Basically this assumption hopes for a favorable outcome with regard to other GHGs. Overall, under all 3 early phases of the ACES only a very small percentage of CO2-equivalent GHGs are capped;5 a measly drop in the bucket.

No Carbon Capture and Sequestration/Storage (CCS) technology is implemented.

The fact is that development of CCS technology is more than likely not going to provide any real benefit in the quest to reduce CO2 emissions due to its low probability of success. Even if CCS technology is successfully deployed in the near future, it is unlikely to be incorporated into a significant number of coal-derived electricity generation systems.

All reductions in the transportation sector come from either increased fuel efficiency or use of gasoline/biofuel blends where the biofuel is derived from an algae source.

This assumption is a bit of a stretch, but a vast majority of the early reduction in transportation emissions is going to come from increased fuel efficiency and incorporation of gas/biofuel blends. Although hybrids, plug-ins and electric vehicles have received a significant amount of attention, until an automotive infrastructure supporting them is better established, it is difficult to conclude that their impact will be significant through widespread incorporation. Overall, as determined in the previous analysis regarding reductions due to the White House’s new fuel economy policy, increases in electricity demand due to electric and hybrid cars would be small.


The analysis consisted of two parts. First, re-examining the information from 2007 to 2020 using more realistic anticipated electricity demands to determine the necessary natural gas growth rate to meet required energy needs as well as adherence to the emission cap. Second, extending the analysis from 2020 to 2030 to determine the required growths in renewable energy providers, especially wind, to meet required energy needs as well as adherence to the emission cap.

Based on information acquired in both the first energy gap analysis and the transportation analysis, certain restrictions were placed on the possible scenarios applied to both portions of the investigation. For instance, instead of utilizing three scenarios of transportation emission reduction (10%, 20% and 30%) as in the first analysis, this second analysis abandoned the 30% scenario for the 2020 analysis and the 10% scenario for the 2030 analysis. Therefore, only two transportation reduction scenarios were used in the investigation: 10% and 20% for 2020 and 20% and 30% for 2030. The reductions were connected in logical succession for the second linked analysis: a 10% reduction in 2020 led to a 20% reduction in 2030, while a 20% reduction in 2020 led to a 30% reduction in 2030.

Once again the energy providers that were explored to fill the gap consisted of nuclear, wind, solar, biomass, geothermal and natural gas. It was assumed that there would be no significant growth in the petroleum or hydroelectric sectors. Petroleum was excluded because, similar to natural gas, petroleum is not a trace/zero-emission energy provider, so any increases would not result in a significant enough emission reduction versus coal. Also, petroleum only makes up approximately 1.5% of total energy generation anyway, so any increase or decrease in petroleum as an electricity provider to successfully adhere to the 2020 cap would be rather insignificant versus the other reductions that have to be made. Hydroelectric was excluded because the overall growth rate of hydroelectric stations has essentially peaked and energy generation has largely cycled within a range of 240,000,000 MW-h to 290,000,000 MW-h since 2000.1 Any tide-based hydroelectric power was considered insignificant based on its growth potential and total electricity generation potential.

The 2020 and 2030 emission caps outlined in the ACES are 17% and 42% reductions, respectively, from the 2005 total carbon dioxide equivalent, identified in Section 721, subsection e, Part 2, Section A, subsection i of the ACES as 7,206 million tons. Therefore, the emission cap would be 5,980,980,000 tons of carbon dioxide equivalent for 2020 and 4,179,480,000 tons of carbon dioxide equivalent for 2030.
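The cap arithmetic is straightforward and can be reproduced directly from the figures above; here is a minimal sketch using only numbers stated in the text:

```python
# Emission caps implied by the ACES reduction targets, using the 2005
# baseline of 7,206 million tons of CO2 equivalent cited in the text.
BASELINE_2005_TONS = 7_206_000_000

for year, reduction in ((2020, 0.17), (2030, 0.42)):
    cap = BASELINE_2005_TONS * (1 - reduction)
    print(f"{year} cap: {cap:,.0f} tons CO2e")
# 2020 cap: 5,980,980,000 tons CO2e
# 2030 cap: 4,179,480,000 tons CO2e
```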

In addition to the energy gap that is generated by removing coal and natural gas from the electricity grid to meet the emission caps, it is reasonable to anticipate that additional electricity will be demanded in both 2020 and 2030. According to the EIA, the electricity demand for 2007 averaged approximately 3,904,400,000 MW-h.6 The EIA estimates low, average and high additional demand scenarios for 2030 in which additional demand is approximately 16%, 26% and 36% respectively. From this information, assuming a 55%/45% ratio of progression from 2007 to 2020 versus 2020 to 2030, additional demands were calculated and are shown in the table below.
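A minimal sketch of how those anticipated demands can be derived from the stated figures; applying the 55%/45% ratio to the additional 2030 demand is my reading of the stated assumption:

```python
# Anticipated electricity demand for 2020 and 2030 under the EIA low, average
# and high scenarios, assuming 55% of the additional 2030 demand materializes
# by 2020 (the 55%/45% progression described in the text).
DEMAND_2007 = 3_904_400_000  # MW-h

for label, extra_2030 in (("low", 0.16), ("average", 0.26), ("high", 0.36)):
    demand_2030 = DEMAND_2007 * (1 + extra_2030)
    demand_2020 = DEMAND_2007 * (1 + 0.55 * extra_2030)
    print(f"{label:>7}: 2020 ~{demand_2020/1e6:,.0f} million MW-h, "
          f"2030 ~{demand_2030/1e6:,.0f} million MW-h")
```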

Again, due to the efficiency measures outlined in the ACES, in addition to increased public awareness of the importance of efficiency, different efficiency scenarios were explored for both the 2020 and the 2030 analyses. Five linked efficiency scenarios were applied to the investigation: 0% to 30%, 30% to 50%, 30% to 80%, 50% to 100% and 80% to 100%, representing progression from 2020 to 2030.

The EIA low, average and high electricity demand scenarios are predicated on two elements: the demand required for existing infrastructure and the demand required for future infrastructure, with weighting on the demands of future infrastructure. Most of the immediate reductions in electricity demand will come from efficiency applications to existing buildings, later extending to new buildings. However, although increases in efficiency will reduce electricity demand, they will not eliminate all of the electricity demanded by new infrastructure; therefore, it is reasonable to believe that from 2007 to 2030 the total electricity demand will not drop below 2007 levels.

In the report “International Energy Outlook 2009” the EIA estimates growth trends for various forms of energy and fuels up to 2030 for a variety of countries, including the United States. Using the growth estimates from this report and other available EIA information, will enough energy be generated to bridge the gap? From the information, an annual growth rate from 2006 to 2020 can be estimated for wind, nuclear and geothermal.7,8,9 In the first analysis a biomass growth rate was assumed rather than calculated from projected data. For this analysis a biomass growth rate was calculated from EIA projection estimates.6 Solar photovoltaic and solar thermal growth rates were also calculated, but because EIA information on 2007 electricity generation does not differentiate between the two, the larger of the two calculated growth rates was used to model the growth of the solar power sector.10 Calculated annual growth rates were 8.165%, 0.65%, 2.94%, 6.71% and 11.17% for wind, nuclear, geothermal, biomass and solar respectively.
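To show how these annual rates translate into 2020 generation, here is a minimal compounding sketch; only wind is shown, using the 34,450,000 MW-h baseline for 2007 mentioned later in this post, and the other sources would follow the same form from their own baselines in the first table:

```python
# Compound a 2007 generation baseline forward to 2020 at a constant annual
# growth rate. Only wind is shown; its 34,450,000 MW-h baseline is the 2007
# figure cited later in the post.
def project(baseline_mwh: float, annual_rate: float, years: int) -> float:
    """Generation after compounding `annual_rate` for `years` years."""
    return baseline_mwh * (1 + annual_rate) ** years

wind_2020 = project(34_450_000, 0.08165, 2020 - 2007)
print(f"Wind in 2020 at 8.165%/yr: ~{wind_2020/1e6:.0f} million MW-h")
```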

However, the annual growth rates calculated above differ from the growth rates experienced between 2006 and 2007, as shown in the first table. Most of the growth rates between 2006 and 2007 exceed those calculated from the long-term EIA estimates. The electricity generation potential of trace/zero-emission sources was also examined using these growth rates. Recall that in the first energy gap analysis the continuation of a 29.56% annual growth rate for wind was viewed as unreasonable and a maximum hypothesized annual growth rate of 20% was utilized. Renewable growth rate scenarios were labeled as either standard (EIA estimates) or 06-07.

Inclusion of natural gas complicates adherence to the emission cap because natural gas is not a trace/zero-emission source. Natural gas releases about 1 ton of CO2 per 2.5 MW-h of energy generated (assuming the most efficient energy generation process). In order to compensate for these emissions, a necessary step to ensure adherence to the emission cap in 2020 or 2030, a coal masking reduction rate of 20% was assigned. For instance, suppose natural gas produces an additional 30,000 MW-h of electricity from year x to year y. That new electricity would produce 12,000 tons of additional CO2 emissions. Utilizing the aforementioned coal masking reduction rate, 20% of those new emissions, 2,400 tons of CO2, would be masked by removing an additional 2,400 MW-h of coal-generated electricity from the grid (coal generating approximately 1 ton of CO2 per MW-h). The coal masking rate of 20% was selected to create a controlled rate of decline in coal-derived electricity production in order to limit the probability of potential brownouts.
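The same worked example, expressed as a small sketch so the masking arithmetic is explicit (the 30,000 MW-h increment is the example value from the text):

```python
# Coal 'masking' of new natural gas emissions, following the worked example.
GAS_TONS_CO2_PER_MWH = 1 / 2.5   # ~0.4 tons CO2 per MW-h of natural gas
COAL_TONS_CO2_PER_MWH = 1.0      # ~1 ton CO2 per MW-h of coal
MASKING_RATE = 0.20              # fraction of new gas emissions offset by coal removal

new_gas_mwh = 30_000
new_gas_emissions = new_gas_mwh * GAS_TONS_CO2_PER_MWH            # 12,000 tons
masked_emissions = new_gas_emissions * MASKING_RATE               # 2,400 tons
extra_coal_removed_mwh = masked_emissions / COAL_TONS_CO2_PER_MWH # 2,400 MW-h

print(f"New gas emissions: {new_gas_emissions:,.0f} tons CO2")
print(f"Masked by removing {extra_coal_removed_mwh:,.0f} MW-h of coal generation")
```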

In addition to the transportation and electricity generation sectors, CO2 originates from other sources as well, especially manufacturing. In fact there are still approximately 1,600 - 1,700 million tons of CO2 (the range reflecting some overlap between sectors) that can be reduced from other sectors that eventually fall under the ACES cap during one of the three phases (Phase 1 begins in 2012, Phase 2 begins in 2014 and Phase 3 begins in 2016). Based on when a particular emission element fell under the cap, estimates of reduction were calculated from this group of emission elements for 2020 and 2030. For the 2020 cap it was assumed that 17% of a sector's emissions would be reduced by 2020 if the sector was 100% capped under Phase 1, 12.5% if 100% capped under Phase 2 and 8.5% if 100% capped under Phase 3. For the 2030 cap it was assumed that 35% of the emissions would be reduced by 2030. The reason 35% was selected over 42% is that it was hypothesized that it would be easier to make emission reductions in the electricity and transportation sectors than in the manufacturing and other specialty sectors. Overall an additional 210,171,166 tons of CO2 were removed from these sectors in 2020 for the 2020 cap and 596,015,000 tons of CO2 were removed from these sectors in 2030 for the 2030 cap. The table below illustrates the total emission assumptions made for each sector from available cap information.5

* Values are in tons of CO2; Also the 2020 total is off by 1 due to rounding;
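A minimal sketch of how those other-sector reductions would be computed; the per-phase tonnages below are hypothetical placeholders rather than the actual sector totals, and only the percentage assumptions come from the text:

```python
# Other-sector (non-electricity, non-transportation) CO2 reductions,
# applying the phase-dependent reduction percentages from the text. The
# tonnage split across phases is a hypothetical placeholder; the real values
# came from the sector cap data.
reduction_by_phase = {1: 0.17, 2: 0.125, 3: 0.085}      # fraction reduced by 2020
capped_tons_by_phase = {1: 900e6, 2: 500e6, 3: 300e6}   # hypothetical split, ~1,700 Mt total

total_2020_reduction = sum(capped_tons_by_phase[p] * reduction_by_phase[p]
                           for p in (1, 2, 3))
total_2030_reduction = sum(capped_tons_by_phase.values()) * 0.35  # 35% by 2030

print(f"2020 other-sector reduction: ~{total_2020_reduction/1e6:.0f} million tons CO2")
print(f"2030 other-sector reduction: ~{total_2030_reduction/1e6:.0f} million tons CO2")
```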

One of the primary goals of the analysis was to determine the minimum required growth rate for natural gas to cover the energy gap created due to adherence to the emission cap in the assigned scenarios. The results of the 2020 analysis are shown in the table below –

* Growth Rates are listed as annual growth rates pertaining to the time period between 2007 and 2020;
** NG Increase = % Increase in energy generated by natural gas utilized for electricity from 2007 levels; 100 = multiplying the 2007 amount (896,590,000 MW-h) by 2;

First, the scenarios are categorized as follows: [Efficiency; Renewable Growth Rates; Transportation Reduction; Anticipated Electricity for 2020]. Second, only four complete scenarios are listed in the table because the 30% efficiency scenario was repeated twice in anticipation of the 2020 to 2030 investigation (30% to 50% and 30% to 80%).

As expected, the required natural gas growth rate decreases as efficiency, transportation reduction or renewable growth rates increase. The optimal scenario generates a meager natural gas growth rate of only 1.80% whereas the worst-case scenario requires a natural gas growth rate of 10.99%. From a cursory glance at the results, transportation reduction appears to be the least significant factor influencing the natural gas growth rate. The most influential factor appears to be the use of 06-07 renewable growth rates versus standard, as 06-07 rates generate 2-4% lower natural gas growth rates than standard when all other factors remain the same.

The initial lack of influence from the transportation reductions was surprising, so a specific breakdown analysis was conducted to identify how single-percent changes in a given factor influenced the natural gas growth rate. The comparison was made between the transportation reduction rate per percent change and the wind energy growth rate per percent change. Wind was selected because, outside of natural gas, wind typically accounted for 60-85% of the new electricity generation from renewable sources in the investigation due to its large initial baseline (34,450,000 MW-h) and its large annual growth rate range (typically larger than all other growth rates except solar, which has a 56-times-lower baseline). The only other relevant selection for exploring renewable influence would have been nuclear, and it is difficult to expect significant growth in nuclear in the coming decade. The results of this analysis are illustrated in the graphs below, the first representing changes in transportation reductions and the second representing changes in the wind growth rate.


From the above information, when broken down to a per-percentage-point basis, transportation is slightly more or slightly less influential than wind depending on the circumstances, although most of the time transportation is more influential. The reason the renewable growth rates appear to be more influential in the 2020 analysis is that the percentage range between different transportation reductions is smaller than the percentage range between wind growth values, and there are other elements contributing to the renewable influence besides wind.

A somewhat troubling result is the rate of increase in natural gas electricity generation that will be required in the next decade. With the exception of the most favorable scenarios, most scenarios anticipated at least a 100% increase in natural gas requirements. Unfortunately, due to the significantly high annual renewable growth rates utilized in most of the favorable scenarios, these scenarios are not probable in reality. Another concern is that even at an efficiency of 80%, without significantly high renewable growth rates the required natural gas growth rate still ranges from 6.22% to 8.13%. The reason such values should be a concern will be discussed later.

Finally, consider the rate of coal loss and the total amount of coal removed from electricity production. The reduction rate of coal is important because, despite what some may want to believe, it would be difficult to rapidly remove coal from electricity production without significant economic costs and rolling brownouts. Therefore, the optimal scenario solutions involve a reasonable reduction rate. Unfortunately it is difficult to ascertain what a reasonable reduction rate is, but something in the single digits seems manageable.

In this vein it is important to look at the masking rate, which has a significant influence on the rate of coal loss. The graphs below document the influence of the masking rate on the natural gas growth rate under specific scenario elements. The baseline assigned for this investigation was 50% efficiency, 10% transportation reduction, Standard renewable growth rates and average anticipated future electricity demand.




* Note that the small hump before the equilibrium point is a visual error in the creation of the graph. The equilibrium point is the highest natural gas growth rate generated from the data;

The minimum required annual natural gas growth rate increases almost linearly with the masking rate until equilibrium. The equilibrium point occurs when all existing coal used for electricity production has been removed. The reason the natural gas growth rate increases with an increasing masking rate is that when coal is removed from electricity generation it increases the existing electricity shortfall, thus more natural gas needs to be burned to cover that portion of the gap. Granted, the more natural gas that is substituted for coal the greater the CO2 emission reduction, but the size of the emission cushion is irrelevant for this portion of the analysis. However, the cushion would play a role in aiding any future required emission reductions. Overall, for the masking rate it was important to avoid generating an equilibrium value, but also to select a value that would result in the removal of enough coal to cover the natural gas emissions and adhere to the cap, which is why 20% was selected in the first place.

The second portion of the investigation was to identify the necessary elements to adhere to the emission cap for 2030 (58% of 2005 levels, or a 42% reduction) and bridge the resultant energy gap. The analysis was conducted by linking a 2020 analysis to a 2030 analysis while assuming a natural progression in both efficiency and transportation. For example, suppose a 2020 analysis scenario consisted of 30% efficiency and a 20% transportation reduction. In the linked 2030 analysis the efficiency would be either 50% or 80% with a 30% transportation reduction. In order to optimize the ability to meet the emission cap, all growth in natural gas stopped after 2020. An additional wind growth rate was assigned for the 2020 to 2030 time period. Assigned natural gas growth rates carried over from the 2020 investigation to the appropriate linked 2030 investigation. The design required that all coal use in electricity production be eliminated by 2030. Note that realistically a 30% reduction in transportation emissions from 2007 levels is fairly optimistic.

The table below outlines the results of the 2030 portion of the analysis.

* Relates to the specific wind growth rate assigned between 2020 and 2030;
** Relates to the difference between the maximum amount of electricity generated from natural gas in 2020 to the amount of electricity generated from natural gas in 2030;

One thing that can be immediately recognized in the results is the considerable difference in the natural gas reduction rate and the total remaining natural gas used for electricity production between the 20% and 30% transportation reduction scenarios. The primary reason the transportation reduction factor has a much more pronounced influence on the 2020-2030 reduction than on the 2007-2020 reduction is the general lack of available coal.

During the 2007-2020 investigation there was plenty of coal that could be removed from the grid to contribute to the emission reductions required to meet the cap. In fact there was a total of 2.016 billion tons of CO2 that could be removed to meet a cap demanding a reduction of approximately 1.301 billion tons (recall that 2007 emission data is used because it is the most relevant available). Replacing all of that coal with natural gas would generate a net savings of 1.2096 billion tons of CO2, meeting approximately 92.9% of the cap. Add in the 200+ million ton CO2 cut from manufacturing and transportation reductions were technically not required, although they were important in the sense of complementing the level of natural gas and renewable growth needed to cover the electricity lost from coal. However, during the 2020-2030 investigation 40-90% of the coal had already been removed from the equation, leaving most of the electricity-based reductions to come from reducing natural gas. The difference between a 20% and a 30% transportation reduction is approximately 201 million tons of CO2. That difference is equivalent to 502.5 million MW-h of electricity produced by natural gas, which ranges from 19.5% to 214% of the total required additional natural gas production in 2020 over the various scenarios, clearly a significant difference maker. Although transportation and other non-electricity emission reductions are important, the size of the renewable growth rates also plays a role in controlling the natural gas reduction rate: using the 06-07 growth rates resulted in a 2-5% reduction in the required natural gas reduction rate.
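A quick spot-check of that arithmetic; all inputs are the figures quoted in the paragraph, and the 0.4 ton-per-MW-h gas intensity is the rounded value implied by the 1.255 kW-h per lb figure used later in this post.

coal_co2_removable = 2.016e9    # tons of CO2 from coal-fired electricity available for removal
cap_reduction = 1.301e9         # tons of CO2 reduction demanded by the 2020 cap

gas_fraction_of_coal_emissions = 0.40   # gas assumed to emit ~40% of coal's CO2 per MW-h
net_savings = coal_co2_removable * (1 - gas_fraction_of_coal_emissions)
print(net_savings / 1e9)                 # ≈ 1.21 billion tons
print(net_savings / cap_reduction)       # ≈ 0.929, i.e. about 92.9% of the cap

gas_co2_per_mwh = 0.4                    # tons of CO2 per MW-h of gas-fired electricity
transport_difference = 201e6             # tons of CO2 between 20% and 30% transportation cuts
print(transport_difference / gas_co2_per_mwh / 1e6)   # ≈ 502.5 million MW-h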

Another conclusion that may seem unusual at first is that, at 100% efficiency, all of the 2020-2030 wind growth rates are identical regardless of the assumed anticipated growth whenever the natural gas reduction rate was not 0%. This occurs because at 100% efficiency all of the anticipated growth is eliminated, so the size of the anticipated growth is meaningless; the required energy is simply equal to the energy lost from reductions in coal and natural gas output.

The natural gas reduction rate is an interesting element because reducing the amount of electricity generated from natural gas is much easier than increasing it. However, the higher the reduction, the greater the perceived economic consequences, because the larger the required drop, the more radical the transition from natural gas to renewable energy, which will result in greater job losses and electricity interruptions.

Another piece of useful information acquired from the secondary investigation is the influence of the 2007-2020 study on the anticipated natural gas reduction rate. For example, there is no difference between the natural gas reduction rates in the 50% and 80% efficiency scenarios of the 2020-2030 analysis. The reason is that, with all else equal, the 30% efficiency scenario of the 2007-2020 analysis that links into both the 50% and 80% 2020-2030 analyses established the same natural gas growth rate. In the first portion of the second investigation, natural gas and coal contributions to the grid were reduced to meet the emission cap; this reduction has nothing to do with increased efficiency because those reductions are required regardless. Efficiency would only matter in this situation if it exceeded 100%. However, the efficiency did influence the resultant wind growth rate from 2020-2030.

A final point regarding the second portion of the investigation is that four specific scenarios actually generated a condition where no natural gas reductions were required to adhere to both the energy requirements and the proposed emission cap. The reason for such a result is that the emission reduction from non-electricity sectors was considerably higher than required, owing to the lower initial natural gas requirement that followed from the high level of renewable growth in the 2007-2020 period. In fact these scenarios actually demonstrated a significant reduction in required wind growth due to the lack of natural gas loss.

All of the above investigations sought to generate a workable range of information pertaining to the relationship between any energy gap and the emission caps generated by the ACES. However, generating a realistic single scenario would go a long way to understanding what needs to be done, if anything.

For the 2020 portion of the specific analysis, based on the earlier discussion of the transportation sector it is reasonable, albeit a little optimistic, to anticipate a 12.5% reduction in transportation emissions from 2007 to 2020. Due to increased efficiency it is also reasonable to assume an anticipated electricity demand equal to the average of the low and average scenarios provided by the EIA, which would require an additional 447,903,500 MW-h of electricity. Energy savings due to efficiency increases were assumed to be approximately 1.07 quadrillion Btu. Finally, the same non-electricity and transportation reduction scheme used in the broader analysis above was used in this specific example (210,171,166 tons of CO2).
Annual renewable growth rates were typically estimated based on general trends. Wind was assumed to grow at 17%, slightly below the previously estimated maximum growth of 20% used in the 06-07 scenario of the above analysis. Solar was estimated slightly higher, at 18% annual growth, due to the much lower baseline of electricity it currently provides; however, it can be argued that growth in solar power has far and away the largest possible standard deviation based on potential future costs, and an annual growth rate anywhere from 10% to 35% would not be out of the question. Nuclear growth was limited to 0.8% due to the already high capacity factors of currently operating plants (90%+) and the improbability that a significant number of new plants will be constructed and fully functional by 2020, given high capital costs and generally lengthy construction times. Similarly, increases in geothermal-based electricity were kept small, at 1% annual growth, due to comparable concerns over plant construction times and a lack of attention because geothermal is not as hyped or flashy as wind or solar. Finally, the growth rate for biomass was estimated at a conservative 2.5% due to questions about feedstock supply and concerns that new barriers to its expansion would arise as coal loss reduces co-firing in coal plants. Similar to solar, biomass is difficult to gauge because of its wide range of growth potential. The results for the 2007-2020 analysis using the above scenario assumptions are shown in the table below.
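As a rough cross-check of what these growth assumptions imply, the sketch below compounds each source forward from approximate 2007 EIA net generation figures; the baselines are ballpark values inserted here for illustration, not the exact numbers used in the analysis.

baselines_2007_mwh = {      # approximate 2007 net generation (MW-h), inserted for illustration
    "wind": 34.5e6,
    "solar": 0.6e6,
    "nuclear": 806e6,
    "geothermal": 14.6e6,
    "biomass": 55e6,
}
growth_rates = {            # assumed annual growth, 2007-2020 (from the scenario above)
    "wind": 0.17,
    "solar": 0.18,
    "nuclear": 0.008,
    "geothermal": 0.01,
    "biomass": 0.025,
}

years = 2020 - 2007
for source, base in baselines_2007_mwh.items():
    projected = base * (1 + growth_rates[source]) ** years
    print(f"{source:10s} ~{projected / 1e6:7.1f} million MW-h in 2020")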

In a scenario that could very well be witnessed in reality, natural gas growth is manageable, but higher than most past growth, especially on a consistent basis. Recall that from 1996 to 2007 the highest year-to-year growth in natural gas as an energy source was 10.82%, from 1997 to 1998.11 2006 to 2007 produced the second highest year-to-year growth with the previously illustrated 9.81%. In addition, the projected annual growth rate of natural gas from 2006 to 2020 can be calculated at 0.78%, with 96.5% of that growth coming in the last 5 years (from 2015 to 2020).12 This growth also requires a total of 7.01 trillion cubic feet of natural gas devoted to electricity generation (4.04 trillion cubic feet more than was used in 2007), a result that will require a significant number of new natural gas acquisition projects or a considerable increase in natural gas importation. Remember that the 4.04 trillion cubic feet of additional natural gas applies to a single year, 2020. For most of the investigated scenarios, an additional 25-60 trillion cubic feet of natural gas will be required from 2010 to 2030 (this specific analysis required an additional 42.24 trillion cubic feet). One bright spot is that the coal loss, while significant, is controlled at only 8.94% annually with a total reduction of 71%.
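The coal-loss figure is easy to verify: an 8.94% annual reduction compounded over the 13 years from 2007 to 2020 removes roughly 70% of coal-fired generation, consistent with the total quoted above.

annual_reduction = 0.0894
years = 2020 - 2007
remaining = (1 - annual_reduction) ** years
print(f"coal remaining in 2020: {remaining:.1%}; total reduction: {1 - remaining:.1%}")
# -> roughly 30% remaining, i.e. a total reduction of about 70%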

For the 2020-2030 portion of the specific scenario investigation, another 10.5% reduction in transportation emissions was anticipated, along with efficiency gains resulting in savings of 2.94 quadrillion Btu, an additional 1.87 quadrillion Btu beyond the 2020 level. The non-electrical and transportation emission reductions were also carried over from the broad investigation.
Assuming no change in any non-wind renewable annual growth rates from 2020 to 2030, the results from the 2020-2030 investigation are shown in the table below.

Regarding the results of the 2020-2030 portion of the investigation, the natural gas reduction rate is manageable, but higher than desired. Unfortunately the biggest problem is the accelerated increase in the required annual wind growth rate to fill the energy gap. The reason this increase is a problem is outlined below.

The electricity demand from wind in 2030 in the most immediate analysis is 2,636,171,979 MW-h (approximately 76.5 times the amount produced in 2007). Based on information provided by the EIA, total installed wind capacity in the United States rose from 11,603 MW in 2006 to 16,818 MW in 2007. Using that information, an average full potential of operation can be calculated at approximately 2170 hours per year, or a capacity factor of 24.8%.

The largest wind farm in the United States is the Horse Hollow Wind Energy Center in Taylor and Nolan Counties in Texas, which produces 735 MW of peak power from 421 turbines and covers 47,000 acres, or approximately 64 acres/MW.13 Using this information, how much land and total capacity would be required to generate the anticipated wind-derived electricity in the above analysis? First, it is reasonable to assume that wind technology will not remain stagnant but will steadily improve; a 30% increase in turbine efficiency between now and 2030 seems reasonable, with the ability to retrofit older models. With this increase the ratio of acres per MW drops to 49.19. Next, assume that the average full potential of operation increases by 2% per year from 2007 to 2030 due to the deployment of offshore wind turbines and even the possibility of aerially suspended turbines capturing higher velocity winds more frequently. Even with those increases in efficiency and power generation, a total of 770.4 GW would still be required, covering a total land area of 59,210 square miles, to attain the required wind-based electricity in the above scenario, a requirement that even the extremely ambitious 300 GW scenario proposed by the EERE falls far short of.14
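The capacity and land-area figures can be reproduced directly from the stated assumptions:

wind_2030_mwh = 2_636_171_979       # MW-h of wind generation required in 2030
hours_2007 = 2170                   # average full-potential hours per year in 2007
hours_growth = 0.02                 # assumed annual improvement in full-potential hours
acres_per_mw_2007 = 64              # Horse Hollow benchmark
turbine_improvement = 0.30          # assumed turbine efficiency gain by 2030

hours_2030 = hours_2007 * (1 + hours_growth) ** (2030 - 2007)
capacity_gw = wind_2030_mwh / hours_2030 / 1000
acres_per_mw_2030 = acres_per_mw_2007 / (1 + turbine_improvement)
land_sq_mi = capacity_gw * 1000 * acres_per_mw_2030 / 640   # 640 acres per square mile

print(f"required capacity: {capacity_gw:.1f} GW")       # ≈ 770 GW
print(f"required land area: {land_sq_mi:,.0f} sq mi")   # ≈ 59,000 sq mi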

The sobering reality of the above wind requirements leads to the conclusion that if significant reductions are going to come from the electricity generation sector, other trace/zero emission renewables will need to be cultivated. Unfortunately, as previously mentioned, such a scenario does not look promising. One of the best options, nuclear power, is struggling because of high capital costs, extended plant construction times and a lack of technological development in the United States due to continuing concerns about terrorism and nuclear waste. Biomass is a huge question mark. Solar could grow at 30% annually over the next two decades and still be a relative non-factor in electricity production (255,533,901 MW-h in 2030). Hydroelectric is pretty much tapped out, outside of some small pickings through tidal generation. Geothermal has potential, but may need new geological mapping and a lot more attention. Add to all that uncertainty the fact that support and mandates for renewable energy growth continue to be weakened with every new draft of the ACES, and things do not look promising.

Another concern is the apparent competition between efficiency and renewable innovation and development. ACES Section 782, subsection g, allocates permits to a fund labeled the State Energy and Environmental Development (SEED) fund, from which state and local governments can draw money for efficiency and renewable projects. 20% of the SEED money must go to renewable energy programs and another 20% must go to energy efficiency, leaving the remaining 60% to be allocated freely between the two. The question is why funds must be divided between one or the other; why not devote considerable funds to both efficiency and renewable energy innovation and development? If competition is required, it appears that more funds should be directed to renewable development over efficiency because, although energy efficiency represents the quintessential ‘low-hanging fruit’ of emission reduction,15 efficiency can only go so far; in later years renewable energy will be far more important and will take far longer to implement. It seems that too many people are thinking too short-term due to short-term economics, but that type of thinking created these emission issues in the first place. It appears more suitable to let efficiency improvements take an extra 3-5 years if that same time frame can be shaved off of significant renewable development.

Returning to one of the analysis assumptions regarding offsets, what if offsets were used? Offsets would provide the ability to advance towards cap adherence without placing additional stress on the energy gap. The problem with offsets is that it is difficult to confirm whether or not they genuinely reduce CO2 or other GHG levels, as a number of sources have demonstrated their lack of reliability.16,17,18,19 So although one may argue that, numerically, offsets would significantly aid cap adherence while not increasing the potential energy gap, such an argument would be an exercise in futility because the Earth only cares whether those offsets are actually working in reality, not just on paper, and the whole point of the cap is to generate genuine emission reduction. It is also difficult to hypothesize the number of offsets that would be utilized. Some argue that because Europe has been drawing on international offset markets for a number of years now, the available international offsets would be few and far between and those available would carry significant costs.20 If this contention is correct, that leaves a cap of 1 billion tons of domestic offsets per year for substitution. However, domestic offset opportunities have not been effectively isolated and classified in the detail required to make a firm assumption about how prevalent their stockpiles will be over the coming decade or how effective they would be at contributing to emission reduction.

Anti-deforestation based offsets are a different matter. Although some problems remain regarding the additionality of deforestation offsets, the ACES as currently structured does provide some elements intended to reduce deforestation. However, counting emission reductions from deforestation against the cap can be a little tricky. Suppose that in a given year 300 million tons of CO2 are prevented from being released into the atmosphere due to anti-deforestation efforts. How does this reduction play against the cap? It would be incorrect to count these savings over and over because deforestation is a one-time release event. Therefore, it would be proper to count such savings against the cap in yearly increments equal to the total savings divided by the difference between the target year and the year the particular savings program began. For example, if that 300 million tons of CO2 were prevented from release in 2012, then 37.5 million tons of CO2 per year could be counted against the 1.2 billion tons of CO2 required for removal to adhere to the 2020 cap, rather than counting the 300 million tons all at once in 2012. Anti-deforestation measures are exceedingly important and should be pursued, but it does not appear that the procedures in the ACES will provide significant relief in balancing cap adherence against energy gap creation and the necessary size of renewable growth rates.
This perceived difficulty in meeting both the emission cap and the resultant energy gap invites an interesting, if somewhat controversial, strategy. One question that can be asked after looking at the results of this investigation is whether the 2020 cap of 17% is a good thing. Previously it was argued that the cap was too weak, that it needed to be stronger to generate the necessary momentum to carry into the more difficult 42% emission reduction demanded by the 2030 cap. However, the yo-yo effect of natural gas growth and decline seen in this analysis under most of the investigated scenarios casts doubt on the benefits of the 2020 cap. It does not make logical or economic sense to increase the electricity derived from natural gas by 75-300% (across the vast number of explored scenarios) over a 10-year period and then do an about-face and decrease the electricity derived from natural gas by 67-95% from the 2020 high over the following 10 years. For example, for 50% 2020 efficiency moving to 100% 2030 efficiency, with a 10% to 20% transportation reduction, 06-07 growth rates and average anticipated demand, the rise and fall (yo-yo effect) of natural gas electricity generation is shown in the figure below.

Some may argue that the necessary end-point infrastructure already exists, the natural gas plants themselves, and that they simply operate at a low electricity generating capacity due to the low cost of coal, which in time would be neutralized by the ACES. Although this seems true, one must not forget where the supply of natural gas to increase the capacity of those plants will originate. Although it can be argued that, through unconventional natural gas reserves, the United States has enough natural gas to generate the necessary levels of electricity, most of those reserves have yet to be explored or tapped. Both exploration and tapping cost significant capital, capital it does not make sense to spend for only 5-10 years of operation when it could instead be spent on trace/zero emission electricity generation. Therefore, it may be worth considering eliminating the 2020 emission cap altogether, as the cap itself is the only element that drives this natural gas yo-yo effect. Clearly such a decision needs to be weighed carefully.

In a scenario that eliminates the 2020 cap, the 2012 cap would be extended from 2012 to 2020, acting almost like an 8-year grace period for various corporations while ensuring no increase in emissions. After 2020 the dynamics behind the progression of the 2030 cap, which could be strengthened to something like 50% instead of 42% due to the grace period, would be enforced, requiring greater emission reductions and harsher penalties for those failing to comply. Realistically it is probable that emissions from 2012 to 2020 would not simply hold at some equilibrium around 97% of the 2005 value, but would progressively fall due to the impending application of the 2030 cap, because even though corporations could do nothing over the 8-year period, it would not make sound business sense to do so with significantly larger reductions looming. This scenario differs from business as usual because in the business-as-usual scenario there is no emission cap in the future. It would be reasonable to expect an emission reduction of 5-12% from 2012 to 2020, smaller than the current 17% cap for 2020, but larger than the 3% cap for 2012.
Assuming that U.S. emissions in 2012 adhere to the 97% ACES cap and that emissions decrease linearly between the noted cap years, the three tables below illustrate the differences in CO2 equivalent ppm contributions to the global environment from U.S. emissions from 2012 to 2030 under three proposed scenarios: a normal 2020 cap and normal 2030 cap; an 8% reduction in 2005 emissions by 2020 and a normal 2030 cap; and the 2012 cap held to 2020 with a normal 2030 cap.

In the scenario abandoning the 2020 cap in favor of extending the 2012 cap, with no strengthening of the 2030 cap, the total ppm difference is 1.164. In the passive reduction scenario the total ppm difference is 0.748. However, it is currently impossible to gauge whether abandoning the 2020 cap is rational because no estimates exist for the capital that would be expended to generate the requisite natural gas supplies to bridge the energy gap leading to 2020.
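A minimal sketch of how such a comparison can be computed follows. The 2005 baseline of roughly 7.2 GtCO2-equivalent total U.S. emissions and the 7.8 GtCO2-per-ppm conversion quoted in the post that follows are assumptions inserted here for illustration, so the output only approximates the figures above.

GT_CO2_PER_PPM = 7.8          # GtCO2 per 1 ppm of atmospheric CO2 (see the next post)
BASELINE_2005 = 7.2           # GtCO2-equivalent U.S. emissions in 2005 (placeholder)

def emissions_path(cap_points):
    """Annual emissions from 2012 to 2030, linearly interpolated between cap years."""
    anchors = sorted(cap_points.items())
    path = {}
    for year in range(2012, 2031):
        for (y0, f0), (y1, f1) in zip(anchors, anchors[1:]):
            if y0 <= year <= y1:
                fraction = f0 + (f1 - f0) * (year - y0) / (y1 - y0)
                path[year] = fraction * BASELINE_2005
                break
    return path

def cumulative_ppm(cap_points):
    return sum(emissions_path(cap_points).values()) / GT_CO2_PER_PPM

normal   = {2012: 0.97, 2020: 0.83, 2030: 0.58}   # normal 2020 and 2030 caps
extended = {2012: 0.97, 2020: 0.97, 2030: 0.58}   # 2012 cap held to 2020, normal 2030 cap
print(f"extra contribution: {cumulative_ppm(extended) - cumulative_ppm(normal):.3f} ppm")
# -> roughly 1.16 ppm with these placeholder inputs, in line with the 1.164 figure above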

Another question surrounding the viability of dumping the 2020 cap is the behavior of the natural gas companies. The capital required to generate the supply needed to meet natural-gas-based electricity generation will come from the natural gas companies themselves. However, if the 2020 cap is removed, the natural gas supply requirement will drop significantly, leading to far less investment in creating new supply wells. Think about it this way: a pharmaceutical company spends 3-6 years developing a drug for market, sells that drug for 4-6 years making a significant amount of money, and then the drug is banned. Under the current 2020 cap that is the highly probable existence of the natural gas industry in electricity generation. The question is whether the pharmaceutical/natural gas company makes or loses money through the entire process. Overall it is highly likely that, with electricity regulation, a profit will not be made. Therefore, would the pharmaceutical company invest in a longer-term project requiring 6-10 years of investment before payoff (trace/zero emission energy providers), or do nothing? Knowing that answer would go a long way toward determining whether dropping the 2020 cap is a wise move. If the natural gas companies invest in renewables, then the additional 8-year grace period has meaning because when the 2030 cap takes over in 2021 a larger amount of renewable electricity generation will be in the pipeline. If the natural gas companies do nothing, then the additional 8-year grace period only results in additional CO2 being put into the atmosphere.

Overall the above investigation identifies a number of important questions that need to be asked about the future of electricity production under the ACES. First, which trace/zero emission electricity providers have the ability to, or need to, replace coal and natural gas in the future? Second, what behavior can be anticipated from the coal and especially natural gas electricity generating sectors regarding investment in renewables? Third, how legitimate are offsets and how large will their role be in future emission reduction? Fourth, is the 2020 cap a benefit or an obstacle to effective emission reduction? Fifth, can the global environment afford a smaller anticipated reduction in emissions from the United States between now and 2020 in exchange for a possibly larger reduction between 2020 and 2030? Without identifying highly probable and honest, objective answers to each of these questions, it is difficult to envision the ACES reducing CO2 emissions without generating excessive costs and/or energy shortfalls over its lifetime.

-------------------------------------------------------------------------------------------
1. “Electric Power Industry 2007: Year in Review.” Table ES1. Summary Statistics for the United States, 1996 through 2007. Energy Information Administration. January 2009.

2. “Emissions of Greenhouse Gases Report” Table 5. U.S. Carbon Dioxide Emissions from Energy and Industry, 1990, 1995, 2000-2007. Energy Information Administration. Dec 2008.

3. “Emissions of Greenhouse Gases in the United States 2007.” Table 6. U.S. Energy-Related Carbon Dioxide Emissions by End-Use Sector, 1990-2007. Energy Information Administration. Dec 2008.

4. Hong, B.D, and Slatick, E. R. “Carbon Dioxide Emission Factors for Coal.” Energy Information Administration, Quarterly Coal Report, January-April 1994 pp 1-8.

5. “EPA Analysis of the American Clean Energy and Security Act of 2009 H.R. 2454 in the 111th Congress.” Emissions Inventory – coverage & caps – Master – 051509. Environmental Protection Agency. 2009.

6. “International Energy Outlook 2009.” Trend 3: Electricity Demand. Energy Information Administration. May 2009. 71-75.

7. “International Energy Outlook 2009.” Table H8. World Installed Geothermal Generating Capacity by Region and Country, 2006-2030. Energy Information Administration. May 2009.

8. “International Energy Outlook 2009.” Table H5. World Installed Nuclear Generating Capacity by Region and Country, 2006-2030. Energy Information Administration. May 2009.

9. “International Energy Outlook 2009.” Table H7. World Installed Wind-Powered Generating Capacity by Region and Country, 2006-2030. Energy Information Administration. May 2009.

10. “International Energy Outlook 2009.” Solar Photovoltaic and Solar Thermal Electric Technologies Box. Energy Information Administration. pg 68-69.

11. “Electric Power Annual 2007.” Table 1.1. Net Generation by Energy Source by Type of Producer, 1996 through 2007. Energy Information Administration. January 2009.

12. “International Energy Outlook 2009.” Table H12. World Net Natural-Gas-Fired Electricity Generation From Central Producers by Region and Country, 2006-2030. Energy Information Administration. May 2009.

13. Mims, Christopher. “The World's 10 Largest Renewable Energy Projects.” Scientific American Magazine. June 4, 2009.

14. “20% Wind Energy by 2030: Increasing Wind Energy’s Contribution to U.S. Electricity Supply.” U.S. Department of Energy - Energy Efficiency and Renewable Energy. July 2008.

15. Creyts, Jon, et al. “Reducing U.S. Greenhouse Gas Emissions: How Much at What Cost? U.S. Greenhouse Gas Abatement Mapping Initiative Executive Report.” McKinsey & Company. December 2007.

16. Government Accountability Office. “INTERNATIONAL CLIMATE CHANGE PROGRAMS: Lessons Learned from the European Union's Emissions Trading Scheme and the Kyoto Protocol's Clean Development Mechanism.” November 2008.

17. Mukerjee, Madhusree. “Is a Popular Carbon-Offset Method Just a Lot of Hot Air?” Scientific American Magazine. June 4, 2009.

18. Wara, Michael, and Victor, David. “A Realistic Policy on International Carbon Offsets.” Program on Energy and Sustainable Development: Freeman Spogli Institute for International Studies. Working Group #74. April 2008.

19. Schneider, Lambert. “Is the CDM fulfilling its environmental and sustainable development objectives? An evaluation of the CDM and options for improvement.” Öko-Institut, prepared for the WWF. November 2007.

20. Climate Progress. “Do the 2 billion offsets allowed in Waxman-Markey gut the emissions targets? Part 1.” http://climateprogress.org/2009/05/27/domestic-international-offsets-waxman-markey/

Wednesday, July 8, 2009

Turning the Clock Back – Reducing Atmospheric CO2

It is important to understand that the prospect of limiting the permanent environmental consequences brought on by global warming comes down to a combination of technological intervention, commonly referred to as geo-engineering, and significant reductions in global emissions of greenhouse gases (GHGs), especially CO2. The answer is not one or the other, for neither geo-engineering nor emission reduction can stand on its own as a solution. The rationale behind such an assertion is that technology can only go so far in neutralizing or even masking the effects of global warming, and only within a certain range of temperatures; continuing emissions would eventually either eclipse any form of technological intervention or adaptation or bankrupt the world. With regards to the necessity of geo-engineering, unfortunately the current and future influence of CO2 and other GHGs on the Earth’s temperature leaves little reason to believe that existing natural carbon sinks will limit the permanent detrimental effects brought on by rapid increases in temperature and oceanic acidity.

As of late 2008 the concentration of CO2 in the atmosphere was 385-386 ppm.1 Current annual global emissions are estimated at 28 GtCO2 to 33 GtCO2, increasing 1.8% annually. Emitting approximately 7.8 GtCO2 (2.123 Gt carbon) increases the concentration of CO2 in the atmosphere by 1 ppm.2 The IPCC (2007) estimates that oceanic carbon sinks annually absorb approximately 8.056 GtCO2 and land-based carbon sinks take in another 3.30 GtCO2.3 Unfortunately these estimates come with significant ranges of standard deviation, where total absorption can be anywhere from 50% lower to 50% higher than the above values.3 Taking all this information into account, the concentration of CO2 in the atmosphere increases by approximately 2.133 to 2.775 ppm per year when considering only CO2 itself, not other GHGs converted into a CO2 equivalency value. With emissions projected to rise to almost 44 GtCO2 over the next 20 years, such a significant increase is not favorable, especially when considering that a number of individuals regard a concentration of 350 ppm as the threshold for environmental maintenance and safety.1 Yet even with this concentration as a general safety point, most environmentalists believe, under current conditions, that sustaining a concentration of 400 ppm would be a sufficient goal.
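The annual increase can be checked directly from these figures:

gt_co2_per_ppm = 7.8
ocean_sink = 8.056      # GtCO2 absorbed per year
land_sink = 3.30        # GtCO2 absorbed per year

for emissions in (28.0, 33.0):          # GtCO2 emitted per year
    net = emissions - ocean_sink - land_sink
    print(f"{emissions} GtCO2 emitted -> +{net / gt_co2_per_ppm:.3f} ppm per year")
# -> approximately +2.13 to +2.77 ppm per year, matching the range in the text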

Suppose the world actually gets its act together and starts significantly reducing emissions, rather than just talking about it at all of these glorious emission-reduction conferences, most of which set fairly unrealistic goals right now. Barring the development of an incredible new energy source, such as economically viable hot fusion, based on current behavior it is realistic to expect at most a 50% reduction in global emissions from current levels over the next 40 years. For simplicity assume that this reduction occurs in a linear fashion. The table below illustrates the change in atmospheric CO2 concentration leading up to 2050.
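A minimal sketch of this scenario, assuming mid-range current emissions of 30 GtCO2 per year, constant natural sinks and a starting concentration of 386 ppm (placeholder values consistent with the figures quoted above):

gt_co2_per_ppm = 7.8
sinks = 8.056 + 3.30        # GtCO2 absorbed per year (held constant)
concentration = 386.0       # ppm at the start
start_emissions = 30.0      # GtCO2 per year (mid-range placeholder)

for year in range(2010, 2051):
    fraction = 1 - 0.5 * (year - 2010) / 40    # linear decline to 50% by 2050
    emissions = start_emissions * fraction
    concentration += (emissions - sinks) / gt_co2_per_ppm
    if year in (2020, 2030, 2040, 2050):
        print(year, round(concentration, 1))
# The concentration keeps rising throughout because even halved emissions still
# exceed the assumed constant sink capacity.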


After viewing the final concentrations at 2050 one could conclude that those values are not bad at all, all things considered. Unfortunately these values represent a possibility that is still improbable. Why? First, in the above scenario the emission reductions begin right away in 2010, a result that is unlikely in reality despite the potential passage of the Waxman-Markey bill in the United States, which was previously discussed. Currently no climate legislation is even up for discussion in China or India, two large polluters in their own right. Second, even though the rate of increase in atmospheric CO2 concentration slows, the concentration is still increasing despite cutting current emissions by 50%. Third, this example only examines CO2, not the other greenhouse gases that also influence temperature change, and does not take into consideration any positive feedback forcing that would occur due to rising temperatures, permafrost thawing or melting arctic ice.

Fourth, this example assumes that the CO2 capacity of all naturally occurring sinks will remain constant. It is highly unlikely that this assumption will hold. However, whether the absorbing capacity will increase or decrease is a point of contention. Some believe that the accelerated growth of trees due to the increased atmospheric concentration of CO2 and a warmer climate will increase land sink capacity. Unfortunately it is highly probable that the overall increase in growth would be smaller in proportion than the increase in CO2 driving it, because although CO2 may be the limiting factor in most cases of photosynthesis, it is not a severe bottleneck. For example, increasing the atmospheric CO2 concentration by 3% may increase tree growth and the resultant CO2 absorbance by 0.5%, but not by 5%. If this is the case, any increase in growth will be significantly lower than what is needed to offset the CO2 released into the atmosphere to generate that growth. Even if this increase proves real, prognosticating its size is remarkably difficult due to uncertainty surrounding the number of new trees planted, their lifespan, the amount of prevented deforestation, etc.

Unfortunately it seems much more probable that the capacity of natural sinks will decrease with time if atmospheric CO2 concentrations remain high. Continued deforestation and current agricultural practices will reduce available land sinks while acidification of oceans will reduce oceanic sinks. The ocean sinks in particular are rather tricky because of the uncertainty about the relationship between different sequestering and releasing elements.

When CO2 dissolves in seawater it increases ocean acidity, which in turn lowers carbonate ion concentrations. In fact CO2 absorption has reduced surface pH by approximately 0.1 over the industrial era, after over 100 million years of steadily decreasing acidity.4,5,6 Carbonate becomes thermodynamically less stable as oceanic acidity increases, in turn raising the metabolic cost to organisms of constructing carbonate-based structures (shells and skeletons). In fact the Southern Ocean near Antarctica is already experiencing warming far beyond anywhere else in the world, and the increased acidification is having a negative influence on the ability of G. bulloides to build its shell.7 Similar changes in calcification rates have also been seen in the Arabian Sea for other carbonate shell builders.8

The rate of calcium carbonate precipitation is an important element in determining the sink capacity of the ocean because calcium carbonate tends to be removed through gravitational settling.6 Considering this removal is important because, despite the total dissolved inorganic carbon (DIC) decreasing, the remaining carbon shifts its balance in favor of pure CO2 (aq), increasing the partial pressure of CO2 in the ocean.6 The reason for the shift is the loss of carbonate, which drives the aqueous carbonate equilibrium reaction [CO2 (aq) + CO3^2- + H2O ↔ 2HCO3^-] to the left to compensate.6 Dissolution of calcium carbonate, by contrast, produces the opposite shift, reducing the oceanic concentration of CO2 and enhancing atmospheric CO2 uptake. Basically, precipitation of carbonate carbon reduces CO2 uptake from the atmosphere, whereas dissolution of carbonate carbon increases CO2 uptake from the atmosphere.

However, a second factor must be considered: the interaction between particulate organic carbon and shifts in calcium carbonate concentration.9,10 A decrease in calcium carbonate reduces the rate and effectiveness of moving particulate organic carbon to deeper waters, thus weakening the biological pump portion of the ocean’s CO2 absorption.6 This result reduces the total CO2 sink capacity of biological denizens of the ocean like phytoplankton.11,12 So an impasse exists: does decreasing the concentration of calcium carbonate increase or decrease oceanic sink capacity? Currently there is no good answer to that question. However, even if sink capacity increases, ocean acidity will also increase, ruining the life-sustaining ability of the ocean.

Another significant factor of change could be melting arctic ice. The additional liquid volume provided by melting could possibly increase the ability of the ocean to absorb CO2, but any increase in capacity would be only a very small percentage of the total existing capacity and could also be rendered moot in the context of warming because of the reduced albedo from the lack of white ice. In fact some groups have pointed out that the rapid acceleration of arctic ice loss places a significant hurdle in the way of reducing surface temperature, one that may not be addressed by simply reducing CO2 emissions.13

From a land sink perspective, planting new trees and other flora would indeed increase overall CO2 absorption, but first deforestation practices would have to end. Also, with a world population that continues to increase, finding areas in which to plant these trees would be difficult. Most current deforestation is the result of clearing land for agricultural purposes such as food and/or bio-fuel production. Competition for land use is a problem because of the need to keep feeding a growing population that wants significant choice in food consumption, not just grains and vegetables, as well as a growing demand for bio-fuel and biomass energy. With these two dueling elements, planting new trees seems a distant third. Even if planting trees were first on the list for land use, the additional absorption from trees would more than likely be far too slow to ward off significant detrimental climate change. Speed is the issue; trees absorb carbon by volume rather than efficiency, and the necessary volume appears to be out of the question. Adding to all of that, there is evidence that natural sinks, both terrestrial and oceanic, have declined since the 1990s.14 Overall it is probably unrealistic to expect an increase in absorption capacity for natural sinks and reasonable to expect a greater than even chance that capacity will decrease over time.

Even if natural carbon-sink capacity remains constant, it is not acceptable to allow atmospheric CO2 levels to persist at 400 ppm or higher without taking action. Any real interventional action falls into one of two categories. The first category focuses on reducing the amount or absorbance quality of sunlight striking the Earth. The second category focuses on reducing the amount of CO2 that is already in the atmosphere. Both strategies aim to reduce the overall temperature of the Earth, but the difference in strategy is considerable. Note that, as previously mentioned, in addition to these efforts the global community will also have to engage in strategies to reduce the amount of CO2 and other GHGs released into the atmosphere.

One of the most popular options in the first category is seeding the atmosphere with significant quantities of sulfur dioxide or a similar compound. The basis of this strategy is rooted in the eruption of Mount Pinatubo on June 15, 1991 and the resultant release of 20 million tons of sulfur dioxide, which lowered the cumulative temperature of the Earth by 0.5 °C over the period of a year.15 This strategy is attractive because of its low cost and because it can be administered with existing technology.

Unfortunately there are some significant concerns with this solution. First, the sulfur dioxide will not remain in the atmosphere for more than three to four months, so concentrations would need to be replenished roughly quarterly. Second, the sulfur dioxide would significantly compromise the ozone layer, increasing the probability that more individuals would suffer from skin cancer and other UV-related conditions. Third, when the sulfur dioxide falls out of the atmosphere there is reason to believe it would have deleterious effects on both the ocean and land masses. Fourth, the significant reduction of sunlight reaching the Earth’s surface would have a negative influence on solar power installations, a significant concern given that solar power is thought to be one of the chief power sources in an energy environment that does not rely on fossil fuels. Fifth, there are some questions regarding how effective this method would actually be, in that a significant amount of sulfur dioxide did not reach the atmosphere during the eruption of Mount Pinatubo and the drop in temperature could have been more of a localized effect than a global one; thus estimates of the amount of sulfur dioxide required to lower the Earth’s temperature globally by a certain amount over a specific time period could be inaccurate. Sixth, although the technology already exists to release large quantities of sulfur dioxide into the atmosphere, the political structure does not. No protocol exists for who would actually disperse the sulfur dioxide, who would maintain it and who would accept responsibility for any damages accrued during its administration.

So the simplest solution in category one still has some difficult questions that need to be answered. Another option that has received attention is the distribution of various space-based mirrors or other covering material, usually at Lagrange Point 1 or 2, which would reflect a specific percentage of sunlight before it even reaches Earth leading to a reduction in temperatures. Unlike launching sulfur dioxide, the space mirror idea is immediately saddled with significant economic viability questions as well as technology questions. Although some technology hurdles for this solution have been overcome, notably due to the work of Roger Angel and his associates at the University of Arizona, these advances have not reduced the overall price of installing and maintaining the mirror.16 A large part of the costs for this strategy is derived from launching the mirror sections into space. The development of a secondary and much cheaper means to launch objects from Earth into Near Earth Orbit would go a long way to making a space mirror system more economically viable. Unfortunately other problems persist, most notably maintenance issues. Repairing any portion of the mirror that is damaged would prove to be a difficult endeavor and a costly one both financially and environmentally because a certain percentage of sunlight would no longer be reflected elsewhere. Overall by the time a space mirror becomes economically viable and operational, too much permanent damage will have been done to the environment making the solution moot.

Although there are other options in the first category, they will not be discussed due to their similarity to the two options above or their improbability. Basically, the first category can be summed up as an attempt to treat the symptoms or negative outcomes of a condition, but not the condition itself. Redirecting light from the sun to another location may in fact reduce surface and air temperatures, but does nothing to reduce the concentration of CO2 and other GHGs in the atmosphere. These strategies can be equated to treating a patient who is bleeding profusely by replacing the lost blood for as long as possible until the patient heals, instead of taking the patient into surgery to stop the bleeding. Therein lies the problem with sunlight-blocking methods: they do not solve the problem, they only delay its consequences. With the continuing build-up of CO2, what happens when a section of the solar mirror breaks or wind patterns shift unexpectedly and disturb the distribution of the sulfur dioxide in the air? Basically these strategies need to be maintained without fail for as long as CO2 concentrations exceed safe levels, which looks to be decades, if not centuries.

Similar to the first category, the second category of possible solutions has an option that is viewed as economically and technologically viable right now. Ocean fertilization involves seeding swatches of the ocean with iron, leading to rapid phytoplankton growth, which absorbs CO2 from the ocean for use in photosynthesis. With decreasing concentrations of CO2 in the ocean, the ocean is able to absorb more CO2 from the atmosphere, reducing atmospheric CO2 concentrations. Later, when these phytoplankton die, they sink to the bottom of the ocean, effectively removing the CO2 from the short-term carbon cycle. Proponents of ocean fertilization believe it to be a cheap, easy and effective means of controlling global warming, in a similar vein to planting trees only much faster and cheaper. However, similar to launching sulfur dioxide into the atmosphere, there are some concerns about ocean fertilization.

The biggest concern is the potential ecological damage generated by cultivating an extremely large amount of phytoplankton in a region that is not prepared to receive it. One continuing problem in the industrialized world is the expansion of hypoxic areas in bodies of water. These areas have been labeled ‘dead zones’ and they frequently come about due to large concentrations of fertilizer run-off, which fosters blooms of algae and phytoplankton. These blooms eventually die and sink to the bottom of the body of water, where they are broken down by bacteria, which multiply rapidly, consuming larger than normal quantities of oxygen and stripping the region of a significant concentration of oxygen. There is reason to believe that a similar situation will occur when artificially stimulating the rapid growth of phytoplankton for CO2 absorption. Empirically, no ‘dead zones’ have been generated by past fertilization experiments, but the experiments were short enough that the possibility cannot be ruled out; presumably a significant amount of time, more than just a couple of months, would be required to create a hypoxic region in a part of the ocean with pre-existing sufficient levels of oxygen. The short time frames may also explain some of the ‘less than encouraging’ absorption ability of the blooms in these experiments.

There are other problems including a significant reduction of sunlight over the area encompassing the blooms, which would negatively affect coral reefs and other sub-aquatic plant life. Also the potential exists to exacerbate global warming due to increased release rates of methane and nitrous oxide from the blooms and resultant bacteria growth. In addition fertilization programs are a logistics nightmare because the added iron quickly integrates into the ocean eliminating any real ability to distinguish it from non-iron supplemented water unless a tracker is utilized as well. Blooms must be continually monitored so any problems can be quickly engaged and neutralized.

It is difficult to postulate whether fertilizing with iron will result in similar dead zones from the prescribed phytoplankton blooms. However, it seems probable that one of two results will occur: either a dead zone will be created, or the resultant number of phytoplankton will not be large enough to significantly reduce the probability of climate change. Dead zones seem probable because the defining characteristic of a dead zone is a large unnatural phytoplankton and/or algae bloom, not the specific method that creates the bloom. If fertilization does create dead zones, then whether to utilize iron fertilization as a viable method for climate control becomes a very difficult question.

The joint German, Indian and Chilean LOHAFEX project seems to support the position that the influence of iron fertilization is currently rather weak because it operates under stricter conditions than previously anticipated, increasing the probability that it will not be part of the solution for effective removal of atmospheric CO2.17,18 LOHAFEX entailed spreading 6 tons of iron sulfate over 300 square kilometers of ocean in the Southwest Atlantic Sector of the Southern Ocean, within the core of an eddy, and observing the results over a 39-day period. Initially the additional iron stimulated phytoplankton growth, doubling their total biomass in the first 14 days through CO2 absorption.17 However, growth was stymied by increased predation by zooplankton and amphipods. Increased predation is a significant negative because it reduces the amount of carbon removed from the surface layer, which in turn reduces the overall effectiveness of the fertilization. Thought about logically, the increase in predation is not surprising given the strict controls nature applies to limit rampant biosphere alterations. One bright spot may be the lack of dead zone formation, but that result comes with two caveats. First, the time frame of the investigation may not have been long enough to nurture dead zone generation. Second, there were no significant changes in bacteria concentrations between the fertilized and unfertilized areas, largely because of the significant increase in predation.

The reason LOHAFEX failed where other fertilization investigations succeeded lies in the type of plankton that bloomed. The successful experiments created blooms of diatoms, algae that use silicic acid to generate silica shells that protect against grazers.17 Unfortunately, there are not sufficient quantities of silicic acid in the Southern Ocean, eliminating the ability to grow diatoms. The LOHAFEX study seems to eliminate the Southern Ocean as an environment for fertilization-based CO2 removal under standard protocols by identifying additional elements that may be required for any level of success. The potential loss of the Southern Ocean is significant because fertilization in warmer tropical regions has significant problems increasing CO2 absorption due to nutrient over-consumption by pre-existing phytoplankton, and creating blooms in more temperate regions is difficult because fewer nutrients are available. The Southern Ocean is a ‘best of both worlds’ environment (large area, relatively low natural plankton growth and high nutrient content).

Another available method is directly removing CO2 from the atmosphere using technology rather than nature. The technological removal of atmospheric CO2 is referred to as ‘air capture’. Air capture in some contexts really should not be viewed as geo-engineering because technically no meaningful change is being applied to nature; instead it can be viewed as humans correcting past mistakes by removing the CO2 previously emitted. Early on, air capture was disparaged because many believed it would be too difficult to actually capture CO2 directly from the air due to the extremely low CO2 partial pressure and perceived unfavorable thermodynamics. However, this concern has proven to be more troublesome in theory than in practice. Despite the process being possible, air capture still has significant economic and energetic hurdles to surpass.

Clearly removing CO2 from the atmosphere should be the primary goal, but air capture is tricky because there is no significant market for the captured CO2, so the price of operation needs to be generally affordable before any private firm or government will get involved. Pricing is probably the most controversial subject surrounding air capture. Most air capture proponents, with one notable exception which will be addressed later, state that although air capture currently has an average price around $500 per absorbed ton of CO2, this price will drop significantly over time, to something like $100-200 per ton, in similar fashion to most technologies. One group, Global Research Technologies (GRT), and its spokesperson, Klaus Lackner, claim that their air capture device already operates at a price point of $100 per ton and that with time that operating cost will drop to under $30 per ton.19 So why are these estimates controversial?

Putting GRT and Klaus Lackner on hold for a moment, most air capture systems capture CO2 from the air via interaction with an alkaline NaOH solution, which has a thermodynamically favorable reaction with CO2 resulting in the formation of dissolved sodium carbonate and water. The carbonate then reacts with calcium hydroxide (Ca(OH)2), generating calcite (CaCO3) and regenerating the sodium hydroxide. This process of causticization transfers the vast majority of the carbonate ions (approximately 94-95%) from the sodium to the calcium cation, and the calcium carbonate precipitate is thermally decomposed to release the previously absorbed CO2 as a gas. The final step involves thermal decomposition of the calcite in the presence of oxygen along with hydration of the lime (CaO) to recycle the calcium hydroxide.20 This complete process is illustrated in the below figure.20
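Written out step by step, the cycle described above is approximately:

CO2 (g) + 2NaOH (aq) → Na2CO3 (aq) + H2O (contacting/absorption)
Na2CO3 (aq) + Ca(OH)2 (s) → CaCO3 (s) + 2NaOH (aq) (causticization)
CaCO3 (s) → CaO (s) + CO2 (g) (calcination; the CO2 is recovered)
CaO (s) + H2O → Ca(OH)2 (s) (lime hydration, exothermic)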



Basically there is little difference in the overall process regardless of how the capture system is designed. There are differences in execution, in that some choose to utilize a spray tower to initiate contacting (the interaction between the CO2 and the sodium hydroxide) while others use large fan-like blades and others still use other systems. Despite these differences, the reaction scheme remains similar. Therefore, it is difficult to see how costs can be cut by 60-80% through simple changes in efficiency. Instead, if costs are going to fall at the rate most air capture prognosticators predict, a new interaction, step or chemical will have to be removed or introduced.

Mr. Lackner and GRT believe that their proprietary resin, which is used in place of NaOH, is the new technology that significantly reduces the price, hence their much lower cost estimates relative to other air capture designers. However, GRT and Mr. Lackner have yet to release any published data regarding the interaction, efficiency or any other functionality from any capture experiments, nor are any specifics about the energetics of their system discussed in any press releases or news stories. Without that information, it appears that their air capture system is similar in all aspects to one utilizing NaOH, with the exception of the initial binding step. In fact, all GRT and Mr. Lackner have done to communicate information about their air capture system is put out press releases and conduct interviews proclaiming its greatness and cheapness. The ‘technical literature’ section of the GRT website has been ‘coming soon’ for over two years.19 So until they release some actual data in a legitimate peer-reviewed journal, any claims made by GRT or Mr. Lackner must be taken with a grain of salt.

With the cost per ton of CO2 absorbed being so important, it is reasonable to calculate the costs with the information currently available about air capture systems. From “Energy and Material Balance of CO2 Capture from Ambient Air” by Frank Zeman the energy requirements to capture one mole of CO2 are summarized for three of the most promising air capture schemes.



Using this energy information from Zeman and drawing information from the EIA regarding energy generation from coal and natural gas combustion, a cost structure for each air capture system can be generated. It is worth noting that no air capture system has yet been constructed to scale and operated over a significantly long time period, so realistically these cost estimates skew toward a best-case outcome.

From the EIA, coal with a carbon content of 78 percent and a heating value of 14,000 Btu per pound emits about 204.3 pounds of CO2 per million Btu when completely burned. Complete combustion of 1 short ton (2,000 pounds) of this coal will generate about 5,720 pounds (2.86 short tons) of CO2, and that coal produces approximately 4.103 kW-h of thermal energy per lb of coal burned, so typical coal-based power generates about 1.435 kW-h per lb of CO2 released.21 Factoring in an energy-use efficiency of approximately 35% yields a value of 0.502 kW-h per lb of CO2 released. With this energy/emission ratio, coal is not an efficient enough energy provider to power air capture. Nor is using the 2007 grid average [0.8858 kW-h per lb of CO2 released] a valid selection. Therefore, the approximately 1.255 kW-h per lb of CO2 released generated by natural gas will be utilized in the analysis.
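These emission-intensity figures can be reproduced as follows:

btu_per_lb_coal = 14000
btu_per_kwh = 3412
lb_co2_per_lb_coal = 5720 / 2000            # 2.86 lb CO2 per lb of coal burned

kwh_thermal_per_lb_coal = btu_per_lb_coal / btu_per_kwh                  # ≈ 4.10 kW-h per lb
kwh_per_lb_co2_thermal = kwh_thermal_per_lb_coal / lb_co2_per_lb_coal    # ≈ 1.435
kwh_per_lb_co2_electric = kwh_per_lb_co2_thermal * 0.35                  # ≈ 0.502 at 35% efficiency

print(round(kwh_thermal_per_lb_coal, 3),
      round(kwh_per_lb_co2_thermal, 3),
      round(kwh_per_lb_co2_electric, 3))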

Looking at Zeman first, the system design includes a step that redirects heat from the exothermic hydration reaction to a heat exchanger to generate a steam loop. This redirection supposedly reduces the overall energy requirements, thus reducing cost. The cost for this scheme is therefore calculated two ways: one assuming 100% recovery of the hydration heat and one excluding it entirely, as it has yet to be successfully demonstrated on an industrial scale.

First, the overall efficiency of the system is calculated. Efficiency is defined here as how much CO2 is emitted to power the particular air capture system versus how much CO2 is absorbed by that system. The calculations below represent Zeman's hypothesized energetics with 100% heat recovery.

(442 kJ/mol – 105 kJ/mol) = 337 kJ required per mol of CO2 captured

337 kJ/mol = 0.093611 kW-h/mol = 0.0021275 kW-h/g = 0.96503 kW-h/lb of CO2 captured

0.96503 kW-h/lb CO2 captured / 1.255 kW-h/lb CO2 emitted = 0.76895 lb of CO2 emitted for every 1 lb of CO2 captured

Unfortunately electricity costs differ significantly depending on where the system is constructed. The New England states typically have values of 15-19 cents per kW-h, whereas Midwestern states, with the exception of Illinois, rarely crest 10 cents per kW-h. Using the national average of 9.1 cents per kW-h,22 the cost of capturing one ton of CO2 is calculated to be:

0.091 dollars/kW-h * 0.96503 kW-h/lb of CO2 captured * 2000 lb/ton = $175.64 per gross ton of CO2 captured; however, with each lb of CO2 captured, 0.76895 lb of CO2 is emitted, therefore for every net ton of CO2 captured the cost would be $175.64/0.23105 = $760.16 per net ton of CO2 captured.
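For readers who want to check the arithmetic, the chain from the 337 kJ/mol figure to the roughly $760 per net ton result can be condensed into a few lines. The molar mass of CO2 (≈44 g/mol) and the 453.6 g-per-lb conversion are the only values added here beyond what appears above.

```python
# Sketch: Zeman (100% hydration-heat recovery) cost per gross and net ton of CO2.
KJ_PER_KWH = 3600.0
G_PER_LB = 453.592
CO2_MOLAR_MASS_G = 44.0

net_energy_kj_per_mol = 442.0 - 105.0                        # 337 kJ per mol captured
kwh_per_lb_captured = net_energy_kj_per_mol / KJ_PER_KWH / CO2_MOLAR_MASS_G * G_PER_LB   # ~0.965

gas_kwh_per_lb_emitted = 1.255                               # natural-gas generation, from above
lb_emitted_per_lb_captured = kwh_per_lb_captured / gas_kwh_per_lb_emitted                # ~0.769

price_per_kwh = 0.091                                        # national average electricity price
cost_per_gross_ton = price_per_kwh * kwh_per_lb_captured * 2000                          # ~$175.6
cost_per_net_ton = cost_per_gross_ton / (1.0 - lb_emitted_per_lb_captured)               # ~$760

print(round(cost_per_gross_ton, 2), round(cost_per_net_ton, 2))
```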

The costs per net ton of CO2 absorbed for all of the other systems are –

Zeman with no hydration heat utilized – Net Emission exceeds Net Capture; [1.268 kW-h/lb CO2 captured / 1.255 kW-h/lb CO2 released = 1.01];

Baciocchi et al – Net Emission exceeds Net Capture; [1.481 kW-h/lb CO2 captured / 1.255 kW-h/lb CO2 released = 1.18];

Keith et al – Net Emission exceeds Net Capture; [1.948 kW-h/lb CO2 captured / 1.255 kW-h/lb CO2 released = 1.55];

However, Keith et al. now plan on using titanate in the process due to a discovered lower heat requirement, lowering the energy and overall costs.23,24 Unfortunately this reduction does not seem to change the energy dynamics enough to make the process carbon negative.

So the cheapest air capture system that currently has published data for evaluation has a cost of $760.16 per net ton of CO2 removed from the atmosphere, higher than the previously cited estimates of $500 per ton of CO2 removed. Why is the actual cost higher than the current estimates? Overall it is probable that previous estimates did not include the efficiency term, instead just assuming that the power requirements for system operation involved no carbon emissions. Basically the estimates are treating a gross ton of captured CO2 as a net ton. This assumption is actually a valid one if solar power or another zero-emission energy source is utilized, but currently it is unrealistic to assume that the use of solar power would reduce costs to the low hundreds of dollars per net ton of captured CO2 because of all of the costs associated with solar power. In addition this analysis utilized an emission ratio derived entirely from natural gas because it is unrealistic to use alternative energy as the base, and using 100% coal or the national grid breakdown results in none of the strategies being carbon negative.

Note that the above cost expenditures are only valid if electricity is utilized to provide the energy for the entire process. However, the precipitate drying and calcination processes are thermal-based, which means they can be driven by heat that does not need to come from electricity. The caveat to moving away from electricity is that the new source(s) providing the heat for these thermal reactions need to be on-site and produce trace/no emissions. These requirements significantly limit the available provider options; the most viable option now is concentrated solar power. The problem with concentrated solar power is that it limits where air capture units can be constructed, and those units become intermittent due to their reliance on sunlight and the lack of high quality storage batteries (although this may change in the future). Another concern is that there is inconclusive information regarding how to price the capital expenditures for the thermal infrastructure and providers, thus although one might expect a reduction in cost, it is difficult to confirm or quantify that reduction.

Overall, although pairing electricity and thermal providers seems like a good idea to reduce energy requirements, costs and CO2 expenditure (increasing efficiency), in the long run it may not be a practical solution. The reason is that eventually the vast majority of global electricity production will come from trace/no emission providers (at least it had better, or none of this really matters), at which point most of the efficiency and excess cost problems with air capture will disappear (cost will still be an issue, just not a 900+ dollar issue). Therefore, constructing the thermal infrastructure for air capture units could be viewed as a waste of capital for the long-run application of air capture, although depending on the nature of the infrastructure the waste may not be significant. The biggest problem when deciding between a 100% electrical and an electrical-thermal air capture system is prognosticating when the efficiency of the 100% electrical system will rival that of the electrical-thermal system.

Unfortunately for those supporting air capture, the financial costs do not end with energy use. Although it is difficult to compute a long-term price ratio, few estimates directly reference the initial capital cost of building the air capture facility and no annual maintenance costs have been estimated, at least none are identified as such (some estimates could be grouping all costs together in one large figure without identifying the individual components). Before any of these costs can be estimated or computed accurately, a prototype system will need to be constructed, a CO2-absorbed-per-unit-time rate will need to be empirically measured over at least a year and a system lifespan will have to be estimated. Some have estimated various capture rates, most notably GRT's 1 ton of CO2 per day per air capture unit,19 but have yet to construct a to-scale system to verify those estimates or whether those tons are gross or net. There are some reports that claim a capture rate of 90,000 kg of CO2 (99.21 tons) per day;19 however, it is difficult to determine if that is an actual current rate of capture or one hypothesized for the future.

Initial construction, maintenance and even overall energy costs are only a small problem compared to the big problem plaguing current air capture systems: the process requires huge amounts of water. This seems true based on the chemistry no matter how the CO2 is captured. Normally in the reaction scheme water acts as a pseudo-catalyst: it is released when the sodium hydroxide interacts with the CO2 and then absorbed in the interaction with lime to form calcium hydroxide. However, because the air capture system is not a closed system, a large and steady stream of atmospheric air is driven through it; this air stream absorbs a large amount of the generated water, preventing it from being used later in the hydration reaction. Thus new water has to be supplied to the hydration reaction to complete the closed product-recycling loop. The water lost from the initial capture stage depends largely on the ambient temperature and the relative humidity of the air, in that a lower temperature and higher humidity result in a lower rate of water loss. Based on current operational estimates for the capture section it is reasonable to conclude that at least 1,960.80 gallons of water would be required for every gross ton of CO2 absorbed (even more water per net ton due to efficiency concerns).20,25 Overall, depending on the cost of water per gallon (usually anywhere from 4 to 8 cents), water replacement adds $78.43-$156.86 to the cost of removing 1 ton of CO2 from the air for a 100% efficient system. This water replacement cost seems to be frequently ignored when discussing air capture costs dropping to $10-30 per ton.
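The water replacement figures above follow from a one-line calculation. The per-net-ton extension shown here simply divides by the same 0.23105 net-capture fraction used earlier; it is a derived illustration, not a number quoted in the sources.

```python
# Sketch: water replacement cost per gross ton of CO2 captured, and per net ton
# using the Zeman (100% heat recovery) efficiency from above.
gallons_per_gross_ton = 1960.80
net_fraction = 0.23105                 # net CO2 captured per gross ton (Zeman, optimal case)

for price_per_gallon in (0.04, 0.08):
    cost_gross = gallons_per_gross_ton * price_per_gallon   # $78.43 to $156.86
    cost_net = cost_gross / net_fraction                    # roughly $339 to $679 (derived here)
    print(price_per_gallon, round(cost_gross, 2), round(cost_net, 2))
```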

However, water cost is not the central issue. To put it in a more concrete context, the amount of water required to reduce the atmospheric concentration of CO2 by the equivalent of 1 ppm would be about 1.49 x 10^13 gallons (assuming 100% efficiency). Multiply that number by at least 100 and then ask where all of that water is going to come from.

With all of the analysis so far regarding the cost of air capture, it is worth demonstrating the total significance of this cost with an example. What is the realistic total cost per ton of CO2 for a single air capture device based on the current analysis?

As previously mentioned, there is not very much information regarding either initial capital construction costs or maintenance costs, but any estimate will depend on both the lifespan and the overall operational rate of the system. Assume a system operates at an absorption rate of 1 net ton per day over a period of 40 years (including maintenance stoppages), using the optimal energy dynamics from Zeman. An estimate of $50,000 for the initial capital cost seems reasonable. Again, some reports have Mr. Lackner claiming a $30,000 capital cost, but the legitimacy of those claims is unclear.19 Even if the $30,000 figure were accurate, when was the last time a major construction project successfully adhered to its forecasted budget? Assume annual maintenance costs equal to 1% of the initial capital cost. Add costs for electricity and water, assuming 9.1 cents per kW-h and 5 cents per gallon respectively, with yearly increases in the cost of electricity and water of 0.5%. Taking all of these elements together, the total cost of the system over its lifespan is shown in the table below:


* Units in metric tons; running total;
** Units in gallons;

From the information presented in the table, over the course of a 40-year lifespan it will cost approximately 19.315 million dollars to capture 14,600 tons of CO2, or approximately 0.000187% of 1 ppm of CO2. To capture 1 ppm of CO2 using these estimates would cost approximately 10.32 trillion dollars. Frank Zeman has stated that he believes the total energy requirement to capture one mole of CO2 can be lowered to approximately 250 kJ/mol. If this occurs, using the 250 kJ/mol figure in the above analysis, over 40 years it would cost approximately 8.699 million dollars to capture 14,600 tons of CO2 and 4.65 trillion dollars to reduce atmospheric CO2 concentrations by 1 ppm.
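A rough sketch of the arithmetic behind the 40-year totals is below. The daily gross tonnage, electricity and water figures come from the earlier Zeman calculations; small differences from the quoted $19.315 million can arise from rounding and from exactly when the 0.5% price escalation is applied.

```python
# Sketch: 40-year cost of one air capture unit absorbing 1 net ton of CO2 per day
# (Zeman optimal energetics, 9.1 cents/kW-h, 5 cents/gallon, 0.5% annual price escalation).
YEARS, DAYS_PER_YEAR = 40, 365
net_fraction = 0.23105
gross_tons_per_day = 1.0 / net_fraction              # ~4.33 gross tons to net 1 ton

kwh_per_lb = 0.96503
gallons_per_gross_ton = 1960.80
elec_price, water_price, escalation = 0.091, 0.05, 0.005

capital = 50_000.0
maintenance_per_year = 0.01 * capital

total = capital
for year in range(YEARS):
    factor = (1 + escalation) ** year
    elec_cost = gross_tons_per_day * 2000 * kwh_per_lb * elec_price * factor * DAYS_PER_YEAR
    water_cost = gross_tons_per_day * gallons_per_gross_ton * water_price * factor * DAYS_PER_YEAR
    total += elec_cost + water_cost + maintenance_per_year

net_tons_captured = YEARS * DAYS_PER_YEAR             # 14,600 net tons
print(round(total / 1e6, 2), "million dollars for", net_tons_captured, "net tons")
```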

Such a huge cost requirement forces a return to the aforementioned problem of the lack of a market for the captured CO2; if a company cannot make or receive any money capturing CO2, even a price of $1 per net ton of CO2 captured will be too expensive. One suggested option is transforming the CO2 into ‘carbon neutral’ hydrocarbon-based fuel, but the advantages of such a process are strained because it requires energy to create high purity streams of CO2 and hydrogen, energy currently derived from fossil fuel combustion.26 Also the fuel will eventually release its carbon load back into the atmosphere, which defeats the real purpose of air capture: permanently removing atmospheric CO2 in order to reduce atmospheric CO2 concentrations and avoid detrimental climate consequences. Perhaps if society did not have the technology required to transition away from fossil fuels for another couple of decades such an idea would be noteworthy, but that is not the case. Thus, such a closed-loop carbon-neutral system seems to have no benefit and only results in wasted energy.

Another option is selling the CO2 to oil companies for use in enhanced oil recovery and/or enhanced coal-bed methane recovery. Although these techniques are practiced today, it is unlikely that a large enough percentage of the captured CO2 can be utilized in this way to cover the costs of the program. Also, most environmentalists would argue that society should not continue to draw out more fossil fuels if their use would simply generate more pollution that the air capture system would have to absorb at a later time, with still no market for the captured CO2, creating once again just another closed loop that wastes energy.

One final option for creating a market is transferring the absorbed CO2 from the air capture station to greenhouses where the CO2 is pumped into the greenhouse environment to enhance plant growth. Unfortunately the size of this market appears rather insignificant and unable to absorb a vast quantity of collected CO2.

With none of the most popular options for profitability being valid, who will bear the cost burden of employing an air capture strategy? It is difficult to imagine any private company actually undertaking an air capture program large enough to significantly reduce atmospheric CO2, and there is no point to undertaking an air capture program if it is not going to be large enough to significantly reduce atmospheric CO2. Therefore, governments will have to somehow subsidize an air capture program, more than likely through a carbon tax or a cap and trade system using offsets to fund the program. This subsidy cannot come only from the United States, but must come from a number of world governments. Overall, at this point in time, unless governments get involved it is difficult to imagine an air capture program ever being implemented.

Another method to remove atmospheric carbon gaining in popularity is the use of bio-char. In essence bio-char is black carbon synthesized through pyrolysis of biomass. Bio-char is attractive because it is believed to be a very stable means of retaining carbon, sequestering it for hundreds to thousands of years. Of course directly testing such a claim over those timescales is almost impossible, but because charcoal is regarded as a specific form of bio-char and radiocarbon dating of ancient charcoal supports such lifetimes, proponents believe that similar storage properties will exist for the end products of pyrolysis of other feedstock materials.

The general scheme behind bio-char in carbon sequestering takes advantage of the natural photosynthetic cycle in most forms of flora. Plants normally release absorbed CO2 via respiration through the course of their lives or hold it until they die and release it through oxidation/decomposition after death. However, using those plants as feedstock in a pyrolysis reaction to form bio-char reduces the amount of CO2 released back into the atmosphere. Three products typically result from a pyrolytic reaction: bio-char, bio-fuel and syngas.27 Slow pyrolysis (slow heating and volatilization rates at 300-350 °C) typically results in higher bio-char yields, whereas fast pyrolysis (higher heating and volatilization rates at 600-700 °C) typically results in higher bio-fuel yields.27 Therefore, bio-char has two distinct influences on the global carbon cycle: CO2 entrapment and bio-fuel production to offset fossil fuel use in the transportation sector. However, for the purposes of sequestering CO2 from the atmosphere, maximizing bio-char yield would be advisable. Maximum yields of bio-char typically range from 40-50% under slow pyrolysis conditions.27

Another side advantage of bio-char is that when it is integrated into soil, it increases the quality of that soil, enhancing the growth rates and yields of future crops. There is little doubt that bio-char improves the quality of soil on a general level by aiding in the supply and retention of nutrients, yet some questions remain about water retention and crop yields because of the lack of sufficient field studies.28,29,30 However, for the purpose of this analysis, the question is how useful bio-char is as a means of carbon sequestration.

The carbon sequestration potential of bio-char as a tool to limit climate change depends largely on four separate factors: the stability of bio-char in a given storage medium, the total change in greenhouse gas emissions from feedstock sources, the bio-char capacity of a given storage medium, and the economic and environmental requirements of bio-char production.

Bio-char within the confines of specific environments has demonstrated a significant amount of stability. For example, the most popular example of bio-char, the Amazonian terra preta, provides support for bio-char stability with a carbon age of 6,000 years.28,31 Charcoal in the North Pacific Basin has also been shown to be hundreds of thousands to millions of years old.32

However, despite these stability studies, little work has been done regarding the potential of fire or microbial activity to oxidize black carbon. If both fire and microbial activity do have even marginal oxidation ability, then bio-char storage sites would need to avoid forestry soils, arid regions and other high-turnover regions and be restricted to areas that will not experience significant human disturbance.

Another issue of complexity with bio-char is its lack of homogeneity. Different fractions of bio-char will decompose at different rates under different conditions.33 Various studies place half-lives for bio-char in a range from 100 years to 5,000-7,000 years.34 Fortunately this mixed lifespan may not be a critical issue if all of the fractions of bio-char decompose over a sufficiently long time. Therefore, bio-char proponents need to identify the minimum half-life that is appropriate for carbon sequestration and then identify what production conditions generate bio-char with a half-life that equals or exceeds this target. A major concern regarding longevity studies is that the popular press frequently cites the common example of terra preta in the Amazon without acknowledging the possibility that rapidly synthesized bio-char may not share its properties. Overall there is still more work to be done on bio-char longevity and stability.

Almost all carbon sequestering strategies function on the premise that the negative carbon potential of the strategy will be realized in the future, not the present. Typically these strategies generate more CO2 in the short term than they absorb/sequester, but absorb/sequester more CO2 over their lifetimes than they produce. However, the rate at which this turnover occurs is important, because if it takes hundreds of years for a strategy to become carbon negative then the strategy will not be useful. For bio-char the initial pyrolysis process will release CO2, usually about 45-55% of the total carbon content of the feedstock.27,35 Recall that bio-char is considered carbon negative because it stores carbon from feedstock that would have otherwise released its carbon content into the atmosphere after months to years of decomposition. So the speed and size of the transition from carbon positive to carbon negative for bio-char largely depends on the rate at which CO2 would have been released from the pyrolysed biomass had it not been pyrolysed.

A simple equation can be utilized to describe this issue as shown below:

CO2s = CO2d/dt – CO2e

where CO2s = net CO2 saved; CO2d/dt = CO2 that would have been released through biomass decomposition over the period of interest; CO2e = CO2 released via pyrolysis.

Assuming a bio-char stability at least past the pre-assigned target for bio-char sequestration, two critical factors drive the magnitude and speed of the equation: the decay half-life of the decomposing biomass and the amount of CO2 released in the pyrolysis process. The decay half-life is important because if a particular feedstock has a decay/release half-life of hundreds of years, it would make little sense to char such a slow-releasing sample given the energy requirements of the pyrolysis process. However, if a particular feedstock has a decay/release half-life of 5 years, charring that quick-release sample makes sense. Fortunately a vast majority of the feedstock candidates are quick-release samples, with the best candidates for bio-char production being feedstocks with high lignin concentrations such as husks, shells and kernels due to higher bio-char yields.27,36
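Treating the counterfactual decomposition as a simple exponential decay makes the role of the feedstock half-life explicit. In the sketch below the 50% pyrolysis release fraction is taken from the 45-55% range cited earlier, while the half-life values are purely illustrative.

```python
import math

# Sketch: years until charring beats leaving the biomass to decompose, assuming the
# counterfactual decomposition releases CO2 exponentially with a given half-life.
def breakeven_years(half_life_years, pyrolysis_release_fraction=0.50):
    """Time at which cumulative decomposition emissions equal the upfront pyrolysis release."""
    # Solve 1 - 2**(-t / half_life) = release_fraction for t.
    return -half_life_years * math.log2(1.0 - pyrolysis_release_fraction)

for half_life in (5, 20, 100, 500):       # illustrative feedstock decay half-lives, in years
    print(half_life, round(breakeven_years(half_life), 1))
```

With a 50% pyrolysis release, the break-even time simply equals the feedstock half-life, which is why quick-release residues are the attractive candidates and slow-decaying material is not worth charring.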

Another issue that is rarely discussed by bio-char proponents is the influence that mass deposits of bio-char would have on the average albedo of the Earth. Large deposits of bio-char could darken the Earth's surface at the point of entry, which would increase the absorption of sunlight in those areas and increase localized surface temperature. This change in surface albedo could neutralize some or most of the carbon sequestering benefit of bio-char.

Regarding the maximum concentration of black carbon allowable in soils, the aforementioned terra preta in the Amazon provides a useful demarcation. The soil organic carbon content of terra preta can be as high as 250 Mg per hectare, where up to 40% of this soil organic carbon is black carbon.35 Overall the expectation of 125 Mg per hectare as a ceiling is a rational one. The biggest concern stemming from bio-char soil concentration studies is that the observed soils and land masses were tropical; few studies have been conducted using fertile temperate soils, which naturally have higher soil organic carbon contents. This lack of research makes sense in that researchers exploring the ability of bio-char to increase crop yields would focus more on revitalizing poor soils than on augmenting richer soils; however, sequestering the amount of CO2 required to avoid significant detrimental environmental damage, even only in part, will require the use of richer soils as a storage base. This lack of soil-type study is also meaningful because even if no bio-char ‘plantation’ strategy is executed, this knowledge will be needed to determine whether or not farmer x with soil type y should utilize bio-char on his/her farm.

Another question is whether the rate of incorporation has any significant effect on capacity. For example, the terra preta in the Amazon formed over thousands of years; would soil be able to absorb these quantities over a much shorter time period, decades to centuries? With rising global populations it is important to identify the capacity ceiling for bio-char so that crop yields are not negatively affected, causing food price spikes due to lost productivity.

The problem of land use is an important issue to consider for bio-char production. Proponents like to think that a significant quantity of bio-char can be acquired from agricultural and forestry residues, but stripping these residues can significantly increase the probability of erosion damage and potentially reduce soil bacteria. Therefore, it is important to consider where the residues are coming from. Returning bio-char to the point of extraction may ease some of these effects, but it is difficult to conclude how stable the bio-char would be in these environments, although it seems rational to expect normal levels of stability.

Part of the reason that biomass is proposed for synthesis of bio-char is that, similar to biomass energy production, nearly every form of biomass can be converted to bio-char. However, there are transportation issues: unless the bio-char infrastructure results in a vast number of pyrolysis plants spread out all over the world, something some proponents would like to see, biomass will need to be transported from field to plant, and emissions from the transportation medium could remove any significant benefit. In the end decisions will have to be made to maximize the effect of land use in how it is split between bio-char production, biomass production, re-forestation and food production.

Of all of the questions preceding this point, the biggest might be how much bio-char can actually be produced in a given year. Bio-char production is significantly impacted by the amount of energy generated from biomass, which is theorized to partially power the energy requirements for bio-char production. However, as mentioned previously in the energy gap post, prognosticating total energy generation from biomass is incredibly difficult, largely because most proponents of biomass tend to be too optimistic in estimating the infrastructure devoted to feedstock production. Another concern in the biomass/bio-char analysis is that few people actually price the required infrastructure when estimating total energy generation potential, especially because it is highly probable that biomass co-firing rates will be negatively affected by the scrapping of various coal-based power plants due to climate change policy. It is also probable that there will be a tipping point in biomass energy generation costs where the cost of electricity jumps from 5-13 cents37 per kW-h to a higher number due to the land and water requirements necessary to achieve the 100-300 EJ per year required for significant bio-char production.38

Currently it is estimated that the global potential for bio-char production is 0.6 ± 0.1 PgC per year, with an extrapolation to 5.5 to 9.3 PgC per year in 2100.35 However, as discussed above, those estimates are very difficult to verify or even take seriously because they rely on the maximum possible upper limit. Realistically it seems rather silly to try to estimate something 90 years into the future; most people are still waiting for their flying car. Overall, when considering the growth potential of bio-char, the most conservative estimate of growth should be used: for prognostication, probabilities must be favored over lower or upper bounds of possibility.

One of the more realistic studies regarding bio-char is located on the International Bio-char Initiative (IBI) website and looks to quantify the ability of bio-char to be an effective and significant tool in reversing atmospheric CO2 accumulation and possibly climate change. The figure below, from the IBI website, illustrates the model results.


The wedge demarcation is a reference to the famous Pacala and Socolow paper.39 The target goal supported by bio-char proponents is the annual removal of 1 Gt of carbon (3.67 Gt of CO2) from the atmosphere. Each scenario uses the reference value that 61.5 Gt of carbon per year cycles through photosynthesis and respiration. The conservative wedge estimates biomass generated from only cropping and forestry residue that has no future purpose (≈27% of the total residue). The moderate and optimistic scenarios estimate utilization of 50% and 80% of total cropping and forestry residue respectively. Unfortunately the designation ‘no future purpose’ is rather suspect, as previously discussed. Each scenario assumes the use of slow pyrolysis (which yields 40-45% bio-char) instead of fast pyrolysis (which yields 20% bio-char).

Realistically it is highly unlikely that the optimistic-plus scenario outlined in the IBI model will be achieved due to all of the additional requirements it demands, thus the best probable scenario is the optimistic one. Even if that happens, the optimistic scenario still only removes approximately 0.3667 ppm per year, a significant value and a nice bonus, but one far short of what is needed to avoid significant and detrimental climate change under current and even future estimated emission patterns. Therefore, although bio-char can provide some relief, it is important to include other CO2 reduction strategies. The funny thing is that bio-char very well may save the Earth, just in a way that most proponents have not considered.

In addition to air capture and bio-char, mineral weathering has been explored as an idea to neutralize CO2 concentrations at point sources, and this interest has expanded to potentially using it as a means to draw down existing atmospheric CO2 concentrations.

In nature, magnesium-silicate minerals such as olivine (Mg2SiO4) and serpentine [Mg3Si2O5(OH)4] can react with CO2, producing magnesite (MgCO3). Wollastonite (CaSiO3) is also capable of reacting with CO2 to produce calcite (CaCO3).40 Unfortunately natural weathering is too slow to significantly reduce the rapidly accelerating CO2 concentrations the environment is experiencing now.6,41 Investigations of the ability of these minerals to interact with CO2 fall within two different methodologies: ex situ (above ground using a chemical processing plant) and in situ (below ground using little to no chemical or mechanical alteration).40 Clearly in situ has a significant energy and economic advantage due to the lack of processing facilities and mineral alteration, but ex situ has the potential for significantly higher reaction rates and percent conversion. With speed becoming more and more of a factor in the reduction of atmospheric CO2 concentration, ex situ is currently the more valuable method to examine.

As previously mentioned, direct carbonation of minerals like olivine and serpentine is slow, in large part due to the presence of the magnesium and depending on how much water is bound to the mineral. The initial work to accelerate the reaction rate of olivine and serpentine involved a process developed by a familiar individual in this post, the aforementioned Klaus Lackner, which involved dissolving the mineral in question in hydrochloric acid (HCl) to produce silica and MgCl2.42,43 Although the process successfully removed the magnesium, the economic costs of separating the MgCl2 from the silica were significant due to gel formation and the energy required after the HCl dissolution step. The separation process involves a number of extraction steps, focusing primarily on crystallization and dehydration, eventually generating Mg(OH)2, which is then carbonated. The reaction scheme for serpentine is shown below.44
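Since the original diagram is not reproduced here, a simplified representative sequence, based on the HCl-extraction chemistry described above (the actual scheme in reference 44 includes additional crystallization and dehydration intermediates), would be:

Mg3Si2O5(OH)4 + 6 HCl → 3 MgCl2 + 2 SiO2 + 5 H2O

MgCl2 + 2 H2O → Mg(OH)2 + 2 HCl (acid recovered and recycled)

Mg(OH)2 + CO2 → MgCO3 + H2O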


Fortunately a more economically viable process was identified: serpentinization, using a hydrothermal fluid containing CO2 to form magnesite (MgCO3). If CO2 activity is sufficiently high, only carbonate and silicic acid form. The clear advantages of this method are the lack of extraction steps needed to generate magnesite and the elimination of the solid-liquid separation step due to the absence of silica gel. The serpentinization reaction scheme is shown below.45
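Again, in place of the missing diagram, a representative pair of reactions consistent with the description above (written with the pure magnesium end-member of olivine; the full scheme in reference 45 may differ in detail) would be:

2 Mg2SiO4 + CO2 + 2 H2O → Mg3Si2O5(OH)4 + MgCO3 (lower CO2 activity: serpentine plus magnesite)

Mg2SiO4 + 2 CO2 + 2 H2O → 2 MgCO3 + H4SiO4 (high CO2 activity: only carbonate and silicic acid)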


Supplies of olivine and serpentine may never be an issue, as sizable deposits exist all over the world, including significant concentrations on both the West and East Coasts of the United States.46 Therefore, the economic viability of ex situ carbonation depends on the reaction rate and on achieving a reaction completion of 90% or greater. Reaction rates are influenced by increased CO2 (aq) activity, temperature, reduced particle size, disrupted crystal structures and, when applicable, removal of hydration water.40 Of course all of these factors involve energy and/or economic costs. Reducing particle size is especially important because olivine and serpentine react according to the shrinking-particle model and the shrinking-core model respectively.40 In the shrinking-particle model, the particle surface reacts to release magnesium into solution, followed by precipitation of magnesium-carbonate particles from the solution; therefore, the smaller the particle (and the greater the surface area per unit mass), the higher the reaction rate.40 However, it costs more and more energy to generate smaller and smaller particles.

Recalling the importance of reaction efficiency (i.e. extent of reaction, Rx), acquiring a 100% reaction percentage is typically unrealistic. Rx is largely dependent on pretreatments and reaction conditions, with significant overlap with the factors that influence reaction rates. The two most important methods for increasing Rx are reducing particle size and removing any chemically bound water.40

One of the most important factors in measuring the technical and economic viability of mineral sequestration is the ratio (RCO2) between ore mass and CO2 carbonated. Normally RCO2 assumes 100% conversion of the Fe2+, Mg and Ca and establishes a baseline of mineral mass required to carbonate one unit of CO2 mass. Despite vast quantities of available minerals, a lower RCO2 value is better regardless of which mineral is selected, because less mass required to sequester the same amount of CO2 also means lower energy and economic costs. Olivine has the lowest RCO2 value at 1.6, followed by serpentine at 2.2 and wollastonite at 2.7.40
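The quoted RCO2 values can be sanity-checked from simple stoichiometry. The sketch below uses the pure magnesium/calcium end-member formulas, so it comes out at or slightly below the quoted values, which reflect real ore compositions (iron content, impurities).

```python
# Sketch: theoretical ore-to-CO2 mass ratio (RCO2) from mineral stoichiometry.
ATOMIC_MASS = {"Mg": 24.305, "Ca": 40.078, "Si": 28.086, "O": 15.999, "H": 1.008, "C": 12.011}
CO2 = ATOMIC_MASS["C"] + 2 * ATOMIC_MASS["O"]          # ~44.0 g/mol

def molar_mass(formula):
    """formula given as a dict of element -> atom count."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

minerals = {
    # name: (composition, moles of CO2 bound per mole of mineral)
    "olivine (Mg2SiO4)":          ({"Mg": 2, "Si": 1, "O": 4}, 2),
    "serpentine (Mg3Si2O5(OH)4)": ({"Mg": 3, "Si": 2, "O": 9, "H": 4}, 3),
    "wollastonite (CaSiO3)":      ({"Ca": 1, "Si": 1, "O": 3}, 1),
}

for name, (comp, co2_bound) in minerals.items():
    r_co2 = molar_mass(comp) / (co2_bound * CO2)
    print(name, round(r_co2, 2))   # ~1.60, ~2.10, ~2.64 for the pure end-members
```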

One of the biggest problems facing serpentine as a viable reaction candidate is the additional energy required to remove bound water in order to improve the reaction rate. The additional 290 to 325 kW-h/ton, depending on whether the serpentine is antigorite or lizardite,40 significantly impacts the efficiency of carbon sequestering, a problem discussed previously for air capture strategies.

Olivine and wollastonite do not have the same problem with significant hydration water, thus little to no heat pretreatment needs to be utilized. Most of the energy used in the preparation process for these two minerals comes from reducing particle size through grinding. It typically costs 10-12 kW-h/ton to reduce the particle size to 75 microns (commonly referred to as 200 mesh).40,47 At 200 mesh, olivine particles typically have a Rx of 14% after one hour and 52% after three hours when saturated with CO2 (aq). The figure below illustrates the change in Rx after one hour vs. the energy used to decrease particle size.40


Surface area has the greatest influence on Rx and reaction rate, while temperature has the second greatest influence, and that influence extends to viable deployment strategies. The figure below demonstrates that temperatures exceeding 100 °C are preferable.40 Rx values decrease when temperatures exceed a certain level because CO2 (aq) activity decreases and the reaction becomes thermodynamically less favorable. Unfortunately the very low reaction rates at low temperatures reduce the probability of successfully using mineral sequestering strategies outside of specifically designed plants, as ambient air temperature and pressure are much lower than desired.


One of the more significant studies of olivine-driven carbon sequestration calculated that an hourly sequestering rate of 1,100 tons of CO2 would require 3,300 tons of olivine and 352 MW-h of energy.48 This rate would result in total sequestration of 9,636,000 gross tons of CO2 a year (0.00127 ppm) for a single plant.
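Scaling the hourly figures from that study to a full year gives a sense of the material and energy flows involved. The 3.0 ore-to-CO2 ratio implied here is higher than olivine's theoretical RCO2 of 1.6 because the extent of reaction and ore purity are below 100%.

```python
# Sketch: annual material and energy flows for the olivine plant described above.
HOURS_PER_YEAR = 24 * 365

co2_tons_per_hour = 1100.0
olivine_tons_per_hour = 3300.0
energy_mwh_per_hour = 352.0

co2_per_year = co2_tons_per_hour * HOURS_PER_YEAR          # 9,636,000 gross tons
olivine_per_year = olivine_tons_per_hour * HOURS_PER_YEAR  # ~28.9 million tons of ore
energy_per_year = energy_mwh_per_hour * HOURS_PER_YEAR     # ~3.08 million MW-h

print(co2_per_year, olivine_per_year, round(energy_per_year / 1e6, 2))
print("implied ore-to-CO2 ratio:", olivine_tons_per_hour / co2_tons_per_hour)
```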

In general the lowest costs for mineral-based carbon sequestration were calculated at $78-81 per ton for 100% pure olivine. Wollastonite ore (50% purity) also generated a competitive cost of $110 per ton due to its greater reactivity, but that advantage was balanced by its higher RCO2 value and lower purity. Unfortunately, unlike olivine and serpentine, resource supplies of wollastonite are limited, reducing its ability to make a significant contribution to reducing atmospheric CO2 concentrations.

Unfortunately there are some caveats to the estimated costs. First, the study was conducted in a high-pressure environment (350 psi), which is another factor reducing the probability of administering mineral sequestration passively in the open environment; reaction rates are considerably slower at normal atmospheric pressure. Second, not all of the costs associated with the removal were included; transport costs were small due to proximity (something that would not always be true in an atmospheric strategy) and no capital, maintenance or other overhead costs for construction were included. However, despite these exclusions, the economics of specialized mineral sequestering plants are low enough to warrant further research.

However, the CO2 needs to be in aqueous form to have a reasonable probability of reacting with the given mineral at a respectable rate. Therefore, a vast percentage of CO2 in the atmosphere will be unable to spontaneously react with any introduced minerals without pretreatment. Overall, at this point in time it is difficult to consider mineral sequestration an important methodology for the direct removal of atmospheric CO2, beyond what occurs naturally, without the use of a specialized plant to increase reaction rates and extents of reaction. However, in addition to its use in specialized plants, mineral sequestration may be an important element in the storage of CO2 removed from the atmosphere via other means. Commentary on the in situ methodology and its role in storage will be reserved for a future discussion about long-term carbon storage.

It appears that all major atmospheric CO2 removal options are either too limited in the amount of CO2 they can remove or too expensive. Regardless, the expense must be managed because without the removal of already existing CO2, human civilization on Earth will be irrevocably changed.

Dr. James Hansen and others believe that a natural draw down of 50 ppm by 2150 would be possible by establishing a dedicated program that limits deforestation and increases the rate of reforestation, along with limited use of bio-char (slash-and-char) in place of slash-and-burn agriculture.1 This type of draw down would only cost millions instead of the billions to trillions that more advanced technologically driven removal strategies cost at this point in time. So why not pursue this strategy; why is speed so important?

The most important reason is that environmental changes are proceeding much faster than previously predicted. For example, the fourth edition of the IPCC Climate Report hypothesized that summers would be devoid of arctic ice sheets by the end of the century.49 However, only three years after the information for that report was compiled, ice melt has accelerated so quickly that some scientists believe it will take only a decade before there are summers devoid of arctic ice.13 As previously mentioned, the loss of arctic ice is not only important for the localized ecosystem; replacing reflective white ice with dark blue water will also significantly increase the temperature of the oceans.

A possible reason for the huge difference between prognostication and reality is that past work likely overestimated the negative (cooling) effect of aerosols on global climate forcing, due to the uncertainty of those effects.50 The general idea behind aerosols, excluding soot, is that they scatter sunlight, cooling the environment; aerosols function under the same general principle as the sulfur dioxide geo-engineering idea. However, until recently there was conflicting data between climate models and satellite data, with climate models predicting only a 10% reduction in warming vs. satellite data implying a 20% reduction in warming. A new study has reconciled the discrepancy and determined that the 10% masking value is more accurate.51 With a less pronounced influence from aerosols (radiative forcing of -0.3 W/m2 instead of -0.5 W/m2),51 it is reasonable to anticipate a faster rate of climate change than that predicted by the IPCC.

It is impossible to rationally argue that global warming is not significantly influencing the climate.52 The somewhat scary thing is how long this influence has been at work, as a recent study concluded that 90% of the changes in biological systems over the past 38 years were consistent with warming trends.53 If there were a more deterministic timeline of events then it would be easier to select the most economical strategy to ward off the more detrimental aspects of climate change; however, that is not the case, which leaves the available options split between greater economic advantage and greater certainty. Overall, with what is at stake, it makes more sense to bet on certainty that might cost more than on cost-effectiveness that might not be successful. This statement does not mean that more natural strategies like reforestation and ‘slash-and-char’ should not be incorporated into a CO2 reduction plan of action, but that one cannot forego the more expensive technological strategies in favor of the more natural ones.

Natural carbon sinks will be unable to offset the increase in emissions that will likely occur throughout the 21st century. Suppose for a moment that instead of a 50% reduction, the world becomes CO2 neutral by 2050 with an atmospheric concentration of CO2 topping out at ≈450 ppm. Then assume only a 20% reduction in the capacity of natural sinks and negligible increases in all other GHGs. Under these very favorable conditions it would still take nature over 86 years (until 2136) before concentrations returned to what most view as the reasonable safety level of 350 ppm. Of course the above scenario is improbable, but even in that scenario average global temperatures will increase by, and be maintained at, at least 2-3 °C over the next 80-100 years, which will generate some significant and permanent environmental damage. Thus, technologically driven atmospheric carbon capture will be necessary despite the cost, so it is important to take steps to lower that cost as much as possible. These are just some of the steps that should be explored.

First, reducing the dependency of an air capture system on fossil fuel-derived energy will be an important cost-cutting step. The lower the emissions generated by the energy source servicing the air capture unit, the higher the efficiency of that unit. The seemingly best universal option, barring the development of some new energy source, would be to install solar panels on some wing-like addition to the basic capture design. In special cases, wind may prove to be a more reliable source than solar. Nuclear could be an option, but construction times place strains on its viability.

The use of solar power has been suggested by proponents of air capture, but the solar power needs to be generated on-site, not from a separate solar energy generating infrastructure some miles away. It is unlikely that the cost of solar power per kW-h will ever drop below current fossil fuel energy prices, but returning to the Zeman cost analysis, instead of the cost per net ton being $760.16 it would be the original gross-ton price of $175.64, because no CO2 is released to provide the energy of operation, even if the price per kW-h from a solar energy provider were at best similar (9.1 cents).
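The relationship the paragraph describes, that the net-ton cost collapses back to the gross-ton cost as the power source's emissions go to zero, can be written as a single function. The intermediate emission ratios shown are illustrative, not figures from the sources.

```python
# Sketch: cost per net ton of CO2 as a function of the power source's emission ratio
# (lb of CO2 emitted per lb captured). At ratio 0 the net cost equals the gross cost.
def cost_per_net_ton(cost_per_gross_ton, emission_ratio):
    if emission_ratio >= 1.0:
        raise ValueError("system emits more CO2 than it captures; no net removal")
    return cost_per_gross_ton / (1.0 - emission_ratio)

gross = 175.64                                  # Zeman optimal case at 9.1 cents/kW-h
for ratio in (0.0, 0.25, 0.5, 0.76895):         # 0.76895 is the natural-gas-powered case above
    print(ratio, round(cost_per_net_ton(gross, ratio), 2))
```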

Second, and more importantly, each capture unit needs a means to either synthesize or recapture most of the water lost during operation. Recapture could occur by running the output air stream from the contactor into a compressor and then filtering out the water. Lowering the temperature of the air before it enters the contactor would reduce the total amount of water lost, as would limiting construction of air capture units to environments with high relative humidity. Unfortunately the first two strategies require additional energy, but if that energy is provided from a zero-emission source the benefits will outweigh the disadvantages.

Third, a greater number of test plots for exploration of bio-char need to be developed, ranging in climate and soil type. Some test plots have been developed, but the primary goal behind those plots was to better understand the ability of bio-char to aid in crop growth, not to identify its ability to store carbon. Note that this suggestion does not advocate creating bio-char plantations, but instead looks to identify locations where one could deposit bio-char. A practice plot to determine how bio-char interacts with modern-day farming techniques should also prove useful in determining whether bio-char can successfully sequester carbon over long periods of time and aid in increasing crop yield over a period longer than a few years.

Overall there is still a lot of work to do when it comes to neutralizing the increasing atmospheric CO2 concentration, but continued focus on the evolution of atmospheric CO2 extraction techniques will generate the means to avert some of the more detrimental environmental results. Therefore, it is important that attention focus not only on mitigation strategies that reduce the amount of CO2 and other GHGs released into the atmosphere in the first place, but also on strategies that will reduce the existing concentration of CO2 at a sufficient rate.

--
1. Hansen, James, et al. “Target Atmospheric CO2: Where Should Humanity Aim?” The Open Atmospheric Science Journal. 2008. 2: 217-231.

2. Enting, I.G., et al. “Future Emissions and Concentrations of Carbon Dioxide: Key Ocean/Atmosphere/Land Analyses.” 1994. CSIRO – Division of Atmospheric Research Technical Paper #31.

3. “Working Group I: The Physical Science Basis of Climate Change.” Intergovernmental Panel on Climate Change. 2007. http://ipcc-wg1.ucar.edu/wg1/wg1-report.html.

4. Caldeira, K., and Wickett, M. “Anthropogenic carbon and ocean pH.” Nature. 2003. 425: 365.

5. Keeling, C., and Whorf, T. “Atmospheric CO2 records from sites in the SIO air sampling network, Trends: A Compendium of Data on Global Change.” Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tenn., USA. 2004. http://cdiac.esd.ornl.gov/trends/co2/sio-mlo.htm.

6. Ridgwell, Andy, and Zeebe, Richard. “The role of the global carbonate cycle in the regulation and evolution of the Earth system.” Earth and Planetary Science Letters. 2005. 234: 299-315.

7. Moy, Andrew, et al. “Reduced calcification in modern Southern Ocean planktonic foraminifera.” Nature Geoscience. 2009. 2: 276-280.

8. de Moel, H., et al. “Planktic foraminiferal shell thinning in the Arabian Sea due to anthropogenic ocean acidification?” Biogeosciences Discussions. 2009. 6(1): 1811-1835.

9. Armstrong, R., et al. “A new, mechanistic model for organic carbon fluxes in the ocean: based on the quantitative association of POC with ballast minerals.” Deep-Sea Res. 2002. Part II 49: 219-236.

10. Klaas, C., and Archer, D. “Association of sinking organic matter with various types of mineral ballast in the deep sea: implications for the rain ratio.” Glob. Biogeochem. Cycles. 2002. 16(4): 1116.

11. Ridgwell, Andy. “An end to the ‘rain ratio’ reign?” Geochem. Geophys. Geosyst. 2003. 4(6): 1051.

12. Barker, S., et al. “The Future of the Carbon Cycle: Review, Calcification Response, Ballast and Feedback on Atmospheric CO2.” Philos. Trans. R. Soc. A. 2003. 361: 1977.

13. Hawkins, Richard, et al. “In Case of Emergency.” Climate Safety. Public Interest Research Centre. 2008.

14. Canadell, Josep, et al. “Contributions to accelerating atmospheric CO2 growth from economic activity, carbon intensity, and efficiency of natural sinks.” PNAS. 2007. 104(47): 18866-18870.

15. Biello, David, et al. “Researchers Use Volcanic Eruption as Climate Lab.” Scientific Nature. January 5, 2007.

16. Angel, Roger. “Feasibility of cooling the Earth with a cloud of small spacecraft near the inner Lagrange point (L1).” PNAS. 2006. 103(46): 17184-17189.

17. “Lohafex project provides new insights on plankton ecology: Only small amounts of atmospheric carbon dioxide fixed.” International Polar Year. March 23, 2009.

18. Black, Richard. “Setback for climate technical fix.” BBC News. March 23, 2009.

19. Various press releases and news articles from the GRT website: http://www.grtaircapture.com/

20. Zeman, Frank. “Energy and Material Balance of CO2 Capture from Ambient Air.” Environ. Sci. Technol. 2007. 41(21): 7558-7563.

21. Hong, B.D., and Slatick, E.R. “Carbon Dioxide Emission Factors for Coal.” Energy Information Administration, Quarterly Coal Report. January-April 1994. pp. 1-8.

22. “Electric Power Industry 2007: Year in Review.” Energy Information Administration. May 2008. pg. 1.

23. Mahmoudkhani, M., and Keith, D.W. “Low-Energy Sodium Hydroxide Recovery for CO2 Capture from Air.” International Journal of Greenhouse Gas Control Technologies. Pre-print.

24. Keith, David. “Direct Capture of CO2 from the Air.” Unpublished presentation. www.ucalgary.ca/~keith

25. Stolaroff, Joshuah, et al. “Carbon Dioxide Capture from Atmospheric Air Using Sodium Hydroxide Spray.” Environ. Sci. Technol. 2008. 42: 2728-2735.

26. Zeman, Frank, and Keith, David. “Carbon neutral hydrocarbons.” Phil. Trans. R. Soc. A. 2008. 366: 3901-3918.

27. Amonette, Jim. “An Introduction to Biochar: Concept, Processes, Properties, and Applications.” Harvesting Clean Energy 9 Special Workshop. Billings, MT. January 25, 2009.

28. Glaser, B., et al. “The Terra Preta phenomenon – A model for sustainable agriculture in the humid tropics.” Naturwissenschaften. 2001. 88: 37-41.

29. Glaser, B., Lehmann, J., and Zech, W. “Ameliorating physical and chemical properties of highly weathered soils in the tropics with charcoal – a review.” Biology and Fertility of Soils. 2008. 35: 4.

30. Lehmann, J., and Rondon, M. “Bio-char soil management on highly-weathered soils in the humid tropics.” Biological Approaches to Sustainable Soil Systems. 2005. Boca Raton, CRC Press, in press.

31. Soubies, F. “Existence of a dry period in the Brazilian Amazonia dated through soil carbon, 6000-3000 years BP.” Cah. ORSTOM, sér. Géologie. 1979. 1: 133.

32. Herring, J.R. “Charcoal fluxes into sediments of the North Pacific Ocean: the Cenozoic record of burning.” In: E.T. Sundquist and W.S. Broecker, Editors, The Carbon Cycle and Atmospheric CO2: Natural Variations, Archaean to Present. A.G.U. 1985. 419-442.

33. Hedges, et al. “The molecularly-uncharacterized component of nonliving organic matter in natural environments.” Organic Geochemistry. 2000. 31: 945-958.

34. Preston, C., and Schmidt, M. “Black (pyrogenic) carbon: a synthesis of current knowledge and uncertainties with special consideration of boreal regions.” Biogeosciences. 2006. 3: 397-420.

35. Lehmann, J., Gaunt, J., and Rondon, M. “Bio-char sequestration in terrestrial ecosystems – a review.” Mitigation and Adaptation Strategies for Global Change. 2006. 11: 403-427.

36. Johnson, Jane, et al. “Chemical Composition of Crop Biomass Impacts Its Decomposition.” Soil Science Society of America Journal. 2007. 71: 155-162.

37. IEA Energy Technology Essentials. “Biomass for Power Generation and CHP.” January 2007.

38. Berndes, G., Hoogwijk, M., and Van Den Broeck, R. “The contribution of biomass in the future global energy supply: A review of 17 studies.” Biomass and Bioenergy. 2003. 25: 1-28.

39. Pacala, S., and Socolow, R. “Stabilization Wedges: Solving the Climate Problem for the Next 50 Years with Current Technologies.” Science. 2004. 305(5686): 968-972.

40. Gerdemann, S.J., et al. “Ex-Situ and In-Situ Mineral Carbonation as a Means to Sequester Carbon Dioxide.” DOE Analysis.

41. Schuiling, R.D., and Krijgsman, P. “Enhanced Weathering: An Effective and Cheap Tool to Sequester CO2.” Climatic Change. 2006. 74: 349-354.

42. Lackner, K.S., Butt, D.P., and Wendt, C.H. “Magnesite Disposal of Carbon Dioxide.” Los Alamos National Laboratory. 1997. LA-UR-97-660.

43. Lackner, K.S., Butt, D.P., and Wendt, C.H. “Progress on Binding CO2 in Mineral Substrates.” Energy Conversion Mgmt. 1997. 38: Suppl., S259-S264.

44. Lackner, K.S., et al. “The Kinetics of Binding Carbon Dioxide in Magnesium Carbonate.” Los Alamos National Laboratory. 1998. LA-UR-98-763.

45. O’Connor, W.K., et al. “Carbon Dioxide Sequestration by Ex Situ Mineral Carbonation.” Technology. 2000. 7S: 115-123.

46. O’Connor, W.K., et al. “Aqueous Mineral Carbonation: Mineral Availability, Pretreatment, Reaction Parametrics, and Process Studies.” 2004. DOE/ARC-TR-04-002.

47. O’Connor, W.K., et al. “Carbon Dioxide Sequestration: Aqueous Mineral Carbonation Studies Using Olivine and Serpentine.” In-house presentation, Albany Research Center, Office of Fossil Energy, US DOE.

48. Lyons, J.L., Berkshire, L.H., and White, C.W. “Mineral Carbonation Feasibility Study.” Draft Report, commissioned by National Energy Technology Laboratory. 2003. 56.

49. IPCC Fourth Assessment Report. WG I. Technical Summary. 73.

50. Hansen, James, et al. “Climate change and trace gases.” Phil. Trans. R. Soc. A. 2007. 365: 1925-1954.

51. Myhre, Gunnar. “Consistency Between Satellite-Derived and Modeled Estimates of the Direct Aerosol Effect.” Science. June 18, 2009. DOI: 10.1126/science.1174461.

52. United States Global Change Research Program – http://www.globalchange.gov/

53. Rosenzweig, Cynthia, et al. “Attributing Physical and Biological Impacts to Anthropogenic Climate Change.” Nature. 453(7193): 353-357.