Friday, October 30, 2009

Sharp Solar Cell



This will not be on the market very soon, I am sure, because of cost issues. However, it means that for critical applications it is possible to achieve 35 percent efficiency, a far cry from what can be supplied to the domestic market, which as yet operates between 10 and 15 percent.



As I have posted, there are pretty neat things being accomplished in the lab, and until we transition to a complex three-dimensional nano architecture, the best results will run somewhere between 30 and perhaps 40 percent. This is obviously pretty good. It just is not cheap.



Every lab has understood the goal posts for decades. It is a little hard to imagine that this effort has been underway for so long, yet that is typical of technology research. The legend of the eureka moment is obviously overrated.



This will be of great value where money is not an issue and where surface area is.




Sharp Develops Solar Cell With World's Highest Conversion Efficiency



http://www.solardaily.com/images/sharp-new-pv-solar-ceel-structure-bg.jpg







by Staff Writers



Washington DC (SPX) Oct 29, 2009



http://www.solardaily.com/reports/Sharp_Develops_Solar_Cell_With_World_Highest_Conversion_Efficiency_999.html



Sharp has achieved the world's highest solar cell conversion efficiency (for non-concentrator solar cells) of 35.8% using a triple-junction compound solar cell. Unlike silicon-based solar cells, the most common type of solar cell in use today, the compound solar cell utilizes photo-absorption layers made from compounds consisting of two or more elements such as indium and gallium.


Due to their high conversion efficiency, compound solar cells are used mainly on space satellites.


Since 2000, Sharp has been advancing research and development on a triple-junction compound solar cell that achieves high conversion efficiency by stacking three photo-absorption layers.


To boost the efficiency of triple-junction compound solar cells, it is important to improve the crystallinity (the regularity of the atomic arrangement) in each photo-absorption layer (the top, middle, and bottom layer).


It is also crucial that the solar cell be composed of materials that can maximize the effective use of solar energy.


Conventionally, Ge (germanium) is used as the bottom layer due to its ease of manufacturing. However, in terms of performance, although Ge generates a large amount of current, the majority of the current is wasted, without being used effectively for electrical energy.


The key to solving this problem was to form the bottom layer from InGaAs (indium gallium arsenide), a material with high light utilization efficiency. However, the process to make high-quality InGaAs with high crystallinity was difficult.


Sharp has now succeeded in forming an InGaAs layer with high crystallinity by using its proprietary technology for forming layers.


As a result, the amount of wasted current has been minimized, and the conversion efficiency, which had been 31.5% in Sharp's previous cells, has been successfully increased to 35.8%.

ReVolt Zinc Air Battery



We have been writing up so much leading-edge energy storage technology that it is almost a relief to find someone doing magic with a basic chemical design and getting it to perform like a thoroughbred.


What is being suggested here is a very safe battery that can deliver an operating range of at least 100 to 150 miles. This is a great jump over present battery technology and takes the electric vehicle from awful to quite bearable. When a driver enters his vehicle in the morning, he is always at some risk of having to run an extra errand that demands a range reserve. Thirty miles normally cannot give you that, but 100 miles will.
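The range arithmetic here is simple enough to sketch. The pack size and consumption figures below are illustrative assumptions, not ReVolt's numbers; they just show how a claimed three-to-fourfold jump in energy density translates into driving range:

```python
# Back-of-envelope EV range estimate. All inputs are illustrative:
# assume a pack that would hold 10 kWh as lithium-ion, and ReVolt's
# claim of roughly 3-4x that energy in the same size and weight.
def range_miles(pack_kwh, wh_per_mile=250):
    """Estimated range from usable pack energy and consumption per mile."""
    return pack_kwh * 1000 / wh_per_mile

li_ion_range = range_miles(10)       # ~40 miles on the assumed pack
zinc_air_low = range_miles(3 * 10)   # 3x claim -> ~120 miles
zinc_air_high = range_miles(4 * 10)  # 4x claim -> ~160 miles
```

At an assumed 250 Wh per mile, the same pack volume goes from an anxious 40 miles to the 120-160 mile band discussed above, which is exactly the jump that makes a city car feasible.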


Thus a battery able to supply over a hundred miles clearly makes the city car feasible.


At worst, it is a transition technology, or an alternative to long-range systems once those are available yet perhaps too expensive to be underutilized.


This technology may also be packable into small devices, in which case it will certainly displace the lithium business.


October 27, 2009


ReVolt Zinc Air Batteries Store Three to Four Times the Energy of Lithium-Ion Batteries at Half the Cost


MIT Technology Review reports that ReVolt of Switzerland claims to have solved issues with rechargeable zinc-air batteries. Nonrechargeable zinc-air batteries have long been on the market, but making them rechargeable has been a challenge.


http://nextbigfuture.com/2009/10/revolt-zinc-air-batteries-store-three.html



ReVolt Technology claims breakthrough achievements in developing a metal-air battery that overcomes all of the above barriers to deliver:


POWER: ReVolt’s new technology has a theoretical potential of up to 4 times the energy density of Lithium-Ion batteries at a comparable or lower production cost.



LIFETIME: Extended battery life due to stable reaction zone, low rates of dry-out and flooding, and no pressure build-up problems.



RECHARGEABILITY: Controlled deposition with no short-circuit, high mechanical stability.



COMPACT SIZE: No need for bulky peripherals such as cooling fans or temperature control systems.




ReVolt says it has developed methods for controlling the shape of the zinc electrode (by using certain gelling and binding agents) and for managing the humidity within the cell. It has also tested a new air electrode that has a combination of carefully dispersed catalysts for improving the reduction of oxygen from the air during discharge and for boosting the production of oxygen during charging. Prototypes have operated well for over one hundred cycles, and the company's first products are expected to be useful for a couple of hundred cycles. McDougal hopes to increase this to between 300 and 500 cycles, which will make them useful for mobile phones and electric bicycles.


For electric vehicles, ReVolt is developing a novel battery structure that resembles that of a fuel cell. Its first batteries use two flat electrodes, which are comparable in size. In the new batteries, one electrode will be a liquid--a zinc slurry. The air electrodes will be in the form of tubes. To generate electricity, the zinc slurry, which is stored in one compartment in the battery, is pumped through the tubes where it's oxidized, forming zinc oxide and releasing electrons. The zinc oxide then accumulates in another compartment in the battery. During recharging, the zinc oxide flows back through the air electrode, where it releases the oxygen, forming zinc again.


In the company's planned vehicle battery, the amount of zinc slurry can be much greater than the amount of material in the air electrode, increasing energy density. Indeed, the system would be like a fuel-cell system or a conventional engine, in that the zinc slurry would essentially act as a fuel--pumping through the air electrode like the hydrogen in a fuel cell or the gasoline in a combustion engine. McDougal says the batteries could also last longer--from 2,000 to 10,000 cycles. And, if one part fails--such as the air electrode--it could be replaced, eliminating the need to buy a whole new battery.


SunPower Panels



Don't you just love vigorous competition at work? Sharp may be breaking performance records for the exotics, but SunPower is breaking performance records for the domestics. Here they can lock in panels with an efficiency of around twenty percent. This is a solid improvement over the fifteen percent benchmark bandied about, and it certainly starts making the technology attractive.



When you realize that ten percent efficiency is awfully close to zero on a cloudy day, it is really hard to get excited about all that area you are sponging up. At twenty percent, a thirty percent loss is not so catastrophic.



In the meantime we will be seeing a lot more of Nanosolar's printed solar cells, likely operating around ten percent and cheap at $1.00 per watt.



Right now efficiency is directly tied to cost per watt. The business strategy is to design a power farm that allows easy panel replacement as new product arrives. That way, the operator can expect to increase his output through the normal replacement cycle.




SunPower Announces World-Record Solar Panel



by Staff Writers



San Jose CA (SPX) Oct 29, 2009



http://www.solardaily.com/reports/SunPower_Announces_World_Record_Solar_Panel_999.html



SunPower has announced that it has produced a world-record, full-sized solar panel with a 20.4 percent total area efficiency. The prototype was successfully developed using funds provided by the U.S. Department of Energy (DOE) under its Solar America Initiative (SAI), which was awarded to SunPower approximately two years ago.


The new 96-cell, 333-watt solar panel is comprised of SunPower's third generation solar cell technology that offers a minimum cell efficiency of 23 percent. In addition, the larger area cells are cut from a 165 mm diameter ingot and include an anti-reflective coating for maximum power generation.


With a total panel area of 1.6 square meters, including the frame, SunPower's 20.4 percent panel achieved the highest efficiency rating of a full-sized solar panel, a rating confirmed by the National Renewable Energy Lab (NREL), an independent testing facility.
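As a sanity check, the quoted figures hang together under standard test conditions. The 1000 W/m² irradiance figure is an assumption on my part (it is the usual rating condition, but the article does not state it): 333 watts from 1.6 square meters works out to roughly 21 percent, in the neighborhood of the NREL-confirmed 20.4 percent total-area number.

```python
# Rough cross-check of the panel numbers. The irradiance value is an
# assumed standard-test-condition figure (1000 W/m^2), not from the article.
def total_area_efficiency(watts, area_m2, irradiance_w_m2=1000.0):
    """Fraction of incident sunlight converted, over the whole panel area."""
    return watts / (irradiance_w_m2 * area_m2)

eff = total_area_efficiency(333, 1.6)   # ~0.208, close to the quoted 20.4%
```

The small gap between ~20.8 and 20.4 percent is unsurprising; a nameplate wattage and an independently measured total-area efficiency need not match exactly.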


"SunPower has the engineering expertise and proven technology to accomplish this remarkable milestone in such a short period of time," said Larry Kazmerski, executive director, science and technology partnerships, located at NREL.


"My colleagues at the DOE and NREL had cautioned me that reaching a 20 percent solar panel was a stretch, but this did not dampen my optimism that it would happen. I congratulate SunPower and its team of talented engineers on realizing this accomplishment."


SunPower expects to make the 20.4 percent efficiency solar panel commercially available within the next 24 months. The company plans to begin operating a U.S. panel manufacturing facility in 2010 using automated equipment designed and commercialized with SAI funding.


SunPower recently announced the availability of the SunPower T5 Solar Roof Tile (T5), the first photovoltaic roof product to combine solar panel, frame and mounting system into a single pre-engineered unit. The T5 was also developed using research and development funds from the SAI.


"We are excited with the rapid pace in which we've been able to develop these advanced technologies," said Bill Mulligan, SunPower's vice president of technology and development.


"Without the funding from the SAI, it would have taken us much longer to deliver both the world-record 96-cell solar panel and the innovative T5 Solar Roof Tile. We appreciate the DOE's continued support of the solar energy industry."


The Solar America Initiative is focused on accelerating widespread commercialization of clean solar energy technologies by 2015, providing the U.S. additional electricity supply options while reducing dependence on fossil fuels and improving the environment.


It aims to achieve market competitiveness for solar electric power through government partnerships with industry, universities, national laboratories, states, and other public entities by funding new research and development activities.


Thursday, October 29, 2009

Solar Variability




When I went and dug up information on the sun's variability a couple of years ago, I came across figures that showed minuscule changes. This article shows us that that was misleading. I get the impression that the folks who care about these things are possibly still collecting relevant data. This is plausible because the important parts are those the atmosphere absorbs and that we never see at all.


The importance of all this is that the extreme ultraviolet (EUV) varies over a range of six percent of the total EUV output. On top of that, it is absorbed by the atmosphere or, less likely, reflected. Absorption means that this will heat the part of the atmosphere that does the absorbing. That heat must be accounted for in our global heat engine equation.


After observing the field for a couple of years, I am not too trusting these days. A six percent variation over the 11-year cycle in a single source of heat is clearly a prospective climate driver. If it is not, it is because the output in that range is too low. It also reacts mainly with the outer reaches of our atmosphere, but that may be too conservative an opinion. After all, how could we properly tell?


In space we have satellites to measure the total flux of any wave band. We can do the same on the ground and near the ground. In between, our coverage is much weaker, to the point of being nearly blind.


The point is that this is a prospective heat driver that has not been respected, for whatever reason. A couple of unrecognized heat transfer mechanisms and we are in business. The variation itself is not trivial. It is linked rather directly to the sunspot cycle. The increase in radiation is significant and is much different from the variation over the other bands (as far as I can tell).


A heating of the upper atmosphere would naturally slow the radiative heat loss from the planet and generate a warming climate. The loss of this heated zone of the upper atmosphere, once accomplished, would allow built-up heat in the lower atmosphere to escape. Should the cycle fail to rekindle, this cooling will continue.


Now understand that the heat loss during the latter stages is being drawn from a much smaller atmospheric volume. This implies that it will get a lot colder.


What I am saying is that putting a heat cap on the upper atmosphere is possibly sufficient to explain the warming effect of the sunspot cycle. The failure of the cycle allows a progressive chilling of the climate. Two or three failed cycles in a row and we have a little ice age to contend with.


It will not produce a real ice age, but it will certainly curtail our recent livable winters. I had recognized for some time that we needed something that leveraged the sunspot cycle to properly explain the data. The apparent output of the sun lacked the variability to do it all on its own. This effect modifies heat loss.




The Sun's Sneaky Variability


10.27.2009


http://science.nasa.gov/headlines/y2009/27oct_eve.htm?list1109684


October 27, 2009: Every 11 years, the sun undergoes a furious upheaval. Dark sunspots burst forth from beneath the sun's surface. Explosions as powerful as a billion atomic bombs spark intense flares of high-energy radiation. Clouds of gas big enough to swallow planets break away from the sun and billow into space. It's a flamboyant display of stellar power.


So why can't we see any of it?


Almost none of the drama of Solar Maximum is visible to the human eye. Look at the sun in the noontime sky and—ho-hum—it's the same old bland ball of bright light.


"The problem is, human eyes are tuned to the wrong wavelength," explains Tom Woods, a solar physicist at the University of Colorado in Boulder. "If you want to get a good look at solar activity, you need to look in the EUV."


EUV is short for "extreme ultraviolet," a high-energy form of ultraviolet radiation with wavelengths between 1 and 120 nanometers. EUV photons are much more energetic and dangerous than the ordinary UV rays that cause sunburns. Fortunately for humans, Earth's atmosphere blocks solar EUV; otherwise a day at the beach could be fatal.



When the sun is active, intense solar EUV emissions can rise and fall by factors of thousands in just a matter of minutes. These surges heat Earth's upper atmosphere, puffing it up and increasing the drag on satellites. EUV photons also break apart atoms and molecules, creating a layer of ions in the upper atmosphere that can severely disturb radio signals. To monitor these energetic photons, NASA is going to launch a sensor named "EVE," short for EUV Variability Experiment, onboard the Solar Dynamics Observatory as early as this winter.


"EVE gives us the highest time resolution (10 sec) and the highest spectral resolution […]"


Although EVE is designed to study solar activity, its first order of business is to study solar inactivity. SDO is going to launch during the deepest solar minimum in almost 100 years. Sunspots, flares and CMEs are at low ebb. That's okay with Woods. He considers solar minimum just as interesting as solar maximum.


"Solar minimum is a quiet time when we can establish a baseline for evaluating long-term trends," he explains. "All stars are variable at some level, and the sun is no exception. We want to compare the sun's brightness now to its brightness during previous minima and ask ourselves, is the sun getting brighter or dimmer?"


Lately, the answer seems to be dimmer. Measurements by a variety of spacecraft indicate a 12-year lessening of the sun's "irradiance" by about 0.02% at visible wavelengths and 6% at EUV wavelengths. These results, which compare the solar minimum of 2008-09 to the previous minimum of 1996, are still very preliminary. EVE will improve confidence in the trend by pinning down the EUV spectrum with unprecedented accuracy.
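It is worth putting those two percentages on the same absolute scale. The TSI value below is the standard figure; the total EUV irradiance is an illustrative assumption on my part (a few milliwatts per square meter), since the article does not give one:

```python
# Comparing the 0.02% visible-band dimming and the 6% EUV dimming in
# absolute watts. TSI is the standard measured value; the EUV irradiance
# is an assumed order-of-magnitude figure, not taken from the article.
TSI = 1361.0    # W/m^2, total solar irradiance at Earth
EUV = 0.005     # W/m^2, assumed total EUV (1-120 nm) irradiance

tsi_drop = TSI * 0.0002   # 0.02% of TSI  -> ~0.27 W/m^2
euv_drop = EUV * 0.06     # 6% of EUV     -> ~0.0003 W/m^2
```

Under these assumptions, the visible-band dimming dominates the absolute energy budget, while the EUV dominates the fractional change; the interesting question the article raises is whether that tiny absolute EUV change is leveraged by where in the atmosphere it is absorbed.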


http://science.nasa.gov/headlines/y2009/images/deepsolarminimum/irradiance.jpg



Above: Space-age measurements of the total solar irradiance or "TSI". TSI is the sun's brightness summed across all the wavelengths of the electromagnetic spectrum--visible light and EUV included. TSI goes up and down with the 11 year solar cycle. Credit: C. Fröhlich.



The sun's intrinsic variability and its potential for future changes are not fully understood—hence the need for EVE. "The EUV portion of the sun's spectrum is what changes most during a solar cycle," says Woods, "and that is the part of the spectrum we will be observing."


Woods gazes out his office window at the Colorado sun. It looks the same as usual. EVE, he knows, will have a different story to tell.

Acidification and Shellfish Decline


As you might imagine, this is not going to happen any time soon, but it demonstrates a possible consequence of continuing to build up the CO2 content in the atmosphere. I am really not too sure about this, because the ocean is a sponge for CO2 and a major active part of the CO2 cycle, in the same way the ocean is for water.




The issue is simply that we are producing a huge amount of CO2, and we all agree that this is likely a bad idea. The good news is that the transition to a non-carbon economy is both possible and already starting to happen, with or without cheap sources of energy. A lot will transition fairly fast, but the stupendous growth of the global economy will keep a lot of that carbon-based power afloat.




However, global change is presently so fast that we are no longer talking a century but are talking decades. The USA and Europe can fully change out their entire rolling stock of automobiles to electric over a single decade. Heavy transport will change out in the decade after. This leaves the carbon market only the coal power industry and that is waiting to see if we will have success with hydrogen boron fusion energy. If that occurs, the total conversion will take perhaps ten to fifteen years worldwide.




The whole issue of carbon dioxide is about to become moot when that happens.




We may end up deciding some day that it is a good idea to burn coal simply to produce CO2 to replace that which we are naturally sequestering as we terraform the planet with deep rich soil and intensive agriculture.






Ocean Acidification May Contribute To Global Shellfish Decline




http://www.terradaily.com/reports/Ocean_Acidification_May_Contribute_To_Global_Shellfish_Decline_999.html




http://www.terradaily.com/images/hard-clam-bay-scallop-eastern-oyster-acid-ocean-experiment-bg.jpg





At the end of experiments (about 20 days) larvae of the hard clam, bay scallop, and Eastern oyster that were raised in seawater with high carbon dioxide concentration (right column) were smaller and later to develop than larvae raised in seawater with carbon dioxide concentration matching current ocean levels (left column.) Credit: Stephanie Talmage




by Staff Writers



Stony Brook NY (SPX) Oct 28, 2009



Relatively minor increases in ocean acidity brought about by high levels of carbon dioxide have significant detrimental effects on the growth, development, and survival of hard clams, bay scallops, and Eastern oysters, according to researchers at Stony Brook University's School of Marine and Atmospheric Sciences.


In one of the first studies looking at the effect of ocean acidification on shellfish, Stephanie Talmage, PhD candidate, and Professor Chris Gobler showed that the larval stages of these shellfish species are extremely sensitive to enhanced levels of carbon dioxide in seawater. Their work will be published in the November issue of the journal Limnology and Oceanography.


"In recent decades, we have seen our oceans threatened by overfishing, harmful algal blooms, and warming. Our findings suggest ocean acidification poses an equally serious risk to our ocean resources," said Gobler.


During the past century the oceans absorbed nearly half of atmospheric carbon dioxide derived from human activities such as burning fossil fuels. As the ocean absorbs carbon dioxide it becomes more acidic and has a lower concentration of carbonate, which shell-making organisms use to produce their calcium carbonate structures, such as the shells of shellfish.


In lab experiments, Talmage and Gobler examined the growth and survivorship of larvae from three species of commercially and ecologically valuable shellfish. They raised the larvae in containers bubbled with different levels of carbon dioxide in the range of concentrations that are projected to occur in the oceans during the 21st century and beyond.


Under carbon dioxide concentrations estimated to occur later this century, clam and scallop larvae showed a more than 50% decline in survival. These larvae were also smaller and took longer to develop into the juvenile stage. Oysters also grew more slowly at this level of carbon dioxide, but their survival was only diminished at carbon dioxide levels expected next century.


"The longer time spent in the larval stage is frightening on several levels," said Talmage. "Shellfish larvae are free swimming. The more time they spend in the water column, the greater their risk of being eaten by a predator. A small change in the timing of the larval development could have a large effect on the number of larvae that survive to the juvenile stage and could dramatically alter the composition of the entire population."


Although levels of carbon dioxide in marine environments will continue to rise during this century, organisms in some coastal zones are already exposed to high levels of carbon dioxide due to high levels of productivity and carbon input from sources on land.


"This could be an additional reason we see declines in local stocks of shellfish throughout history," said Talmage. "We've blamed shellfish declines on brown tide, overfishing, and local low-oxygen events. However it's likely that ocean acidification also contributes to shellfish declines."


Talmage and Gobler hope their work might help improve the success rate of shellfish restoration projects.


"On Long Island there are many aquaculturists who restock local waters by growing shellfish indoors at the youngest stages and then release them in local estuaries," said Talmage. "We might be able to advise them on ideal carbon dioxide conditions for growth while larvae are in their facilities, and offer suggestions on release times so that conditions in the local marine environment provide the young shellfish the best shot at survival."


Immunization Season



How rarely we publish good news. The global immunization campaign is possibly the first significant global public policy initiative organized at the governmental level. It has been rolling along for decades, and despite the annual grumbling, it is really an outstanding policy success.


This report tells us that the number of one-year-old babies not being immunized has dropped to 24 million, or perhaps a fifth of the total. Even if it were physically impossible to reach the balance, and I am sure that over time a lot get their shots, the biological effect is immense. Everywhere, only a small fraction of the population is vulnerable. The prospect of maintaining a pool of disease in a sea of the immune is difficult, and usually a downtrend begins.


In the meantime, we are being exposed to a public flap over swine flu that is perhaps a bit over the top, but it will certainly chase everyone in this year to get their blanket coverage renewed. The industry has obviously prospered, and this is plausibly a test run of its ability to react to a truly serious threat that could kill thousands. It is obviously working quite well.


Unheralded is the fact that Africa is almost visibly responding to this preemptive treatment. A whole range of childhood killers is being brought under control. This began a long time ago, and many who are now adults with children of their own are prospering in good health today.


The point is that good health is changing the economies of even the poorest of the poor.



October 21, 2009


Global Child Immunizations at All-Time High, Despite Rising Costs


A new report highlights the success of worldwide vaccine and immunization programs, but cautions about continued challenges and high costs


By Katherine Harmon



AWAITING VACCINATION The new immunization report reveals that despite progress, 24 million children remain unvaccinated.


FLICKR/HDPTCAR


More children are now immunized across the globe than ever before, according to the 2009 State of the World's Vaccines and Immunization report, released Wednesday. An estimated 106 million infants received vaccinations in 2008, noted the analysis published by the World Health Organization (WHO), the United Nations Children's Fund (UNICEF), and The World Bank.



"This represents more than 2.5 million deaths prevented," Jon Andrus, deputy director of the Pan American Health Organization (PAHO), said at a press briefing about the results. "Is that enough? No." Some 24 million infants worldwide still do not receive all of the recommended inoculations in their first year.



The Global Immunization Vision and Strategy (GIVS), co-sponsored by WHO and UNICEF, estimates that by vaccinating 90 percent of the world's children against 14 illnesses for which there are vaccines (diphtheria, pertussis, tetanus, measles, polio, tuberculosis, hepatitis B, Hib disease, rubella, meningococcal disease, pneumococcal disease, rotavirus, and, where applicable, Japanese encephalitis and yellow fever), another two million child deaths could be prevented. If achieved, that number would go a long way toward meeting the goals of the 2000 United Nations Millennium Declaration, which sought to decrease mortality in children under five by two thirds by 2015 (using 1990 figures as a basis).


"This is about saving lives," Daisy Mafubelu, WHO assistant director general of Family and Community Health, said at the briefing. But saving lives does not come cheaply. Reaching the GIVS goals would cost about $76 billion, as the cost of immunizing each child is on the rise.


Cost of saving lives


In the 1980s immunizing each child cost only a few dollars on average. But by next year it is expected to run about $18. To meet the GIVS target, that figure would increase to $30, due to the sheer number of vaccines required and the complexity of newer formulas. Such per-child costs may not sound like much in today's expensive U.S. health care market, but for many developing countries, that is more than 10 percent of annual per capita income.


Much of the tab is picked up by governments, although international aid organizations, such as UNICEF, are currently buying more than half of the vaccines that go to children.


All the money also adds up to a mammoth industry. Having burgeoned to nearly three times its 2000 size, the global market for vaccines was more than $17 billion last year. As vaccine and immunization programs expand and become more stable, more drugmakers are inclined to get into the market. "With bigger volume—and bigger predictable volume—we're getting more manufacturers entering," Helen Evans, deputy chief executive officer of the Global Alliance for Vaccines and Immunizations, said at the meeting. And the growing amount of competition has helped to drive down the price of individual vaccines.


The substantial costs can be recouped in the long run through savings on the reduced medical treatment subsequently needed for the illnesses. Vaccinating against smallpox across the globe between 1967 and 1977 cost about $100 million. The eradication of the disease, however, has saved an estimated $1.3 billion a year since then, according to the report.
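Those smallpox figures imply a remarkable return that is easy to tally, using the article's own numbers and counting savings only through 2009:

```python
# Tallying the smallpox return on investment from the article's figures.
campaign_cost = 100e6      # dollars spent on global vaccination, 1967-1977
annual_savings = 1.3e9     # dollars saved per year since eradication

years = 2009 - 1977                   # years of savings as of this report
cumulative = annual_savings * years   # ~$41.6 billion saved so far
roi = cumulative / campaign_cost      # ~416x the original outlay
```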


"Immunization does not only save lives, it improves them," Saad Houry, deputy executive director of UNICEF, said at the briefing. It can also improve the health of a nation's economy. Even those who survive preventable childhood illnesses are often either physically disabled or less educated (due to missed school) and are unable to contribute as much to the economy as healthier individuals.


Reaching more people


A hobbled international economy will certainly put new strains on expensive international vaccination programs, although the recession has not impacted UNICEF's funding for vaccines, and many countries have continued to support purchases despite declining GDPs, Houry said. An increase in overall poverty, however, will likely mean fewer children are immunized. "Of [the] 89 million people who will be pushed back into poverty" by the global recession, Rakesh Nangia, The World Bank Institute's vice president and director of strategy and operations for the bank's Human Development Network, said at the briefing, "many of the children will not be able to get their vaccines."


Promising new vaccines also present new challenges. Those meant for adolescents, such as the human papillomavirus (HPV) vaccine, or for adults, such as the H1N1 vaccine, will need new tactics for distribution. Public health workers in the field hope to capitalize on existing distribution channels to reach some recipients—as they already do for distributing other preventative tools such as vitamin supplements and mosquito bed nets—but more strategizing remains.


The last 20 percent of the population not receiving the inoculations will be the hardest to reach, UNICEF's Houry noted. That fifth also usually includes those who don't receive other health or basic services, a sign of deeper inequity. Just physically getting vaccines to many rural populations can present incredible logistical challenges, especially in tropical climates where continuous refrigeration of vaccine courses is especially difficult. "In many cases the systems of delivery are very weak," Nangia said.

Wednesday, October 28, 2009

Economist on Hydrogen Boron Fusion



This is an exceptionally good article, published originally in the Economist. It completes our understanding of where we are going with all this and is actually very encouraging. The hydrogen-boron fusion reaction is far more believable than other proposed protocols because the comparable momentum of boron and hydrogen is large enough to suggest a much lower energy expense in achieving fusion.


The result is that we break up a boron atom into high-energy helium and no nasties. It is almost too good to be true.


Anyway, this article will go a long way toward helping all of us understand just why this can work, and also why it is happening now, after all the effort spent over the past forty years. Both methods look very close to success. And that is likely to be the most important thing to happen so far in this century. It will never be free energy, but I can settle for the next best thing and the swift end of carbon-based energy technology.


Do not invest in coal plants for the long haul. This is going to be resolved a lot quicker than anyone presently imagines.


October 25, 2009


Positive Mainstream Reviews of Dense Plasma Focus Fusion and IEC Fusion


The Economist has positive coverage of dense plasma focus fusion.


http://nextbigfuture.com/2009/10/positive-mainstream-reviews-of-dense.html


Mr Lerner’s machine is called a dense plasma focus fusion device. It works by storing charge in capacitors and then discharging the accumulated electricity rapidly through electrodes bathed in a gas held at low pressure. The electrodes are arranged as a central positively charged anode surrounded by smaller negatively charged cathodes.



When the capacitors are discharged, electrons flow through the gas, knocking the electrons away from the atomic nuclei and thus transforming it into a plasma. By compressing this plasma using electromagnetic forces, Mr Lerner and his colleagues at Lawrenceville Plasma Physics, in New Jersey (the firm he started in order to pursue this research) have created a plasmoid. This is a tiny bubble of plasma that might be made so hot that it could initiate certain sorts of fusion. The nuclei in the plasmoid, so the theory goes, would be moving so fast that when they hit each other they would overcome their mutual electrostatic repulsion and merge. If, of course, they were the right type of nuclei.



For the test run, Mr Lerner used deuterium, a heavy isotope of hydrogen, as the gas. This is the proposed fuel for big fusion reactors, such as the $12 billion International Thermonuclear Experimental Reactor being built at Cadarache in France and the $4 billion National Ignition Facility at Livermore, California. It is not, however, what he proposes to use in the end. In fact his trick (and the reason why it might be possible to produce a nuclear reaction in such a small piece of apparatus) is that what he does propose is not really fusion at all. Rather, it is a very unusual form of nuclear fission. Normal fission involves breaking uranium or plutonium atoms up by hitting them with neutrons. The reaction Mr Lerner proposes would break up boron atoms by hitting them with protons (the nuclei of normal hydrogen atoms). This process is known technically, and somewhat perversely, as aneutronic fusion. The reason is that the boron and hydrogen nuclei do, indeed, fuse. But the whole thing then breaks up into three helium nuclei, releasing a lot of energy at the same time. Unlike the sort of fusion done in big machines, which squeeze heavy hydrogen nuclei together, no neutrons are released in this reaction.

From an energy-generation point of view, that is good. Because neutrons have no electric charge they tend to escape from the apparatus, taking energy with them. Helium nuclei are positively charged and thus easier to rein in using an electric field, in order to strip them of their energy. That also means they cannot damage the walls of the apparatus, since they do not fly through them, and makes the whole operation less radioactive, and thus safer.



The plasmoids Mr Lerner has come up with are not yet hot enough to sustain even aneutronic fusion. But he has proved the principle. If he can get his machine to the point where it is busting up boron atoms, he might have something that could be converted into a viable technology—and the search for El Dorado would be over.




Department of Defense study of energy security mentions Polywell (inertial electrostatic confinement) fusion in the abstract (84-page PDF) (H/T IECfusiontech)


Findings of multiple Department of Defense (DoD) studies and other sources indicate that the United States faces a cluster of significant security threats caused by how the country obtains, distributes, and uses energy. This paper explores the nature and magnitude of the security threats as related to energy; some potential solutions, which include technical, political, and programmatic options; and some alternative futures the nation may face depending upon various choices of actions and assumptions. Specific emerging options addressed include Polywell fusion, renewable fuel from waste and algae cultivation, all-electric vehicle fleets, highly efficient heat engines, and special military energy considerations.



The DoD, and especially the DoN (Dept of the Navy), could benefit greatly from the potential of nuclear power. But nuclear fission power is expensive and presents ongoing safety concerns. A spinoff from a form of nuclear fusion developed in the 1960s by Farnsworth and Hirsch has achieved groundbreaking success recently. This Polywell fusion device was pioneered and scientifically demonstrated in 2005 by Robert W. Bussard. This type of fusion can use boron-11 and hydrogen as the fuel. Fusion of these elements produces no neutrons and no radioactive waste. Estimated cost to build a Polywell electric plant is less than that for a similar power-producing, combined-cycle gas plant or coal plant. A gigawatt-sized reactor would be a sphere about 15 meters in diameter. If all power for the United States were generated with boron-11 and hydrogen Polywell fusion, the total yearly requirement for boron would be less than 5% of current U.S. boron production and would cost less than two trainloads of coal at current prices for both commodities. A single coal plant requires a trainload every day for full-scale operation. The U.S. Navy could adapt such devices to ship propulsion and free ships from the tether of petroleum use and logistics. The Polywell device could enable very inexpensive and reliable access to space for DoD and the nation as a whole.



Nuclear Fusion—The Farnsworth-Hirsch Fusor



The DOE and international groups have invested hundreds of millions of dollars and decades on the tokamak approach. If all works well for the ITER, a fusion power plant will come online in 2050. However, a device derived from the Hirsch-Farnsworth fusor may enable operation of a fusion power plant to begin by 2015—or earlier.



Philo T. Farnsworth invented the electron tube technology that enabled television. He also discovered a technique to produce fusion with a sort of electron tube. The basic concept of the machine is the confinement of energetically injected nuclei in a chamber containing a positive grid electrode and a concentrically interior negative grid electrode. The injected particles fly through a hole in the outer grid and accelerate toward the inner grid. Nuclei fuse when they collide with sufficient cross-sectional energy in the center of the machine. Particle-grid collisions limit obtainable output power. This fusion method is known as Inertial Electrostatic Confinement Fusion (IECF). Robert Hirsch joined Farnsworth in his lab and developed a more advanced version of IECF, which uses concentric spherical grids.



Tuck, Elmore, Watson, George Miley, D.C. Barnes, and Robert W. Bussard have extended the research. Many people have developed “fusors” (including a high-school student), which produce fusion from deuterium-deuterium reactions but do not produce net power. These devices have been used as compact neutron sources.



Nuclear Fusion—Bussard Polywell Fusion



Dr Robert W. Bussard published results in 2006 claiming that he had achieved 100,000 times better performance than had ever previously been achieved from an IECF device. Bussard’s machine replaces the physical grid electrodes with magnetic confinement of an electron gradient known as a “polywell” that accelerates the positive ion nuclei into the center of the negative gradient. His paper in the 2006 proceedings of the International Astronautical Congress states that he had developed a design based on his previous success that, if built, would produce net power from fusion. Bill Matthews’s article in Defense News covered the story in March 2007. In November of 2005, the machine achieved 100,000 times greater performance than any previous fusor. Analysis of those experimental results led Bussard to conclude that his design will produce net power.



Bussard’s company, EMCC, has continued his work since his death in October 2007. Alan Boyle at MSNBC.com covered recent developments at EMCC in an online column in June 2008, and Tom Ligon, a former Bussard employee, wrote a combination history and technical description published in 2008. Bussard referred to his confinement mechanism as “magnetic grid” confinement. The system has no actual, physical electrode grids, such as in the Farnsworth-Hirsch machines.



In Bussard’s concept of a net-power-producing machine, the high-energy fusion particles produced from fusion would directly convert their energy to electricity. The high-energy charged particles resulting from the fusion will fly toward an electrical-energy-capture grid (not used for particle confinement) and expend their energy by being decelerated by this grid, which will be tuned to the energy and charge of the fusion products. The high-energy particles need not actually impact the grid and heat it. Rather, they can decelerate as the electrical grid extracts energy from the charged particle’s motion, thus “pushing” a voltage onto the grid and yielding direct electric power from the fusion. About 25–35% of the power in this type of device will be in bremsstrahlung power, which will have to be thermally converted. The total power efficiency will probably be in the 60–75% range.
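The 60–75% overall figure follows from splitting the output between directly converted charged-particle energy and thermally converted bremsstrahlung. A minimal sketch of that split; the specific conversion efficiencies below are illustrative assumptions, not figures from the study:

```python
def total_efficiency(brems_fraction: float, direct_eff: float, thermal_eff: float) -> float:
    """Overall plant efficiency when charged fusion products convert directly
    to electricity and bremsstrahlung X-rays go through a thermal cycle."""
    return (1 - brems_fraction) * direct_eff + brems_fraction * thermal_eff

# Illustrative values: 30% of power as bremsstrahlung (mid-range of the quoted
# 25-35%), 85% direct-conversion efficiency, 40% thermal-cycle efficiency
print(total_efficiency(0.30, 0.85, 0.40))   # 0.715, inside the quoted 60-75% range
```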



One of the great advantages of IECF is the potential to use boron and hydrogen as the fusing elements. In a Bussard fusor, a sphere—with a strong magnetic field imposed on it and electrons injected into it—would develop a gradient of those electrons, such that the center of the sphere would appear to a positively charged particle as if it were a negatively charged electrode (somewhat like the electrode grid of the Hirsch-Farnsworth device). Positively charged nuclei of boron and hydrogen would be injected at appropriate angles into the sphere and would “fall” into the negative well of electrons toward this virtual cathode at the center of the sphere. If the particles do not collide with each other, they will fly an oscillating path within the vessel, alternately traveling toward the center of the sphere and then out toward the sphere limits, until the force of the “virtual” negative electrode at the center again attracts the positively charged nuclei toward the center. If the virtual electrode has sufficient power (about 156 kilovolts for boron/hydrogen fusion), when the hydrogen and boron nuclei collide, they will fuse. A high-energy carbon nucleus will be formed, which will instantly fission into a helium nucleus with 3.76 million eV of energy and a beryllium nucleus. The beryllium nucleus will instantly divide into two additional helium nuclei, each with 2.46 million eV of energy. Boron and hydrogen, when fused in this manner, produce 6.926 E13 joules/kilogram.

To place this ability in context, the United States consumed from all sources (e.g., nuclear, fossil fuel, and renewables) in 2007 approximately 107 exajoules (E18 joules). One hundred thousand kilograms of boron-11 with the proportional amount of hydrogen (which would be vastly smaller than the amount required for a “hydrogen economy”) could produce about seven times more energy than the United States consumed from all sources by all modes of consumption in 2007. Therefore (assuming 100% efficiency for simplicity’s sake), about 120 metric tons (not 100 tonnes, because only boron-11, which is 80% of natural boron, gives the desired fusion with hydrogen) of amorphous boron would provide equivalent power for all U.S. energy needs for over 6 years. About 1.8 million metric tons of boric oxide (about 558,000 metric tons of boron) were consumed worldwide in 2005, and production and consumption continue to grow. The United States produces the majority of boron yearly, although Turkey reportedly has the largest reserve. At $2 per gram for 99% boron, the cost in raw boron to produce six times the United States' 2007 energy supply (not just utilities but all energy) would be $240 million (120,000 kilograms = 120 million grams at $2.00/gram)—6 years’ worth of U.S. power for a little more than the price of coal to run one coal-fired power plant for 1 year.
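The per-kilogram energy figure quoted above can be checked against the reaction energetics given earlier (one 3.76 MeV helium nucleus plus two at 2.46 MeV each). A quick sketch, treating the reactant mass as roughly 12 atomic mass units (one boron-11 plus one hydrogen nucleus):

```python
# Energy density of p + B-11 -> 3 He-4 fusion, from the figures in the study
MEV_TO_J = 1.602e-13        # joules per MeV
AMU_TO_KG = 1.661e-27       # kilograms per atomic mass unit

e_per_reaction_mev = 3.76 + 2 * 2.46          # 8.68 MeV total per reaction
mass_per_reaction_kg = 12.0 * AMU_TO_KG       # one B-11 plus one H-1, ~12 u

energy_density = e_per_reaction_mev * MEV_TO_J / mass_per_reaction_kg
print(f"{energy_density:.3e} J/kg")           # ~6.98e13, close to the quoted 6.926 E13

# Raw-fuel cost for 120 metric tons of boron at $2 per gram
cost = 120_000 * 1000 * 2.0                   # kg -> grams, times $2/gram
print(f"${cost / 1e6:.0f} million")           # $240 million
```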



If a Bussard power plant consumed one gram of boron-11 per second, this fusion rate would produce approximately 69 gigawatts, roughly the simultaneous power output of 69 major electric power generating plants—more than one tenth of all coal-plant power generation in the United States. About 320 kilograms of boron-11 fuel ($640,000 worth of boron) at one of these fusion plants would provide 1 year’s continuous power output at 700 megawatts. A typical coal-fired electric utility power plant nominally produces 500 megawatts of electricity, but it requires about 10,000 short tons of coal per day (a short ton is only about 91% the size of a metric ton). A short ton of coal for electric utilities cost around $56 in 2008. So, one day’s worth of coal for a single coal-fired plant cost about $560,000, and a year’s worth for a single plant cost over $204 million. The United States has approximately 600 coal-fired power plants, about 500 of which are run by utility companies for public power. A 500-megawatt (or even larger) Polywell fusion plant (which could cost less than $500 million to build) built to replace a coal-fired plant would pay for itself in coal-cost savings in less than 3 years of operation if the charge per kilowatt-hour remains constant. Because the fusion plant has fewer moving parts and fewer parts in general, it should be less expensive to maintain and operate as well.
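The headline numbers in this paragraph follow directly from the ~6.9 E13 J/kg energy density quoted earlier; a quick sanity check, assuming 100% conversion as the study does:

```python
ENERGY_DENSITY = 6.926e13      # J per kg of boron-11 fuel, from the study
SECONDS_PER_YEAR = 3.156e7

# Burning one gram of boron-11 per second
power_w = 1e-3 * ENERGY_DENSITY
print(f"{power_w / 1e9:.1f} GW")               # ~69 GW

# 320 kg of boron-11 spread over one year of continuous operation
avg_power_w = 320 * ENERGY_DENSITY / SECONDS_PER_YEAR
print(f"{avg_power_w / 1e6:.0f} MW")           # ~700 MW continuous

# Coal comparison: 10,000 short tons per day at $56 per short ton
daily_coal_cost = 10_000 * 56
print(f"${daily_coal_cost:,}/day, ${daily_coal_cost * 365 / 1e6:.0f} million/year")
```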



Over the past year, Bussard’s company, EMCC, has built a new device to verify and extend the 2006 results. Contingent on continued funding, a prototype power plant with 100 megawatts of net power production could be built at a cost of less than $300 million and could be producing power within 5 years—perhaps as early as 2015. Because of the nature of this device, the power output versus input is directly proportional to the seventh power of the radius of the containment sphere. A 100-megawatt power producer requires a sphere about 3 meters in diameter. A gigawatt power producer would require a sphere approximately 15–20 meters in diameter. EMCC’s design from a decade ago for a 100-megawatt generator to power a naval vessel is a cylinder about 20 feet in diameter and 30 feet in length.
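The seventh-power scaling law the study states implies that modest increases in sphere size buy enormous gains in net power. A minimal illustration (the function name is mine):

```python
def power_gain_ratio(r_new_m: float, r_ref_m: float) -> float:
    """Net power scaling between two Polywell spheres, per the study's r^7 law."""
    return (r_new_m / r_ref_m) ** 7

# Doubling the containment-sphere radius multiplies net power 128-fold
print(power_gain_ratio(6.0, 3.0))   # 128.0
```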



With no way to convert a Bussard Polywell machine to a bomb, no radioactive waste produced, small relative size, ability to operate on abundant boron and hydrogen fuel, relatively inexpensive to build, and only moderate operational safety issues (high voltage and X-ray emission during operation), these machines offer a path to a magnum advance in civilization; elimination of the carbon emission aspect of climate change; a whole new realm of platform propulsion capability and deployed electricity abundance for the U.S. military; and abundant, inexpensive energy for all who adopt its use. These machines could be exported worldwide without concern that they would proliferate nuclear bomb technology.

Global Heat Boomerang






I would like to say that the Russian effort on climate science goes back to Peter the Great and reflects a long-standing scientific tradition. They actually have data for that time span, and they also likely have much less of a problem with site disturbance than we have discovered in the USA.



What comes out of this paper is that they have determined that in the past 7,500 years we have had 18 cycles of significant warming, each followed by an abrupt temperature drop. This is important for the sheer number of events and the probable periodicity of the effect.



I am unable to review their data, but I can make a bald statement that approximately twenty events are ample to secure a statistically significant signature. Their statement that we have a natural phenomenon of periodic warming culminating in a sharp collapse is as good as gold. The last one, known as the Little Ice Age, was one of the worst; whether it was the worst we cannot be sure, but it certainly was as bad as it gets.



We can now dismiss volcanic activity as a driver, unless Alaska shows a closely correlated periodicity. That still fails to explain the heating.



This is more like a climatic boomerang between the Northern and Southern Hemispheres. We make the conjecture that heat is shifted north thanks to oceanic activity until feedback abruptly reverses it. Then heat is shifted to the Southern Hemisphere, where it accumulates for a while until it is unloaded again. This is a multidecadal cycle.



Prominent Russian Scientist: 'We should fear a deep temperature drop -- not catastrophic global warming'



'Warming had a natural origin...CO2 is 'not guilty'



Tuesday, October 27, 2009, by Marc Morano, Climate Depot





http://www.climatedepot.com/a/3515/Prominent-Russian-Scientist-We-should-fear-a-deep-temperature-drop--not-catastrophic-global-warming







Reprint of new scientific paper: (Full pdf paper available here.)





THE SUN DEFINES THE CLIMATE





(Habibullo Abdussamatov, Dr. Sc. - Head of Space research laboratory of the Pulkovo Observatory, Head of the Russian/Ukrainian joint project Astrometria - (translated from Russian by Lucy Hancock) Dr. Abdussamatov is featured on page 140 of the 2009 U.S. Senate Report of More Than 700 Dissenting Scientists Over Man-Made Global Warming. Also see "Related Links" below.)





Key Excerpts: Observations of the Sun show that as for the increase in temperature, carbon dioxide is "not guilty" and as for what lies ahead in the upcoming decades, it is not catastrophic warming, but a global, and very prolonged, temperature drop.





[...] Over the past decade, global temperature on the Earth has not increased; global warming has ceased, and already there are signs of the future deep temperature drop.





[...] It follows that warming had a natural origin, the contribution of CO2 to it was insignificant, anthropogenic increase in the concentration of carbon dioxide does not serve as an explanation for it, and in the foreseeable future CO2 will not be able to cause catastrophic warming. The so-called greenhouse effect will not avert the onset of the next deep temperature drop, the 19th in the last 7500 years, which without fail follows after natural warming.





[...] We should fear a deep temperature drop -- not catastrophic global warming. Humanity must survive the serious economic, social, demographic and political consequences of a global temperature drop, which will directly affect the national interests of almost all countries and more than 80% of the population of the Earth.





A deep temperature drop is a considerably greater threat to humanity than warming. However, a reliable forecast of the time of the onset and of the depth of the global temperature drop will make it possible to adjust in advance the economic activity of humanity, to considerably weaken the crisis.





Excerpts: Experts of the United Nations in regular reports publish data said to show that the Earth is approaching a catastrophic global warming, caused by increasing emissions of carbon dioxide to the atmosphere. However, observations of the Sun show that as for the increase in temperature, carbon dioxide is "not guilty" and as for what lies ahead in the upcoming decades, it is not catastrophic warming, but a global, and very prolonged, temperature drop.





Life on earth completely depends on solar radiation, the ultimate source of energy for natural processes. For a long time it was thought that the luminosity of the Sun never changes, and for this reason the quantity of solar energy received per second over one square meter above the atmosphere at the distance of the Earth from the Sun (149 597 892 km), was named the solar constant.
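The solar constant described here can be recovered by spreading the Sun's total luminosity over a sphere at the Earth's orbital distance; a quick check, using a standard luminosity value that is not from this paper:

```python
import math

L_SUN = 3.846e26          # total solar luminosity in watts (standard value)
AU_M = 1.49597892e11      # Earth-Sun distance from the text, in meters

# Irradiance at 1 AU: luminosity divided by the area of a sphere of that radius
solar_constant = L_SUN / (4 * math.pi * AU_M ** 2)
print(f"{solar_constant:.0f} W/m^2")   # ~1368 W/m^2, the "solar constant"
```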





Until 1978, precise measurements of the value of the total solar irradiance (TSI) were not available. But according to indirect data, namely the established major climate variations of the Earth in recent millennia, one must doubt the invariance of its value.





In the middle of the nineteenth century, German and Swiss astronomers Heinrich Schwabe and Rudolf Wolf established that the number of spots on the surface of the Sun periodically changes, diminishing from a maximum to a minimum, and then growing again, over a time frame on the order of 11 years. Wolf introduced an index (“W”) of the relative number of sunspots, computed as the sum of 10 times the number of sunspot groups plus the total number of spots in all groups. This number has been regularly measured since 1849. Drawing on the work of professional astronomers and the observations of amateurs (which are of uncertain reliability), Wolf worked out a reconstruction of monthly values from 1749 as well as annual values from 1700. Today, the reconstruction of this time series stretches back to 1611. It has an eleven-year cycle of recurrence as well as other cycles related to onset and development of individual sunspot groups: changes in the fraction of the solar surface occupied by faculae, the frequency of prominences, and other phenomena in the solar chromosphere and corona.
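Wolf's index as described above is straightforward to compute; a minimal sketch (the per-observatory scaling factor used in modern definitions is omitted, since the text does not mention it):

```python
def wolf_number(groups: int, spots: int) -> int:
    """Wolf's relative sunspot number: 10 times the number of sunspot
    groups plus the total number of individual spots in all groups."""
    return 10 * groups + spots

# Example: three sunspot groups containing 14 spots in total
print(wolf_number(3, 14))   # 44
```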





Analyzing the long record of sunspot numbers, the English astronomer Walter Maunder in 1893 came to the conclusion that from 1645 to 1715 sunspots had been generally absent. Over the thirty-year period of the Maunder Minimum, astronomers of the time counted only about 50 spots. Usually, over that length of time, about 50,000 sunspots would appear. Today, it has been established that such minima have repeatedly occurred in the past. It is also known that the Maunder Minimum accompanied the coldest phase of a global temperature dip, physically measured in Europe and other regions, the most severe such dip for several millennia, which stretched from the fourteenth to the nineteenth centuries (now known as the Little Ice Age).





The search for a relationship between large climate variations and phenomena observed in the Sun led to an interest in finding a connection between periods of change in the terrestrial climate and corresponding significant changes in the level of observed solar activity, because the sunspot number is the only index that has been measured over a long period of time.





Determinative role of the Sun in variations in the climate of the Earth





The Earth, after receiving and storing over the twentieth century an anomalously large amount of heat energy, from the 1990's began to return it gradually. The upper layers of the world ocean, completely unexpectedly to climatologists, began to cool in 2003. The heat accumulated by them unfortunately now is running out.





Over the past decade, global temperature on the Earth has not increased; global warming has ceased, and already there are signs of the future deep temperature drop (Fig. 7, 11). Meantime the concentration of carbon dioxide in the atmosphere over these years has grown by more than 4%, and in 2006 many meteorologists predicted that 2007 would be the hottest of the last decade. This did not occur, although the global temperature of the Earth would have increased at least 0.1 degree if it depended on the concentration of carbon dioxide. It follows that warming had a natural origin, the contribution of CO2 to it was insignificant, anthropogenic increase in the concentration of carbon dioxide does not serve as an explanation for it, and in the foreseeable future CO2 will not be able to cause catastrophic warming. The so-called greenhouse effect will not avert the onset of the next deep temperature drop, the 19th in the last 7500 years, which without fail follows after natural warming.





The earth is no longer threatened by the catastrophic global warming forecast by some scientists; warming passed its peak in 1998-2005, while the value of the TSI by July-September of last year had already declined by 0.47 W/m2.





For several years until the beginning in 2013 of a steady temperature drop, in a phase of instability, temperature will oscillate around the maximum that has been reached, without further substantial rise. Changes in climatic conditions will occur unevenly, depending on latitude. A temperature decrease in the smallest degree would affect the equatorial regions and strongly influence the temperate climate zones. The changes will have very serious consequences, and it is necessary to begin preparations even now, since there is practically no time in reserve. The global temperature of the Earth has begun its decrease without limitations on the volume of greenhouse gas emissions by industrially developed countries; therefore the implementation of the Kyoto protocol aimed to rescue the planet from the greenhouse effect should be put off at least 150 years.







Consequently, we should fear a deep temperature drop -- not catastrophic global warming. Humanity must survive the serious economic, social, demographic and political consequences of a global temperature drop, which will directly affect the national interests of almost all countries and more than 80% of the population of the Earth. A deep temperature drop is a considerably greater threat to humanity than warming. However, a reliable forecast of the time of the onset and of the depth of the global temperature drop will make it possible to adjust in advance the economic activity of humanity, to considerably weaken the crisis.





For complete paper see here:





Related Links:





UN Fears (More) Global Cooling Commeth! IPCC Scientist Warns UN: We may be about to enter 'one or even 2 decades during which temps cool' - September 4, 2009





Flashback: 'Sun Sleeps': Danish Scientist declares 'global warming has stopped and a cooling is beginning...enjoy global warming while it lasts' - Sept. 2009





Climate Fears RIP...for 30 years!? - Global Warming could stop 'for up to 30 years! Warming 'On Hold?...'Could go into hiding for decades' study finds – Discovery.com – March 2, 2009





Paper: Scientific evidence now points to global COOLING, contrary to UN alarmism





Meteorologist: 'Global cooling in its 8th year, declining ocean heat content, sea level rises slowed or stopped, sun in a deep slumber' – April 30, 2009





Geologist: 'Records of past natural cycles suggest global cooling for first several decades of the 21st century to about 2030' – June 5, 2009





Astronomers: 'Sun's output may decline significantly inducing another Little Ice Age on Earth' – August 15, 2009





Indian Geologist Dissents -- launches website: 'Enjoy Global Warming: Its natural' - Sept. 2009


