Friday, September 30, 2011

Keystone XL Pipeline Attracts Popular Support





Whatever the ultimate fate of the oil industry, we are still in the business of building the infrastructure necessary to supply the US market internally.  The Keystone XL pipeline will finally link Alberta-produced oil with the huge refinery infrastructure on the Gulf Coast.  The losers in this development are the offshore suppliers, who can and will be displaced by internal oil.

As they have every reason to thwart this development, it is hardly surprising to witness a well-funded challenge to the building of the pipeline.

You may think that 700,000 barrels per day is only a small fraction of the market, but in reality pipelines, once approved, are immediately doubled and redoubled as the market dictates without much regulatory fuss.  Thus this pipeline can quickly ramp up to 1.5 to 3.0 million barrels per day.

In that case, delaying the pipeline a year or two is worth a great deal in terms of product sales.

On top of all the oil sands production continuing to come on stream, we have rapidly climbing output from the Bakken, which is sweet oil besides.  That pipeline allows North Dakota production to displace oil sands production in present pipelines.

At the end of the day, the pipeline will surely be cleared in spite of the political theatre.

Alberta-Texas Pipeline a Ray of Hope

Posted by Rich Trzupek on Sep 29th, 2011



One month after passing an initial environmental review, the proposed Keystone XL pipeline project crossed another significant hurdle, as a series of State Department hearings in states affected by the project drew to a close. Like any recent project involving any sort of fossil fuel use, the Keystone XL pipeline drew the usual crowd of protesters and fear-mongers. Yet, recognizing the importance of the project in both economic terms and in terms of energy independence, Keystone XL has garnered a surprising amount of public support as well.

The pipeline, running from Alberta, Canada to the Gulf Coast, would have the capacity to bring in an additional 700,000 barrels per day of crude oil pumped out of Canadian tar sands. The United States imports roughly 10 million barrels of crude per day, so completion of Keystone XL has the potential to displace a fair amount of the crude that we currently import from overseas. In a world where China and India are gobbling up as much crude supply as they can through long-term contracts, it obviously makes sense to ensure our own energy security through projects like this. Yet, the environmental Left has been doing everything it can to muddy the waters of a decision that should be crystal clear when it comes to this new pipeline.
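
To put those two figures in perspective, here is a quick back-of-the-envelope check, a minimal Python sketch using only the numbers quoted above:

keystone_capacity_bpd = 700_000      # initial Keystone XL capacity, barrels per day (from the article)
us_crude_imports_bpd = 10_000_000    # rough US crude imports, barrels per day (from the article)
share = keystone_capacity_bpd / us_crude_imports_bpd
print(f"Keystone XL could displace about {share:.0%} of current imports")   # roughly 7%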

The State Department hearings reflected much of the poisonous influence that the environmental Left and their partners in the mainstream media have had on many otherwise reasonable, hardworking Americans. The environmental Left’s core messages are: 1) there is no level of acceptable risk, 2) there are no environmental missteps, only environmental disasters, and 3) anyone who disagrees with message one or two is lying and, most likely, in collusion with evil corporate polluters.

Consider, for example, how the concerns of farmers in Nebraska were reported by the Canadian wire service Postmedia News. Farmers Scott and Bruce Boettcher, who drove four and a half hours to attend the hearing, are highlighted in the story:

Their lifetime of experience has made them highly skeptical of studies — by both TransCanada and the State Department — that conclude environmental damage from an oil spill would be limited and localized. The water is not static — it moves, Bruce says, and oil spilled into it will move, too.

“Them scientists are not telling the truth about that ground,” he told Postmedia News during a break in the Lincoln hearings.

In fact, the highly sophisticated models that are used to determine the extent and severity of oil spills do indeed take into account the fact that water moves. We know, from decades of experience, how any kind of groundwater contamination will act and how severe the potential damage is. And, after all those decades of experience, we’ve gotten very, very good at both limiting the size of the inevitable (if very occasional) spill and remediating any environmental effects.

Yet, environmental groups and their media allies latch onto any story – no matter how convoluted – that will play to the tired old narrative that America is dangerously polluted and each new project brings us a step closer to environmental catastrophe. In addition to supposedly poisoning ground water in Nebraska, the environmental Left asserts that crude taken from Canadian oil sands is more greenhouse gas-intensive than other forms of crude and that emissions of other pollutants will increase as well if this “dirty” crude is allowed to enter the United States.

Both claims are silly. When you add everything up, oil sands crude is middle of the pack when it comes to greenhouse gas emissions, and the fact that the crude is “dirtier” only means that refineries have to do more to remove contaminants, not that the release of more contaminants into the environment will be allowed. In any case, somebody – somewhere – is going to refine this supply of crude. Growth in Asia guarantees that there will be no shortage of demand for a long time. So the real question is: do we want to make a deal with our neighbor to help stabilize our own energy picture, or do we want them to sell it to somebody else? Either way, the wells in Alberta will keep pumping.

Secretary of State Hillary Clinton has hinted that she is inclined to rule favorably on the project. With an election year fast approaching, it would appear to be in the administration’s best interest to push Keystone XL through, if only to raise a little political capital among the millions of Americans who remain distressed by the economy, gas prices and unemployment.

The political advantages were made clear during the recent hearings. While hearings of this type usually only bring out the critics, many supporters of the Keystone XL pipeline stepped up to the microphones to urge the administration to move forward. The administration is expected to announce its decision by the end of the year. Whatever the decision, it is sure to be challenged in court by the parties who disagree with it, but that small ray of sunshine peeking through the gloom of our cloudy energy future suggests that the process – as cumbersome and time consuming as it may be – is moving forward at long last.

About Rich Trzupek

Rich Trzupek is a veteran environmental consultant and senior advisor to the Heartland Institute. He is the author of the Encounter Broadside "How the EPA's Green Tyranny is Stifling America" and the upcoming book "Regulators Gone Wild: How the EPA is Ruining American Industry" (Encounter Books).

Extreme Weather Deaths Drop 98% Since 1920s




This is very good news, and it certainly needed to be said and noted.  The reality is that the whole globe is advancing economically despite the legions of naysayers peddling their version of "the end is nigh."  More importantly, global communication density has made it possible for potential victims to be alerted and to then sidestep the danger.

The exceptions are tsunamis, where the time gap is simply too short.  There is not enough time after the shock of danger recognition for the victim to react sensibly, or even in time to save his life.

Otherwise, deaths by earthquakes are effectively 98% preventable.  Merely compare the Haiti disaster with one in California of similar strength and circumstances.  Certain additional improvements could make such losses extremely unlikely.

Otherwise it is all about floods, and those can always be sidestepped if timely warning is available.  Everyone knows what a tidal surge will do in the Bay of Bengal or the Gulf of Mexico, but we now have at least a day's notice to leave town.  Most who drowned in New Orleans did have the plausible choice of walking to high ground well before the storm actually hit.  The problem was that no one thought to spell it out and to designate marshalling areas.  Wet and miserable is a much better option than dead.

Katrina did finally educate otherwise sensible folks that being on the receiving end of a storm surge is no option whatsoever.  This was obscure in the past, but Katrina and the Boxing Day Tsunami ended any further ignorance.

Beach houses are nice, but no place to be in the possible track of a hurricane.

Actually, I would go a lot further than this.  I think that all communities with a coastal waterfront should map all available high points and plan marshalling routes in the event of an emergency.  If time permits, these high points can be used to route evacuation buses to remove people and eventually return them.  It is cheap to do, and it is simple to inform residents as part of the normal flow of annual communications between local governments and their residents.

Tsunamis almost never happen, but when they do, as a once-ever event, it is then that everyone needs to know exactly where to go immediately.  How many would have been saved in Japan had everyone immediately proceeded to high ground as a matter of course and general expectation?  There they had good systems in place, but still not good enough.  That size of quake demanded immediate evacuation.

Deaths From Extreme Weather Events Have Fallen 98 Percent Since the 1920s

by Staff Writers

Los Angeles, CA (SPX) Sep 23, 2011


Despite concerns about global warming and a large increase in the number of reported storms and droughts, the world's death rate from extreme weather events was lower from 2000 to 2010 than it has been in any decade since 1900, according to a new Reason Foundation study.

The Reason Foundation report chronicles the number of worldwide deaths caused by extreme weather events between 1900 and 2010 and finds global deaths caused by extreme weather events peaked in the decade running from 1920 to 1929, when there were 241 deaths a year per million people in the world.

From 1930 to 1939 there were 208 deaths a year per million people. But from 2000 to 2010 there were just 5.4 deaths a year per million people in the world. That's a 98 percent decline in the weather-related death rate since the 1920s. Extreme weather events were responsible for just 0.07 percent of the world's deaths between 2000 and 2010.
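
A quick arithmetic check of the quoted decline, as a minimal Python sketch using only the death-rate figures reported above:

deaths_1920s = 241.0   # extreme-weather deaths per year per million people, 1920s (report figure)
deaths_2000s = 5.4     # extreme-weather deaths per year per million people, 2000-2010 (report figure)
decline = 1 - deaths_2000s / deaths_1920s
print(f"Decline in the weather-related death rate: {decline:.1%}")   # about 97.8%, i.e. roughly 98 percent
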
The extreme weather categories studied in the Reason Foundation report include droughts, floods, wildfires, storms (hurricanes, cyclones, tornadoes, typhoons, etc.) and extreme temperatures, both hot and cold.

Droughts were the most deadly extreme weather category between 1900 and 2010, responsible for over 60 percent of extreme weather deaths during that time. The worldwide death rate from droughts peaked in the 1920s when there were 235 deaths a year per million people.

Since then, the death rate has fallen by 99.9 percent. The study finds that global food production advancements, such as new crops, improved fertilizer, irrigation, and pesticides, along with society's better ability to move food and medical supplies, were responsible for reducing the number of deaths in times of severe drought.

Floods were to blame for 30 percent of the deaths during the timeframe studied, making them the second most deadly extreme weather category.

The death rate for floods topped out in the 1930s at 204 deaths a year per million people. Deaths from floods have fallen by over 98 percent since then and there was an average of approximately one flood death per year per million people from 2000 to 2010.

Deaths from storms spiked as recently as the 1970s, when there were 10 deaths a year per million people. But the death rate has dropped by 75 percent since then, with storms being blamed for two deaths a year per million people from 2000 to 2010.

The average number of extreme weather events recorded increased from 2.5 per year in the 1920s to 8.5 in the 1940s to 350 per year for the period 2000-2010.

The study notes technological and telecommunication advances made it significantly easier to learn of and respond to weather events. Broader news coverage and an increased tendency by authorities to declare natural disaster emergencies have also contributed to the large uptick in the number of storms recorded.

"Overall mortality around the world is increasing, while mortality from weather events is decreasing," said Dr. Indur Goklany, the author of the Reason Foundation study.

"Despite the intense media coverage of storms and climate change's prominent role in political debates, humanity is coping far better with extreme weather events than it is with other much more important health and safety problems."

"The number of reported extreme weather events is increasing, but the number of deaths and the risk of dying from those events have decreased," said Julian Morris, the study's project director and vice president of research at Reason Foundation.

"Economic development and technological improvements have enabled society to protect against these events and to cope better with them when they do occur." Full Report Online


Canola Terroir?




Canola was born to provide a commodity oil product to the international market.  To that end it was purified and all sources of flavor were removed.  This made it perfect for industrial food production, as one could simply add flavors back in as needed.

It is no trick to blend olive oil with canola to produce a pleasant enough product on its own.

Yet here we learn that chefs are discovering the original flavors provided by cold-pressed canola oils, and that the result can directly challenge olive oil.  This is unexpected, and we can see this discovery running through the food industry over the next decade or so.  What farmer does not want to label his oil crop to sell at non-commodity pricing?

This is more good news for a product that keeps surprising.

Why Canadians should soak up a canola oil revolution

MARK SCHATZKER
Globe and Mail Update

Published Tuesday, Aug. 30, 2011 5:17PM EDT



What if I were to tell you there is an ingredient that expresses the very soul of Canadian terroir? A nectar of the prairie, if you will. A golden northern essence that, like a good wine or a reeking French cheese, reflects both the soil that nourished it and the human hand that reared it.

Its name? Canola oil. Laugh all you want.

But before you scoff at that near-odourless liquid that comes in a two-litre plastic jug for $5.69, there are two things you should know about canola oil.

The first is that it is arguably the most successful Canadian invention of all time. (Insulin and “roll up the rim to win,” notwithstanding.) Fifty years ago, it did not exist. Back then, there was only a member of the brassica family (think mustard, cabbage, turnip, etc.) known as rapeseed, whose oil tasted bitter and contained a type of fat that is considered toxic. Then, in the 1970s, Canadian agricultural scientists bred and crossbred rape until it tasted sweet and the toxins were all but gone.

The second is that this revolutionary vegetable oil – high in omega-3s, low in saturated fat – is considered more heart-healthy than olive oil. The leftover seed meal even boosts milk production in dairy cows. In 1978, the Western Canadian Oilseed Crushers’ Association christened this new and improved plant “canola,” joining “Canada” with the Latin word for oil. (Some maintain the name joined “Canada” with “oil” and “low acid.” The controversy rages.)

When it comes to seed-based ingredients, canola may lack the trendiness of quinoa or grits. But consider that this year Canada will grow more than 13-million tonnes of the stuff – accounting for 20 per cent of the global production – and export almost $3-billion of oil (which buys you a lot of quinoa and grits).

Trendiness may be in canola’s future. Because it just so happens that not all canola oil is as flavourless as the stuff on supermarket shelves, which has been extruded, degummed, acidified, vacuumed, bleached and deodorized, all in the name of shelf life and what you might call the 21st-century agro-industrial palate.

This other, far rarer canola oil is made by artisan farmer types with small presses that gently squeeze out, drop by drop, an oil with a colour reminiscent of a late August sunset. It is known as cold-pressed or virgin canola oil. And as its name suggests, it possesses a flavour profile that bests many extra virgin olive oils. Its texture is silky and clean, the notes are toasty, nutty and grassy, and the finish is sweet and vegetal.

That distinctive flavour and ultra-golden hue are a result of the very substances industrial processors seek to remove: organic compounds. Many of them are considered healthy. Sterols reduce cholesterol, tocopherols can help prevent everything from macular degeneration to Parkinson’s disease, and squalene’s role in preventing cancer is being studied. (Industrial processors aren’t trying to ruin your salad – they remove these and other compounds because they shorten shelf life and make the oil unsuitable for frying.)

Despite canola’s Canadian roots, the cold-pressed variety gets more love in Europe (where, unbelievably, it’s occasionally referred to as “virgin rape oil”). In Denmark and Sweden, numerous brand names can be found on supermarket shelves. In Germany, the government bestows an annual award to virgin rapeseed oils “which are characterized by an intense seed-like aroma and taste, combined with a nutty aftertaste.”

Canada isn’t anywhere close to that level of virgin canola appreciation. But it does have its admirers. Michael Allemeier, an instructor at the Southern Alberta Institute of Technology who was formerly the chef at Mission Hill Family Estate in Westbank, B.C., proclaimed cold-pressed canola to be “the olive oil of the North” in 2003. “I love using it for finishing soups and in salad dressings,” he says. “It really adds something. The flavour is rich, mouth-coating and nutty.”

Mr. Allemeier uses Highwood Crossing virgin canola oil, which is grown near Peace River and pressed south of Calgary by third-generation farmer Tony Marshall. The very same oil is featured at Calgary’s River Café, where chef Andrew Winfield drizzles it on tomatoes, brushes it on flatbread and folds it into slow roasted potatoes.

Thousands of kilometres east, Ontario has a virgin canola oil of its own, grown and pressed in Waterford and sold under the brand name Pristine Gourmet.

Do the two oils taste different? Does virgin canola reflect differences in geography? Does it express “terroir”?

I poured samples of both oils onto two white saucers. The Pristine Gourmet oil was a deeper yellow – more pastured egg yolk than August sunset – and the flavour was more intensely nutty. The Highwood Crossing oil, on the other hand, was sweeter and had a silkier texture. Though the Highwood Crossing oil seemed milder, its finish was unexpectedly more pronounced when used in salad dressings. They were clearly the same kind of oil, and yet clearly different, too. In other words: terroir.

As with dark, bitter Tuscan olive oils, the flavour of virgin canola oil on its own provides only a vague hint of how it combines with other ingredients. It adds savoury depth to salad dressings, and dimension to otherwise flavour-challenged dishes. I’ve found that it somehow magnifies the umami in tomatoes – something Mr. Allemeier confirmed. (I poured it over a fusilli tossed with an Italian tomato sauce and my picky two-year-old ate a record-breaking three bowls.)

Chef Jonathan Gushue of Langdon Hall – whose cuisine is as ambitious as it is reflective of Canadian ingredients – loves the Pristine Gourmet oil. He uses it in everything from flank steak with a canola and pink-lady onion vinaigrette to a smoked canola Hollandaise sauce. “The best thing we did,” Mr. Gushue says, “was a canola crème pâtissière with sea buckthorn. It has such a distinctive mouth feel. It’s rich but not heavy and has a clean finish.”

Almost a year ago, Mr. Gushue banished olive oil from the Langdon Hall kitchen, replacing it primarily with virgin canola but also with virgin soybean oil and hemp oil. He hasn’t looked back. “Why would you use anything else when you have such an amazing product on your front doorstep?”

Laser Tech Could Detect Roadside Bombs




This design concept appears to plausibly solve the roadside bomb detection problem.  Link that bit of detection with a computer-driven dynamic search program and the weapon may end up put out of business.

Inasmuch as the IED, along with the suicide bomber, has been the only credible response available to the Islamic insurgent, we may have the system that stops them cold.  Recall that the only reason these methods are deployed at all is because all conventional methods have failed miserably.

I recall a head-on, knock-down confrontation in Kandahar during the initial deployment of NATO forces there, one I suspect the Taliban were suckered into, in which they took over five hundred casualties in a tactical box against minimal casualties for NATO.  That was effectively the one and only instance of conventional methods attempted by the Taliban.

Now we can expect to remove their remaining weapon of any significance with this new protocol.  It will certainly take some time to develop and deploy, but I hope it gets the priority it deserves.

New laser tech could detect roadside bombs

15:58 September 19, 2011



A new system that utilizes laser light to detect the presence of explosive compounds could be used to identify roadside bombs

Approximately sixty percent of coalition soldier deaths in Iraq and Afghanistan are due to improvised explosive devices (IEDs) placed along the roads. Because these bombs are often planted in public areas, it is important to detect them in a way that doesn't harm the surrounding infrastructure or unnecessarily require civilians to evacuate nearby buildings. Researchers from Michigan State University believe that a laser-based system they developed could fit the bill.

The laser itself is similar in output to a simple presentation pointer. Used in conjunction with a camera, it would direct both short and long pulses of light at suspicious objects or areas. The short pulses cause the molecules of explosive substances to vibrate, while the longer pulses are used to "read" those vibrations, which are unique to each explosive substance.


One of the challenges of field detection of explosives is the fact that there are so many similar chemical compounds present in the environment, and they can mask the sought-after molecules. Using the laser system, however, even a billionth of a gram of explosives can reportedly be detected.


The Michigan State technology is now being developed by spin-off company BioPhotonic Solutions. A similar system is currently being researched at Princeton University.



EAST LANSING, Mich. — A research team at Michigan State University has developed a laser that could detect roadside bombs – the deadliest enemy weapon encountered in Iraq and Afghanistan.

The laser, which has comparable output to a simple presentation pointer, potentially has the sensitivity and selectivity to canvas large areas and detect improvised explosive devices – weapons that account for around 60 percent of coalition soldiers’ deaths. Marcos Dantus, chemistry professor and founder of BioPhotonic Solutions, led the team and has published the results in the current issue of Applied Physics Letters.

The detection of IEDs in the field is extremely important and challenging because the environment introduces a large number of chemical compounds that mask the select few molecules that one is trying to detect, Dantus said.

“Having molecular structure sensitivity is critical for identifying explosives and avoiding unnecessary evacuation of buildings and closing roads due to false alarms,” he said.

Since IEDs can be found in populated areas, the methods to detect these weapons must be nondestructive. They also must be able to distinguish explosives from vast arrays of similar compounds that can be found in urban environments. Dantus’ latest laser can make these distinctions even for quantities as small as a fraction of a billionth of a gram.

The laser beam combines short pulses that kick the molecules and make them vibrate, as well as long pulses that are used to “listen” and identify the different “chords.” The chords include different vibrational frequencies that uniquely identify every molecule, much like a fingerprint. The high-sensitivity laser can work in tandem with cameras and allows users to scan questionable areas from a safe distance.
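
As a loose illustration of the "fingerprint" matching idea described above, the short Python sketch below compares a measured set of vibrational frequencies against a small reference library. The frequencies, substance names and matching rule are invented for illustration only; they are not taken from the MSU work.

# Illustrative only: identify a substance by comparing measured vibrational
# frequencies ("chords") against reference fingerprints. Placeholder data.
REFERENCE_FINGERPRINTS = {
    "explosive_A": [1350.0, 1600.0, 2950.0],   # hypothetical frequencies
    "benign_B": [1050.0, 1450.0, 1710.0],
}

def best_match(measured, tolerance=5.0):
    """Return the reference substance whose frequencies best overlap the measured set."""
    def hits(reference_freqs):
        return sum(any(abs(r - m) <= tolerance for m in measured) for r in reference_freqs)
    return max(REFERENCE_FINGERPRINTS, key=lambda name: hits(REFERENCE_FINGERPRINTS[name]))

print(best_match([1348.9, 1601.2, 2951.7]))   # -> explosive_A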

“The laser and the method we’ve developed were originally intended for microscopes, but we were able to adapt and broaden its use to demonstrate its effectiveness for standoff detection of explosives,” said Dantus, who hopes to net additional funding to take this laser from the lab and into the field.

This research is funded in part by the Department of Homeland Security. BioPhotonic Solutions is a high-tech company, spun off from his research group at MSU, that Dantus launched in 2003 to commercialize technology invented in the group.

Thursday, September 29, 2011

Increased Light Speed Explanation



I have had the opportunity to review the nature of the experiment conducted by CERN in conjunction with the OPERA detector at LNGS.  Their conclusion is that the speed of the neutrinos traveling between CERN and the OPERA detector exceeded the speed of light.

I am pleased with this result because it is confirmation of my own thoughts and theories based on the metric I introduced in my paper in the June 2010 edition of AIP’s Physics Essays.

It is my conjecture that mass m is an invariant determined by a finite number n of fundamental particles.  The n value for the neutrino is very small.  However, n for the universe is very large and is usually taken mathematically as infinity, which is incorrect.  This n determines the nature of the metric in a vacuum.  Similarly, a much smaller n determines the nature of the metric in an object such as a star or a planet.

Returning to Einstein’s famous equation E = mc², we assert that m is an invariant determined by the local n for the neutrino.  However, the external metric imposed on this neutrino in a vacuum will differ from the external metric imposed while passing through the denser universe presented by the crust of the Earth.  Since E is then measured at the detection end in a vacuum, and during the journey m was effectively smaller in the ‘universe’ of the crust itself, the equation is balanced by a higher speed of light c.

As an aside, in an unpublished paper, I exactly describe the neutrino in geometric terms so as to allow my new metric to be applied.

This experimental work means that it should be possible to map the natural density of the inside of the Earth and the Sun using neutrinos, if the method can be made sensitive enough.

Einstein’s General Theory is not contravened, as some have eagerly suggested.

Measurement of the neutrino velocity with the OPERA detector in the CNGS beam

Abstract

The OPERA neutrino experiment at the underground Gran Sasso Laboratory has measured the velocity of neutrinos from the CERN CNGS beam over a baseline of about 730 km with much higher accuracy than previous studies conducted with accelerator neutrinos. The measurement is based on high statistics data taken by OPERA in the years 2009, 2010 and 2011. Dedicated upgrades of the CNGS timing system and of the OPERA detector, as well as a high precision geodesy campaign for the measurement of the neutrino baseline, allowed reaching comparable systematic and statistical accuracies.

An early arrival time of CNGS muon neutrinos with respect to the one computed assuming the speed of light in vacuum of (60.7 ± 6.9 (stat.) ± 7.4 (sys.)) ns was measured. This anomaly corresponds to a relative difference of the muon neutrino velocity with respect to the speed of light (v-c)/c = (2.48 ± 0.28 (stat.) ± 0.30 (sys.)) × 10⁻⁵.
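
As a quick consistency check of the abstract's numbers, the fractional early arrival over the roughly 730 km baseline can be recomputed directly (a simple arithmetic sketch, not a re-analysis):

c = 299_792_458.0          # speed of light, m/s
baseline_m = 730e3         # CERN-LNGS baseline, roughly 730 km (as quoted in the abstract)
early_arrival_ns = 60.7    # measured early arrival, ns
time_of_flight_ns = baseline_m / c * 1e9   # about 2.44 million ns at light speed
print(f"(v-c)/c ~ {early_arrival_ns / time_of_flight_ns:.2e}")   # ~2.5e-05, consistent with the quoted 2.48e-05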

Introduction


The OPERA neutrino experiment [1] at the underground Gran Sasso Laboratory (LNGS) was designed to perform the first detection of neutrino oscillations in direct appearance mode in the νμ → ντ channel, the signature being the identification of the τ⁻ lepton created by its charged current (CC) interaction [2].

In addition to its main goal, the experiment is well suited to determine the neutrino velocity with high accuracy through the measurement of the time of flight and the distance between the source of the CNGS neutrino beam at CERN (CERN Neutrino beam to Gran Sasso) [3] and the OPERA detector at LNGS. For CNGS neutrino energies, ⟨E⟩ = 17 GeV, the relative deviation from the speed of light c of the neutrino velocity due to its finite rest mass is expected to be smaller than 10⁻¹⁹, even assuming the mass of the heaviest neutrino eigenstate to be as large as 2 eV [4]. Hence, a larger deviation of the neutrino velocity from c would be a striking result pointing to new physics in the neutrino sector. So far, no established deviation has been observed by any experiment.

In the past, a high energy (Eν > 30 GeV) and short baseline experiment has been able to test deviations down to |v-c|/c < 4 × 10⁻⁵ [5]. With a baseline analogous to that of OPERA but at lower neutrino energies (Eν peaking at ~3 GeV with a tail extending above 100 GeV), the MINOS experiment reported a measurement of (v-c)/c = (5.1 ± 2.9) × 10⁻⁵ [6]. At much lower energy, in the 10 MeV range, a stringent limit of |v-c|/c < 2 × 10⁻⁹ was set by the observation of (anti) neutrinos emitted by the SN1987A supernova [7].

In this paper we report on the precision determination of the neutrino velocity, defined as the ratio of the precisely measured distance from CERN to OPERA to the time of flight of neutrinos travelling through the Earth’s crust. We used the high-statistics data taken by OPERA in the years 2009, 2010 and 2011. Dedicated upgrades of the timing systems for the time tagging of the CNGS beam at CERN and of the OPERA detector at LNGS resulted in a reduction of the systematic uncertainties down to the level of the statistical error. The measurement also relies on a high-accuracy geodesy campaign that allowed measuring the 730 km CNGS baseline with a precision of 20 cm.
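
For scale, the 20 cm geodesy precision mentioned above corresponds to well under a nanosecond of light travel time, small compared with the 60.7 ns anomaly (simple arithmetic on the quoted figures):

c = 299_792_458.0               # speed of light, m/s
baseline_uncertainty_m = 0.20   # 20 cm precision on the 730 km baseline (quoted above)
print(f"Equivalent timing uncertainty: {baseline_uncertainty_m / c * 1e9:.2f} ns")   # about 0.67 ns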

Wind Power Takes Aim at $0.04 Per Kilowatt Hour




A lot here on rare earths, but the important thing is that direct-drive power systems appear to be on their way into the marketplace.  This is very good news and certainly long awaited.  We can also presume that it will all be done with the simplicity of retrofitting existing turbines, as there is no need to alter the real hardware of the blades and hub.

The curves also show us that productivity will jump significantly.  Thus every existing rotor can look forward to an inevitable retrofit and a jump in production, followed by no further retrofits.  However you play with the numbers, the existing stock will become much more profitable.

Finish the job with industrial-grade energy storage and superconducting transmission, and we will never beat the general efficiency of a paid-for wind turbine until the Focus Fusion device can be made to work.  The robustness and the lack of consumables, even in the capital budget, pretty well do it.

I suspect that those innovations, which every other power source also requires anyway, pretty well assure that wind power will be with us for a long time unless the above-mentioned device shapes up.

Can Wind Generate Electricity at $0.04 per Kilowatt-Hour?

Is an environmentally friendly rare earth metal mined in California the key to making wind power cheaper than coal?





In the poker game being played for the future of the wind turbine’s drivetrain, VC NEA just told VC CMEA Capital they would see the $15.1 million round B on permanent magnet generator (PMG) specialist Danotek and raise with a $35 million bet on PMG innovator Boulder Wind Power.

As just reported by Greentech Media, PMG technology’s emerging inevitability as a replacement for the traditional gearbox in the turbine drivetrain was affirmed when CMEA Capital and three other heavyweight Danotek backers (GE Energy Financial Services, Khosla Ventures, and Statoil Hydro) re-upped funding to advance development of the company’s PMG converter system.

NEA, which had already invested $11 million in first round backing for Boulder Wind Power (BWP), joined with first-time investor and international rare earth metals powerhouse Molycorp in a second round of funding. This is a unique synchronicity because a PMG’s magnets require rare earth metals.

The Danotek high-speed PMG system’s attractiveness to investors is based on a uniquely efficient stator-rotor configuration, as well as its existing relationships with wind industry manufacturers and developers such as Clipper Windpower and DeWind.

The BWP low-speed PMG system’s attractiveness is based on an innovative PMG concept that gets away from expensive rare earth metals and creates efficiencies that BWP says can make wind power competitive with traditional sources of electricity generation without the need for incentives.

Based on the three key factors in the cost of wind -- the capital cost of the turbine; the production of energy; and the cost of operations and maintenance -- the BWP direct drive with its PMG can be expected, according to rigorous modeling, to keep the cost of wind generated electricity down in the four cents per kilowatt-hour range, according to Sandy Butterfield, the company’s CEO.
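
For readers wondering how those three factors roll up into a cents-per-kilowatt-hour figure, here is a deliberately simplified levelized-cost sketch in Python. Every input value is an illustrative assumption, not a Boulder Wind Power figure; it only shows the structure of the calculation.

# Simplified levelized cost of energy (LCOE) sketch; all inputs are assumptions.
capital_cost_per_kw = 1500.0   # $/kW installed (assumed)
fixed_charge_rate = 0.08       # annualized cost of capital and financing (assumed)
o_and_m_per_kwh = 0.01         # $/kWh for operations and maintenance (assumed)
capacity_factor = 0.40         # fraction of rated output actually delivered (assumed)

annual_kwh_per_kw = capacity_factor * 8760                        # hours in a year
capital_per_kwh = capital_cost_per_kw * fixed_charge_rate / annual_kwh_per_kw
print(f"LCOE ~ ${capital_per_kwh + o_and_m_per_kwh:.3f}/kWh")     # ~$0.044/kWh with these inputs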

At that price, said Butterfield, formerly the Wind Technology Center Chief Engineer at the U.S. Department of Energy’s National Renewable Energy Laboratory (NREL), wind would be -- on an unsubsidized basis -- a more cost-effective source of electricity than coal.


The BWP direct drive system will be “lighter and cheaper than a gear-driven system,” Butterfield said, “but the big bang is in reliability. With a direct drive generator, you have basically one big moving part to replace [instead of] a bunch of very high precision, high quality steel moving parts in a gear box.”

Turbine gearboxes, no matter how precisely designed and assembled, wear out long before the turbine’s 20-year life span is over, requiring a very costly replacement process involving replacement of the complicated lubrication system, as well.  “A direct drive system eliminates all of those opportunities for early failure,” said Butterfield, who, as head of the Gearbox Reliability Collaborative during his time at NREL, is one of the foremost U.S. authorities on the subject.

One of the most distinguishing characteristics of the BWP PMG design is that its magnets are part of an axial flux air core machine, which operates at relatively low temperatures, and are made with a rare earth metal called neodymium. More commonly, PMG magnets are part of iron core radial flux machines like Danotek’s, which operate at relatively high temperatures and require a rare earth metal called dysprosium.

In very round numbers, Butterfield said, dysprosium sells -- in today’s very constrained market dominated by China’s hoarding of its unique rare earth metal supply -- for around $1,000 to $2,000 per kilo; neodymium sells for about $100 per kilo and is relatively more common.

More significantly, BWP has secured a portion of its newest funding from first-time investor Molycorp, the only rare earth oxide producer in the Western hemisphere and the largest outside of China. Molycorp will take a place on Boulder Wind Power’s board and be the "preferred provider" of neodymium from its flagship rare earth mine and processing facility, currently ramping up to full production, at Mountain Pass, California.

Rare earth metal processing techniques used in China, Butterfield said, “are pretty environmentally detrimental.” But, he said, “Molycorp has developed a closed loop system that is both efficient and environmentally friendly. Nothing comes out of it and their yield is much better.”

This assures BWP a secure domestic supply of neodymium while other PMG system makers must continue to pursue supplies of dysprosium, which, Butterfield said, “drives the price of high temperature magnets.”

The $35 million “will get us to commercialization,” Butterfield said. Next, he wants “to secure commercial partners.” He is currently in talks “and very far along” with multiple turbine manufacturers, for whom he will design the direct drive PMG system to their turbines’ specifications. He expects to have operational prototypes within 18 months.

“We would be working with the design teams. We don’t expect the rotor to change. We don’t expect the tower to change,” Butterfield said. “The nacelle -- everything between the tower and the rotor -- will have significant changes. But it’s all mechanical engineering. We’re not inventing new science. And in many ways, this is an easier machine to handle.”

Commercial deployment will come at the end of that two-year process: “That is when we start selling machines.”

Long Standing Biochemistry Problem Cracked




The bottom line of this research is extremely important.  It means that we can now predict that we will produce useful oils and other molecular feedstocks through agriculture that can happily displace oil-based petrochemicals, which we are certainly tiring of using.

There is plenty to do, but the first application is likely not too far off.  Being able to squeeze canola oil from the meal and then split off a commercial fraction that goes to industrial applications is now a prospect to be taken up in the decade ahead.

I expect that we will now succeed in retiring the classic petroleum industry with technology such as this.  This is an important step forward.

Scientists Solve Long-Standing Plant Biochemistry Mystery



ScienceDaily (Sep. 19, 2011) — Scientists at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory and collaborators at the Karolinska Institute in Sweden have discovered how an enzyme "knows" where to insert a double bond when desaturating plant fatty acids. Understanding the mechanism -- which relies on a single amino acid far from the enzyme's active site -- solves a 40-year mystery of how these enzymes exert such location-specific control.


The work, published in the Proceedings of the National Academy of Sciences the week of September 19, 2011, may lead to new ways to engineer plant oils as a renewable replacement for petrochemicals.

"Plant fatty acids are an approximately $150-billion-dollar-a-year market," said Brookhaven biochemist John Shanklin, lead author on the paper. "Their properties, and therefore their potential uses and values, are determined by the position of double bonds in the hydrocarbon chains that make up their backbones. Thus the ability to control double bond positions would enable us to make new designer fatty acids that would be useful as industrial raw materials."

The enzymes responsible for double-bond placement, called desaturases, remove hydrogen atoms and insert double bonds between adjacent carbon atoms at specific locations on the hydrocarbon chains. But how one enzyme knows to insert the double bond at one location while a different but closely related enzyme inserts a double bond at a different site has been a mystery.

"Most enzymes recognize features in the molecules they act on that are very close to the site where the enzyme's action takes place. But all the carbon-hydrogen groups that make up fatty-acid backbones are very similar with no distinguishing features -- it's like a greasy rope with nothing to hold onto," said Shanklin.

In describing his group's long-standing quest to solve the desaturation puzzle, Shanklin quotes Nobel laureate Konrad Bloch, who observed more than 40 years ago that such site-specific removal of hydrogen "would seem to approach the limits of the discriminatory power of enzymes."

Shanklin and his collaborators approached the problem by studying two genetically similar desaturases that act at different locations: a castor desaturase that inserts a double bond between carbon atoms 9 and 10 in the chain (a 'delta-9' desaturase); and an ivy desaturase that inserts a double bond between carbon atoms 4 and 5 (delta-4). They reasoned that any differences would be easy to spot in such extreme examples.

But early attempts to find a telltale explanation -- which included detailed analyses of the two enzymes' atomic-level crystal structures -- turned up few clues. "The crystal structures are almost identical," Shanklin said.

The next step was to look at how the two enzymes bind to their substrates -- fatty acid chains attached to a small carrier protein. First the scientists analyzed the crystal structure of the castor desaturase bound to the substrate. Then they used computer modeling to further explore how the carrier protein "docked" with the enzyme.

"Results of the computational docking model exactly matched that of the real crystal structure, which allows carbon atoms 9 and 10 to be positioned right at the enzyme's active site," Shanklin said.

Next the scientists modeled how the carrier protein docked with the ivy desaturase. This time it docked in a different orientation that positioned carbon atoms 4 and 5 at the desaturation active site. "So the docking model predicted a different orientation that exactly accounted for the specificity," Shanklin said.

To identify exactly what was responsible for the difference in binding, the scientists then looked at the amino acid sequence -- the series of 360 building blocks that makes up each enzyme. They identified amino acid locations that differ between delta-9 and delta-4 desaturases, and focused on those locations that would be able to interact with the substrate, based on their positions in the structural models.

The scientists identified one position, far from the active site, where the computer model indicated that switching a single amino acid would change the orientation of the bound fatty acid with respect to the active site. Could this distant amino-acid location remotely control the site of double bond placement?

To test this hypothesis, the scientists engineered a new desaturase, swapping out the aspartic acid normally found at that location in the delta-9 castor desaturase for the lysine found in the delta-4 ivy desaturase. The result: an enzyme that was castor-like in every way, except that it now seemed able to desaturate the fatty acid at the delta-4 carbon location. "It's quite remarkable to see that changing just one amino acid could have such a striking effect," Shanklin said.

The computational modeling helped explain why: It showed that the negatively charged aspartic acid in the castor desaturase ordinarily repels a negatively charged region on the carrier protein, which leads to a binding orientation that favors delta-9 desaturation; substitution with positively charged lysine results in attraction between the desaturase and carrier protein, leading to an orientation that favors delta-4 desaturation.
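
A toy sketch of that electrostatic logic, in Python: flipping the charge of the one remote residue flips the sign of its interaction with the carrier protein's negatively charged region, and with it the docking orientation. This is only an illustration of the reasoning in the paragraph above, not a model of the actual enzymes.

# Toy illustration only: aspartic acid (D) is negatively charged, lysine (K) positively charged.
RESIDUE_CHARGE = {"D": -1, "K": +1}
CARRIER_REGION_CHARGE = -1   # the carrier protein's negatively charged region

def docking_orientation(residue):
    """Like charges repel (delta-9-style docking); opposite charges attract (delta-4-style)."""
    if RESIDUE_CHARGE[residue] * CARRIER_REGION_CHARGE > 0:
        return "repulsion -> carbons 9 and 10 at the active site (delta-9)"
    return "attraction -> carbons 4 and 5 at the active site (delta-4)"

print(docking_orientation("D"))   # castor-like residue
print(docking_orientation("K"))   # ivy-like residue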

Understanding this mechanism led Ed Whittle, a research associate in Shanklin's lab, to add a second positive charge to the castor desaturase in an attempt to further strengthen the attraction. The result was a nearly complete switch in the castor enzyme from delta-9 to delta-4 desaturation, adding compelling support for the remote control hypothesis.

"I really admire Ed's persistence and insight in taking what was already a striking result and pushing it even further to completely change the way this enzyme functions," Shanklin said.

"It's very rewarding to have finally solved this mystery, which would not have been possible without a team effort drawing on our diverse expertise in biochemistry, genetics, computational modeling, and x-ray crystallography.

"Using what we've now learned, I am optimistic we can redesign enzymes to achieve new desirable specificities to produce novel fatty acids in plants. These novel fatty acids would be a renewable resource to replace raw materials now derived from petroleum for making industrial products like plastics," Shanklin said.

This work was funded by the DOE Office of Science. Additional collaborators include: Jodie Guy, Martin Moche, and Ylva Lindqvist of the Karolinska Institute, and Johan Lengqvist, now at AstraZeneca R&D in Sweden. The scientists analyzed crystal structures at several synchrotrons including: the National Synchrotron Light Source (NSLS) at Brookhaven National Laboratory, the European Synchrotron Radiation Facility (ESRF) in France, the German Electron Synchrotron (DESY), and the MAX-lab National Laboratory for Synchrotron Radiation in Sweden.

Brittleness of Aging Bones





I would now like to see these testing methods applied to the bones of long-lived reptiles that reach ages of a couple of centuries, to discover what is different in their makeup.  It is certain that our bones become brittle, and the cause is not obvious.

Again, this is surely a neglected area of research that will be well remedied over the next decade.  We are now seeing a sharp increase in healthy, productive elderly citizens whose bone strength has suddenly become important.  At the same time, bone strength among the elderly varies widely, presumably as a function of youthful stress and effort.

Thus we can expect more on this topic.

The brittleness of aging bones

01 September 2011
More than a loss of bone mass



At each size scale, the hierarchical structure of human cortical bone influences its susceptibility to fracturing, with smaller levels affecting intrinsic toughness and higher levels impacting extrinsic toughness. (Image courtesy of Ritchie, et al.)


It is a well-established fact that as we grow older our bones become more brittle and prone to fracturing. It is also well established that loss of mass is a major reason for older bones fracturing more readily than younger bones, hence medical treatments have focused on slowing down this loss.

However, new research from scientists at the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab) shows that at microscopic dimensions, the age-related loss of bone quality can be every bit as important as the loss of quantity in the susceptibility of bone to fracturing.

Using a combination of x-ray and electron based analytical techniques as well as macroscopic fracture testing, the researchers showed that the advancement of age ushers in a degradation of the mechanical properties of human cortical bone over a range of different size scales. As a result, the bone’s ability to resist fracture becomes increasingly compromised. This age-related loss of bone quality is independent of age-related bone mass loss.

“In characterizing age-related structural changes in human cortical bone at the micrometer and sub micrometer scales, we found that these changes degrade both the intrinsic and extrinsic toughness of bone,” says Berkeley Lab materials scientist Robert Ritchie. “Based on multiscale structural and mechanical tests, we attribute this degradation to a hierarchical series of coupled mechanisms that start at the molecular level.”

Ritchie, who holds joint appointments with Berkeley Lab’s Materials Sciences Division and the University of California (UC) Berkeley’s Materials Science and Engineering Department, is the senior author of a paper published in the Proceedings of the National Academy of Sciences (PNAS) that describes this work. The paper is titled “Age-related changes in the plasticity and toughness of human cortical bone at multiple length scales.”

Human cortical or compact bone is a composite of collagen molecules and nanocrystals of a mineralized form of calcium called hydroxyapatite (HA). Mechanical properties of stiffness, strength and toughness arise from both the characteristic structure at the nanoscale, and at multiple length scales through the hierarchical architecture of the bone. These length scales extend from the molecular level to the osteonal structures at near-millimeter levels. An osteon is the basic structural unit of compact bone, comprised of a central canal surrounded by concentric rings of lamellae plates, through which bone remodels.

“Mechanisms that strengthen and toughen bone can be identified at most of these structural length scales and can be usefully classified, as in many materials, in terms of intrinsic toughening mechanisms at small length scales, promoting non-brittle behavior, and extrinsic toughening mechanisms at larger length scales acting to limit the growth of cracks,” Ritchie says. “These features are present in healthy, young human bone and are responsible for its unique mechanical properties. However, with biological aging, the ability of these mechanisms to resist fracture deteriorates leading to a reduction in bone strength and fracture toughness.”

Working with the exceptionally bright beams of x-rays at Berkeley Lab’s Advanced Light Source (ALS), Ritchie and his colleagues analyzed bone samples that ranged in age between 34 and 99 years. In situ small-angle x-ray scattering and wide-angle x-ray diffraction were used to characterize the mechanical response of the collagen and mineral at the sub micrometer level. A combination of x-ray computed tomography and in situ fracture-toughness measurements with a scanning electron microscope were used to characterize effects at micrometer levels.

“We found that biological aging increases non-enzymatic cross-linking between the collagen molecules, which suppresses plasticity at nanoscale dimensions, meaning that collagen fibrils can no longer slide with respect to one another as a way to absorb energy from an impact,” Ritchie says. “We also found that biological aging increases osteonal density, which limits the potency of crack-bridging mechanisms at micrometer scales.”

These two mechanisms that act to reduce bone toughness are coupled, Ritchie says, in that the increased stiffness of the cross-linked collagen requires energy to be absorbed by “plastic” deformation at higher structural levels, which occurs by the process of micro cracking.

“With age, remodeling of the bone can lead the osteons to triple in number, which means the channels become more closely packed and less effective at deflecting the growth of cracks,” he says. “This growing ineffectiveness must be accommodated at higher structural levels by increased micro cracking. In turn, the increased micro cracking compromises the formation of crack bridges, which provide one of the main sources of extrinsic toughening in bone at length scales in the range of tens to hundreds of micrometers. Thus, age-related changes occur across many levels of the structure to increase the risk of fracture with age.”