Tuesday, April 30, 2013

Talking with GOD




If the only thing that comes forth from these episodes of lucid at-death experiences happens to be a simple willingness to go through death's door in expectation of bliss, then much has been gained. We also see that all those so affected returned completely settled in themselves, often for the first time in their lives.

I will go further. The near death experience is extremely good for the individual. That is an objective reality beyond denial.

What this describes is an actual communion with GOD and an instruction to share the experience. I am seeing this instruction in other reports, and it becomes clear that this understanding is being promoted into our world. This is likely very important.

The tangible reality of a lucid vision of the afterlife is becoming well established; it is repeatable, conforms to centuries of religious teaching, and sidesteps much of the problem of faith. It is impossible to reject the reality of these experiences, particularly when they even provide new information. Recall that Cayce's power came from the information he brought back.

His hint regarding Bimini took me decades to properly understand and it was in fact spot on. No one else could have pointed me there in the first place. Yet understanding Bimini was central to understanding the Atlantean world and a direct pointer to ample additional unrecognized evidence.

What this means is that hard empirical evidence is slowly leaking through. Embrace it.

Oklahoma woman shares her near-death experience and talking with God

http://naturalplane.blogspot.ca/2013/04/esoterica-historical-hauntings-at.html

An Oklahoma school teacher says she’s seen heaven’s pearly gates and has lived to tell the tale.


Crystal McVea’s near-death experience brought her face-to-face with a God that she spent a lifetime doubting. And the Altus, Okla. woman says the only reason she’s alive is so that she can tell others what she believes is waiting for them on the other side.


As many as 10 to 20 percent of people who go through cardiac arrest or clinical death have lucid memories of their brush with death, according to the Human Consciousness Project. Scientists have pinned these near-death experiences to high carbon dioxide levels, oxygen deprivation, or surges of steroids, epinephrine or adrenaline. Survivors’ stories about these out-of-body experiences are curiously similar — there are bright lights, tunnels and nebulous beings.


But what matters to McVea is how her glimpse of death has changed the way she’s living.


McVea was being treated for pancreatitis in 2009 when an unexpected reaction to pain medication caused her to stop breathing. She hasn’t been able to figure out exactly what happened to her body, but she knows that her heart stopped. Her mother screamed for help and medical staff rushed to her side, shouting, “Code Blue.” Nurses pumped oxygen into her body and performed CPR to try to revive her.


During the nine minutes that she spent unconscious, McVea says she was far away from the panic that descended on her hospital room.


When she closed her eyes on earth, she says she opened her eyes in heaven.


All of the usual details were there — lights, a tunnel, pearly white gates and angels. But what really stuck out was her experience of God — she said she could see, smell, taste, touch and hear him with more than the five senses she had on earth. She could speak to him without words.


“I didn’t see the human form of God, I didn’t see hands and feet and a face, I just saw the most beautiful light,” she told the New York Daily News. “What I know now is that I experienced his presence.”

McVea said she used to think of God as a cruel and authoritative father figure. But she said that during her religious experience, she was able to understand truths about Christianity that she had always questioned. She was able to love herself.


“I just remember I felt free from all the lies I had lived and the untruth that God didn’t love me,” she said.

It’s not an uncommon feeling. Dr. Eben Alexander, a neurosurgeon who taught at Harvard, also spoke about a heightened experience of love during his near-death experience. In a Newsweek article, Alexander writes about the sense of relief that he, like McVea, felt after slipping into a coma. - Proof of Heaven: A Neurosurgeon's Journey into the Afterlife


“It was like being handed the rules to a game I’d been playing all my life without ever fully understanding it,” he writes.


He said that he felt loved, fearless and at peace with himself. Although many of his colleagues at Harvard weren’t convinced, Alexander stuck with his story.


McVea had a similar life-after-death experience. When doctors managed to revive her, McVea said she was reluctant to come back into an earthly consciousness. But she couldn’t forget God’s last words to her — “Tell them what you can remember.”


She spent years trying to grapple with this request. McVea wasn’t much of a public speaker. She was certain that some people would think she was crazy. And she struggled to come to terms with her past — the abuse she suffered as a child and the abortion she chose to have as a teenager. No one in her family knew about the secrets she had kept for years.


Still, she decided to pen a memoir, Waking Up in Heaven: A True Story of Brokenness, Heaven, and Life Again, describing her experiences. Sure enough, she faced strong backlash.


“I got bashed for being Christian, I got bashed for not being Christian enough,” she said.


But the criticism hasn’t held her back. While she can’t offer 100 percent proof that her experience was real, she knows this for sure — she is no longer afraid of what lies beyond the grave.


“I grew up all my life terrified of dying, afraid of the pain and afraid of the unknown,” she said. “But peace and love surrounded every moment of my death and now I know death is nothing to be afraid of.” - NY Daily News


Resuscitation Medicine




A lot of what we know about death is changing drastically. It is also becoming clearer. I also think that CPR is additionally important in that it keeps moving the oxygen-rich blood and postpones real cell death. That is not suggested here. It is noteworthy that the brain goes almost immediately into a dormant state and stays there until it is safe to come back.

What we do learn here is that if death is merely postponed, the consciousness will recall activity even without the direct use of sight, and will experience a common connection with an apparent entity that welcomes and reassures them. It is the unprompted commonality of this experience that is most noteworthy.

The immediate take-home from all this is that emergency responders and doctors need to up their game. Brain death is not immediate, and resuscitation is possible and even probable, which means that current practice is killing many. Cooling the brain as quickly as possible is also strongly called for.

In my case, I went 20 minutes without a heartbeat, but with excellent CPR applied.  Fortunately my wife refused to allow them to write me off and I made a full recovery.  Others have now gone for two and one half hours with full recovery in a similar situation.

What this tells us is that heart attack victims have a vastly improved prognosis.



Consciousness After Death: Strange Tales From the Frontiers of Resuscitation Medicine


04.24.13
12:23 PM


Sam Parnia practices resuscitation medicine. In other words, he helps bring people back from the dead — and some return with stories. Their tales could help save lives, and even challenge traditional scientific ideas about the nature of consciousness.

“The evidence we have so far is that human consciousness does not become annihilated,” said Parnia, a doctor at Stony Brook University Hospital and director of the school’s resuscitation research program. “It continues for a few hours after death, albeit in a hibernated state we cannot see from the outside.”

Resuscitation medicine grew out of the mid-twentieth-century discovery of CPR, the medical procedure by which hearts that have stopped beating are revived. Originally, CPR was effective for only a few minutes after cardiac arrest; advances have pushed that time to a half-hour or more.

New techniques promise to even further extend the boundary between life and death. At the same time, experiences reported by resuscitated people sometimes defy what’s thought to be possible. They claim to have seen and heard things, though activity in their brains appears to have stopped.

It sounds supernatural, and if their memories are accurate and their brains really have stopped, it’s neurologically inexplicable, at least with what’s now known. Parnia, leader of the Human Consciousness Project’s AWARE study, which documents after-death experiences in 25 hospitals across North America and Europe, is studying the phenomenon scientifically.

Parnia discusses his work in the new book Erasing Death: The Science That Is Rewriting the Boundaries Between Life and Death. Wired talked to Parnia about resuscitation and the nature of consciousness.

Wired: In the book you say that death is not a moment in time, but a process. What do you mean by that?

Sam Parnia: There’s a point used to define death: Your heart stops beating, your brain shuts down. The moment of cardiac arrest. Until fifty years ago, when CPR was developed, when you reached this point, you couldn’t come back. That led to the perception that death is completely irreversible.

But if I were to die this instant, the cells inside my body wouldn’t have died yet. It takes time for cells to die after they’re deprived of oxygen. It doesn’t happen instantly. We have a longer period of time than people perceive. We know now that when you become a corpse, when the doctor declares you dead, there’s still a possibility, from a biological and medical perspective, of death being reversed.

Of course, if someone dies and you leave them alone long enough, the cells become damaged. There’s going to be a time when you can’t bring them back. But nobody knows exactly when that moment is. It might not just be in tens of minutes, but in over an hour.

Death is really a process.

'The idea that electrochemical processes in the brain lead to consciousness may no longer be correct.'


Wired: How can people be brought back from death?

Parnia: Death is, essentially, the same as a stroke, and that’s especially true for the brain. A stroke is some process that stops blood flow from getting into the brain. Whether it’s because the heart stopped pumping, or there was a clot that stopped blood flow, the cells don’t care.

Brain cells can be viable for up to eight hours after blood flow stops. If doctors can learn to manipulate processes going on in cells, and slow down the rate at which cells die, we could go back and fix the problem that caused a person to die, then re-start the heart and bring them back. In a sense, death could become reversible for conditions for which treatments become available.

If someone dies of a heart attack, for example, and it can be fixed, then in principle we can protect the brain, make sure it doesn’t experience permanent cellular death, and re-start the heart. If someone dies of cancer, though, and that particular cancer is untreatable, then it’s futile.

Wired: Are you talking about bringing people to life days or weeks or even years after they’ve died?

Parnia: No. This is not cryogenics. When you die, most of your cell death occurs through apoptosis, or programmed cell death. If your body is cold, the chemical reactions underlying apoptosis are slower. Making the body cold slows the rate at which cells decay. But we’re talking about chilling, not freezing. The process of freezing will damage cells.

Wired: You also study near-death experiences, but you have a different term for it: After-death experience.

Parnia: I decided that we should study what people have experienced when they’ve gone beyond cardiac arrest. I found that 10 percent of patients who survived cardiac arrests report these incredible accounts of seeing things.

When I looked at the cardiac arrest literature, it became clear that it’s after the heart stops and blood flow into the brain ceases. There’s no blood flow into the brain, no activity, about 10 seconds after the heart stops. When doctors start to do CPR, they still can’t get enough blood into the brain. It remains flatlined. That’s the physiology of people who’ve died or are receiving CPR.

Not just my study, but four others, all demonstrated the same thing: People have memories and recollections. Combined with anecdotal reports from all over the world, from people who see things accurately and remember them, it suggests this needs to be studied in more detail.

Wired: One of the first after-death accounts in your book involves Joe Tiralosi, who was resuscitated 40 minutes after his heart stopped. Can you tell me more about him?

Parnia: I wasn’t involved in his care when he arrived at the hospital, but I know his doctors well. We’d been working with the emergency room to make sure they knew the importance of starting to cool people down. When Tiralosi arrived, they cooled him, which helped preserve his brain cells. They found vessels blocked in his heart. That’s now treatable. By doing CPR and cooling him down, the doctors managed to fix him and ensure that he didn’t have brain damage.

When Tiralosi woke up, he told nurses that he had a profound experience and wanted to talk about it. That’s how we met. He told me that he felt incredibly peaceful, and saw this perfect being, full of love and compassion. This is not uncommon.

People tend to interpret what they see based on their background: A Hindu describes a Hindu god, an atheist doesn’t see a Hindu god or a Christian god, but some being. Different cultures see the same thing, but their interpretation depends on what they believe.

Wired: What can we learn from the fact that people report seeing the same thing?

Parnia: At the very least, it tells us that there’s this unique experience that humans have when they go through death. It’s universal. It’s described by children as young as three. And it tells us that we should not be afraid of death.

Wired: How do we know after-death experiences happen when people think they do? Maybe people misremember thoughts from just before death, or just after regaining consciousness.

Parnia: That’s a very important question. Do these memories occur when a person is truly flatlined and had no brain activity, as science suggests? Or when they’re beginning to wake up, but are still unconscious?

The point that goes against the experiences happening afterwards, or before the brain shut down, is that many people describe very specific details of what happened to them during cardiac arrest. They describe conversations people had, clothes people wore, events that went on 10 or 20 minutes into resuscitation. That is not compatible with brain activity.

It may be that some people receive better-quality resuscitation, and that — though there’s no evidence to support it — they did have brain activity. Or it could indicate that human consciousness, the psyche, the soul, the self, continued to function.

Wired: Couldn’t the experiences just reflect some extremely subtle type of brain activity?
Parnia: When you die, there’s no blood flow going into your brain. If it goes below a certain level, you can’t have electrical activity. It takes a lot of imagination to think there’s somehow a hidden area of your brain that comes into action when everything else isn’t working.

These observations raise a question about our current concept of how brain and mind interact. The historical idea is that electrochemical processes in the brain lead to consciousness. That may no longer be correct, because we can demonstrate that those processes don’t go on after death.

There may be something in the brain we haven’t discovered that accounts for consciousness, or it may be that consciousness is a separate entity from the brain.

Electrical activity in the brain as a heart enters cardiac arrest. Image: Kano et al./Resuscitation

Wired: This seems to verge on supernatural explanations of consciousness.

Parnia: Throughout history, we try to explain things the best we can with the tools of science. But most open-minded and objective scientists recognize that we have limitations. Just because something is inexplicable with our current science doesn’t make it superstitious or wrong. When people discovered electromagnetism, forces that couldn’t then be seen or measured, a lot of scientists made fun of it.

Scientists have come to believe that the self is brain cell processes, but there’s never been an experiment to show how cells in the brain could possibly lead to human thought. If you look at a brain cell under a microscope, and I tell you, “this brain cell thinks I’m hungry,” that’s impossible.

It could be that, like electromagnetism, the human psyche and consciousness are a very subtle type of force that interacts with the brain, but are not necessarily produced by the brain. The jury is still out.

Wired: But what about all the fMRI brain imaging studies of thoughts and feelings? Or experiments in which scientists can tell what someone is seeing, or what they’re dreaming, by looking at brain activity?

Parnia: All the evidence we have shows an association between certain parts of the brain and certain mental processes. But it’s a chicken and egg question: Does cellular activity produce the mind, or does the mind produce cellular activity?

Some people have tried to conclude that what we observe indicates that cells produce thought: here’s a picture of depression, here’s a picture of happiness. But this is simply an association, not a causation. If you accept that theory, there should be no reports of people hearing or seeing things after activity in their brain has stopped. If people can have consciousness, maybe that raises the possibility that our theories are premature.
Wired: What comes next in your own research?

Parnia: In terms of resuscitation, we’re trying to non-invasively measure what happens in the brain, in real-time, using a special sensor that allows us to detect any impending danger and intervene before extensive damage is done.

On the question of consciousness, I’m interested in understanding the brain-based modulators of consciousness. What helps a person become conscious or unconscious? How can we manipulate that to help people who look like they’re unconscious? And I’m studying how consciousness can be present in people who’ve gone beyond the threshold of death. All we can say now is that the data suggests that consciousness is not annihilated.

Fertility Needs in High-Yielding Corn Production




The fertility needs of modern corn culture will not be resolved until we master the biochar protocol, which allows continuous sequestering of all nutrients released by either chemical treatment or natural decay.

In the meantime, corn happens to be our hungriest crop. Thus fertility is a challenge at best.

We should perhaps find a way to engineer corn to be a better citizen, since we are completely set up to work with it in particular.



Fertility needs in high-yielding corn production

by Staff Writers

Urbana IL (SPX) Apr 22, 2013


Although advances in agronomy, breeding, and biotechnology have dramatically increased corn grain yields, soil test values indicate that producers may not be supplying optimal nutrient levels. Moreover, many current nutrient recommendations, developed decades ago using outdated agronomic management practices and lower-yielding, non-transgenic hybrids, may need adjusting.

Researchers with the University of Illinois Crop Physiology Laboratory have been re-evaluating nutrient uptake and partitioning in modern corn hybrids.

"Current fertilization practices may not match the uptake capabilities of hybrids that contain transgenic insect protection and that are grown at planting densities that increase by about 400 plants per acre per year," said U of I Ph.D. student Ross Bender. "Nutrient recommendations may not be calibrated to modern, higher-yielding genetics and management."

The study examined six hybrids, each with transgenic insect protection, at two Illinois locations, DeKalb and Urbana. Researchers sampled plant tissues at six incrementally spaced growth stages. They separated them into their different fractions (leaves, stems, cobs, grain) to determine season-long nutrient accumulation, utilization, and movement.

Although maximum uptake rates were found to be nutrient-specific, they generally occurred during late vegetative growth. This was also the period of greatest dry matter production, an approximate 10-day interval from V10 to V14. Relative to total uptake, however, uptake of phosphorus (P), sulfur (S), and zinc (Zn) was greater during grain fill than during vegetative growth. The study also showed that the key periods for micronutrient uptake were narrower than those for macronutrients.

"The implications of the data are numerous," said Matias Ruffo, a co-author of the paper and worldwide agronomy manager at The Mosaic Company. "It is necessary that producers understand the timing and duration of nutrient accumulation. Synchronizing fertilizer applications with periods of maximum nutrient uptake is critical to achieving the best fertilizer use efficiency."

Jason Haegele, another co-author of the paper and post-doctoral research associate at the U of I added, "Although macro- and micronutrients are both essential for plant growth and development, two major aspects of plant nutrition are important to better determine which nutrients require the greatest attention: the amount of a nutrient needed for production, or total uptake, and the amount of that nutrient that accumulates in the grain."

Study results indicated that high amounts of nitrogen (N), potassium (K), P, and S are needed, with applications made during key growth stages to maximize crop growth. Moreover, adequately accounting for nutrients with high harvest index values (the proportion of total nutrient uptake present in corn grain), such as N, P, S, and Zn, which are removed from production fields via the grain, is vital to maintaining long-term soil productivity.

In Illinois, it is common to apply all the P in a corn-soybean rotation prior to the corn production year.
"Although farmers in Illinois fertilize, on average, approximately 93 pounds of P2O5 per acre for corn, the estimated 80 percent of soybean fields receiving no additional phosphorus would have only 13 pounds per acre remaining for the following year's soybean production," said Fred Below, professor of crop physiology. "Not only is this inadequate for even minimal soybean yield goals, but these data suggest a looming soil fertility crisis if fertilizer usage rates are not adjusted as productivity increases."
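Below's phosphorus budget can be sketched as simple subtraction. The ~80 lb/acre corn removal figure below is an assumption inferred from the two numbers he quotes, not a value stated in the article:

```python
# Sketch of the phosphorus budget implied by the quote above.
# The corn-grain removal rate is an assumption inferred from the
# cited figures (93 lb applied, 13 lb remaining for soybeans).
applied_p2o5 = 93          # lb/acre P2O5 applied before the corn year
corn_grain_removal = 80    # assumed lb/acre removed in the corn grain

remaining_for_soybeans = applied_p2o5 - corn_grain_removal
print(remaining_for_soybeans)  # -> 13
```

If the removal number rises with yield, as the study suggests, the residual shrinks further, which is the "looming soil fertility crisis" Below describes.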

Integration of new findings will allow producers to match plant nutritional needs with the right nutrient source and right rate applied at the right time and right place. The same team of scientists is collaborating on a follow-up study investigating the seasonal patterns of nutrient accumulation and utilization in soybean production.

"Although nutrient management is a complex process, a greater understanding of the physiology of nutrient accumulation and utilization is critical to maximize the inherent yield potential of corn," concluded Bender.

"Nutrient uptake, partitioning, and remobilization in modern, transgenic insect-protected maize hybrids" by Ross R. Bender, Jason W. Haegele, Matias L. Ruffo and Fred E. Below was published in the January 2013 edition of Agronomy Journal (105:161-170). It is an open-access article available here. An abbreviated version of this article, entitled "Modern corn hybrids' nutrient uptake patterns," was published in Better Crops with Plant Food (available here).

Large Earthquakes can Trigger Another Far Away





I am sorry, chaps, but if rock is cracking and breaking, then stress is obviously moving through the system and not settling down, however gentle it might appear. Put another way, when a real collapse takes place, you have lost control of your assumptions and must wait until things truly settle down before you begin testing your ideas again. Of course, no one wants to do this.

This also naturally implies that stress release in one locale sets up stress elsewhere especially in a quake environment. For that reason, we observe long faults taking turns releasing one point after another sometimes over centuries. What it really tells us is that natural plasticity prevents permanent freezing of a fault.

In the meantime, the statistics appear to confirm all this as expected and also give us a measured likelihood for successor events. At nearly ten percent, it is not zero, and it is a warning to stay awake.

Numbers support theory large earthquakes can trigger another far away

by Staff Writers

Salt Lake City (UPI) Apr 19, 2013


Big earthquakes can trigger other quakes far from their geographical center at least 9 percent of the time, a statistical analysis by a U.S. researcher shows.

With a number of huge earthquakes in recent years -- in Sumatra, Indonesia, in December 2004, Chile in February 2010 and Japan in 2011 -- leading many to question whether one large quake can cause another on the other side of the world, Tom Parsons of the U.S. Geological Survey surveyed catalogs of seismic activity on every continent except Antarctica going back to 1979.

Of the 260 earthquakes of magnitude 7 or greater during that period, small earthquakes on separate fault systems followed in the wake of 24 of them, triggered by seismic waves passing through distant lands, he said.
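The headline rate follows directly from Parsons's counts; a quick sketch (only the 260 and 24 are from the article):

```python
# Fraction of magnitude-7+ quakes (1979 onward) followed by remotely
# triggered small quakes on separate fault systems, per Parsons's survey.
total_m7_quakes = 260      # magnitude 7+ events in the catalogs
remotely_triggered = 24    # events followed by distant small quakes

rate = remotely_triggered / total_m7_quakes
print(f"{rate:.1%}")  # -> 9.2%
```

Rounding down gives the "at least 9 percent" figure quoted above.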


"It's a small hazard, but there is a risk," he said.

Parsons, who presented his results Friday at the Seismological Society of America annual meeting in Salt Lake City, says his next step will be to investigate the 24 quakes that caused far-off events and see if there is anything special about them.

"So far they look fairly ordinary. So we're going to have to really dig into them," he said.

Seismic activity during deadly Utah mine collapse yields insights

Salt Lake City (UPI) Apr 19, 2013 - Analysis of seismic activity recorded in a 2007 deadly Utah mine collapse shows its extent was greater than previous studies indicated, researchers say.

The owner of the Crandall Canyon coal mine initially blamed the collapse, which killed six miners and three rescue workers, on an earthquake but University of Utah researchers say analysis of the recordings of the tremor and hundreds of small aftershocks suggests they were a result of mining activity and the subsequent collapse.

"We can see now that, prior to the collapse, the seismicity was occurring where the mining was taking place and that, after the collapse, the seismicity migrated to both ends of the collapse zone," said Tex Kubacki, a graduate student in mining engineering.

Mapping the locations of the aftershocks "helps us better delineate the extent of the collapse at Crandall Canyon," he said.

A previous University of Utah study, based on far fewer aftershocks, said the epicenter of the collapse was near where the miners were working and the aftershocks showed the collapse area covered 50 acres.

The new study, based on data of hundreds of additional aftershocks, has extended the area of the collapse to the full extent of the western end of the mine, Kubacki said.

"It's gotten bigger," he said.

Most of the seismic activity before the collapse was due to mining, the researchers said, although they are investigating whether any of those small jolts might have been signs of the impending collapse.

So far, however, "there is nothing measured that would have said, 'Here's an event [mine collapse] that's ready to happen,'" said Michael "Kim" McCarter, a mining engineering professor and the study's co-author.

Electric Cars Will Be Great





A number of things are somewhat overstated in this story, but that is not particularly relevant. What is relevant is that we still do not have a meaningful deliverable as far as the battery is concerned. We do have several completely credible ways to get there now, but they are still grinding through the design realization stage. This always takes time, even when you know that you will touch down.

What we have instead is a body of rapidly improving technology that is simply waiting for the battery deliverable. Everyone knows now that we are going to get there.

Also poorly understood is what the arrival of the commercial super battery will mean. I am looking forward to my first electric that meets my needs. I also have no doubt that it will be self-driving as well. From that moment, the cost of personal transportation will be on a steady decline curve, and yes, it will be great!

Someday, Electric Cars Will Be Great

Right now, they are expensive, inconvenient, and not very good for the environment.

By Bjørn Lomborg|Posted Sunday, April 14, 2013, at 7:00 AM



The idea of the electric car has long captured the imaginations of innovators,  including even Henry Ford and Thomas Edison more than a century ago. Celebrities, pundits, and political leaders alike have cast these vehicles as the apotheosis of an environmentally responsible future. German Chancellor Angela Merkel has proclaimed that there will be 1 million electric cars on the Autobahn by 2020. President Barack Obama has likewise promised 1 million electric cars in the United States, but five years sooner.

Someday, the electric car will, indeed, be a great product—just not now. It costs too much; it is inconvenient; and its environmental benefits are negligible (and, in some cases, nonexistent).

Many developed countries provide lavish subsidies for electric cars: amounts up to $7,500 in the U.S., $8,500 in Canada, 9,000 euros in Belgium, and 6,000 euros even in cash-strapped Spain. Denmark offers the most lavish subsidy of all, exempting electric cars from the country’s marginal 180 percent registration tax on all other vehicles. For the world’s most popular electric car, the Nissan Leaf, this exemption is worth 63,000 euros. Yet this is clearly not enough. In Denmark, there are still only 1,224 electric cars. In Germany, car sales totaled 3.2 million in 2011, but only 2,154 were electric.

The numbers have forced Obama and Merkel to reconcile their projections with reality. The US Department of Energy now expects only about 250,000 electric cars by 2015, or  0.1 percent of all cars on America’s roads. Merkel recently admitted that Germany will not get anywhere near 1 million electric cars by 2020.

No one should be surprised. According to an analysis by the Congressional Budget Office, a typical electric car’s lifetime cost is roughly $12,000 higher than a gasoline-powered car. Recent research indicates that electric cars may reach break-even price with hybrids only in 2026, and with conventional cars in 2032, after governments spend hundreds of billions of dollars in subsidies.

Costs and subsidies aside, electric cars have so far proved to be incredibly inconvenient. A BBC reporter drove the 484 miles from London to Edinburgh in an electric Mini and had to stop eight times to recharge, often waiting six hours or more. In total, he spent 80 hours waiting or driving, averaging just over six miles an hour—an unenviable pace even before the advent of the steam engine.

Electric cars also fail to live up to their environmental billing. They are often sold as “zero emissions” vehicles, but that is true only when they are moving.

For starters, the manufacturing process that produces electric cars—especially their batteries—requires an enormous amount of energy, most of it generated with fossil fuels. A life-cycle analysis shows that almost half of an electric car’s entire CO2 emissions result from its production, more than double the emissions resulting from the production of a gasoline-powered car.

Moreover, the electricity required to charge an electric car is overwhelmingly produced with fossil fuels. Yes, it then emits about half the CO2 of a conventional car for every mile driven (using European electricity). But, given its high CO2 emissions at the outset, it needs to be driven a lot to come out ahead.

Proponents proudly proclaim that if an electric car is driven about 180,000 miles, it will have emitted less than half the CO2 of a gasoline-powered car. But its battery will likely need to be replaced long before it reaches this target, implying many more tons of CO2 emissions.

In fact, such distances seem implausible, given electric cars’ poor range: The Nissan Leaf, for example, can go only 73 miles on a charge. That is why most people buy an electric car as their second car, for short commutes. If the car is driven less than 32,000 miles on European electricity, it will have emitted more CO2 overall than a conventional car.

Even if driven much farther, say 93,000 miles, an electric car’s CO2 emissions will be only 28 percent less than those of a gasoline-powered car. During the car’s lifetime, this will prevent 11 tons of CO2 emissions, or about 44 euros of climate damage.

Given the size of the subsidies on offer, this is extremely poor value. Denmark’s subsidies, for example, pay almost 6,000 euros to avoid one ton of CO2 emissions. Purchasing a similar amount in the European Emissions Trading System would cost about 5 euros. For the same money, Denmark could have reduced CO2 emissions more than a thousand-fold.
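The cost comparison above can be checked in a couple of lines; this sketch uses the article's own round figures (roughly 6,000 euros per ton via Denmark's subsidy versus about 5 euros per ton in the EU Emissions Trading System):

```python
# Rough check of the subsidy arithmetic: euros spent per ton of CO2
# avoided, subsidy route versus buying ETS allowances. Figures are
# the approximate ones quoted in the article.

subsidy_cost_per_ton = 6000.0   # euros, Denmark's electric-car subsidy
ets_cost_per_ton = 5.0          # euros, EU ETS allowance price

ratio = subsidy_cost_per_ton / ets_cost_per_ton
# ratio of about 1,200 -- hence "more than a thousand-fold"
print(f"The same money buys about {ratio:.0f}x more abatement via the ETS")
```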

Worse, electric cars bought in the European Union will actually increase global CO2 emissions. Because the EU has a fixed emission target for 2020, it will offset emissions elsewhere (perhaps with more wind power), regardless of the type of car purchased: 38.75 tons of CO2 from a gasoline car, and 16 tons from the electricity produced for an electric car. But, while EU emissions stay the same, most electric batteries come from Asia, so an extra 11.5 tons of emissions will not be offset.
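
The carbon-accounting argument here is subtle enough to be worth spelling out; a minimal sketch, again using only the article's numbers, of why the capped emissions net out while the battery's do not:

```python
# Sketch of the EU carbon-accounting argument above. Emissions inside
# the EU cap are offset elsewhere under the fixed 2020 target, so they
# net to zero either way; battery manufacture in Asia sits outside the
# cap and is never offset. All figures are the article's.

gasoline_car_eu_tons = 38.75   # lifetime CO2 inside the EU cap
electric_car_eu_tons = 16.0    # electricity generation, inside the cap
battery_asia_tons = 11.5       # battery manufacture, outside the cap

# Under the cap, both cars' EU emissions are offset to zero.
net_capped_gasoline = gasoline_car_eu_tons - gasoline_car_eu_tons
net_capped_electric = electric_car_eu_tons - electric_car_eu_tons

# The electric car's un-offset extra is the battery's emissions.
net_extra_global = battery_asia_tons
print(f"Un-offset extra emissions per EU electric car: {net_extra_global} tons")
```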

The electric car’s environmental transgressions are even worse in China, where most electricity is produced with coal. An electric car powered with that electricity will emit 21 percent more CO2 than a gasoline-powered car. And, as a recent study shows, because China’s coal-fired power plants are so dirty, electric cars make the local air worse. In Shanghai, air pollution from an additional million gasoline-powered cars would kill an estimated nine people each year. But an additional million electric cars would kill 26 people annually, owing to the increase in coal pollution.

The electric-car mantra diverts attention from what really matters: a cost-effective transition from fossil fuels to cheaper green energy, which requires research and innovation. Electric cars might be a great advance for that purpose in a couple of decades. But lavish subsidies today simply enable an expensive, inconvenient, and often environmentally deficient technology.

Monday, April 29, 2013

Understanding the Human Mind








This is a great update on our understanding of just how our brains might work. I have underlined parts and made a couple of notes. I am myself very much a student of Einstein's thinking and have been well served. What held him back on his agenda was an oversight made by a handful of eighteenth-century mathematicians. My efforts ended that. Now we have to get past the mental block put up by scholarship.

This is a must read. It will help you to understand what our future looks like. Fifteen years is not a long time for the Baby Boom generation and their children. Guide yourself accordingly.

I have actually guided myself through the past forty years by understanding this acceleration in knowledge. The best advice I can give anyone today is to become conscious of its implications.



How Ray Kurzweil Will Help Google Make the Ultimate AI Brain

    BY STEVEN LEVY
    04.25.13



Google has always been an artificial intelligence company, so it really shouldn’t have been a surprise that Ray Kurzweil, one of the leading scientists in the field, joined the search giant late last year. Nonetheless, the hiring raised some eyebrows, since Kurzweil is perhaps the most prominent proselytizer of “hard AI,” which argues that it is possible to create consciousness in an artificial being. Add to this Google’s revelation that it is using techniques of deep learning to produce an artificial brain, and a subsequent hiring of the godfather of computer neural nets Geoffrey Hinton, and it would seem that Google is becoming the most daring developer of AI, a fact that some may consider thrilling and others deeply unsettling. Or both.

On Tuesday, Kurzweil moderated a live Google hangout tied to a release of the upcoming Will Smith film, After Earth, presumably tying the film’s futuristic concept to actual futurists. The discussion touched on the necessity of space travel and the imminent resolution of the world’s energy problems with solar power. After the hangout, Kurzweil got on the phone with me to explore a few issues in more detail.

WIRED: In the Google hangout you just finished, Will Smith said he had a copy of your book by his bedside because he’s been involved in a number of science fiction movies. How do you view science fiction?

RAY KURZWEIL: Science fiction is the great opportunity to speculate on what could happen. It does give me, as a futurist, scenarios. It’s not incumbent upon science fiction creators to be realistic about time frames and so on. In this movie, for example, the characters come back to Earth a thousand years later and biological evolution has moved so far that the animals are quite different. That’s not realistic. Also, there’s very often a dystopian bent to science fiction because we can perceive the dangers of science more than the benefits, and maybe that makes more dramatic storytelling. A lot of movies about artificial intelligence envision that AI’s will be very intelligent but missing some key emotional qualities of humans and therefore turn out to be very dangerous.

What’s the key to predicting the future?

I realized 30 years ago that the key to being successful is timing. I get a lot of new technology proposals, and I’d say 95% of those teams will build exactly what they claim if given the resources, but 95% of those projects will fail because the timing is wrong. I did anticipate, for instance, that search engines would start emerging. Fifteen years ago Larry Page and Sergey Brin were in exactly the right place at the right time with the right idea.

You anticipated search engines?

Yes. I wrote about that actually as early as The Age of Intelligent Machines, in the 1980s. [The book was published in 1990.]

But did you predict that you would be working for a company that started as a search engine?

That’s exactly the kind of thing you can’t predict. It would be very hard to predict that these couple of kids at Stanford would take over the world of search. But what I did discover is that if you examine the key measures of price performance and capacity of information technology, they form amazingly predictable smooth exponential curves. The price performance of computation has been rising in a very smooth exponential since the 1890 census. This has gone on through thick and thin, through war and peace, and nothing has affected it. I projected it out to 2050. In 2013, we’re exactly where we should be on that curve.

[ and that is why I immediately recognized the importance of google when it came out and immediately developed my unique nom de plume 'arclein']

What are you working on at Google?

My mission at Google is to develop natural language understanding with a team and in collaboration with other researchers at Google. Search has moved beyond just finding keywords, but it still doesn’t read all these billions of web pages and book pages for semantic content. If you write a blog post, you’ve got something to say, you’re not just creating words and synonyms. We’d like the computers to actually pick up on that semantic meaning. If that happens, and I believe that it’s feasible, people could ask more complex questions.

Are you participating in Jeff Dean’s program there to build an artificial “Google Brain?”

Well, Jeff Dean is one of my collaborators. He’s a fellow research leader. We are going to be using his systems and his techniques of deep learning. The reason I’m at Google is resources like that. Also the knowledge graph and very advanced syntactic parsing and a lot of advanced technologies that I really need for a project that really seeks to understand natural language. I can succeed at this much more readily at Google because of these technologies.

If your system really understood complex natural language, would you argue that it’s conscious?

Well, I do. I’ve had a consistent date of 2029 for that vision. And that doesn’t just mean logical intelligence. It means emotional intelligence, being funny, getting the joke, being sexy, being loving, understanding human emotion. That’s actually the most complex thing we do. That is what separates computers and humans today. I believe that gap will close by 2029.

Will we get there simply by more computation and better software, or are there currently unsolved barriers that we have to hurdle?

There are both hardware and software requirements. I believe we actually are very close to having the requisite software techniques. Partly this is being assisted by understanding how the human brain works, and we’re making exponential gains there.  We can now see inside a living brain and see individual inter-neural connections being formed and firing in real time. We can see your brain create your thoughts and thoughts create your brain. A lot of this research reveals how the mechanism of the neocortex works, which is where we do our thinking. This provides biologically inspired methods that we can emulate in our computers. We’re already doing that. The deep learning technique that I mentioned uses multilayered neural nets that are inspired by how the brain works. Using these biologically inspired models, plus all of the research that’s been done over the decades in artificial intelligence, combined with exponentially expanding hardware, we will achieve human levels within two decades.

Do we really understand at all why someone’s brain can result in such an unique expression of a human? Take the transcendent intelligence of Einstein, the creativity of Steve Jobs, or the focus of Larry Page. What made those people so special? Do you have insights into that?

I examine that very question, in fact, with regard to Einstein specifically in my recent book, How to Create a Mind.

Tell me.

There are two things. First of all, we create our brain with our thoughts. We have a limited capacity in the neocortex, estimated to be about 300 million pattern recognizers, which are organized in a hierarchy. We create that hierarchy with our own thinking. I would not explain Einstein’s brilliance based on him having 350 million or 400 million. We have approximately the same capacity. But he organized his brain to think deeply about this one subject. He was interested in the violin, but he was no Jascha Heifetz. And Jascha Heifetz had an interest in physics, but he was no Einstein. We have a capacity to do world-class work in one field. That’s part of the limited capacity of the brain, and Einstein really devoted it to this one field.

But a lot of physicists are devoted to their one field, and only one became Einstein.

I didn’t finish. The other aspect is courage to follow your own thought experiments and not fall off the horse because the conclusions are so different from your previous assumptions or the common belief of society. People are so unable to accept thinking different than their peers that they immediately drop their thought pattern when it leads to absurd conclusions. So there’s a certain courage to go with your convictions. Clearly Steve Jobs had that. He had a vision and carried it out. It’s that courage of your convictions.

What’s the biological basis for that kind of courage? If you had an infinite ability to analyze a brain, could you say, “Oh, here’s where the courage is?”

It is the neocortex, and people who fill up too much of their neocortex with concern about the approval of their peers are probably not going to be the next Einstein or Steve Jobs. [ wow – I knew that there is a reason I have never given a rat's ass regarding anyone's intellectual opinion - arclein ]

Is this something one can control?

That’s a good question. I’ve been thinking about that and also why do some people readily accept the exponential growth of information technology and its implications, and other people are very resistant to it. I make the argument that hard-wired in our brain are linear expectations, because that worked very well 1000 years ago, tracking an animal in the wild. Some people, though, can readily accept the exponential perspective when you show them the evidence, and other people don’t. I’m trying to answer the question, what accounts for that? It really isn’t accomplishment level, intelligence, education level, socio-economic status. It cuts across all of those things. Some people’s neocortexes are organized so that they can accept the implications that they see in front of them without worrying too much about the opinion of others. Can we learn that? I would imagine yes, but I don’t have data to prove that.

Since we’ve been talking about Steve Jobs, let me bring up one of his famous quotes, from his speech at Stanford. He said, “Death is very likely the single best invention of life. It’s life’s change agent.” You are very famously trying to extend your life indefinitely, so you reject that, right?

Yes. This is what I call a deathist statement, part of a millennium-old rationalization of death as a good thing. It once seemed to make sense, because up until very recently you could not make a plausibly sound argument that life could be indefinitely extended. So religion, which emerged in prescientific times, did the next best thing, which is to say, “Oh, that tragic thing? That’s really a good thing.” We rationalized that because we did have to accept it. But in my mind death is a tragedy. Our initial reaction to hearing that someone has died is a sense of profound loss of knowledge and skill and talents and relationships. It’s not the case that there are only a fixed number of positions, and if old people don’t die off, there’s no room for young people to come up with new ideas, because we’re constantly expanding knowledge. Larry Page and Sergey Brin didn’t displace anybody – they created a whole new field. We see that constantly. Knowledge is growing exponentially. It’s doubling approximately every year.

And you think that dramatically extended life is possible.

I think we’re only 15 years away from a tipping point in longevity. [ if that – arclein ]

Human Genetic Differences





 This article by Lloyd Pye tackles the one remaining conjecture regarding the emergence of modern humanity.

That conjecture has a cultural tradition behind it and additional geological support that has been largely ignored.

What it boils down to is that genetic intervention has occurred at least twice in human history. The first time occurred at least 200,000 years ago and possibly sooner. This led to the emergence of some form of modern man by 40,000 years ago.

The second event was the direct seeding of Earth with genetically prepared colonies of agricultural man along with the critical toolkit. This was done about 9000 BP with several large colonies after the effects of the Pleistocene Nonconformity had settled down and the Holocene became well established.

This obviously means real engineering of the human genome way beyond any natural explanation. That is pretty evident anyway by simple inspection but also indicative of our next step. Just how does our genome differ? As it turns out, the differences are both radical and actually complete improbabilities using any natural protocol.

It goes without saying that every geneticist has the skills to disprove the general conjecture by reviewing the data and the claims. Instead, their silence remains.

What I would like to see is this work replicated on the San, the Pygmies and the Bushmen in particular. They are the populations that may still have missing changes intact. Other populations should also be tested and cataloged against this understanding.

All other populations have long since been hybridized away.

The take home from this article is that our genome bears ample evidence of genetic manipulation whose natural provenance is a total impossibility.


What About Genetic Differences?



ye.com/interventionebook.html


Nothing supports the Intervention Theory and its Intragalactic Terraformers quite as much as the fascinating genetic differences between our human DNA and chimp and gorilla DNA; and, since its recent recovery, Neanderthal DNA; and, at some point in the future, hominoid DNA.

As we explained a few pages ago, the second chromosome in humans is a fusion of the 2nd and 3rd chromosomes in higher primates (HP).

The mainstream claims it was caused by a rare mutation called a Robertsonian translocation, which can combine chromosomes end-to-end, telomeres-to-telomeres, to somehow make them function well in a radically new configuration.

To combine two chromosomes so they can keep working is such an incredibly complex series of events that, if it were not for mainstreamers having a desperate need for that event to be considered plausible, they would laugh it out of existence.

Let’s try to follow their logic. Since all HP have 48 chromosomes (24 from each parent) it seems safe to assume any “common ancestor” (CA) of chimps and humans had 48 chromosomes.

Let’s assume two CAs have sex, and somehow in that process, the female’s egg has undergone a Robertsonian translocation mutation and its 2nd and 3rd chromosomes have fused into one.

When that mutated egg meets any normal, 24 chromosome sperm, it will not form a fertilized zygote . . . or if it does, soon after it will expire. Why? Because to replicate into more cells, each chromosome must line up with its pair from the other parent before being duplicated and pulled apart by fibers to opposite ends of the cell.



Then, the one cell splits into two. This process is called mitosis, and every one of our trillions of cells is copied in this manner, one after the other after the other, from the original one cell.

Now, if we consider a human-chimp cross, the 2nd human chromosome must line up with two chimp chromosomes. However, as the contents of the cell first duplicate and then pull apart, the intricate “dance” between them will soon end.

Why? One copy of the human 2nd chromosome and one copy each of the chimp 2nd and 3rd must somehow safely wind up at one end of the cell, while the other three copies need to be pulled to the other end before the cell can divide.

In that process the fibers become confused, and the resulting cells try to keep on replicating, but that chaos continues until the blastula expires.

So, there is no way a one-chromosome-short zygote will somehow become a viable fetus.



In addition to the above, now let’s consider the problems found in telomeres and centromeres.

Telomeres are the “caps” found at the ends of chromosomes that gradually reduce after each cell division. Think of them as a long string of “beads” on a necklace, and after each division of each body’s trillions of cells, a bead is lost.

When all of the telomeres have dropped off, the chromosomes stop replicating and the organism they support will die from advanced “old age.” Nothing can stop the slow loss of those beads.

Centromeres are segments of DNA usually located near a chromosome’s middle, and they are critical to successful cell division, which is the continual process of life that has to happen correctly, each time, every time, or things can go very, very haywire within the organism.

Now, with that in mind, let’s try to imagine what would happen if a pair of chromosomes fuse in the way the two primate chromosomes fused to create the “missing” one in humans.



The fusion puts the two central telomeres (blue) into the middle where the centromeres should be, and the new chromosome has a pair of (red) centromeres when it should only have one, and that one should be where the telomeres are.

This is a serious problem because telomeres perform a “stopping” function that is entirely inappropriate in the middle of a chromosome that is supposed to be fully functional. Uh-oh!

Even worse, the centromeres are only useful in cell division, so when that occurs there will be not one, but two places where it is happening, which will soon lead to a badly tangled mess.

Clearly, mainstreamers need multiple miracles to make this scenario plausible . . . and guess what? The exact array of miracles required has been found within human chromosome #2!

Traces of two HP telomeres are found in human chromosome #2, between bases #114,455,823 and #114,455,838. Those 15 bases are deactivated in some way that doesn’t stop the chromosome’s normal functioning. They have been neutered!

With the fused chromosomes, only the middle two telomeres are “deactivated.” The telomere at the top end and the one at the bottom end are not altered, so their crucial role in cell division (dropping “beads”) will continue unhindered. How amazing is that? How . . . coincidental?

As for leaving two centromeres where only one can function, guess what? One of those seems also to have been deactivated, so that normal cell division can proceed successfully! Wow!

The sequential precision of this incredible, one-in-trillions fusion forces us to describe it as yet another of the many miracles the mainstream always seems to be blessed with. Incredible!

The Big Kahuna of Human Genetics

While the fusion “miracles” torture credibility for anyone except mainstreamers, believe it or not we find several more in other chromosomes in the human genome! These are inversions.

An inversion can occur when a segment of any chromosome is sliced into, top and bottom, and then pulled out, inverted, and put back into its original place, but with a “flipped” orientation.



According to textbooks, inversions are caused by “ionizing radiation” that causes the genetic bonds of chromosomes to “temporarily” break loose, during which inversion occurs, followed by a reinsertion. These are rare, but verifiable.

Also consider that any two chromosomes might accidentally become “entangled,” and the result is the brief tearing loose of one segment that then inverts and moves back into its place.



The beauty of this is that every inversion is unique, and if passed on creates a landmark DNA signpost which cannot be reversed back to normal in future generations that carry it. It also works to disprove Darwin, as we shall see.

In theory, while an inversion changes the order of the alleles that comprise chromosomes and genes, the overall makeup of both will remain unharmed as long as every gene temporarily segregated from the chromosome is retained in the process of inversion and reinsertion.
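
The cut-flip-reinsert operation just described can be pictured on a toy sequence. This sketch is purely illustrative (the letters stand in for gene loci, not real alleles or positions), but it shows the key property claimed above: every locus is retained, only the order within the segment is reversed:

```python
def invert_segment(chrom, start, end):
    """Return chrom with the segment chrom[start:end] reversed,
    modeling a chromosomal inversion: the excised piece is flipped
    and reinserted at its original position."""
    return chrom[:start] + chrom[start:end][::-1] + chrom[end:]

# Toy example: invert the segment spanning loci C..F.
original = "ABCDEFGH"
inverted = invert_segment(original, 2, 6)
print(original, "->", inverted)   # ABCDEFGH -> ABFEDCGH

# Every locus survives; the inversion is a permanent, heritable
# reordering -- the "landmark DNA signpost" of the passage above.
assert sorted(inverted) == sorted(original)
```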

Now, brace yourself for this: The genome of every human carries nine of those “miraculous” inversion/insertions that are not found in any of the corresponding chimp chromosomes! They are located in these human chromosomes:

1, 4, 5, 9, 12, 15, 16, 17, and 18!

According to Darwinian evolutionary gospel, this means that at some point after the proto-humans and chimps split from their supposed common ancestor, the first of the 9 inversions occurred, eventually to be followed by 8 more.

For example’s sake, let’s assume that the first occurred in chromosome #1, at 5 mya. One of the new proto-humans carrying those fused chromosomes gives birth to a child with an inversion/insertion not carried by chimps in chromosome #1. At 5 mya. Simple enough.

Now that child must run the gamut of infant and child mortalities to reach maturity. It does, and then finds a mate. Unlike the chromosome fusion case, which won’t allow offspring with a partner having a loose chromosome, inversions can be passed on with a partner who lacks it.

In each pairing, the offspring will have a one-in-two chance of inheriting the new inversion. The same will hold true for their offspring if they carry it, so their odds of passing it to any one of their children would likewise be 50%.

Now imagine some astronomical odds. What the above means is that somehow the individual with the insertion in chromosome #1 produced a line of offspring that passed it down to every descendant member of its species to become a part of nearly 7 billion humans alive today!

Since today we all have an identical insertion in our #1 chromosome, it means the insertion had to start at some point with a mutation in one of us who somehow bequeathed it to the rest of us.
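
The spread-to-everyone dynamic being described is what population genetics calls fixation, and its rarity can be seen in a toy Wright–Fisher simulation. All parameters here (population size, trial count) are purely illustrative, not drawn from the article: a single neutral mutation appears in one individual, and we count how often it takes over the whole population rather than dying out:

```python
import random

def fixation_rate(pop_size=200, trials=2000, seed=1):
    """Toy Wright-Fisher model: a single neutral mutant copy appears
    in a population of pop_size gene copies; each generation the next
    population resamples alleles at the current mutant frequency.
    Returns the fraction of trials in which the mutant fixes."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        count = 1  # one mutant copy to start
        while 0 < count < pop_size:
            freq = count / pop_size
            count = sum(1 for _ in range(pop_size) if rng.random() < freq)
        if count == pop_size:
            fixed += 1
    return fixed / trials

# Neutral theory predicts a fixation chance of roughly 1/pop_size:
# most new mutations are simply lost to chance within a few generations.
print(f"Observed fixation rate: {fixation_rate():.4f} "
      f"(expected ~{1 / 200:.4f})")
```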

This mutation, whether it did something good for the individual who had it, or bad, or nothing at all, would, according to Darwinists, create an aberration that mushroomed out into humanity like a nuclear bomb. Now hold that thought.

For as unlikely as all that is, guess what? The mushroom cloud inversion improbability had to occur in exactly that way for every one of the eight more times it occurred! That’s right, it is, in fact, massive improbability to a power of 9!

In a few million years, on 9 separate occasions, proto-humans were born with a new and quite distinct inversion mutation that would then be passed on to every human alive on Earth today. So the odds of that occurring are enormously more than long; they are beyond imagining!

With all that said, here is the kicker, the thing that will lay you low if you’re not ready for it: Each one had to happen in a sequence! If they all occurred together, it wouldn’t be evolution.

Let’s get clear on what the mainstream insists had to happen. Inversion #1 occurred and the genetic lines of all the other proto-humans had to die out. Only its progeny would live to pass the inversion along to subsequent generations.

Next, let’s say that at 4.5 mya the inversion in chromosome #4 occurred, and, of course, that one has to occur in one of the progeny in whom the first inversion was already present.

Now only its progeny, carrying the inversions in chromosomes #1 and #4, can move forward into the future. All other proto-human lines at 4.5 mya have to die out in one way or another.

If the mainstream is right, and the 9 randomly generated inversions occurred in a Darwinian sequence of gradually accumulating mutations, it couldn’t happen any other way. However, is there another way? One that makes more sense?

Of course there is!