Tuesday, February 1, 2011

NEED FOR ORGANIC FARMING


Organic farming can be understood as an agricultural method that does not make use of chemical fertilizers and pesticides. It was introduced by Sir Albert Howard, recognized as the Father of Organic Farming, who wanted to evolve a more eco-friendly way of agriculture. It depends on various other farming methods, such as crop rotation and the use of compost, and thrives on the benefits obtained from recycling and the use of natural products. Green manure, biological pest control methods and special cultivation techniques are employed to maintain soil productivity. Presently, organic farming caters to a huge market worldwide. Read on to learn its advantages and disadvantages.

Process And Consequence Of Organic Farming

Process

  • Organic farming is more economical than other farming techniques. Its range of benefits includes reduced soil erosion (retaining fertility and reducing the need for fertilizers) and less use of water, which makes it more profitable.
  • Organic farming results in less nutrient contamination, since it stays away from artificial pesticides. It also leads to reduced carbon emissions and increased biodiversity.
  • Organic farming is capable of producing the same crop variants that conventional farming methods produce, even as it brings down expenditure on fertilizers and energy by 50%. This type of farming also retains 40% more topsoil.
  • The issue of soil management is effectively addressed by organic farming. It involves techniques like crop rotation and inter-cropping and makes extensive use of green manure, which helps even damaged soil prone to erosion and salinity to regain micronutrients.
  • This type of farming helps farmers clear weeds without synthetic chemical applications. Organic farming relies on practices like hand weeding and enhancing the soil with mulch, along with natural agents such as garlic and clove oil, corn gluten meal, table salt and borax, to get rid of weeds and insects while ensuring crop quality.
  • Farming the organic way is environment-friendly and non-toxic, as it uses green pesticides like neem, compost tea and spinosad. Together with the timely identification and removal of diseased and dying plants, these boost the crops' defenses.

Consequence

  • The UN Environment Programme conducted a study and survey on organic farming in 2008, which concluded that organic methods give smaller yields than conventional farming methods.
  • Norman Borlaug, the Father of the Modern Green Revolution, argued that because organic farming can cater to only a small consumer group, scaling it up would expand cropland at an alarming rate and destroy world ecosystems.
  • The Danish Environmental Protection Agency conducted research concluding that organic farms producing potatoes, seed grass and sugar beet barely produce half the total output of conventional farming in the same area.
  • Organic agriculture contributes little to addressing the issue of global climate change. It does reduce CO2 emissions to a certain extent, but the contribution is not dramatic.
  • In 1998, Denis Avery of the Hudson Institute publicized claims of an increased risk of E. coli infection from the consumption of organic food.

Benefits of organic fertilizer

  • Organic fertilizers have been known to improve biodiversity (soil life) and the long-term productivity of soil, and may prove a large repository for excess carbon dioxide.
  • Organic nutrients increase the abundance of soil organisms by providing organic matter and micronutrients for organisms such as fungal mycorrhizae (which aid plants in absorbing nutrients), and can drastically reduce external inputs of pesticides, energy and fertilizer, at the cost of decreased yield.

Comparison with inorganic fertilizer

  • Organic fertilizer nutrient content, solubility, and nutrient release rates are typically all lower than those of inorganic fertilizers. One study found that over a 140-day period, after 7 leachings (a toy model of these release profiles follows below):
  • Organic fertilizers had released between 25% and 60% of their nitrogen content
  • Controlled release fertilizers (CRFs) had a relatively constant rate of release
  • Soluble fertilizer released most of its nitrogen content at the first leaching
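
To make those three profiles concrete, here is a minimal sketch in Python. The rate constants are hypothetical, chosen only so the organic curve lands inside the 25-60% range reported above; none of the numbers come from the study itself.

```python
import math

# Toy model of cumulative nitrogen release over 140 days for the three
# fertilizer types described above. All rate constants are hypothetical,
# picked only to reproduce the ballpark behavior reported in the study.

def organic(t_days, k=0.004):
    # First-order (slow, asymptotic) release; ~43% released by day 140
    return 1 - math.exp(-k * t_days)

def controlled_release(t_days, rate_per_day=1 / 200):
    # Relatively constant release rate, capped at 100%
    return min(1.0, rate_per_day * t_days)

def soluble(t_days):
    # Nearly all nitrogen comes out at the first leaching
    return 0.95 if t_days >= 1 else 0.0

for t in (1, 35, 70, 140):
    print(f"day {t:3d}: organic {organic(t):6.1%}  "
          f"CRF {controlled_release(t):6.1%}  soluble {soluble(t):6.1%}")
```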

In general, the nutrients in organic fertilizer are both more dilute and much less readily available to plants. According to UC IPM, all organic fertilizers are classified as 'slow-release' fertilizers, and therefore cannot cause nitrogen burn.

Organic fertilizers from composts and other sources can be quite variable from one batch to the next. Without batch testing, the amounts of applied nutrients cannot be precisely known. Nevertheless, they are at least as effective as chemical fertilizers over longer periods of use.


Advantages Of Organic Farming

  • Organic farming is more cost-effective. It reduces production costs by about 25-30%, because it does not involve the use of synthetic fertilizers and pesticides.
  • It retains 40% more topsoil, thus increasing the crop yield up to five-fold within five years.
  • Organic farming is more profitable because it reduces water use, nutrient contamination by pesticides, and soil erosion.
  • It also enables farmers to use the soil to grow crops for a longer period of time, as soil fertility is maintained.
  • Cattle grazing on organic farmlands have been found to be less prone to diseases, and they yield healthier milk.
  • Products or foodstuffs produced by organic farming do not contain any sort of artificial flavors or preservatives.
  • Due to the absence of synthetic fertilizers and pesticides, the original nutritional content of the food is preserved.
  • Organic farming is also said to help reduce the occurrence of many ailments and to speed recovery by boosting the immune system.

Disadvantages Of Organic Farming

  • Organic farming results in smaller yields and is more labor-intensive and time-consuming.
  • Organic fertilizers tend to release nutrients slowly, and hence may need several applications before the desired results appear.
  • Farming the organic way requires deep skill and extensive knowledge.


Trace mineral depletion

  • Many inorganic fertilizers may not replace trace mineral elements in the soil, which become gradually depleted by crops. Studies have linked this depletion to a marked fall (up to 75%) in the quantities of such minerals present in fruit and vegetables.
  • In Western Australia, deficiencies of zinc, copper, manganese, iron and molybdenum were identified as limiting the growth of broad-acre crops and pastures in the 1940s and 1950s. Soils in Western Australia are very old, highly weathered and deficient in many of the major nutrients and trace elements. Since then, these trace elements have been routinely added to inorganic fertilizers used in agriculture in this state.

Negative environmental effects

  • Runoff of soil and fertilizer during a rain storm
  • An algal bloom causing eutrophication
  • The nitrogen-rich compounds found in fertilizer run-off are the primary cause of a serious depletion of oxygen in many parts of the ocean, especially in coastal zones; the resulting lack of dissolved oxygen greatly reduces the ability of these areas to sustain oceanic fauna. Visually, the water may become cloudy and discolored (green, yellow, brown, or red), a hallmark of eutrophication. Nitrate contamination of drinking water is separately linked to methemoglobinemia, known as blue baby syndrome.

Sunday, September 6, 2009

First Image Of Memories Being Made

The ability to learn and to establish new memories is essential to our daily existence and identity, enabling us to navigate through the world. A new study by researchers at the Montreal Neurological Institute and Hospital (The Neuro), McGill University and the University of California, Los Angeles has captured an image for the first time of a mechanism, specifically protein translation, that underlies long-term memory formation. The finding provides the first visual evidence that when a new memory is formed, new proteins are made locally at the synapse - the connection between nerve cells - increasing the strength of the synaptic connection and reinforcing the memory. The study, published in Science, is important for understanding how memory traces are created, and the ability to monitor this in real time will allow a detailed understanding of how memories are formed.


The increase in green fluorescence represents the imaging of local translation at synapses during long-term synaptic plasticity. (Credit: Science)

When considering what might be going on in the brain at a molecular level two essential properties of memory need to be taken into account. First, because a lot of information needs to be maintained over a long time there has to be some degree of stability. Second, to allow for learning and adaptation the system also needs to be highly flexible.

For this reason, research has focused on synapses, which are the main sites of information exchange and storage in the brain. They form a vast but constantly fluctuating network of connections whose ability to change and adapt, called synaptic plasticity, may be the fundamental basis of learning and memory.

"But, if this network is constantly changing, the question is how do memories stay put, how are they formed? It has been known for some time that an important step in long-term memory formation is "translation", or the production, of new proteins locally at the synapse, strengthening the synaptic connection in the reinforcement of a memory, which until now has never been imaged," says Dr. Wayne Sossin, neuroscientist at The Neuro and co-investigator in the study. "Using a translational reporter, a fluorescent protein that can be easily detected and tracked, we directly visualized the increased local translation, or protein synthesis, during memory formation. Importantly, this translation was synapse-specific and it required activation of the post-synaptic cell, showing that this step required cooperation between the pre and post-synaptic compartments, the parts of the two neurons that meet at the synapse. Thus highly regulated local translation occurs at synapses during long-term plasticity and requires trans-synaptic signals."

Long-term memory and synaptic plasticity require changes in gene expression and yet can occur in a synapse-specific manner. This study provides evidence that a mechanism that mediates this gene expression during neuronal plasticity involves regulated translation of localized mRNA at stimulated synapses. These findings are instrumental in establishing the molecular processes involved in long-term memory formation and provide insight into diseases involving memory impairment.

This study was funded by the National Institutes of Health, the WM Keck Foundation and the Canadian Institutes of Health Research.

Short- And Long-term Memories Require Same Gene But In Different Circuits

Why is it that you can instantly recall your own phone number but have to struggle with your mental Rolodex to remember a new number you heard a few moments ago? The two tasks "feel" different because they involve two different types of memory – long-term and short-term, respectively – that are stored very differently in the brain. The same appears to be true across the animal kingdom, even in insects such as the fruit fly.



Two different types of memory -- long-term and short-term -- are stored very differently in the human brain. The same appears to be true across the animal kingdom, even in insects such as the fruit fly. (Credit: iStockphoto/Mads Abildgaard)

Assistant Professor Josh Dubnau, Ph.D., of Cold Spring Harbor Laboratory (CSHL) and his team have uncovered an important molecular and cellular basis of this difference using the fruit fly as a model. The results of their study appear in the August 25 issue of Current Biology.

The CSHL team has found that when fruit flies learn a task, each of two different groups of neurons that are part of the center of learning and memory in the fly brain simultaneously forms its own unique memory signal, or trace. Both types of trace, the team discovered, depend on the activity of a gene called rutabaga, a version of which humans also possess. A rapidly occurring, short-lived trace in a group of neurons that make up a structure called the "gamma" (γ) lobe produces a short-term memory. A slower, long-lived trace in the "alpha-beta" (αβ) lobe fixes a long-term memory.

A tale of two lobes

Neuroscientists call the rutabaga gene a coincidence detector because it codes for an enzyme whose activity levels get a big boost when a fly perceives two stimuli that it has to learn to associate with one another. This enzymatic activity in turn signals to other genes critical for learning and memory.

A classic experiment that teaches flies to associate stimuli – and one that the CSHL team used – is to place them in a training tube attached to an electric grid, and to administer shocks through the grid right after a certain odor is piped into the tube. Flies with normal rutabaga genes learn to associate the odor with the shock and if given a choice, buzz away from the grid. But flies that carry a mutated version of rutabaga in their brains lack both short- and long-term memory, don't learn the association, and so fail to avoid the shocks.

The team has now found, however, that this total memory deficit does not occur when flies carry the mutated version in either the γ or in the αβ lobes. Flies in which normal rutabaga function was restored within the γ lobe alone regained short-term memory but not long-term memory. Restoring the gene's function in the αβ lobe alone restored long-term memory, but not short-term memory.

Long- and short-term memory involve different circuits

"This ability to independently restore either short- or long-term memory depending on where rutabaga is expressed supports the idea that there are different anatomical and circuit requirements for different stages of memory," Dubnau explains. It also challenges a previously held notion that neurons that form short-term memory are also involved in storing long-term memory.

Previous biochemical studies have suggested that rapid, short-lived signals characteristic of short-term memory cause unstable changes in a neuron's connectivity that are then stabilized by slower, long-lasting signals that help establish long-term memory in the same neuron. But anatomy studies have long hinted at different circuits. Surgical lesions that destroy different parts of an animal's brain can separately disrupt the two kinds of memory, suggesting that the two memory types might involve different neuronal populations.

"We've now used genetics as a finer scalpel than surgery to reconcile these findings," Dubnau says. His team's results suggest that biochemical signaling for both types of memory is triggered at the same time, but in different neuron sets. Memory traces form more quickly in one set than the other, but the set that lags behind consolidates the memory and stores it long-term.

Why two mechanisms?

But why might the fly brain divide up the labor of storing different memory phases this way? Dubnau's hunch is that it might be because for every stimulus it receives, the brain creates its own representation of this information. And each time this stimulus – for example, an odor – is perceived again, the brain adds to the representation and modifies it. "Such modifications might eventually disrupt the brain's ability to accurately remember that information," Dubnau speculates. "It might be better to store long-term memories in a different place where there's no such flux."

The team's next mission is to determine how much cross talk, if any, is required between the two lobes for long-term memory to get consolidated. This work will add to the progress that scientists have already made in treating memory deficits in humans with drugs aimed at molecular members of the rutabaga-signaling pathway to enhance its downstream effects.

Tuesday, September 1, 2009

Liquid Propellant Rockets




In liquid propellant based rocket engines, the fuel and the oxidizer are liquids that have to be stored separately. They are kept in separate tanks from which they are pumped through injectors into a combustion chamber. The injectors mix the fuel and oxidizer thoroughly so that the fuel burns properly. The burning of the propellants in the combustion chamber produces gases at high temperature and pressure. The gases are then forced out of a narrow nozzle at the bottom of the rocket, generating the thrust necessary for the rocket to move upward.
In addition to the weight of the propellants themselves, a liquid propellant rocket must also carry the additional weight of the storage tanks, pumps, valves, injectors and piping. This increases the overall weight of the rocket.
The additional components, i.e., the pumps, tanks and injectors, also mean that the design of liquid propellant rockets is more complicated than that of solid propellant ones.
However, liquid propellant rockets are very important because they are easy to control. The thrust, and therefore the speed, produced by the rocket can be controlled simply by regulating the amount of propellant that is pumped into the combustion chamber.
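
That throttle-by-flow-rate behavior drops out of the idealized thrust equation, F = mdot * v_e + (p_e - p_a) * A_e. Here is a minimal sketch; the flow rates and exhaust velocity are round numbers invented for illustration, not figures for any real engine.

```python
# Idealized rocket thrust: F = mdot * v_e + (p_e - p_a) * A_e.
# All numbers here are illustrative, not for any real engine.

def thrust(mdot, v_e, p_e=101_325.0, p_a=101_325.0, a_e=0.0):
    """Thrust in newtons.

    mdot : propellant mass flow into the chamber [kg/s]
    v_e  : exhaust velocity at the nozzle exit [m/s]
    p_e  : exhaust pressure at the nozzle exit [Pa]
    p_a  : ambient pressure [Pa]
    a_e  : nozzle exit area [m^2]
    """
    return mdot * v_e + (p_e - p_a) * a_e

# Throttling: halving the propellant flow roughly halves the thrust.
for mdot in (100.0, 50.0):
    print(f"mdot = {mdot:5.1f} kg/s -> thrust ~ {thrust(mdot, 3000.0) / 1000:.0f} kN")
```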

CHANDRAYAAN VIDEOS

A SPACE-WALK VIEW OF A CRATER ON THE MOON, TAKEN BY OUR PRESTIGIOUS CHANDRAYAAN. ENJOY A VIRTUAL SPACE WALK ON THE MOON BY WATCHING THIS VIDEO.

CLICK THE IMAGE BELOW TO WATCH THE VIDEO


THE SPACE TOUR

HI GUYS, HERE IS AN EXCELLENT ANIMATED SPACE ENCYCLOPEDIA. SIMPLY CLICK THE IMAGE AND EXPERIENCE A SPACE VISIT.

CLICK THE IMAGE BELOW TO START YOUR JOURNEY


PHOTOS TAKEN BY CHANDRAYAAN


WHAT HAPPENED TO CHANDRAYAAN?



Though India's first moon mission Chandrayaan-1 was officially "retired" on Sunday after losing contact with the deep space network at Byalalu on Saturday at 1.30 am, the lunar craft will continue to go around the moon for about 1,000 more days before it crashes on the surface. This means it will remain in orbit till the end of 2012, by which time preparations will be under way for the Chandrayaan-2 mission, slated for lift-off in 2013. The 1,000-day countdown to Chandrayaan-1's crash on the lunar surface began on Sunday. Isro spokesperson S Satish told TOI from Panaji on Monday that in the next 1,000 days Chandrayaan-1 will not be operating on its propellants, but will just be going around the moon on its own without doing any work.
He said that since it will be in orbit for 1,000 days, Isro will explore the possibility of using the high-powered radars of the US and Russia to locate the spacecraft. "Discussions with these two countries have been initiated," he said.

Space expert Pradeep Mohandas said that Chandrayaan-1 will be slowly pulled in by the moon's gravity until it crashes on the lunar surface after 1,000 days. "It will slowly begin to lose altitude, and after this it will spiral towards the moon's surface and crash," he said. He cited the example of a Nasa spacecraft that was abandoned years ago but is still in orbit.

Nasa and US government agencies have recommended placing retired spacecraft into an orbit at least 300 km above the geosynchronous orbit so that there is no danger of colliding with operating spacecraft.

Even as the space agency tries to trace the cause of the communication breakdown, space scientists are divided over the lifespan of the Chandrayaan-1 mission. There is a strong opinion that, right from the beginning, it should have been declared a one-year mission and not a two-year project.

Wednesday, August 26, 2009

Trifid Nebula: A Massive Star Factory

A new image of the Trifid Nebula shows just why it is a firm favorite of astronomers, amateur and professional alike. This massive star factory is so named for the dark dust bands that trisect its glowing heart. It is a rare combination of three nebula types, revealing the fury of freshly formed stars and presaging more star birth.

(The massive star factory known as the Trifid Nebula was captured in all its glory with the Wide-Field Imager camera attached to the MPG/ESO 2.2-metre telescope at ESO's La Silla Observatory in northern Chile. So named for the dark dust bands that trisect its glowing heart, the Trifid Nebula is a rare combination of three nebula types that reveal the fury of freshly formed stars and point to more star birth in the future. The field of view of the image is approximately 13 x 17 arcminutes. Credit: ESO)

Smouldering several thousand light-years away in the constellation of Sagittarius (the Archer), the Trifid Nebula presents a compelling portrait of the early stages of a star’s life, from gestation to first light. The heat and “winds” of newly ignited, volatile stars stir the Trifid’s gas and dust-filled cauldron; in time, the dark tendrils of matter strewn throughout the area will themselves collapse and form new stars.

The French astronomer Charles Messier first observed the Trifid Nebula in June 1764, recording the hazy, glowing object as entry number 20 in his renowned catalogue. Observations made about 60 years later by John Herschel of the dust lanes that appear to divide the cosmic cloud into three lobes inspired the English astronomer to coin the name “Trifid”.

Made with the Wide-Field Imager camera attached to the MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in northern Chile, this new image prominently displays the different regions of the Trifid Nebula as seen in visible light. In the bluish patch to the upper left, called a reflection nebula, gas scatters the light from nearby, Trifid-born stars. The largest of these stars shines most brightly in the hot, blue portion of the visible spectrum. This, along with the fact that dust grains and molecules scatter blue light more efficiently than red light — a property that explains why we have blue skies and red sunsets — imbues this portion of the Trifid Nebula with an azure hue.

Below, in the round, pink-reddish area typical of an emission nebula, the gas at the Trifid’s core is heated by hundreds of scorching young stars until it emits the red signature light of hydrogen, the major component of the gas, just as hot neon gas glows red-orange in illuminated signs all over the world.

The gases and dust that crisscross the Trifid Nebula make up the third kind of nebula in this cosmic cloud, known as dark nebulae, courtesy of their light-obscuring effects. Within these dark lanes, the remnants of previous star birth episodes continue to coalesce under gravity’s inexorable attraction. The rising density, pressure and temperature inside these gaseous blobs will eventually trigger nuclear fusion, and yet more stars will form.

In the lower part of this emission nebula, a finger of gas pokes out from the cloud, pointing directly at the central star powering the Trifid. This is an example of an evaporating gaseous globule, or "EGG", also seen in the Eagle Nebula, another star-forming region. At the tip of the finger, which was photographed by Hubble, a knot of dense gas has resisted the onslaught of radiation from the massive star.

Nanophysics: Serving Up Carbon Buckyballs On A Silver Platter

Scientists at Penn State University, in collaboration with institutes in the US, Finland, Germany and the UK, have figured out the long-sought structure of a layer of C60 – carbon buckyballs – on a silver surface. The results, which could help in the design of carbon nanostructure-based electronics, are reported in Physical Review Letters and highlighted in the July 27th issue of APS's online journal Physics.


Ever since the 1985 discovery of C60, this molecule, with its perfect geodesic dome shape, has fascinated physicists and chemists alike. Like a soccer ball, the molecule consists of 20 carbon hexagons and 12 carbon pentagons. The electronic properties of C60 are very unusual, and there is a massive research effort toward integrating it into molecular-scale electronic devices like transistors and logic gates.
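
As a quick aside, those pentagon and hexagon counts can be sanity-checked with Euler's polyhedron formula, V - E + F = 2. The sketch below does the counting; it is plain geometry, not anything from the Penn State paper.

```python
# Count the vertices, edges and faces of C60 from its 12 pentagons and
# 20 hexagons: each edge is shared by 2 faces, each vertex by 3 faces.

pentagons, hexagons = 12, 20
faces = pentagons + hexagons                    # 32
edges = (5 * pentagons + 6 * hexagons) // 2     # 90
vertices = (5 * pentagons + 6 * hexagons) // 3  # 60 carbon atoms

assert vertices - edges + faces == 2            # Euler's formula holds
print(vertices, edges, faces)                   # -> 60 90 32
```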

To do this, researchers need to know how the molecule forms bonds with a metal substrate, such as silver, which is commonly used as an electrode in devices. Now, Hsin-I Li, Renee Diehl, and colleagues have determined the geometry of C60 on a silver surface using a technique called low-energy electron diffraction.

They find that the silver atoms rearrange in such a way – namely, by forming a 'hole' beneath each C60 molecule – that the bonding between the carbon structure and the silver surface is reinforced.

The measurements push the limits of surface science because the molecules and the re-arrangement of the underlying silver atoms are quite complex. The measurements thus open the door to studies of a large number of technologically and biologically important molecules on surfaces.

Monday, August 24, 2009

NASA Studies Cellulose for Food and Biofuel Production


MOFFETT FIELD, Calif. – For long-duration space missions, astronauts someday will grow plants for food and the air they breathe, while transforming inedible parts of the plants into useful resources, such as biofuels, food, and chemicals.

Today, scientists at NASA Ames Research Center, Moffett Field, Calif., are working on a method to transform the wasted parts of plants into food and fuel, using what is called bionanotechnology. The research team is assembling enzyme structures with multiple functions, modeled after a natural enzyme complex that breaks down inedible plant material into usable sugars.

“Turning waste into resources is our purpose,” said Chad Paavola, a research scientist at Ames. “We’re working on a process that converts cellulose into sugar. Cellulose is a common substance found in all plants, including wheat straw, corn stalks, and woody material. Its sugar can be converted into other resources, such as food, fuels or chemicals.” Paavola is a contributing author of the paper entitled “The Rosettazyme: A Synthetic Cellulosome” published in the July 30 issue of the Journal of Biotechnology.

Cellulose is an attractive raw material for producing sugar because of its abundance. However, the sugar in cellulose is difficult to access, because it is locked up in polymer structures that are difficult to break down. In nature, enzyme complexes known as cellulosomes are among the most effective means of converting cellulose into usable sugars.

To better understand how cellulosomes work and to mimic their function, the team of NASA scientists built enzyme complexes modeled after natural cellulosomes, using protein parts from different microbes.

By placing the microbes' DNA sequences, or genetic blueprints, for these component parts into a common laboratory bacterium, the scientists were able to create a protein structure that acts as a scaffold to attach enzymes with different functions, allowing the enzymes to work together more efficiently. In this arrangement, the enzymes produce significantly more sugar from cellulose than the same enzymes produce when they are not attached to the scaffold.

The NASA scientists reached a milestone demonstrating the feasibility of duplicating nature by building multi-enzyme arrays on a self-assembling scaffold of their own design.

"This is an exciting result," said Jonathan Trent, an astrobiologist at NASA Ames and contributing author of the paper, who initiated the project. "We succeeded in assembling a complex nano-scale structure with diverse components that self-assembles and serves a useful purpose. It's like a Swiss army knife of enzymes. This brings us a small step closer to functional nano-engineering."

For further information about the research, please see: Shigenobu Mitsuzawu, Hiromi Kagawa, Yifen Li, Suzanne L. Chan, Chad D. Paavola and Jonathan D. Trent. "The Rosettazyme: A Synthetic Cellulosome," Journal of Biotechnology, July 30, 2009.

Coiled Creature in space


Coiled Creature

NASA's Spitzer Space Telescope has imaged a wild creature of the dark -- a coiled galaxy with an eye-like object at its center. The 'eye' at the center of the galaxy is actually a monstrous black hole surrounded by a ring of stars. In this color-coded infrared view from Spitzer, the area around the invisible black hole is blue and the ring of stars, white.

The galaxy, called NGC 1097 and located 50 million light-years away, is spiral-shaped like our Milky Way, with long, spindly arms of stars.

The black hole is huge, about 100 million times the mass of our sun, and is feeding off gas and dust, along with the occasional unlucky star. Our Milky Way's central black hole is tame in comparison, with a mass of a few million suns.

The ring around the black hole is bursting with new star formation. An inflow of material toward the central bar of the galaxy is causing the ring to light up with new stars. And, the galaxy's red spiral arms and the swirling spokes seen between the arms show dust heated by newborn stars. Older populations of stars scattered through the galaxy are blue. The fuzzy blue dot to the left, which appears to fit snugly between the arms, is a companion galaxy. Other dots in the picture are either nearby stars in our galaxy, or distant galaxies.

This image was taken during Spitzer's cold mission, before it ran out of liquid coolant. The observatory's warm mission is ongoing, with two infrared channels operating at about 30 kelvin (-406 degrees Fahrenheit).

Saturday, August 22, 2009

Titan: A World Much Like Earth


Saturn's moon Titan may be worlds away from Earth, but the two bodies have some characteristics in common: Wind, rain, volcanoes, tectonics and other Earth-like processes all sculpt features on Titan, but act in an environment more frigid than Antarctica.

"It is really surprising how closely Titan's surface resembles Earth's," said Rosaly Lopes, a planetary geologist at NASA's Jet Propulsion Laboratory (JPL) in Pasadena, Calif., who is presenting the results of two new studies at the General Assembly of the International Astronomical Union (IAU) in Rio de Janeiro, Brazil on Friday. "In fact, Titan looks more like the Earth than any other body in the solar system, despite the huge differences in temperature and other environmental conditions."

This view of Titan comes from observations made by the Cassini-Huygens mission, which has revealed details of Titan's geologically young surface, showing few impact craters, and featuring mountain chains, dunes and even "lakes."

The RADAR instrument on the Cassini orbiter has now allowed scientists to image a third of Titan's surface using radar beams that pierce the giant moon's thick, smoggy atmosphere. As its name implies, Titan is no small moon, with a size approaching that of Mars.

Titan gets about 1 percent the amount of sunlight Earth receives.

Titan is the only moon in the solar system known to possess a thick atmosphere, and it is the only celestial body other than Earth to have stable pools of liquid on its surface. Lakes that pool on Titan's surface are thought to be filled not with water, but with liquid hydrocarbons, such as methane and ethane.

"With an average surface temperature hovering around -180 C [-292 degrees Fahrenheit], water cannot exist on Titan except as deep-frozen ice as strong as rock," Lopes said.

On Titan, methane takes water's place in the hydrological cycle of evaporation and precipitation (rain or snow) and can appear as a gas, a liquid and a solid. Methane rain cuts channels and forms lakes on the surface and causes erosion, helping to erase the meteorite impact craters that pockmark most other rocky worlds, such as our own moon and the planet Mercury.

Other new research presented at the IAU General Assembly points to current volcanic activity on Titan, but instead of scorching hot magma, scientists think these "cryovolcanoes" eject cold slurries of water-ice and ammonia.

The ammonia signature seems to vary, which suggests that ammonia frosts are ejected and then subsequently dissipate or are covered over. Although the ammonia does not stay exposed for long, models show that it exists in Titan's interior, indicating that a process is at work delivering ammonia to the surface. RADAR imaging has indeed found structures that resemble terrestrial volcanoes near the site of suspected ammonia deposition.

New infrared images of this region, with ten times the resolution of prior mappings, will be unveiled at the IAU meeting.

"The images provide further evidence suggesting that cryovolcanism has deposited ammonia onto Titan's surface," said Robert M. Nelson, a senior research scientist, also at JPL, who presented results on Wednesday.

The presence of ammonia and hydrocarbons could have interesting implications for the possibility of life existing on Titan.

"It has not escaped our attention that ammonia, in association with methane and nitrogen, the principal species of Titan's atmosphere, closely replicates the environment at the time that life first emerged on Earth," Nelson said. "One exciting question is whether Titan's chemical processes today support a prebiotic chemistry similar to that under which life evolved on Earth?"

Yet more terrestrial-type features on Titan include dunes formed by cold winds, and mountain ranges. These mountains might have formed tectonically when Titan's crust compressed as it went into a deep freeze, in contrast to the Earth's crust, which continues to move today, producing earthquakes and rift valleys on our planet.


CLICK THE ABOVE LINK TO VIEW THE VIDEO.

Lunar Electric Rover.

    Next Generation Rover For Lunar Exploration Driving New Tech Here On Earth

    In the year 2020, NASA will be back on the moon. This time NASA will explore thousands of miles of the moon’s surface with individual missions lasting six months or longer. Just as we did during the Apollo program, NASA will be developing new concepts and technologies – concepts and technologies that will also benefit life on Earth.

    Desert RATS test

    During the 2008 Desert RATS tests at Black Point Lava Flow in Arizona, engineers, geologists and astronauts came together to test NASA's new Lunar Electric Rover. Image Credit: Regan Geeseman

    › View video

    One concept that is in NASA's current plans is a Lunar Electric Rover. This small pressurized rover is about the size of a pickup truck (with 12 wheels) and can house two astronauts for up to 14 days, with sleeping and sanitary facilities. It is designed to require little or no maintenance and to travel thousands of miles, climbing over rocks and up 40-degree slopes, during its ten-year life exploring the harsh surface of the moon. The rover frame was developed in conjunction with an off-road race truck team and was field-tested in the desert Southwest with 140 km of driving on rough lava.

    The view from the cockpit and the ability to "kneel" make it easy for astronauts to get close to objects they want to examine without having to leave the cabin. The rover's wheels can move sideways in a "crabbing" motion, one of many features that make it skilled at scrambling over rough terrain. Crab-style steering allows the vehicle to turn on a dime with a zero turning radius and to drive in any combination of forward and sideways.

    Astronauts can work in shirtsleeves in the safety of the rover's cabin, and when they need to, or want to for exploration missions, they can quickly enter and exit their spacesuits through suitports. These suitports on the rover's aft bulkhead keep the astronauts' suits outside, allowing a spacewalk to start in ten minutes and keeping moon dust out of the cabin. With the cabin removed, the chassis can be used to carry payloads or be driven by astronauts in spacesuits. This capability also affords reusability and redundancy for long-term, robust operations.


    Some of the new technologies to be developed include new batteries, new fuel cells, advanced regenerative brakes, and new tire technologies. These are the same technologies that are required for electric vehicles such as cars, tractors, and heavy equipment that the U.S. needs to reduce its dependency on fossil fuels. The prototype rover is a plug-in electric vehicle with a cutting-edge lithium-ion battery with a 125 W-hr/kg specific energy (including cells, packaging and battery management electronics). To meet NASA's requirements, the flight rover will need a 200 W-hr/kg battery, so a big technology development push is underway. It will need the same reliability, energy storage and recharge capability that will be required for an Earth-based electric sedan that can travel 500 miles before needing to be recharged.
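
    As a rough back-of-envelope illustration of what those specific-energy numbers imply, the sketch below estimates pack mass for a 500-mile sedan. The 250 Wh-per-mile consumption figure is an assumption for the sake of the example; only the 125 and 200 W-hr/kg values come from the article.

```python
# Battery pack mass for a 500-mile electric sedan at the two specific
# energies quoted above. The per-mile consumption is an assumed value.

RANGE_MILES = 500
WH_PER_MILE = 250                       # assumed sedan consumption
pack_wh = RANGE_MILES * WH_PER_MILE     # 125,000 Wh (125 kWh)

for label, wh_per_kg in [("prototype rover cell", 125), ("NASA flight target", 200)]:
    print(f"{label}: {wh_per_kg} W-hr/kg -> {pack_wh / wh_per_kg:,.0f} kg pack")
```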

    To begin the development of the Lunar Electric Rover, an initial concept was built and began testing in October 2008. This concept vehicle was invited to participate in the 2009 Presidential Inaugural Parade. This Lunar Electric Rover was built using today’s most advanced technologies. As more advanced electric vehicle technologies are developed, they will be incorporated into the design.

    The development of these more advanced technologies will not be easy, so NASA has its best engineers and scientists working with the U.S. auto and heavy equipment industries, universities, other government agencies and international partners to make the program succeed. Our success will have a great impact on developing highly reliable and efficient electric cars and trucks for Earth. For each advancement NASA makes in the Lunar Electric Rover's capabilities, the world will be one step (and 12 wheels) closer to returning to the moon and one step closer to having highly reliable and efficient electric vehicles on Earth.

First Black Holes Starved at Birth





The first black holes in the universe were born starving.

A new study found that the earliest black holes lacked nearby matter to gobble up, and so lay relatively stagnant in pockets of emptiness.

The finding, based on the most detailed computer simulations to date, counters earlier ideas that these first black holes accumulated mass quickly and ballooned into the supermassive black holes that lurk at the centers of many galaxies today.

"It has been speculated that these first black holes were seeds and accreted huge amounts of matter," said the study's leader Marcelo Alvarez, an astrophysicist at the Kavli Institute for Particle Astrophysics and Cosmology in California. "We're just finding out that it could be much more complex than that."

Alvarez and colleagues constructed a computer simulation of the early universe based on measurements of the cosmic background radiation left over from the Big Bang, which scientists think started the universe 13.7 billion years ago. The model used these starting conditions, and the laws of physics, to watch how the universe may have evolved.

The study is detailed in an upcoming issue of The Astrophysical Journal Letters. The Kavli Institute is at the Stanford Linear Accelerator Center (SLAC) National Accelerator Laboratory in Menlo Park, Calif.

Hungry, hungry black holes

In the simulated young universe, clouds of gas condensed to form the first stars. Because of the chemistry of the gas at this time, these stars were much larger than today's typical stars and weighed more than a hundred times the mass of the sun.

After a short time these massive, hot stars exhausted their internal fuel and collapsed under their own immense weight to form black holes. But because the huge stars had emitted such strong radiation when they were still alive, they had blown most nearby gas away and left very little matter to be eaten by the resulting black holes.

Rather than swiftly swallowing large chunks of matter and growing into larger black holes, the simulation showed that the universe's first black holes grew by less than one percent of their original mass over the course of a hundred million years.

The scientists don't know what eventually became of these hungry black holes.

"It is possible that they merged onto larger objects that then themselves collapsed into black holes, bringing these first black holes along for the ride," Alvarez told SPACE.com. "Another possibility is that they got kicked out of the galaxy by interactions with other objects and would just be floating around in the halo of the galaxy now."

Whatever happened, the researchers think that these trailblazing black holes may have played an important part in shaping the evolution of the first galaxies.

Even on a diet, the black holes likely produced significant amounts of X-ray radiation, which is released when mass falls onto a black hole. This radiation could have reached gas even at a distance and heated it up to temperatures too high to condense and form stars. Thus the first black holes may have prevented star formation in their vicinity.

These hot gas clouds may have carried on for millions of years without creating stars, and then eventually collapsed under their own weight to create supermassive black holes.

Though this idea is only speculation, the researchers are intrigued by the possible effects of the universe's first black holes.

"This work will likely make people rethink how the radiation from these black holes affected the surrounding environment," said John Wise of NASA Goddard Space Flight Center in Greenbelt, Md. "Black holes are not just dead pieces of matter; they actually affect other parts of the galaxy."




Tuesday, August 18, 2009

NASA Researchers Make First Discovery of Life's Building Block in Comet


NASA Goddard Space Flight Center--By Bill Steigerwald

NASA scientists have discovered glycine, a fundamental building block of life, in samples of comet Wild 2 returned by NASA's Stardust spacecraft.

"Glycine is an amino acid used by living organisms to make proteins, and this is the first time an amino acid has been found in a comet," said Dr. Jamie Elsila of NASA's Goddard Space Flight Center in Greenbelt, Md. "Our discovery supports the theory that some of life's ingredients formed in space and were delivered to Earth long ago by meteorite and comet impacts."

Example of one of the many organic particles collected and recovered by the Stardust mission.

Elsila is the lead author of a paper on this research accepted for publication in the journal Meteoritics and Planetary Science. The research will be presented during the meeting of the American Chemical Society at the Marriott Metro Center in Washington, DC, August 16.

"The discovery of glycine in a comet supports the idea that the fundamental building blocks of life are prevalent in space, and strengthens the argument that life in the universe may be common rather than rare," said Dr. Carl Pilcher, Director of the NASA Astrobiology Institute which co-funded the research.

Proteins are the workhorse molecules of life, used in everything from structures like hair to enzymes, the catalysts that speed up or regulate chemical reactions. Just as the 26 letters of the alphabet are arranged in limitless combinations to make words, life uses 20 different amino acids in a huge variety of arrangements to build millions of different proteins.

Stardust passed through dense gas and dust surrounding the icy nucleus of Wild 2 (pronounced "Vilt-2") on January 2, 2004. As the spacecraft flew through this material, a special collection grid filled with aerogel – a novel sponge-like material that's more than 99 percent empty space – gently captured samples of the comet's gas and dust. The grid was stowed in a capsule which detached from the spacecraft and parachuted to Earth on January 15, 2006. Since then, scientists around the world have been busy analyzing the samples to learn the secrets of comet formation and our solar system's history.

NASA Launches New Technology: An Inflatable Heat Shield

Inflatable aircraft are not a new idea. Hot air balloons have been around for more than two centuries and blimps are a common sight over many sports stadiums. But it's hard to imagine an inflatable spacecraft.

Inflatable Re-entry Vehicle Experiment (IRVE)

NASA engineers check out the Inflatable Re-entry Vehicle Experiment (IRVE) in the lab. Credit: NASA/Sean Smith

› IRVE Fact Sheet (pdf)

Researchers from NASA's Langley Research Center in Hampton, Va., are working to develop a new kind of lightweight inflatable spacecraft outer shell to slow and protect reentry vehicles as they blaze through the atmosphere at hypersonic speeds.

They will test a technology demonstrator from a small sounding rocket to be launched at NASA's Wallops Flight Facility at Wallops Island, Va. The launch is scheduled for Aug. 17.

The Inflatable Re-entry Vehicle Experiment, or IRVE, looks like a giant mushroom when it's inflated. For the test, the silicone-coated Kevlar aeroshell is vacuum-packed inside a 16-inch (40.6 cm) diameter cylinder, but once it unfurls and is pumped full of nitrogen it is almost 10 feet (3 m) wide.

Engineers say the concept could help land bigger objects on Mars. "We'd like to be able to land more mass on Mars," said Neil Cheatwood, IRVE's principal investigator and chief scientist of the Hypersonics Project within NASA's Fundamental Aeronautics Program. "To land more mass you have to have more drag. We need to maximize the drag area of the entry system. We want to make it as big as we can, but the limitation has been the launch vehicle diameter."

According to Cheatwood, the idea of inflatable decelerators has been around for 40 years, but there were technical issues, including concerns about whether materials could withstand the heat of re-entry. Since then materials have advanced and because of numerous Mars missions, including rovers, landers and orbiters, there's more understanding of the Martian atmosphere.

That means researchers can now test a subscale model of a compact inflatable heat shield with the help of a small two-stage rocket. The vehicle is a 50-foot Black Brant 9 that will lift IRVE outside the atmosphere to an altitude of about 130 miles (209 km). Engineers want to find out what the re-entry vehicle will do on the way down.

"The whole flight will be over in less than 20 minutes," said Mary Beth Wusk, IRVE project manager. "We separate from the rocket 90 seconds after launch and we begin inflation about three-and-a-half-minutes after that. Our critical data period after it inflates and re-enters through the atmosphere is only about 30 seconds long."

Cameras and sensors on board will document the inflation and high-speed free fall and send information to researchers on the ground.

After its brief flight IRVE will fall into the Atlantic Ocean about 90 miles down range from Wallops. No efforts will be made to retrieve the experiment or the sounding rocket.

The Inflatable Re-entry Vehicle Experiment is an example of how NASA is using its aeronautics expertise to support the development of future spacecraft. NASA's Aeronautics Research Mission Directorate in Washington funded the flight experiment as part of its hypersonics research effort.

THE UPDATE ON INFLATABLE SHIELD

UPDATE: 08.17.09

Inflatable Re-entry Vehicle Experiment (IRVE) mission



Inflatable Re-entry Vehicle Experiment (IRVE) launch
Click to enlarge

08.17.09: Black Brant 9 rocket carrying the Inflatable Re-entry Vehicle Experiment launches from NASA's Wallops Flight Facility. Credit: NASA/Sean Smith

WALLOPS ISLAND, Va. -- A successful NASA flight test has shown that a spacecraft returning to Earth can use an inflatable heat shield to slow and protect itself as it enters the atmosphere at hypersonic speeds. This was the first time anyone has successfully flown an inflatable reentry capsule, according to engineers at NASA's Langley Research Center.



The Inflatable Re-entry Vehicle Experiment, or IRVE, was vacuum-packed into a 15-inch diameter payload "shroud" and launched on a small sounding rocket from NASA's Wallops Flight Facility on Wallops Island, Va. Nitrogen inflated the 10-foot (3 m) diameter heat shield, made of several layers of silicone-coated industrial fabric, to a mushroom shape in space several minutes after liftoff.


"This was a huge success," said Mary Beth Wusk, IRVE project manager, based at Langley. "IRVE was a small-scale demonstrator. Now that we've proven the concept, we'd like to build more advanced aeroshells capable of handling higher heat rates."

The Black Brant 9 rocket took about four minutes to lift the experiment to an altitude of 131 miles (211 km). Less than a minute later it was released from its cover and started inflating on schedule at 124 miles (199.5 km) up. The inflation of the shield took less than 90 seconds.

"Everything performed well even into the subsonic range where we weren't sure what to expect," said Neil Cheatwood, IRVE principal investigator and chief scientist for the Hypersonics Project of NASA's Aeronautics Research Mission Directorate's Fundamental Aeronautics Program. "The telemetry looks good. The inflatable bladder held up well."

Inflatable heat shields hold promise for future planetary missions, according to researchers. To land more mass on Mars at higher surface elevations, for instance, mission planners need to maximize the drag area of the entry system. The larger the diameter of the aeroshell, the bigger the payload can be.
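
The payoff of inflation is easy to quantify, since drag area grows with the square of the aeroshell diameter. The sketch below uses the packed and inflated diameters quoted above; the entry mass and drag coefficient are hypothetical stand-ins, included only to show how the ballistic coefficient is formed.

```python
import math

# Drag area before and after inflation, using the diameters quoted above.
def area(d_m):
    return math.pi * d_m ** 2 / 4

packed = area(15 * 0.0254)   # 15-inch packed shroud diameter, in meters
inflated = area(3.0)         # ~10-foot (3 m) inflated aeroshell
print(f"drag area gain: {inflated / packed:.0f}x")   # ~62x

# Ballistic coefficient beta = m / (Cd * A): lower beta means the vehicle
# slows down higher in the atmosphere, which is what lets mission planners
# land more mass at higher surface elevations on Mars.
m_kg, cd = 125.0, 1.5        # hypothetical entry mass and drag coefficient
print(f"beta (inflated): {m_kg / (cd * inflated):.1f} kg/m^2")
```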

Saturday, August 15, 2009

why the rainbow is always circular

Rainbows are caused by rays of sunlight that reflect back toward the sun after hitting spherical water droplets, such as those found in a raincloud or in rain itself. The light does not reflect directly back toward the sun, but rather is offset by approximately 42 degrees, the "Rainbow Angle". Thus, you see the rainbow as a perfectly circular arc whose radius is 42 degrees and whose center is directly opposite the sun. Since blue light travels at a slightly different speed within the water droplet than red light, the angle is just a little bit different for different colors, leading to the lovely color bands in a rainbow.

Although you don't always see the same length of the rainbow's arc, all rainbows have the same apparent angular diameter, no matter how far away the water droplets are. This is true whether the droplets come from a garden hose or a distant raincloud.

Rainbows are always circular. You can get tricky with lenses and mirrors to make parabolic or oddly curved rainbows, but I doubt you would encounter such things naturally.

Rainbows are formed by small water droplets in the air splitting the sun's light into colours. Each colour has a consistent angle to the incoming light and so makes a circle (like one drawn with a compass). Interestingly, the shadow of your head is always the centre of the circle, so unless our shadows overlap, the rainbow you are looking at is always in a slightly different position to the rainbow I'm looking at. This is also why there is no "end of the rainbow": circles have no ends.



A good way to check this is to look for your plane's shadow the next time you fly somewhere. When the plane's shadow passes over a cloud, you can see a perfectly circular rainbow centred on the shadow of the plane.

Water droplets and light form the basis of all rainbows, which are circular arcs of color with a common center. Because only water and light are required for rainbows, one will see them in rain, spray, or even fog.

A raindrop acts like a prism and separates sunlight into its individual color components through refraction, as light will do when it passes from one medium to another. When the white light of the sun strikes the surface of the raindrop, the light waves are bent to varying degrees depending on their wavelength. These wavelengths are reflected on the far surface of the water drop and will bend again as they exit. If the light reflects off the droplet only once, a single rainbow occurs. If the rays bounce inside and reflect twice, two rainbows will appear: a primary and a secondary. The second one will appear fainter because there is less light energy present. It will also occur at a higher angle.

Not all the light that enters the raindrop will form a rainbow. Some of the light, which hits the droplet directly at its center, will simply pass through the other side. The rays that strike the extreme lower portions of the drop will produce the secondary bow, and those that enter at the top will produce the primary bow.

The formation of the arc was first discussed by Rene Descartes in 1637. He calculated the deviation for a ray of red light to be about 180 - 42, or 138°. Although light rays may exit the drop in more than one direction, a concentration of rays emerges near the minimum deviation from the direction of the incoming rays. Therefore the viewer sees the highest intensity looking at the rays that have minimum deviation, which form a cone with its vertex at the observer's eye and its axis passing through the Sun.

The color sequence of the rainbow is also due to refraction. It was Sir Isaac Newton, however, 30 years after Descartes, who discovered that white light is made up of different wavelengths. Red light, with the longest wavelength, bends the least, while violet, with the shortest wavelength, bends the most. The vertical angle above the horizon will be a little less than 41° for the violet (about 40°) and a little more for the red (about 42°). The secondary rainbow has an angular radius of about 50° and its color sequence is reversed from the primary. It is universally accepted that there are seven rainbow colors, which appear in the order: red, orange, yellow, green, blue, indigo, and violet. However, the rainbow is a whole continuum of colors from red to violet and even beyond the colors that the eye can see.
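
Descartes' 42° (and the slightly smaller violet angle) can be reproduced numerically: scan the angle of incidence, apply Snell's law, and keep the minimum deviation for one internal reflection. The refractive indices below are standard textbook values for water; everything else is geometry.

```python
import math

# Minimum deviation for a ray undergoing one internal reflection in a
# spherical droplet: D(i) = 2i - 4r + 180 degrees, with sin(i) = n sin(r).

def min_deviation(n, steps=20_000):
    best = 360.0
    for k in range(1, steps):
        i = math.radians(90.0 * k / steps)       # angle of incidence
        r = math.asin(math.sin(i) / n)           # Snell's law
        d = math.degrees(2 * i - 4 * r) + 180.0  # total deviation
        best = min(best, d)
    return best

for color, n in [("red", 1.331), ("violet", 1.344)]:
    d = min_deviation(n)
    print(f"{color}: deviation ~{d:.1f} deg -> rainbow angle ~{180 - d:.1f} deg")
# red:    deviation ~137.6 deg -> rainbow angle ~42.4 deg
# violet: deviation ~139.5 deg -> rainbow angle ~40.5 deg
```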

Supernumerary rainbows, faintly colored rings just inside of the primary bow, occur due to interference effects on the light rays emerging from the water droplet after one internal reflection.

No two people will see the same rainbow. If one imagines oneself standing at the center of a cone cut in half lengthwise and laid on the ground flat side down, the raindrops that bend and reflect the sunlight reaching the person's eye as a rainbow are located on the surface of the cone. A viewer standing next to the first sees a rainbow generated by a different set of raindrops along the surface of a different imaginary cone.

Using the concept of an imaginary cone again, a viewer could predict where a rainbow will appear by standing with his back to the sun and holding the cone to his eye so that the extension of the axis of the cone intersects the sun. The rainbow will appear along the surface of the cone as the circular arc of the rainbow is always in the direction opposite to that of the sun.

A rainbow lasts only about a half-hour because the conditions that create it rarely stay steady much longer than this. In many locations, spring is the prime rainbow-viewing season. According to David Ludlum, a weather historian, rainfall becomes more localized in the spring, and brief showers over limited areas are a regular feature of atmospheric behavior. This change is a result of the higher springtime sun warming the ground more effectively than it did throughout the previous winter months. This process produces local convection. These brief, irregular periods of precipitation followed by sunshine are ideal rainbow conditions. Also, the sun is low enough for much of the day to allow a rainbow to appear above the horizon - the lower the sun, the higher the top of a rainbow.

The "purity" or brightness of the colors of the rainbow depends on the size of the raindrops. Large drops, those with diameters of a few millimeters, create bright rainbows with well-defined colors; small droplets with diameters of about 0.01 mm produce rainbows of overlapping colors that appear nearly white.

For refraction to occur, the light must intersect the raindrops at an angle. Therefore no rainbows are seen at noon when the sun is directly overhead. Rainbows are more frequently seen in the afternoon because most showers occur at midday rather than in the morning. Because the horizon blocks the other half of a rainbow, a full 360° rainbow can only be viewed from an airplane.

The sky inside the arc appears brighter than the sky surrounding it because many rays emerge from a raindrop at angles smaller than the rainbow angle, while essentially no light from single internal reflections emerges at angles greater than that of the rainbow rays. Besides being concentrated within the arc, this light is white, because it is a mixture of all the wavelengths that entered the raindrop. The situation is reversed for the secondary rainbow, where the rainbow ray marks the smallest angle and many rays emerge at angles greater than it. A dark band therefore forms between the primary and secondary bows. This is known as Alexander's dark band, in honor of Alexander of Aphrodisias, who described it around A.D. 200.

A viewer with a pair of polarizing sunglasses can see that light from the rainbow is polarized. At the top of the bow, light vibrating horizontally, along the arc, is much more intense than light vibrating perpendicular to it; the difference can be as much as a factor of 20.

Although rare, a lunar rainbow can occur when a full moon is bright enough to have its light refracted by raindrops, just as happens with sunlight.



Ares I-X, ONE OF THE LARGEST ROCKETS EVER, IS PLANNED TO TAKE OFF BY OCT 31

Ares I-X: ONE OF THE LARGEST ROCKETS EVER

Ares I-X Complete

Standing tall at its fully assembled height of 327 feet, the Ares I-X is one of the largest rockets ever processed in the Vehicle Assembly Building's High Bay 3 at NASA's Kennedy Space Center, where the fifth and final "super stack" completed the vehicle.

Ares I-X approaches the height of the Apollo Program's 364-foot-tall Saturn V. Five "super stacks" make up the rocket's upper stage, which is integrated with the four-segment solid rocket booster first stage. Ares I-X is the test vehicle for the Ares I, part of the Constellation Program to return humans to the moon and beyond.

The Ares I-X flight test is currently targeted for Oct. 31.


New Development in Electronics: Spin Electronics, Design and Updates

Physicists Devise Viable Design For Spin-Based Electronics

Physicists at the University of California, San Diego have proposed a design for a semiconductor computer circuit based on the spin of electrons. They say the device would be more scalable and have greater computational capacity than conventional silicon circuits.

Diagram of spin-based electronic system developed by UCSD (Credit: Image courtesy of University of California, San Diego)

The “spintronic”—or spin-based electronic—device, described this week in the journal Nature, would extend the scope of conventional electronics by encoding information with the magnetic—or spin—state of electrons, in addition to the charge of the electrons. The researchers used a novel geometry to overcome the weakness of the magnetic signal, the current limitation to developing spintronics in silicon semiconductors.

“The breakthrough of our research is the device geometry, the way it is activated, and the way it could be integrated in electronic circuits,” said Lu J. Sham, a professor of physics at UCSD and the senior author on the paper. “All of these features are novel and our results show for the first time a spin-based semiconductor circuit.”

One advantage of spintronics is that it shrinks the size of the circuit that is needed to perform a given logic operation. The researchers say that their proposed device has other important advantages compared with conventional electronics.

“Spin-based electronic devices allow the construction of reprogrammable circuits without hindering performance,” explained Hanan Dery, a postdoctoral fellow working with Sham and the lead author on the paper. “This will allow flexible electronic devices which fit into any application while providing the best performance. For example, the same circuit can serve as an iPod, a cellular phone, a microprocessor, et cetera.”

The proposed spintronic circuit is an interconnected series of logic gates. Each logic gate consists of five magnetic contacts lying on top of a semiconductor layer. The magnetic state of each of these contacts, determined by the electrons’ spins, corresponds to the “0” and “1” in each bit of information. The logic operation is performed by moving electrons between four of the magnetic contacts and the semiconductor. The result of the operation is read by the fifth magnetic contact.
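At the logic level, the reprogrammability can be illustrated with a toy model. The sketch below is not the device physics of the Nature paper, which computes with spin accumulation between the magnetic contacts; it merely assumes a majority-vote rule in which one contact's magnetization acts as a program bit, switching the same hardware between AND and OR.

def to_spin(bit):
    # Map a logical bit to a magnetization direction: 0 -> -1, 1 -> +1.
    return 1 if bit else -1

def magnetologic_gate(a, b, program_bit):
    # Toy majority vote over the two inputs and the program contact
    # (an illustration only, not the paper's actual transfer function).
    # program_bit = 1 turns the gate into OR; program_bit = 0 into AND.
    total = to_spin(a) + to_spin(b) + to_spin(program_bit)
    return 1 if total > 0 else 0  # the fifth contact reads the sign

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "OR:", magnetologic_gate(a, b, 1),
              "AND:", magnetologic_gate(a, b, 0))

Reprogramming then amounts to rewriting one contact's magnetization rather than rewiring the circuit, which is the flexibility Dery describes above.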

The proposed device has not yet been made, but according to the researchers it should be feasible with currently available technology.


New Development in Spintronics: Spin-polarized Electrons On Demand, With A Single Electron Pump

ScienceDaily (Jan. 21, 2009) — Many hopes are pinned on spintronics. In the future it could replace electronics, which, in the race to produce ever faster computer components, must at some point reach its limits. Unlike electronics, where whole electrons are moved (a digital "one" means "an electron is present on the component", a "zero" means "no electron present"), spintronics manipulates a particular property of the electron: its spin.



Schematic. The goal of spintronics (also called spin electronics) is to systematically control and manipulate single spins in nanometer-sized semiconductor components in order to use them for information processing. (Credit: A. Müller, PTB)

For this reason, components are needed in which electrons can be injected one at a time into the semiconductor, and the spin of each single electron must be manipulable, e.g. with the aid of magnetic fields. Both are possible with a single electron pump, as scientists of the Physikalisch-Technische Bundesanstalt (PTB) in Germany, together with colleagues from Latvia, have now shown.

Electrons can do more than merely carry current and digital information. If their spin could be harnessed, many new possibilities would open up. Spin is an intrinsic angular momentum, a quantum-mechanical property that can be pictured as the electron rotating around its own axis. An electron can rotate counterclockwise or clockwise, and this rotation generates a magnetic moment. One can regard the electron as a minute magnet in which either the magnetic north or south pole "points upwards" (the spin-up or spin-down state). The electron spins in a material determine its magnetic properties and can be systematically controlled by an external magnetic field.

This is precisely the goal of spintronics (also called spin electronics): to systematically control and manipulate single spins in nanometer-sized semiconductor components in order to use them for information processing. This would have several advantages: the components would be considerably faster than those based on the transport of charge, and the process would require less energy than a comparable charge transfer with the same information content. Moreover, the value and direction of the spin provide additional degrees of freedom that could be used for representing information.

In order to manipulate spins for information processing, it is necessary to inject electrons one by one, with a predefined spin, into a semiconductor structure. This has now been achieved by researchers of the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig and the University of Latvia in Riga. In the current issue of the journal Applied Physics Letters, they present investigations of a so-called single electron pump, a semiconductor device that ejects exactly one electron per clock cycle into a semiconductor channel.

The measurements showed for the first time that such a single electron pump can also be operated reliably in high magnetic fields. For sufficiently high applied fields, the pump delivers exactly one electron with predefined spin polarization per pumping cycle; it thus delivers spin-polarized electrons virtually on demand. The robust design and the achievable clock rates in the gigahertz range make such a spin-polarized single electron pump a promising candidate for future spintronic applications.
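The resulting currents are tiny but easy to estimate: one electron per cycle means an average current equal to the elementary charge times the clock frequency. A minimal sketch; the 1 GHz figure is an assumed example matching the gigahertz range mentioned above.

# Average current of a single electron pump: I = e * f,
# since exactly one electron is transferred per clock cycle.
e = 1.602176634e-19  # elementary charge in coulombs
f = 1e9              # assumed clock frequency: 1 GHz

print(f"pumped current: {e * f * 1e9:.3f} nA")  # about 0.160 nA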

Thursday, August 13, 2009

Danger: avoid excessive mobile phone use

A Must-See Documentary on the Dangers of Cell Phone Use

Approximately 60,000 to 70,000 cell phones are sold each day in the United States. Over 110 million Americans use cell phones. And worldwide, it is estimated that approximately 1 billion people use cell phones. As the number of cell phones, cell phone towers, and other wireless antennas increase rapidly in industrialized nations, should you be concerned about the effects that regular exposure to radio frequency radiation can have on your health?

If you're not concerned about the effect that wireless devices and broadcasting antennas can have on your health, I encourage you to view "Public Exposure: DNA, Democracy and the Wireless Revolution," a documentary that provides the best overall look at the connection between radio frequency radiation and human health that I have ever come across.

The full documentary can be viewed below in two parts, courtesy of Google video and http://energyfields.org.

In case you don't have time to view this documentary, here are some highlights that I jotted down during my first viewing:

  1. Regular exposure to radio frequency radiation may interfere with the electrical fields of our cells. Common health challenges that have been linked to regular exposure to radio frequency radiation include:

    • Abnormal cell growth and damage to cellular DNA
    • Difficulty sleeping, depression, anxiety, and irritability
    • Childhood and adult leukemia
    • Eye cancer
    • Immune system suppression
    • Attention span deficit and memory loss
    • Infertility
  2. Children are at much higher risk than adults of experiencing health problems related to regular exposure to radio frequency radiation; their thinner and smaller skulls translate to greater absorption of radio frequency energy.

  3. From the early 1950s to the mid-1970s, the U.S. embassy in Moscow was purposely bombarded with radio frequency radiation 24 hours a day. The U.S. embassy workers experienced what the perpetrators identified as "Radio Frequency Sickness Syndrome."

    After some time of concentrated radio frequency radiation exposure, the American ambassador developed leukemia. The next American ambassador also developed leukemia. Blood tests performed on embassy staff members showed irreversible DNA damage.

  4. Dr. Jerry Phillips, a biochemist researcher, began studying cell phone safety for Motorola more than a decade ago. When he started generating data that indicated that cell phones have negative effects on human health, Motorola took a number of steps to delay publication of Dr. Phillips' work.

    According to Dr. Phillips, Motorola's main concerns with his data were how to handle public relations and how to spin the results in a way that was favorable to the industry.

    Dr. Phillips also indicates that the only significant money that is available to do research on cell phone safety issues is industry money. This is why he has no faith in studies that are coming out.

  5. You can use a radio/microwave detector to measure how much radio frequency radiation penetrates your living and work spaces. The detector used in the documentary is called the "Microalert Radio/Microwave Alarm."

    Silver mesh curtains and copper flat paint can block significant amounts of radio frequency radiation.

If the information provided above has you concerned, I encourage you to view the full documentary here:

In our household, we own one cell phone - it's a pay-as-you-go phone that we carry with us for emergency purposes whenever we go out. Rather than put our faith in any of the products on the market that claim to provide protection against radio frequency radiation, we feel that it's prudent to stay away from cell phones whenever possible.

Unfortunately, many of us have little control over the location of cell phone towers and other broadcasting antennas that emit powerful radio frequency waves. If you know of or discover any resources that our readers can use to locate such towers and antennas in their local areas, please share this information in the comments section below.

By increasing public awareness of this issue, we stand a greater chance of having municipal, state/provincial, and federal governments do a better job of regulating the placement of cell phone towers and antennas. Governments in Austria, Switzerland, and many Eastern European countries have already created protective standards for human exposure to radio frequency radiation. In Scotland, towers are not allowed to be located near hospitals, schools, and homes.

Please consider sharing this documentary with family members and friends. Thank you.

P.S. If you're interested in getting a simple device - called a Gauss meter - that can help you discover any EMF "hot spots" that might exist in your living and work areas, have a look at the Cell Sensor EMF Detector - it's relatively inexpensive, and it's what I use to test our home and office from time to time.