Tuesday, September 20, 2011

Energy Issues in India



            India, the third-largest coal-producing country in the world, is also the third-largest coal-consuming country, accounting for around 9% of total global coal consumption (Renewable, 2007). With 8.6% of global coal reserves, India holds the fourth-largest share in the world, behind only the United States, Russia, and China. The fast pace of India’s development over the most recent decade will soon lead to increased use of coal and other non-renewable resources, such as oil and gas. The Indian Renewable Energy Status Report predicts that for India to “provide adequate electricity to its population, it needs to more than double its current installed capacity to over 300 GW by 2017.” A rapidly expanding population requires more energy to meet its daily demands; in a country whose population already exceeds one billion people, those energy demands will be quite high.
However, India’s leaders have chosen to tackle this impending problem directly. The Eleventh Five-Year Plan, for 2007-2012, establishes a target that 10% of power-generating capacity will come from renewable sources by 2012. India has already reached this goal through efforts in solar, wind, biogas, biomass, and small hydropower projects (Arora, 2010). In 2007, wind energy accounted for 65.15% of the total energy produced by renewable energy technology. Though solar energy has yet to reach its full potential in a sun-drenched country such as India, government funding is supporting research and development, further encouraging the use of renewable energy technologies. Hydropower, which has come close to reaching its capacity in the United States, still has a prominent future in India, as the myriad streams in villages could potentially be harnessed for small-scale energy production.
Biomass, one of the most interesting sources of renewable energy, is the creation of energy from organic materials. In India, the use of cow manure as fuel is a prime example of biomass-generated renewable energy; 40% of India’s non-commercial energy comes from organic materials such as manure or wood (Arora, 2010). The vast majority of India’s population already uses organic fuel for cooking, and harnessing these ingrained habits would be to India’s great advantage. Biomass uses agricultural output and organic waste, reducing the amount of waste that could end up in a landfill while also decreasing dependence on non-renewable resources such as coal. Biomass also produces far fewer toxic byproducts than coal or gas, which can release harmful pollutants such as carbon monoxide.
Biogas, the most prevalent use of biomass for energy in India, “is obtained via an anaerobic process of digesting organic material such as animal waste, crop residues, and waste from industrial and domestic activities to produce the combustible gas methane” (Arora, 2010). This method has primarily been used in India for small-scale projects in rural areas, “as it is the cheapest and the most widely available fuel” (Biomass, 2010). Currently, 4 million small-scale plants have been installed in India, with the potential for many more. With about 28% of the world’s total cattle population, India has no shortage of fuel such as manure, which makes household use all the easier. The gas produced by digesting organic material is most often used at the family level for cooking and lighting.
Liquid biofuels, most notably ethanol and biodiesel, are primarily used to offset dependence on transportation fuels such as gasoline. As discussed in previous classes, one of the primary debates about the production of ethanol is the use of food crops for energy creation. In a world where millions go hungry every day, many of them in India, how can fuel that consumes vast quantities of food crops be justified? India, however, is now investigating the potential of non-food sources, such as sugar molasses and non-edible oilseeds. There are about 320 distilleries in India producing ethanol by fermenting molasses, and the country is also experimenting with producing liquid biofuel from other sources such as sweet sorghum, sugar beet, and sweet potatoes. There is also increasing research into the use of forest and agricultural residues, which would preserve food crops for feeding the population (Arora, 2010).
India’s investment in renewable energy technology should be commended on an international scale. Though only a few methods were touched upon in this article, a multitude of research projects and development strategies are being implemented for future use. Having previously visited a developing country, China, I can almost visualize the potential for biomass in India. With village streets strewn with trash and agricultural byproducts, methods that could reduce such unsightly and unsanitary conditions while also turning that waste into an energy resource would be highly beneficial. Based on how much India has invested in renewable energy and the outcomes thus far, the country appears to have the potential, with its abundance of natural resources, to address upcoming energy issues while also reducing its dependence on non-renewable resources.

References
Arora, D. S., et al. (2010). Indian Renewable Energy Status Report. National Renewable Energy
Biomass Consumption in India (2010). International Energy Agency. 
Renewable Energy and Energy Efficiency status in India (2007), ICLEI Report.

Thursday, September 15, 2011

India's Green Revolution

The Green Revolution: Long Term Consequences?
            Norman Borlaug, an agronomist working in Mexico, developed a strain of wheat that was partially disease-resistant, had a higher yield, and was dwarfed so that the crop would not fall over and be wasted. A winner of the Nobel Peace Prize, Borlaug launched what is now known as the “Green Revolution,” a period between the 1940s and 1970s marked by the creation of agricultural technology and an increase in research and development. Crops such as wheat and rice became more productive as farmers began using more resistant strains, known as High Yield Varieties (HYVs). Irrigation techniques were also developed, enabling more arid areas to increase agricultural productivity as well.
            The Green Revolution diffused from the West to India around the 1960s, starting in Punjab and working its way across the country. Irrigation, pesticides, technology transfer, the introduction of “semi dwarf high-yielding varieties of wheat and rice, which could yield 2 to 3 times more” than other strains, an increase in education, and the creation of a National Bank for Agriculture and Rural Development (NABARD) all contributed to India’s revolution (Swaminathan, 2010). When India was plagued by drought and famine, only about “12 million tons of wheat were produced in the country. By 1968–69, after the [high yielding varieties] had been introduced, wheat production jumped to around 16 million tons and by the early 1980s it was double that of the mid 1960s” (Baker, 2007).
            Starting in the wheat-growing lands of Northern India, the Green Revolution slowly worked its way farther south, spreading new farming technologies and irrigation practices. More often embraced by large-scale farmers willing to take risks, the HYVs greatly increased production and thus the farmers’ profits. However, there is now evidence that the Green Revolution may have sparked “a widening gap between rich and poor, even though the ‘bottom line’ was encouragingly higher than it used to be” (Baker, 2007). Though this may have been a byproduct, such income discrepancies exist all over the world, and the positive effects, such as averting hunger and famine, are well noted.
            Another possibly unexpected result of Green Revolution technology is the contamination of water supplies due to liberal use of pesticides. Villagers in Uttar Pradesh reported “increases in formerly unknown ailments such as strokes, heart disease and ‘mystery illnesses,’ particularly of children. These were attributed to the poisoning of water supplies by overuse of chemical fertilizers, insecticides and pesticides” (Baker, 2007). For the rural poor, who live in such close contact with the land, applying too many chemicals too close to their living space has proven detrimental. When harsh chemicals seep into the groundwater or drain into surface water, villagers have no other source of drinking water to rely upon. They have access neither to filtration systems nor to tap water, and often do not even know that their water is contaminated.
            As mentioned in an earlier blog, our environment directly affects the minerals and chemicals that we consume. The presence of fertilizer and pesticide chemicals in the water supply is a serious matter and cannot easily be remedied. Villagers may also be ingesting chemical residues once the crops are harvested, and consumers buying the exports will be affected as well. In time, increased buildup of chemicals on and below the soil could decrease crop yields, thus reversing the original purpose of the Green Revolution.
            Although crop yields, production, and exports dramatically increased during the Green Revolution, the new technology also brought new problems. For every issue, one must examine the long-term effects, even though the immediate benefits may be dazzling. For example, large-scale dam projects are often exciting for those living downstream, who may gain protection from flooding and an increase in available energy through hydropower. But what will happen if that dam breaks in twenty-five, fifty, or even one hundred years? What if the dam has to be removed because the technology or infrastructure has become outdated? What then will happen to the river ecosystem or the residents who once felt so well protected? Every action has a consequence, and when dealing with one’s environment, the consequence is often far more complex than first imagined. The Green Revolution saved India from widespread famine in the 1960s, but more sustainable agricultural practices must now be spread and adopted before the adverse effects begin to outweigh the benefits.

 References

Baker, K., & Jewitt, S. (2007). Evaluating 35 years of Green Revolution technology in villages of Bulandshahr district, western UP, North India. Journal of Development Studies, 43(2), 312-339. doi:10.1080/00220380601125180.

Swaminathan, M. (2010). Beyond the Green Revolution. In M. S. Swaminathan, From Green to Evergreen Revolution: Indian Agriculture: Performance and Challenges. New Delhi: Academic Foundation.

Sunday, September 11, 2011

Nitrates in Leafy Greens

Nitrogen in Leafy Greens: Hazards and Health Benefits
Different foods contain an enormous variety of nutrients and minerals essential to a long, healthy life. While growing up, children often hear about the need for calcium for strong bones, Vitamin A for good eyesight, and iron for healthy blood. The lack of these essential nutrients leads to a host of problems that, in developed countries, can easily be prevented by good nutrition during childhood. Unfortunately, good nutrition may not be easy to come by in developing nations, and the lack of essential nutrients is leading to widespread malnutrition and anemia among both children and adults. In some cases, however, too much of one mineral is consumed, leading to a whole other set of problems. Nitrate, a compound essential for protein production in plants, has been shown to be potentially carcinogenic when ingested in excess. Nitrate-rich vegetables, such as leafy greens, are significant contributors to high levels of nitrate in the body.
Nitrogen, commonly found in nature as nitrite, nitrate, or ammonia, “enters the human body through drinking water, food and air. Ingested nitrates converted to nitrite” can lead to increased absorption of sodium, increased production of oxygen, and the dilation of blood vessels (Gupta, 2008). Nitrate can react with amines and amides in the body, sometimes forming carcinogenic compounds, depending upon the pH level. Low acidity favors the creation of these carcinogenic compounds, known as N-nitroso compounds. A toxic dose of ingested nitrate is around 2-5 grams, with reportedly lethal doses at 4-50 grams. Overconsumption can result in acute toxicity symptoms such as cyanosis, severe gastroenteritis with abdominal pain, blood in urine or feces, mental depression, headache, and weakness. Cancers associated with high nitrate ingestion include colon cancer, stomach cancer, and non-Hodgkin’s lymphoma. There is also a potential correlation between type-1 diabetes and high nitrate levels in drinking water, though more research on the matter is needed (Gupta, 2008).
            Fruits and vegetables account for 70% of total nitrate intake, with drinking water accounting for 21%. Though “nitrate is present in most vegetables to a degree, the critical driver for a high-dietary exposure to nitrate is not the absolute amount of vegetables consumed but the type of vegetables, and the respective concentration of nitrate” (Anonymous, 2008). Spinach, lettuce, and rucola, all leafy greens, have been found to have the highest levels of nitrate, though other factors, such as fertilizers, also affect concentration levels. While these vegetables may increase nitrate levels in the body, their health benefits appear to outweigh the risks. Spinach is high in iron and is often promoted as a ‘super-food’ for its nutritional value. In 2008, the European Food Safety Authority Contaminants Panel concluded that “the benefits of eating fresh produce outweigh any risk posed to human health from exposure to nitrate through vegetables,” finding that only a small percentage of people eat enough leafy greens to actually experience negative effects (Anonymous, 2008).
            Though leafy greens have comparatively high levels of nitrates, how the vegetables are prepared is also an influential factor. According to the Department of Environmental Botany in New Delhi, India, “at least 50% of the nitrate can be removed by cooking vegetables in [low nitrate level] water” (Gupta, 2008). Avoiding aluminum pans and cooking utensils can also significantly reduce nitrate levels in foods, as aluminum “enhances reduction of nitrates to nitrite, and hence increases the toxicity” (Gupta, 2008). Washing, peeling, and cooking high-nitrate vegetables will generally reduce levels enough for continued, safe consumption. Thus one should still be encouraged to eat these vegetables, as it is easy to reduce nitrate levels while still gaining their other nutritional benefits.
            Certain farming practices may also reduce nitrate levels in vegetables, giving consumers even more confidence when choosing which greens to cook. New Delhi’s Department of Environmental Botany recommends harvesting plants at noon, when they contain minimum levels of nitrate. The petioles of the plants also show higher levels of nitrates than the broad leaf surface, so removing the petioles could decrease nitrate levels in the vegetables later sold to consumers. The most interesting method of decreasing nitrates in vegetables before they even reach consumers is monitoring and carefully selecting genotypes “based on their relative levels of nitrate content and nitrate reductase activity” (Gupta, 2008). Nitrate reductase is an enzyme that can significantly reduce nitrate accumulation in leafy vegetables. With an increasing number of technological solutions, as well as common, practical ones such as cooking vegetables in water, the threat of overconsumption of nitrates through vegetables is fairly low, though other factors, such as contaminated drinking water, may pose a greater threat to human health.
            
References
Anonymous, (2008, July 3). Risk to Consumers from Nitrate found in Vegetables is Minimal.
            Horticulture Week, 36.
Gupta, S. K., Gupta, R. C., Chhabra, S. K., Eskiocak, S., Gupta, A.B., and Gupta, R., (2008).
            Health Issues Related to Nitrogen Pollution in water and air. Current Science, 94 (11),
            1460-1477.

Thursday, September 1, 2011

Medical Geology in India


Medical Geology: Arsenic Poisoning

            There is often a perceived disconnect between subjects such as geology and subjects such as health or disease. One might not take the time to understand the underlying connection between the two, or might not even deem that connection important. However, geology, and more generally the environment in which one lives, directly affects one’s health. Hazards such as arsenic in groundwater can cause disease or even death when that water is ingested. In a country such as India, where the vast majority of the population lives in close contact with the environment, geological factors have a greatly magnified effect on residents’ health.

            Arsenic poisoning in eastern India and Bangladesh, classified as one of the world’s worst environmental disasters, is due to the leaching of arsenic from rocks such as those found in the Himalayas. After arsenic is separated from its original source, it “is transported either in solution or in suspension along with detrital sediment particles, by the rivers originating in the Himalayan mountains, Shillong Plateau and Bihar Plateau and flowing into the Ganga–Brahmaputra deltaic region” (Dissanayake, 2010). After being deposited as sediment in the delta, where most coastal cities are located, the arsenic is unknowingly tapped by the millions of people whose wells dip into the groundwater supply every day. These people are then more susceptible to developing cancer, cardiovascular disease, skin disorders, and many other conditions. Such is the close relationship between India’s geological environment and the health of its citizens.

            Though the geological origin of arsenic poisoning is well understood, scientists in some regions are still trying to understand exactly how the arsenic is leached. The two leading theories are that “arsenic might be desorbed and dissolved from iron oxide minerals by anaerobic groundwater or it might be derived from the dissolution of arsenic bearing sulfide minerals such as pyrite by oxygenated waters” (Bunnell, 2007). It is possible that “shallow water table aquifers and degradation of organic matter contained in the sediments caused the reduction of adsorbed arsenic associated with it,” thus releasing toxic arsenic into the groundwater. When more organic material accumulates, it takes only a few weeks for more arsenic to enter the water supply (Dissanayake, 2010).

Other countries, such as China, Taiwan, Vietnam, and Mexico, also have high rates of arsenic poisoning, though Bangladesh appears to have one of the most severe cases. The Bengal basin is one of the largest in the world, created by the water and sediments carried by the Ganges, Brahmaputra, and Meghna river systems. Arsenic is often found in fine sediment like that of the Bengal basin, contributing to the estimate that 95% of the population of Bangladesh is susceptible to arsenic poisoning, primarily through use of their wells (Dissanayake, 2010). Though the sediment may not contain huge concentrations of arsenic, it is “the vast amount of sediments involved in this process [that] makes even low levels of arsenic quite important” (Dissanayake, 2010). Also, because crops such as rice take up large amounts of contaminated water, the rice plant absorbs arsenic along with the other nutrients found in the soil. Therefore, when the crop is harvested, the population ingests arsenic through its primary food source as well as its primary water source.

Possible solutions vary and can be divided into short-term and long-term remedies. Ideas include treating surface water sources, relying more on rainwater harvesting, replacing contaminated water sources, and installing arsenic filters on groundwater pumps. These measures can be costly and inconvenient, especially for the poorer areas where contaminated sources are often located. Longer-term measures include drilling deeper tube wells to bypass the arsenic-laden groundwater, adding arsenic removal systems, and building surface water treatment plants. Long-term solutions, though more efficient and useful than short-term ones, may leave hundreds of people exposed to arsenic before they take effect or are even built. In cases where lives are at risk, especially in impoverished societies, the balance between effective solutions and cost can be difficult to maintain.

Though arsenic poisoning is just one of many problems found in countries such as India, medical geology has applications in all countries and regions. Never having thought of the field before, I was incredibly intrigued by the complex relationship between the mineral composition of our environment and the food and water that we consume. In a situation like India’s, where so many live so close to their environment, the theoretical intrigue I felt while reading the article is subdued by the realization that these case studies represent only a few of the struggles that thousands experience every day. For those of us living in the United States, fear of being poisoned by our groundwater certainly exists, but not nearly at such dramatic levels. All this goes to show that the study of the environment is not just for those who want to “save the whales,” hug trees, or live on organic farms (though the latter option sounds lovely); it is also about studying the complex relationship between humans and the environment in which we live. In some cases, such as medical geology, awareness of the environment can even provide the knowledge to find solutions and save lives.

References
Bunnell, J. E., Finkelman, R. B., Centeno, J. A., & Selinus, O. (2007). Medical Geology: A globally emerging discipline. Geologica Acta, 5(3), 273-281.

Dissanayake, C. B., Rao, C. R. M., & Chandrajith, R. (2010). Some Aspects of the Medical Geology of the Indian Subcontinent and Neighboring Regions. In O. Selinus, R. B. Finkelman, & J. A. Centeno (Eds.), Medical Geology: A Regional Synthesis. Springer.