Saturday 3 December 2011

Interior of the Sun

Regions of the Sun include the core, radiation zone, convection zone, and photosphere. Gases in the core are about 150 times as dense as water and reach temperatures as high as 16 million degrees C (29 million degrees F). The Sun’s energy is produced in the core through nuclear fusion of hydrogen atoms into helium. In the radiation zone, heat flows outward through gases that are about as dense as water. The radiation zone is cooler than the core, about 2.5 million degrees C (4.5 million degrees F). In the convection zone, churning motions of the gases carry the Sun’s energy further outward. The convection zone is slightly cooler, about 2 million degrees C (3.6 million degrees F), and less dense, about one-tenth as dense as water. The photosphere is much cooler, about 5500° C (10,000° F), and much less dense, about one-millionth that of water. The turbulence of this region is visible from Earth in the form of sunspots, solar flares, and small patches of gas called granules.
Encarta Encyclopedia
© Microsoft Corporation. All Rights Reserved.
Microsoft ® Encarta ® 2008. © 1993-2007 Microsoft Corporation. All rights reserved.
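The zone-by-zone figures quoted in the caption can be collected into a small table. The Python sketch below simply restates the temperatures and relative densities given above; the only added value is water's density (1.0 g/cm³), used as the reference.

```python
# A minimal sketch tabulating the figures quoted in the caption above.
# Zone names, temperatures, and relative densities come from the caption;
# the density of water (1.0 g/cm^3) is the standard reference value.

WATER_DENSITY = 1.0  # g/cm^3

# (zone, approximate temperature in degrees C, density relative to water)
solar_zones = [
    ("core",            16_000_000, 150.0),
    ("radiation zone",   2_500_000,   1.0),
    ("convection zone",  2_000_000,   0.1),
    ("photosphere",          5_500,   1e-6),
]

for name, temp_c, rel_density in solar_zones:
    print(f"{name:16s} ~{temp_c:>12,} deg C, "
          f"~{rel_density * WATER_DENSITY:g} g/cm^3")
```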

Paramagnetism

Liquid oxygen becomes trapped in an electromagnet’s magnetic field because oxygen (O2) is paramagnetic. Oxygen has two unpaired electrons whose magnetic moments align with external magnetic field lines. When this occurs, the O2 molecules themselves behave like tiny magnets, and become trapped between the poles of the electromagnet.
Encarta Encyclopedia
Yoav Levy/Phototake NYC
Microsoft ® Encarta ® 2008. © 1993-2007 Microsoft Corporation. All rights reserved.

Hepatitis B Virus


The hepatitis B virus (HBV) causes inflammation of the liver. The virus is recognizable under magnification by the round, infectious “Dane particles” accompanied by tube-shaped, empty viral envelopes. Symptoms of hepatitis B infection include jaundice and a flulike illness, while chronic infection can lead to serious problems such as cirrhosis and cancer of the liver.
Institute Pasteur/CNRI/Phototake NYC
Microsoft ® Encarta ® 2008. © 1993-2007 Microsoft Corporation. All rights reserved.


Viruses

Virus (life science)
I
INTRODUCTION
Virus (life science), infectious agent found in virtually all life forms, including humans, animals, plants, fungi, and bacteria. Viruses consist of genetic material—either deoxyribonucleic acid (DNA) or ribonucleic acid (RNA)—surrounded by a protective coating of protein, called a capsid, with or without an outer lipid envelope. Viruses are between 20 and 100 times smaller than bacteria and hence are too small to be seen by light microscopy. Viruses vary in size from the largest poxviruses of about 450 nanometers (about 0.000018 in) in length to the smallest polioviruses of about 30 nanometers (about 0.000001 in). Viruses are not considered free-living, since they cannot reproduce outside of a living cell; they have evolved to transmit their genetic information from one cell to another for the purpose of replication.
Viruses often damage or kill the cells that they infect, causing disease in infected organisms. A few viruses stimulate cells to grow uncontrollably and produce cancers. Although many infectious diseases, such as the common cold, are caused by viruses, there are no cures for these illnesses. The difficulty in developing antiviral therapies stems from the large number of variant viruses that can cause the same disease, as well as the inability of drugs to disable a virus without disabling healthy cells. However, the development of antiviral agents is a major focus of current research, and the study of viruses has led to many discoveries important to human health.
II
STRUCTURE AND CLASSIFICATION
Individual viruses, or virus particles, also called virions, contain genetic material, or genomes, in one of several forms. Unlike cellular organisms, in which the genes always are made up of DNA, viral genes may consist of either DNA or RNA. Like cell DNA, almost all viral DNA is double-stranded, and it can have either a circular or a linear arrangement. Almost all viral RNA is single-stranded; it is usually linear, and it may be either segmented (with different genes on different RNA molecules) or nonsegmented (with all genes on a single piece of RNA).
The viral protective shell, or capsid, can be either helical (spiral-shaped) or icosahedral (having 20 triangular sides). Capsids are composed of repeating units of one or a few different proteins. These units are called protomers or capsomers. The proteins that make up the virus particle are called structural proteins. Viruses also carry genes for making proteins that are never incorporated into the virus particle and are found only in infected cells. These viral proteins are called nonstructural proteins; they include factors required for the replication of the viral genome and the production of the virus particle.
Capsids and the genetic material (DNA or RNA) they contain are together referred to as nucleocapsids. Some virus particles consist only of nucleocapsids, while others contain additional structures.
Some icosahedral and helical animal viruses are enclosed in a lipid envelope acquired when the virus buds through host-cell membranes. Inserted into this envelope are glycoproteins that the viral genome directs the cell to make; these molecules bind virus particles to susceptible host cells.
The most elaborate viruses are the bacteriophages, which use bacteria as their hosts. Some bacteriophages resemble an insect with an icosahedral head attached to a tubular sheath. From the base of the sheath extend several long tail fibers that help the virus attach to the bacterium and inject its DNA to be replicated and to direct capsid production and virus particle assembly inside the cell.
Viroids and prions are smaller than viruses, but they are similarly associated with disease. Viroids are plant pathogens that consist only of a circular, independently replicating RNA molecule. The single-stranded RNA circle collapses on itself to form a rodlike structure. The only known mammalian pathogen that resembles plant viroids is the deltavirus (hepatitis D), which requires hepatitis B virus proteins to package its RNA into virus particles. Co-infection with hepatitis B and D can produce more severe disease than can infection with hepatitis B alone. Prions are mutated forms of a normal protein found on the surface of certain animal cells. The mutated protein, known as a prion, has been implicated in some neurological diseases such as Creutzfeldt-Jakob disease and Bovine Spongiform Encephalopathy. There is some evidence that prions resemble viruses in their ability to cause infection. Prions, however, lack the nucleic acid found in viruses.
Viruses are classified according to their type of genetic material, their strategy of replication, and their structure. The International Committee on Nomenclature of Viruses (ICNV), established in 1966, devised a scheme to group viruses into families, subfamilies, genera, and species. The ICNV report published in 1995 assigned more than 4000 viruses into 71 virus families. Hundreds of other viruses remain unclassified because of the lack of sufficient information.
III
REPLICATION
The first contact between a virus particle and its host cell occurs when an outer viral structure docks with a specific molecule on the cell surface. For example, a glycoprotein called gp120 on the surface of the human immunodeficiency virus (HIV, the cause of acquired immunodeficiency syndrome, or AIDS) virion specifically binds to the CD4 molecule found on certain human T lymphocytes (a type of white blood cell). Most cells that do not have surface CD4 molecules generally cannot be infected by HIV.
After binding to an appropriate cell, a virus must cross the cell membrane. Some viruses accomplish this goal by fusing their lipid envelope to the cell membrane, thus releasing the nucleocapsid into the cytoplasm of the cell. Other viruses must first be endocytosed (enveloped by a small section of the cell’s plasma membrane that pokes into the cell and pinches off to form a bubblelike vesicle called an endosome) before they can cross the cell membrane. Conditions in the endosome allow many viruses to change the shape of one or more of their proteins. These changes permit the virus either to fuse with the endosomal membrane or to lyse the endosome (cause it to break apart), allowing the nucleocapsid to enter the cell cytoplasm.
Once inside the cell, the virus replicates itself through a series of events. Viral genes direct the production of proteins by the host cellular machinery. The first viral proteins synthesized by some viruses are the enzymes required to copy the viral genome. Using a combination of viral and cellular components, the viral genome can be replicated thousands of times. Late in the replication cycle for many viruses, proteins that make up the capsid are synthesized. These proteins package the viral genetic material to make newly formed nucleocapsids.
To complete the virus replication cycle, viruses must exit the cell. Some viruses bud out of the cell’s plasma membrane by a process resembling reverse endocytosis. Other viruses cause the cell to lyse, thereby releasing newly formed virus particles ready to infect other cells. Still other viruses pass directly from one cell into an adjacent cell without being exposed to the extracellular environment. The virus replication cycle can be as short as a couple of hours for certain small viruses or as long as several days for some large viruses.
Some viruses kill cells by inflicting severe damage resulting in cell lysis; other viruses cause the cell to kill itself in response to virus infection. This programmed cell suicide is thought to be a host defense mechanism to eliminate infected cells before the virus can complete its replication cycle and spread to other cells. Alternatively, cells may survive virus infection, and the virus can persist for the life of its host. Virtually all people harbor harmless viruses.
Retroviruses, such as HIV, have RNA that is transcribed into DNA by the viral enzyme reverse transcriptase upon entry into the cell. (The ability of retroviruses to copy RNA into DNA earned them their name because this process is the reverse of the usual transfer of genetic information, from DNA to RNA.) The DNA form of the retrovirus genome is then integrated into the cellular DNA and is referred to as the provirus. The viral genome is replicated every time the host cell replicates its DNA and is thus passed on to daughter cells.
Hepatitis B virus can also transcribe RNA to DNA, but this virus packages the DNA version of its genome into virus particles. Unlike retroviruses, hepatitis B virus does not integrate into the host cell DNA.
IV
DISEASE
Most viral infections cause no symptoms and do not result in disease. For example, only a small percentage of individuals who become infected with Epstein-Barr virus or western equine encephalomyelitis virus ever develop disease symptoms. In contrast, most people who are infected with measles, rabies, or influenza viruses develop the disease. A wide variety of viral and host factors determine the outcome of virus infections. A small genetic variation can produce a virus with increased capacity to cause disease. Such a virus is said to have increased virulence.
Viruses can enter the body by several routes. Herpes simplex virus and poxviruses enter through the skin by direct contact with virus-containing skin lesions on infected individuals. Ebola, hepatitis B, and HIV can be contracted from infected blood products. Hypodermic needles and animal and insect bites can transmit a variety of viruses through the skin. Viruses that infect through the respiratory tract are usually transmitted by airborne droplets of mucus or saliva from infected individuals who cough or sneeze. Viruses that enter through the respiratory tract include orthomyxovirus (influenza), rhinovirus and adenovirus (common cold), and varicella-zoster virus (chicken pox). Viruses such as rotavirus, coronavirus, poliovirus, hepatitis A, and some adenoviruses enter the host through the gastrointestinal tract. Sexually transmitted viruses, such as herpes simplex, HIV, and human papilloma viruses (HPV), gain entry through the genitourinary route. Other viruses, including some adenoviruses, echoviruses, Coxsackie viruses, and herpesviruses, can infect through the eye.
Virus infections can be either localized or systemic. The path of virus spread through the body in systemic infections differs among different viruses. Following replication at the initial site of entry, many viruses are spread to their target organs by the bloodstream or the nervous system.
The particular cell type can influence the outcome of virus infection. For example, herpes simplex virus undergoes lytic replication in skin cells around the lips but can establish a latent or dormant state in neuron cell bodies (located in ganglia) for extended periods of time. During latency, the viral genome is largely dormant in the cell nucleus until a stimulus such as a sunburn causes the reactivation of latent herpesvirus, leading to the lytic replication cycle. Once reactivated, the virus travels from the ganglia back down the nerve to cause a cold sore on the lip near the original site of infection. The herpesvirus genome does not integrate into the host cell genome.
Virus-induced illnesses can be either acute, in which the patient recovers promptly, or chronic, in which the virus remains with the host or the damage caused by the virus is irreparable. For most acute viruses, the time between infection and the onset of disease can vary from three days to three weeks. In contrast, onset of AIDS following infection with HIV takes an average of 7 to 11 years.
Several human viruses are likely to be agents of cancer, which can take decades to develop. The precise role of these viruses in human cancers is not well understood, and genetic and environmental factors are likely to contribute to these diseases. But because a number of viruses have been shown to cause tumors in animal models, it is probable that many viruses have a key role in human cancers.
Some viruses—alphaviruses and flaviviruses, for example—must be able to infect more than one species to complete their life cycles. Eastern equine encephalomyelitis virus, an alphavirus, replicates in mosquitoes and is transmitted to wild birds when the mosquitoes feed. Thus, wild birds and perhaps mammals and reptiles serve as the virus reservoir, and mosquitoes serve as vectors essential to the virus life cycle by ensuring transmission of the virus from one host to another. Horses and people are accidental hosts when they are bitten by an infected mosquito, and they do not play an important role in virus transmission.
V
DEFENSE
Although viruses cannot be treated with antibiotics, which are effective only against bacteria, the body’s immune system has many natural defenses against virus infections. Infected cells produce interferons and other cytokines (soluble components that are largely responsible for regulating the immune response), which can signal adjacent uninfected cells to mount their defenses, enabling uninfected cells to impair virus replication. Some cytokines can cause a fever in response to viral infection; elevated body temperature retards the growth of some types of viruses. B lymphocytes produce specific antibodies that can bind and inactivate viruses. Cytotoxic T cells recognize virus-infected cells and target them for destruction. However, many viruses have evolved ways to circumvent some of these host defense mechanisms.
The development of antiviral therapies has been thwarted by the difficulty of generating drugs that can distinguish viral processes from cellular processes. Therefore, most treatments for viral diseases simply alleviate symptoms, such as fever, dehydration, and achiness. Nevertheless, antiviral drugs for influenza virus, herpesviruses, and HIV are available, and many others are in the experimental and developmental stages.
Prevention has been a more effective method of controlling virus infections. Viruses that are transmitted by insects or rodent excretions can be controlled with pesticides. Successful vaccines are currently available for poliovirus, influenza, rabies, adenovirus, rubella, yellow fever, measles, mumps, and chicken pox. Vaccines are prepared from killed (inactivated) virus, live (attenuated or weakened) virus, or isolated viral proteins (subunits). Each of these types of vaccines elicits an immune response while causing little or no disease, and there are advantages and disadvantages to each. (For a more complete discussion of vaccines, see the Immunization article.)
The principle of vaccination was discovered by British physician Edward Jenner. In 1796 Jenner observed that milkmaids in England who contracted the mild cowpox virus infection from their cows were protected from smallpox, a frequently fatal disease. In 1798 Jenner formally demonstrated that prior infection with cowpox virus protected those that he inoculated with smallpox virus (an experiment that would not meet today’s protocol standards because of its use of human subjects). In 1966 the World Health Organization (WHO) initiated a program to eradicate smallpox from the world. Because it was impossible to vaccinate the entire world population, the eradication plan was to identify cases of smallpox and then vaccinate all of the individuals in that vicinity. The last reported case of smallpox was in Somalia in October 1977. An important factor in the success of eradicating smallpox was that humans are the only host and there are no animal reservoirs for smallpox virus. The strain of poxvirus used for immunization against smallpox was called vaccinia. Introduction of the Salk (inactivated) and Sabin (live, attenuated) vaccines for poliovirus, developed in the 1950s by the American physician and epidemiologist Jonas Salk and the American virologist Albert Bruce Sabin, respectively, was responsible for a significant worldwide decline in paralytic poliomyelitis. However, polio has not been eradicated, partly because the virus can mutate and escape the host immune response. Influenza viruses mutate so rapidly that new vaccines are developed for distribution each year.
Viruses undergo very high rates of mutation (genetic alteration) largely because they lack the repair systems that cells have to safeguard against mutations. A high mutation rate enables the virus to continually adapt to new intracellular environments and to escape from the host immune response. Co-infection of the same cell with different related viruses allows for genetic reassortment (exchange of genome segments) and intramolecular recombination. Genetic alterations can alter virulence or allow viruses to gain access to new cell types or new animal hosts. Many scientists believe that HIV is derived from a closely related monkey virus, SIV (simian immunodeficiency virus), that acquired the ability to infect humans. Many of today’s emerging viruses may have similar histories.
VI
DISCOVERY
By the last half of the 19th century, the microbial world was known to consist of protozoa, fungi, and bacteria, all visible with a light microscope. In the 1840s, the German scientist Jacob Henle suggested that there were infectious agents too small to be seen with a light microscope, but, lacking direct proof, his hypothesis was not accepted. Although the French scientist Louis Pasteur was working to develop a vaccine for rabies in the 1880s, he did not understand the concept of a virus.
During the last half of the 19th century, several key discoveries were made that set the stage for the discovery of viruses. Pasteur is usually credited for dispelling the notion of spontaneous generation and proving that organisms reproduce new organisms. The German scientist Robert Koch, a student of Jacob Henle, and the British surgeon Joseph Lister developed techniques for growing cultures of single organisms that allowed the assignment of specific bacteria to specific diseases.
The first experimental transmission of a viral infection was accomplished in about 1880 by the German scientist Adolf Mayer, when he demonstrated that extracts from infected tobacco leaves could transfer tobacco mosaic disease to a new plant, causing spots on the leaves. Because Mayer was unable to isolate a bacterium or fungus from the tobacco leaf extracts, he considered the idea that tobacco mosaic disease might be caused by a soluble agent, but he concluded incorrectly that a new type of bacteria was likely to be the cause. The Russian scientist Dimitri Ivanofsky extended Mayer’s observation and reported in 1892 that the tobacco mosaic agent was small enough to pass through a porcelain filter known to block the passage of bacteria. He too failed to isolate bacteria or fungi from the filtered material. But Ivanofsky, like Mayer, was bound by the dogma of his times and concluded in 1903 that the filter might be defective or that the disease agent was a toxin rather than a reproducing organism.
Unaware of Ivanofsky’s results, the Dutch scientist Martinus Beijerinck, who collaborated with Mayer, repeated the filter experiment but extended this finding by demonstrating that the filtered material was not a toxin because it could grow and reproduce in the cells of the plant tissues. In his 1898 publication, Beijerinck referred to this new disease agent as a contagious living liquid—contagium vivum fluidum—initiating a 20-year controversy over whether viruses were liquids or particles.
The conclusion that viruses are particles came from several important observations. In 1917 the French-Canadian scientist Félix H. d’Hérelle discovered that viruses of bacteria, which he named bacteriophage, could make holes in a culture of bacteria. Because each hole, or plaque, developed from a single bacteriophage, this experiment provided the first method for counting infectious viruses (the plaque assay). In 1935 the American biochemist Wendell Meredith Stanley crystallized tobacco mosaic virus to demonstrate that viruses had regular shapes, and in 1939 tobacco mosaic virus was first visualized using the electron microscope.
In 1898 the German bacteriologists Friedrich August Johannes Löffler and Paul F. Frosch (both trained by Robert Koch) described foot-and-mouth disease virus as the first filterable agent of animals, and in 1900, the American bacteriologist Walter Reed and colleagues recognized yellow fever virus as the first human filterable agent. For several decades viruses were referred to as filterable agents, and gradually the term virus (Latin for “slimy liquid” or “poison”) was employed strictly for this new class of infectious agents. Through the 1940s and 1950s many critical discoveries were made about viruses through the study of bacteriophages because of the ease with which the bacteria they infect could be grown in the laboratory. Between 1948 and 1955, scientists at the National Institutes of Health (NIH) and at Johns Hopkins Medical Institutions revolutionized the study of animal viruses by developing cell culture systems that permitted the growth and study of many animal viruses in laboratory dishes.
VII
EVOLUTION
Three theories have been put forth to explain the origin of viruses. One theory suggests that viruses are derived from more complex intracellular parasites that have eliminated all but the essential features required for replication and transmission. A more widely accepted theory is that viruses are derived from normal cellular components that gained the ability to replicate autonomously. A third possibility is that viruses originated from self-replicating RNA molecules. This hypothesis is supported by the observation that RNA can code for proteins as well as carry out enzymatic functions. Thus, viroids may resemble “prehistoric” viruses.
VIII
IMPORTANCE OF VIRUSES
Because viral processes so closely resemble normal cellular processes, abundant information about cell biology and genetics has come from studying viruses. Basic scientists and medical researchers at university and hospital laboratories are working to understand viral mechanisms of action and are searching for new and better ways to treat viral illnesses. Many pharmaceutical and biotechnology companies are actively pursuing effective antiviral therapies. Viruses can also serve as tools. Because they are efficient factories for the production of viral proteins, viruses have been harnessed to produce a wide variety of proteins for industrial and research purposes. A new area of endeavor is the use of viruses for gene therapy. Because viruses are programmed to carry genetic information into cells, they have been used to replace defective cellular genes. Viruses are also being altered by genetic engineering to kill selected cell populations, such as tumor cells. The use of genetically engineered viruses for medical intervention is a relatively new field, and none of these therapies is widely available. However, this is a fast-growing area of research, and many clinical trials are now in progress. The use of genetically engineered viruses extends beyond the medical field. Recombinant insect viruses have agricultural applications and are currently being tested in field trials for their effectiveness as pesticides.

Contributed By:
J. Marie Hardwick

DNA

Deoxyribonucleic Acid
I
INTRODUCTION
Deoxyribonucleic Acid (DNA), genetic material of all cellular organisms and most viruses. DNA carries the information needed to direct protein synthesis and replication. Protein synthesis is the production of the proteins needed by the cell or virus for its activities and development. Replication is the process by which DNA copies itself for each descendant cell or virus, passing on the information needed for protein synthesis. In most cellular organisms, DNA is organized on chromosomes located in the nucleus of the cell.
II
STRUCTURE
A molecule of DNA consists of two strands, each composed of a large number of chemical compounds, called nucleotides, linked together to form a chain. These chains are arranged like a ladder that has been twisted into the shape of a winding staircase, called a double helix. Each nucleotide consists of three units: a sugar molecule called deoxyribose, a phosphate group, and one of four different nitrogen-containing compounds called bases. The four bases are adenine (A), guanine (G), thymine (T), and cytosine (C). The deoxyribose molecule occupies the center position in the nucleotide, flanked by a phosphate group on one side and a base on the other. The phosphate group of each nucleotide is also linked to the deoxyribose of the adjacent nucleotide in the chain. These linked deoxyribose-phosphate subunits form the parallel side rails of the ladder. The bases face inward toward each other, forming the rungs of the ladder.
The nucleotides in one DNA strand have a specific association with the corresponding nucleotides in the other DNA strand. Because of the chemical affinity of the bases, nucleotides containing adenine are always paired with nucleotides containing thymine, and nucleotides containing cytosine are always paired with nucleotides containing guanine. The complementary bases are joined to each other by weak chemical bonds called hydrogen bonds.
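This pairing rule can be sketched in a few lines of Python. The function name and example sequence below are invented for illustration, and the antiparallel orientation of the two strands is ignored for simplicity.

```python
# A minimal sketch of the base-pairing rule described above: A pairs with T
# and C pairs with G. Given one strand, this returns the base-paired partner.

PAIRING = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complementary_strand(strand: str) -> str:
    """Return the base-paired partner of a DNA strand (orientation ignored)."""
    return "".join(PAIRING[base] for base in strand)

print(complementary_strand("ATGCCGTA"))  # -> TACGGCAT
```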
In 1953 American biochemist James D. Watson and British biophysicist Francis Crick published the first description of the structure of DNA. Their model proved to be so important for the understanding of protein synthesis, DNA replication, and mutation that they were awarded the 1962 Nobel Prize for physiology or medicine for their work.
III
PROTEIN SYNTHESIS
DNA carries the instructions for the production of proteins. A protein is composed of smaller molecules called amino acids, and the structure and function of the protein is determined by the sequence of its amino acids. The sequence of amino acids, in turn, is determined by the sequence of nucleotide bases in the DNA. A sequence of three nucleotide bases, called a triplet, is the genetic code word, or codon, that specifies a particular amino acid. For instance, the triplet GAC (guanine, adenine, and cytosine) is the codon for the amino acid leucine, and the triplet CAG (cytosine, adenine, and guanine) is the codon for the amino acid valine. A protein consisting of 100 amino acids is thus encoded by a DNA segment consisting of 300 nucleotides. Of the two polynucleotide chains that form a DNA molecule, only one strand contains the information needed for the production of a given amino acid sequence. The other strand aids in replication.
Protein synthesis begins with the separation of a DNA molecule into two strands. In a process called transcription, a section of one strand acts as a template, or pattern, to produce a new strand called messenger RNA (mRNA). The mRNA leaves the cell nucleus and attaches to the ribosomes, specialized cellular structures that are the sites of protein synthesis. Amino acids are carried to the ribosomes by another type of RNA, called transfer RNA (tRNA). In a process called translation, the amino acids are linked together in a particular sequence, dictated by the mRNA, to form a protein.
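The flow from DNA template to mRNA to amino acids can be modeled as simple string operations. The sketch below is illustrative only: the template sequence is invented, and the codon table lists just four of the 64 codons of the standard genetic code (AUG = methionine, UUU = phenylalanine, AAA = lysine, UAA = stop).

```python
# A minimal sketch of transcription and translation as described above.
# The DNA template strand is transcribed into mRNA (A->U, T->A, G->C, C->G),
# and the mRNA is then read three bases (one codon) at a time.

TRANSCRIBE = {"A": "U", "T": "A", "G": "C", "C": "G"}

CODON_TABLE = {            # tiny subset of the standard genetic code
    "AUG": "Met",          # methionine (start)
    "UUU": "Phe",          # phenylalanine
    "AAA": "Lys",          # lysine
    "UAA": "STOP",
}

def transcribe(template_dna: str) -> str:
    return "".join(TRANSCRIBE[base] for base in template_dna)

def translate(mrna: str) -> list[str]:
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

mrna = transcribe("TACAAATTTATT")   # illustrative 12-base template strand
print(mrna)                         # AUGUUUAAAUAA
print(translate(mrna))              # ['Met', 'Phe', 'Lys']
```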
A gene is a sequence of DNA nucleotides that specify the order of amino acids in a protein via an intermediary mRNA molecule. Substituting one DNA nucleotide with another containing a different base causes all descendant cells or viruses to have the altered nucleotide base sequence. As a result of the substitution, the sequence of amino acids in the resulting protein may also be changed. Such a change in a DNA molecule is called a mutation. Most mutations are the result of errors in the replication process. Exposure of a cell or virus to radiation or to certain chemicals increases the likelihood of mutations.
IV
REPLICATION
In most cellular organisms, replication of a DNA molecule takes place in the cell nucleus and occurs just before the cell divides. Replication begins with the separation of the two polynucleotide chains, each of which then acts as a template for the assembly of a new complementary chain. As the old chains separate, each nucleotide in the two chains attracts a complementary nucleotide that has been formed earlier by the cell. The nucleotides are joined to one another by hydrogen bonds to form the rungs of a new DNA molecule. As the complementary nucleotides are fitted into place, an enzyme called DNA polymerase links them together by bonding the phosphate group of one nucleotide to the sugar molecule of the adjacent nucleotide, forming the side rail of the new DNA molecule. This process continues until a new polynucleotide chain has been formed alongside the old one, forming a new double-helix molecule.
V
TOOLS AND PROCEDURES
Several tools and procedures are used by scientists to study and manipulate DNA. Specialized enzymes, called restriction enzymes, found in bacteria act like molecular scissors to cut the phosphate backbones of DNA molecules at specific base sequences. Strands of DNA that have been cut with restriction enzymes are left with single-stranded tails that are called sticky ends, because they can easily realign with tails from certain other DNA fragments. Scientists take advantage of restriction enzymes and the sticky ends generated by these enzymes to carry out recombinant DNA technology, or genetic engineering. This technology involves removing a specific gene from one organism and inserting the gene into another organism.
Another tool for working with DNA is a procedure called polymerase chain reaction (PCR). This procedure uses the enzyme DNA polymerase to make copies of DNA strands in a process that mimics the way in which DNA replicates naturally within cells. Scientists use PCR to obtain vast numbers of copies of a given segment of DNA.
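At heart, PCR amplification is repeated doubling: each thermal cycle can copy every strand present. The sketch below works out that arithmetic under the idealized assumption of perfect doubling; real reactions are less efficient.

```python
# A minimal sketch of PCR amplification arithmetic. Assuming each cycle
# doubles every copy of the target segment (an idealization), the copy
# number after n cycles is 2**n times the starting amount.

def pcr_copies(initial_copies: int, cycles: int) -> int:
    return initial_copies * 2 ** cycles

for n in (10, 20, 30):
    print(f"{n} cycles: {pcr_copies(1, n):,} copies from a single template")
# 30 idealized cycles yield roughly a billion copies from one template
```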
DNA fingerprinting, also called DNA typing, makes it possible to compare samples of DNA from various sources in a manner that is analogous to the comparison of fingerprints. In this procedure, scientists use restriction enzymes to cleave a sample of DNA into an assortment of fragments. Solutions containing these fragments are placed at the surface of a gel to which an electric current is applied. The electric current causes the DNA fragments to move through the gel. Because smaller fragments move more quickly than larger ones, this process, called electrophoresis, separates the fragments according to their size. The fragments are then marked with probes and exposed on X-ray film, where they form the DNA fingerprint—a pattern of characteristic black bars that is unique for each type of DNA.
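The cutting-and-sorting steps of this procedure can be mimicked in a short sketch. The DNA sequence below is invented; GAATTC is the recognition site of the restriction enzyme EcoRI, though for simplicity the sketch cuts at the start of each site and ignores the sticky ends a real digestion leaves.

```python
# A minimal sketch of restriction digestion followed by size ordering,
# mimicking how electrophoresis separates fragments (smaller ones move
# faster through the gel).

def digest(sequence: str, site: str) -> list[str]:
    """Split a DNA sequence wherever the recognition site occurs."""
    fragments, start = [], 0
    pos = sequence.find(site)
    while pos != -1:
        fragments.append(sequence[start:pos])
        start = pos
        pos = sequence.find(site, pos + 1)
    fragments.append(sequence[start:])
    return [f for f in fragments if f]

dna = "ATTGGAATTCCGATACGGAATTCTTAGC"      # illustrative sequence
fragments = digest(dna, "GAATTC")          # EcoRI recognition site

for frag in sorted(fragments, key=len):    # smallest (fastest) first
    print(f"{len(frag):3d} bases  {frag}")
```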
A procedure called DNA sequencing makes it possible to determine the precise order, or sequence, of nucleotide bases within a fragment of DNA. Most versions of DNA sequencing use a technique called primer extension, developed by British molecular biologist Frederick Sanger. In primer extension, specific pieces of DNA are replicated and modified, so that each DNA segment ends in a fluorescent form of one of the four nucleotide bases. Modern DNA sequencers, pioneered by American molecular biologist Leroy Hood, incorporate both lasers and computers. Scientists have completely sequenced the genetic material of several microorganisms, including the bacterium Escherichia coli.
In 1998 scientists achieved the milestone of sequencing the complete genome of a multicellular organism—a roundworm identified as Caenorhabditis elegans. The Human Genome Project, an international research collaboration, was established to determine the sequence of all of the 3 billion nucleotide base pairs that make up the human genetic material. In 2003 scientists completed the sequencing of the human genome. The project identified nearly all of the estimated 20,000 to 25,000 genes in the nucleus of a human cell. The project also mapped the location of these genes on the 23 pairs of human chromosomes.
An instrument called an atomic force microscope enables scientists to manipulate the three-dimensional structure of DNA molecules. This microscope involves laser beams that act like tweezers—attaching to the ends of a DNA molecule and pulling on them. By manipulating these laser beams, scientists can stretch, or uncoil, fragments of DNA. This work is helping reveal how DNA changes its three-dimensional shape as it interacts with enzymes.
VI
APPLICATIONS
Research into DNA has had a significant impact on medicine. Through recombinant DNA technology, scientists can modify microorganisms so that they become so-called factories that produce large quantities of medically useful drugs. This technology is used to produce insulin, which is a drug used by diabetics, and interferon, which is used by some cancer patients. Studies of human DNA are revealing genes that are associated with specific diseases, such as cystic fibrosis and breast cancer. This information is helping physicians to diagnose various diseases, and it may lead to new treatments. For example, physicians are using a technology called chimeraplasty, which involves a synthetic molecule containing both DNA and RNA strands, in an effort to develop a treatment for a form of hemophilia.
Forensic science uses techniques developed in DNA research to identify individuals who have committed crimes. DNA from semen, skin, or blood taken from the crime scene can be compared with the DNA of a suspect, and the results can be used in court as evidence.
DNA has helped taxonomists determine evolutionary relationships among animals, plants, and other life forms. Closely related species have more similar DNA than do species that are distantly related. One surprising finding to emerge from DNA studies is that vultures of the Americas are more closely related to storks than to the vultures of Europe, Asia, or Africa (see Classification).
Techniques of DNA manipulation are used in farming, in the form of genetic engineering and biotechnology. Strains of crop plants to which genes have been transferred may produce higher yields and may be more resistant to insects. Cattle have been similarly treated to increase milk and beef production, as have hogs, to yield more meat with less fat.
VII
SOCIAL ISSUES
Despite the many benefits offered by DNA technology, some critics argue that its development should be monitored closely. One fear raised by such critics is that DNA fingerprinting could provide a means for employers to discriminate against members of various ethnic groups. Critics also fear that studies of people’s DNA could permit insurance companies to deny health insurance to those people at risk for developing certain diseases. The potential use of DNA technology to alter the genes of embryos is a particularly controversial issue.
The use of DNA technology in agriculture has also sparked controversy. Some people question the safety, desirability, and ecological impact of genetically altered crop plants. In addition, animal rights groups have protested against the genetic engineering of farm animals.
Despite these and other areas of disagreement, many people agree that DNA technology offers a mixture of benefits and potential hazards. Many experts also agree that an informed public can help assure that DNA technology is used wisely.

Common Molecules



Molecules are made up of specific combinations of atoms. Familiar substances may theoretically be divided into single molecules, as modeled here, but no further. Like a strict recipe in which atoms are the ingredients, each molecule has a chemical formula. If any ingredients are subtracted or changed, the molecule becomes something completely different.
Encarta Encyclopedia
© Microsoft Corporation. All Rights Reserved.
Microsoft ® Encarta ® 2008. © 1993-2007 Microsoft Corporation. All rights reserved.

Television Picture Tube

A color television picture tube contains three electron guns, one corresponding to each of the three primary colors of light—red, green, and blue. Electromagnets direct the beams of electrons emerging from these guns to continuously scan the screen. As the electrons strike red, green, and blue phosphor dots on the screen, they make the dots glow. A screen with holes in it, called a shadowmask, ensures that each electron beam only strikes phosphor dots of its corresponding color. The glow of all the dots together forms the television picture.
Encarta Encyclopedia
© Microsoft Corporation. All Rights Reserved.
Microsoft ® Encarta ® 2008. © 1993-2007 Microsoft Corporation. All rights reserved.

Thursday 1 December 2011

Electromagnetic Radiation

I
INTRODUCTION
Electromagnetic Radiation, energy waves produced by the oscillation or acceleration of an electric charge. Electromagnetic waves have both electric and magnetic components. Electromagnetic radiation can be arranged in a spectrum that extends from waves of extremely high frequency and short wavelength to extremely low frequency and long wavelength (see Wave Motion). Visible light is only a small part of the electromagnetic spectrum. In order of decreasing frequency, the electromagnetic spectrum consists of gamma rays, hard and soft X rays, ultraviolet radiation, visible light, infrared radiation, microwaves, and radio waves.
II
PROPERTIES
There are three phenomena through which energy can be transmitted: electromagnetic radiation, conduction, and convection (see Heat Transfer). Unlike conduction and convection, electromagnetic waves need no material medium for transmission. Thus, light and radio waves can travel through interplanetary and interstellar space from the Sun and stars to Earth. Regardless of the frequency, wavelength, or method of propagation, electromagnetic waves travel at a speed of 3 × 10¹⁰ cm (186,272 mi) per second in a vacuum. All the components of the electromagnetic spectrum, regardless of frequency, also have in common the typical properties of wave motion, including diffraction and interference. The wavelengths range from millionths of a centimeter to many kilometers. The wavelength and frequency of electromagnetic waves are important in determining heating effect, visibility, penetration, and other characteristics of the electromagnetic radiation.
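For any electromagnetic wave, speed, wavelength, and frequency are linked by the standard relation c = wavelength × frequency (not stated explicitly above). The short sketch below applies it to two illustrative frequencies.

```python
# Wavelength from frequency, using the speed quoted above
# (3 x 10^10 cm/s, i.e. 3 x 10^8 m/s). Example frequencies are illustrative.

C = 3.0e8  # speed of light in a vacuum, m/s

def wavelength_m(frequency_hz: float) -> float:
    return C / frequency_hz

print(f"100 MHz radio wave : {wavelength_m(1.0e8):.2f} m")           # ~3 m
print(f"Green light        : {wavelength_m(5.5e14) * 1e9:.0f} nm")   # ~545 nm
```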
III
THEORY
British physicist James Clerk Maxwell laid out the theory of electromagnetic waves in a series of papers published in the 1860s. He analyzed mathematically the theory of electromagnetic fields and predicted that visible light was an electromagnetic phenomenon.
Physicists had known since the early 19th century that light is propagated as a transverse wave (a wave in which the vibrations move in a direction perpendicular to the direction of the advancing wave front). They assumed, however, that the wave required some material medium for its transmission, so they postulated an extremely diffuse substance, called ether, as the unobservable medium. Maxwell's theory made such an assumption unnecessary, but the ether concept was not abandoned immediately, because it fit in with the Newtonian concept of an absolute space-time frame for the universe. A famous experiment conducted by the American physicist Albert Abraham Michelson and the American chemist Edward Williams Morley in the late 19th century served to dispel the ether concept and was important in the development of the theory of relativity. This work led to the realization that the speed of electromagnetic radiation in a vacuum is an invariant.
IV
QUANTA OF RADIATION
At the beginning of the 20th century, however, physicists found that the wave theory did not account for all the properties of radiation. In 1900 the German physicist Max Planck demonstrated that the emission and absorption of radiation occur in finite units of energy, known as quanta. In 1905, the German-born American physicist Albert Einstein was able to explain some puzzling experimental results on the external photoelectric effect by postulating that electromagnetic radiation can behave like a particle (see Quantum Theory).
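Planck's hypothesis can be stated compactly: radiation is emitted and absorbed in discrete packets whose energy is E = h × f, where h is Planck's constant. The sketch below evaluates this for an illustrative frequency.

```python
# Energy of a single quantum (photon) from its frequency, E = h * f.
# The example frequency (green light, ~5.5 x 10^14 Hz) is illustrative.

PLANCK_H = 6.626e-34  # Planck's constant, joule-seconds

def photon_energy_joules(frequency_hz: float) -> float:
    return PLANCK_H * frequency_hz

print(f"{photon_energy_joules(5.5e14):.2e} J")   # ~3.6e-19 J per quantum
```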
Other phenomena, which occur in the interaction between radiation and matter, can also be explained only by the quantum theory. Thus, modern physicists were forced to recognize that electromagnetic radiation can sometimes behave like a particle, and sometimes behave like a wave. The parallel concept—that matter also exhibits the same duality of having particlelike and wavelike characteristics—was developed in 1923 by the French physicist Louis Victor, Prince de Broglie.

Magnetism

I
INTRODUCTION
Magnetism, an aspect of electromagnetism, one of the fundamental forces of nature. Magnetic forces are produced by the motion of charged particles such as electrons, indicating the close relationship between electricity and magnetism. The unifying frame for these two forces is called electromagnetic theory (see Electromagnetic Radiation). The most familiar evidence of magnetism is the attractive or repulsive force observed to act between magnetic materials such as iron. More subtle effects of magnetism, however, are found in all matter. In recent times these effects have provided important clues to the atomic structure of matter.
II
HISTORY OF STUDY
The phenomenon of magnetism has been known since ancient times. The mineral lodestone (see Magnetite), an oxide of iron that has the property of attracting iron objects, was known to the Greeks, Romans, and Chinese. When a piece of iron is stroked with lodestone, the iron itself acquires the same ability to attract other pieces of iron. The magnets thus produced are polarized—that is, each has two sides or ends called north-seeking and south-seeking poles. Like poles repel one another, and unlike poles attract.
The compass was first used for navigation in the West some time after AD 1200. In the 13th century, important investigations of magnets were made by the French scholar Petrus Peregrinus. His discoveries stood for nearly 300 years, until the English physicist and physician William Gilbert published his book Of Magnets, Magnetic Bodies, and the Great Magnet of the Earth in 1600. Gilbert applied scientific methods to the study of electricity and magnetism. He pointed out that the Earth itself behaves like a giant magnet, and through a series of experiments, he investigated and disproved several incorrect notions about magnetism that were accepted as being true at the time. Subsequently, in 1750, the English geologist John Michell invented a balance that he used in the study of magnetic forces. He showed that the attraction and repulsion of magnets decrease as the squares of the distance from the respective poles increase. The French physicist Charles Augustin de Coulomb, who had measured the forces between electric charges, later verified Michell's observation with high precision.
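The behavior Michell and Coulomb measured is an inverse-square law: the force between magnetic poles falls off as 1/r². The sketch below shows only this relative scaling; the numbers are illustrative.

```python
# A minimal sketch of inverse-square falloff: force relative to the force
# measured at a reference separation.

def relative_force(distance: float, reference_distance: float = 1.0) -> float:
    return (reference_distance / distance) ** 2

for d in (1.0, 2.0, 3.0, 4.0):
    print(f"separation x{d:g}: force x{relative_force(d):.3f}")
# doubling the separation cuts the force to one quarter, and so on
```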
III
ELECTROMAGNETIC THEORY
In the late 18th and early 19th centuries, the theories of electricity and magnetism were investigated simultaneously. In 1819 an important discovery was made by the Danish physicist Hans Christian Oersted, who found that a magnetic needle could be deflected by an electric current flowing through a wire. This discovery, which showed a connection between electricity and magnetism, was followed up by the French scientist André Marie Ampère, who studied the forces between wires carrying electric currents, and by the French physicist Dominique François Jean Arago, who magnetized a piece of iron by placing it near a current-carrying wire. In 1831 the English scientist Michael Faraday discovered that moving a magnet near a wire induces an electric current in that wire, the inverse effect to that found by Oersted: Oersted showed that an electric current creates a magnetic field, while Faraday showed that a magnetic field can be used to create an electric current. The full unification of the theories of electricity and magnetism was achieved by the Scottish physicist James Clerk Maxwell, who predicted the existence of electromagnetic waves and identified light as an electromagnetic phenomenon.
Subsequent studies of magnetism were increasingly concerned with an understanding of the atomic and molecular origins of the magnetic properties of matter. In 1905 the French physicist Paul Langevin produced a theory regarding the temperature dependence of the magnetic properties of paramagnets (discussed below), which was based on the atomic structure of matter. This theory is an early example of the description of large-scale properties in terms of the properties of electrons and atoms. Langevin's theory was subsequently expanded by the French physicist Pierre-Ernest Weiss, who postulated the existence of an internal, “molecular” magnetic field in materials such as iron. This concept, when combined with Langevin's theory, served to explain the properties of strongly magnetic materials such as lodestone.
After Weiss's theory, magnetic properties were explored in greater and greater detail. The theory of atomic structure of Danish physicist Niels Bohr, for example, provided an understanding of the periodic table and showed why magnetism occurs in transition elements such as iron and the rare earth elements, or in compounds containing these elements. The American physicists Samuel Abraham Goudsmit and George Eugene Uhlenbeck showed in 1925 that the electron itself has spin and behaves like a small bar magnet. (At the atomic level, magnetism is measured in terms of magnetic moments—a magnetic moment is a vector quantity that depends on the strength and orientation of the magnetic field, and the configuration of the object that produces the magnetic field.) The German physicist Werner Heisenberg gave a detailed explanation for Weiss's molecular field in 1927, on the basis of the newly-developed quantum mechanics (see Quantum Theory). Other scientists then predicted many more complex atomic arrangements of magnetic moments, with diverse magnetic properties.
IV
THE MAGNETIC FIELD
Objects such as a bar magnet or a current-carrying wire can influence other magnetic materials without physically contacting them, because magnetic objects produce a magnetic field. Magnetic fields are usually represented by magnetic flux lines. At any point, the direction of the magnetic field is the same as the direction of the flux lines, and the strength of the magnetic field is inversely proportional to the spacing between the flux lines. For example, in a bar magnet, the flux lines emerge at one end of the magnet, then curve around the other end; the flux lines can be thought of as being closed loops, with part of the loop inside the magnet, and part of the loop outside. At the ends of the magnet, where the flux lines are closest together, the magnetic field is strongest; toward the side of the magnet, where the flux lines are farther apart, the magnetic field is weaker. Depending on their shapes and magnetic strengths, different kinds of magnets produce different patterns of flux lines. The pattern of flux lines created by magnets or any other object that creates a magnetic field can be mapped by using a compass or small iron filings. Magnets tend to align themselves along magnetic flux lines. Thus a compass, which is a small magnet that is free to rotate, will tend to orient itself in the direction of the magnetic flux lines. By noting the direction of the compass needle when the compass is placed at many locations around the source of the magnetic field, the pattern of flux lines can be inferred. Alternatively, when iron filings are placed around an object that creates a magnetic field, the filings will line up along the flux lines, revealing the flux line pattern.
Magnetic fields influence magnetic materials, and also influence charged particles that move through the magnetic field. Generally, when a charged particle moves through a magnetic field, it feels a force that is at right angles both to the velocity of the charged particle and the magnetic field. Since the force is always perpendicular to the velocity of the charged particle, a charged particle in a magnetic field moves in a curved path. Magnetic fields are used to change the paths of charged particles in devices such as particle accelerators and mass spectrometers.
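Because the force stays perpendicular to the velocity, the particle circles. A standard textbook relation (not stated in the text above) gives the radius of that circle when the velocity is perpendicular to the field: r = m·v / (q·B). The particle speed and field strength below are illustrative.

```python
# Radius of the circular path of a charged particle moving perpendicular
# to a uniform magnetic field, r = m*v / (q*B). Values are illustrative.

ELECTRON_MASS = 9.11e-31     # kg
ELECTRON_CHARGE = 1.602e-19  # coulombs

def gyration_radius(mass_kg, speed_m_s, charge_c, field_t):
    return mass_kg * speed_m_s / (charge_c * field_t)

r = gyration_radius(ELECTRON_MASS, 1.0e6, ELECTRON_CHARGE, 0.01)
print(f"radius of the circular path: {r * 1000:.2f} mm")   # ~0.57 mm
```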
V
KINDS OF MAGNETIC MATERIALS
The magnetic properties of materials are classified in a number of different ways.
One classification of magnetic materials—into diamagnetic, paramagnetic, and ferromagnetic—is based on how the material reacts to a magnetic field. Diamagnetic materials, when placed in a magnetic field, have a magnetic moment induced in them that opposes the direction of the magnetic field. This property is now understood to be a result of electric currents that are induced in individual atoms and molecules. These currents, according to Ampère's law, produce magnetic moments in opposition to the applied field. Many materials are diamagnetic; the strongest ones are metallic bismuth and organic molecules, such as benzene, that have a cyclic structure, enabling the easy establishment of electric currents.
Paramagnetic behavior results when the applied magnetic field lines up all the existing magnetic moments of the individual atoms or molecules that make up the material. This results in an overall magnetic moment that adds to the magnetic field. Paramagnetic materials usually contain transition metals or rare earth elements that possess unpaired electrons. Paramagnetism in nonmetallic substances is usually characterized by temperature dependence; that is, the size of an induced magnetic moment varies inversely with the temperature. This is a result of the increasing difficulty of ordering the magnetic moments of the individual atoms along the direction of the magnetic field as the temperature is raised.
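The inverse dependence on temperature described here is Curie's law for paramagnets: the induced magnetization is proportional to the applied field and inversely proportional to the absolute temperature, M = C·B/T. In the sketch below, the Curie constant and field strength are placeholder values; only the 1/T trend matters.

```python
# A minimal sketch of Curie's law, M = C * B / T, in arbitrary units.

CURIE_CONSTANT = 1.0   # material-dependent constant (arbitrary units)
APPLIED_FIELD = 1.0    # applied magnetic field (arbitrary units)

def curie_magnetization(temperature_k: float) -> float:
    return CURIE_CONSTANT * APPLIED_FIELD / temperature_k

for t in (100.0, 200.0, 300.0, 400.0):
    print(f"T = {t:5.0f} K -> relative magnetization {curie_magnetization(t):.4f}")
# warming the sample weakens the induced moment, as described above
```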
A ferromagnetic substance is one that, like iron, retains a magnetic moment even when the external magnetic field is reduced to zero. This effect is a result of a strong interaction between the magnetic moments of the individual atoms or electrons in the magnetic substance that causes them to line up parallel to one another. In ordinary circumstances these ferromagnetic materials are divided into regions called domains; in each domain, the atomic moments are aligned parallel to one another. Separate domains have total moments that do not necessarily point in the same direction. Thus, although an ordinary piece of iron might not have an overall magnetic moment, magnetization can be induced in it by placing the iron in a magnetic field, thereby aligning the moments of all the individual domains. The energy expended in reorienting the domains from the magnetized back to the demagnetized state manifests itself in a lag in response, known as hysteresis.
Ferromagnetic materials, when heated, eventually lose their magnetic properties. This loss becomes complete above the Curie temperature, named after the French physicist Pierre Curie, who discovered it in 1895. (The Curie temperature of metallic iron is about 770° C/1,420° F.)
VI
OTHER MAGNETIC ORDERINGS
In recent years, a greater understanding of the atomic origins of magnetic properties has resulted in the discovery of other types of magnetic ordering. Substances are known in which the magnetic moments interact in such a way that it is energetically favorable for them to line up antiparallel; such materials are called antiferromagnets. There is a temperature analogous to the Curie temperature called the Néel temperature, above which antiferromagnetic order disappears.
Other, more complex atomic arrangements of magnetic moments have also been found. Ferrimagnetic substances have at least two different kinds of atomic magnetic moments, which are oriented antiparallel to one another. Because the moments are of different size, a net magnetic moment remains, unlike the situation in an antiferromagnet where all the magnetic moments cancel out. Interestingly, lodestone is a ferrimagnet rather than a ferromagnet; two types of iron ions, each with a different magnetic moment, are in the material. Even more complex arrangements have been found in which the magnetic moments are arranged in spirals. Studies of these arrangements have provided much information on the interactions between magnetic moments in solids.
VII
APPLICATIONS
Numerous applications of magnetism and of magnetic materials have arisen in the past 100 years. The electromagnet, for example, is the basis of the electric motor and the transformer. In more recent times, the development of new magnetic materials has also been important in the computer revolution. Computer memories can be fabricated using bubble domains. These domains are actually smaller regions of magnetization that are either parallel or antiparallel to the overall magnetization of the material. Depending on this direction, the bubble indicates either a one or a zero, thus serving as the units of the binary number system used in computers. Magnetic materials are also important constituents of tapes and disks on which data are stored.
In addition to the atomic-sized magnetic units used in computers, large, powerful magnets are crucial to a variety of modern technologies. Powerful magnetic fields are used in nuclear magnetic resonance imaging, an important diagnostic tool used by doctors. Superconducting magnets are used in today's most powerful particle accelerators to keep the accelerated particles focused and moving in a curved path. Scientists are developing magnetic levitation trains that use strong magnets to enable trains to float above the tracks, reducing friction.

Contributed By:
Martin Blume

Air

I
INTRODUCTION
Air, mixture of gases that composes the atmosphere surrounding Earth. This mixture consists primarily of nitrogen, oxygen, and argon, with smaller amounts of hydrogen, carbon dioxide, water vapor, helium, neon, krypton, xenon, and other gases. The most important attribute of air is its life-sustaining property. Human and animal life would not be possible without oxygen in the atmosphere. In addition to providing life-sustaining properties, the various atmospheric gases can be isolated from air and used in industrial and scientific applications, ranging from steelmaking to the manufacture of semiconductors. This article discusses how atmospheric gases are isolated and used for industrial and scientific purposes. For more information about air and the atmosphere, see Meteorology and Atmosphere.
II
GASES IN THE ATMOSPHERE
The atmosphere begins at sea level, and its first layer, the troposphere, extends from 8 to 16 km (5 to 10 mi) above Earth’s surface. The air in the troposphere consists of the following proportions of gases: 78 percent nitrogen, 21 percent oxygen, 0.9 percent argon, and 0.03 percent carbon dioxide, with the remaining 0.07 percent a mixture of hydrogen, water vapor, ozone, neon, helium, krypton, xenon, and other trace components. Companies that isolate gases from air use air from the troposphere, so they produce gases in these same proportions.
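As a small worked example, the composition quoted above can be combined with standard molar masses (the molar masses are textbook values, not from this article) to recover the familiar average molar mass of dry air, roughly 29 g/mol.

```python
# Average molar mass of dry air from the mole fractions quoted above.

composition = {            # mole fraction, from the paragraph above
    "N2":  0.78,
    "O2":  0.21,
    "Ar":  0.009,
    "CO2": 0.0003,
}
molar_mass = {             # g/mol, standard values
    "N2":  28.01,
    "O2":  32.00,
    "Ar":  39.95,
    "CO2": 44.01,
}

mean = sum(composition[g] * molar_mass[g] for g in composition)
print(f"approximate molar mass of air: {mean:.1f} g/mol")   # ~28.9 g/mol
```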
The various atmospheric gases have many industrial and scientific uses. By far the most commercially important air gases are nitrogen, oxygen, and argon, each of which has valuable industrial applications. For example, fertilizers are manufactured from compounds made from nitrogen gas, steelmaking furnaces are heated with oxygen, and incandescent light bulbs are filled with argon.
Scientists first isolated oxygen from air in 1774. They did not develop a commercial process for separating air into its component gases, however, until the turn of the 20th century. German professor Carl von Linde developed a process known as cryogenic (cold-temperature) distillation. This process purifies and liquefies air at very cold temperatures. The liquid air is then boiled to isolate the gases (a process called fractional distillation). Liquid nitrogen boils at –195.79° C (–320.42° F), argon at –185.86° C (–302.55° F), and oxygen at –182.96° C (–297.33° F). As the temperature of the liquid air is raised, nitrogen vaporizes first, followed by argon and then oxygen. Modern air-separation plants can isolate samples of these gases that are up to 99.9999 percent pure.
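As a rough illustration of this separation order, the short Python sketch below (illustrative only, using nothing beyond the boiling points quoted above) sorts the three main air gases by boiling point to show which vaporizes first as the liquid air is warmed:

# Illustrative sketch: order in which the main air gases boil off during
# fractional distillation, using the boiling points quoted in the article.
# The gas with the lowest (most negative) boiling point vaporizes first.

BOILING_POINTS_C = {
    "nitrogen": -195.79,
    "argon": -185.86,
    "oxygen": -182.96,
}

def distillation_order(boiling_points):
    """Return gas names sorted from first to last to vaporize."""
    return sorted(boiling_points, key=boiling_points.get)

if __name__ == "__main__":
    for rank, gas in enumerate(distillation_order(BOILING_POINTS_C), start=1):
        c = BOILING_POINTS_C[gas]
        f = c * 9 / 5 + 32  # Celsius-to-Fahrenheit conversion
        print(f"{rank}. {gas} boils at {c:.2f} C ({f:.2f} F)")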
Today many smaller air-separation plants (those that produce 200 metric tons or less of oxygen per day) employ alternative methods to isolate oxygen and nitrogen from air. Some of these plants use specialized membranes that selectively filter certain air gases. Others utilize beds of special pellets that selectively adsorb oxygen and nitrogen from the air (see Adsorption).
III
PURIFYING AIR
Most larger air-separation plants continue to use cryogenic distillation to separate air gases. Before pure gases can be isolated from air, unwanted components such as water vapor, dust, and carbon dioxide must be removed. First, the air is filtered to remove dust and other particles. Next, the air is compressed, the first step in liquefying it. However, as the air is compressed, the molecules begin striking each other more frequently, raising the air’s temperature (see Gases; Kinetic Energy). To offset the higher temperatures, water-cooled heat exchangers cool the air both during and after compression. As the air cools, most of its water vapor content condenses into liquid and is removed.
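A minimal sketch of why compression heats a gas is given below. It assumes ideal-gas behavior and a reversible adiabatic compression (a textbook simplification, not a description of any particular plant); the heat-capacity ratio of 1.4 for air and the 20° C intake temperature are assumed values:

# Rough sketch of compression heating using the textbook relation for
# reversible adiabatic compression of an ideal gas:
#     T2 = T1 * (P2/P1) ** ((gamma - 1) / gamma), with gamma ~ 1.4 for air.
# Real compressors are cooled between stages, so actual temperature rises are
# smaller; the numbers printed here are illustrative only.

GAMMA_AIR = 1.4  # ratio of specific heats for air (approximate)

def adiabatic_outlet_temp_k(inlet_temp_k, pressure_ratio, gamma=GAMMA_AIR):
    """Outlet temperature after ideal adiabatic compression by pressure_ratio."""
    return inlet_temp_k * pressure_ratio ** ((gamma - 1.0) / gamma)

if __name__ == "__main__":
    t_in = 293.15  # assumed 20 C intake air, in kelvins
    for ratio in (2, 5, 10):
        t_out = adiabatic_outlet_temp_k(t_in, ratio)
        print(f"pressure ratio {ratio:>2}: outlet about {t_out - 273.15:.0f} C")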
After being compressed, the air passes through beds of adsorption beads that remove carbon dioxide, the remaining water vapor, and molecules of heavy hydrocarbons, such as acetylene, butane, and propylene. These compounds all freeze at a higher temperature than do the other air gases. They must be removed before the air is liquefied or they will freeze in the column where distillation occurs.
IV
LIQUEFYING AND SEPARATING AIR
After the air has been purified, a portion of the air stream is decompressed in a device called a centrifugal expander (essentially a compressor running in reverse). As the air expands, it loses kinetic energy (energy resulting from the motion of the molecules), which lowers its temperature. The air expands and cools until it liquefies at about –190° C (about –310° F).
After a portion of the air stream is liquefied, the liquid is fed into the top of a distillation column filled with perforated trays (or other structured packing assemblies). These trays or assemblies allow the liquid to trickle down through the column. At the same time, the gaseous portion of the air stream (the part that is still compressed) is fed into the bottom of the column. As the gaseous air rises up through the column, it bubbles up through the liquid trickling down through the trays or packing. The gas is slightly warmer than the liquid is, so as it rises, it heats and eventually boils the surrounding liquid.
The gaseous air also cools as it rises up through the column. The cooling of the gas as it rises creates a temperature difference along the column. The gas heats the liquid at the bottom of the column the most, raising it to a temperature higher than that of the liquid at the top of the column. As the liquid trickles down, it heats up and reaches the boiling point of nitrogen first. The nitrogen boils off near the top of the column and quickly rises to the top. Argon has a boiling point between that of nitrogen and oxygen, so it boils off near the middle of the column. Oxygen has a higher boiling point than that of argon or nitrogen, so it remains a liquid until it reaches the bottom of the column, where the temperature is highest, before boiling away. See also Fractional Distillation.
Krypton, xenon, helium, and neon also separate from the other gases in the column but remain a mixture because the temperature of the column is not cold enough to liquefy these gases. If operators decide to recover these rare gases in the air-separation process and save them for future use, they withdraw the mixture of these gases from the column. They can then separate and purify the krypton, xenon, helium, and neon from the mixture. With the exception of helium, there is little commercial demand for these gases, so operators usually do not recover them. The majority of the world’s helium supply is recovered from natural gas by a similar distillation process.
V
SHIPMENT AND STORAGE
Oxygen, nitrogen, and argon are shipped and stored either as liquids or as compressed gases. As liquids, they are stored in insulated containers; as compressed gases, they are held in steel cylinders that are pressurized up to 170 kg/cm² (2,400 lb/in²). When recovered, neon, krypton, and xenon are packaged as gases in steel cylinders or glass flasks. Because industries can obtain helium at lower costs from other sources, it is generally returned to the atmosphere after the separation process.
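A quick arithmetic check of the cylinder pressure quoted above, assuming the standard conversion of 1 kg/cm² (one technical atmosphere) to about 14.223 lb/in², is sketched below:

# Quick check of the pressure figure quoted above, using the standard
# conversion 1 kg/cm^2 is approximately 14.223 lb/in^2 (an assumed constant,
# not a value taken from the article).

KG_PER_CM2_TO_PSI = 14.223

def kgcm2_to_psi(p_kg_per_cm2):
    return p_kg_per_cm2 * KG_PER_CM2_TO_PSI

if __name__ == "__main__":
    # About 2,418 lb/in^2, consistent with the rounded 2,400 lb/in^2 in the text.
    print(f"170 kg/cm^2 is about {kgcm2_to_psi(170):,.0f} lb/in^2")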
VI
INDUSTRIAL USES OF THE GASES IN AIR
Oxygen, nitrogen, argon, neon, krypton, and xenon are used in making industrial products essential to modern living. These products include steel, petrochemicals, lighting systems, fertilizers, and semiconductors (substances used to make the chips in computers, calculators, televisions, microwave ovens, and many other electronic devices).
A
Oxygen
More than half of the oxygen produced in the United States is used by the steel industry, which injects the gas into basic oxygen furnaces to heat and produce steel (see Iron and Steel Manufacture: Basic Oxygen Process). Metalworkers also combine oxygen with acetylene to produce high-temperature torch flames that cut and weld steel.
Oxygen is also important in the aerospace industry. Oxygen reacts with fuel, such as hydrogen, burning the fuel and supplying energy for launching and powering rockets. The oxygen is stored aboard the rocket as a liquid and converted to gas before reacting with the propellant fuel (see also Combustion).
B
Nitrogen
About 36 million metric tons of nitrogen are produced each year in the United States, and about 4 million metric tons are produced in Canada each year. Nearly a third of the nitrogen produced in the United States is used as a cryogenic liquid to instantly freeze and preserve the flavor and moisture content of a wide range of foods, including hamburger and shrimp. Nitrogen is also used extensively in the chemical industry to produce ammonia (NH3), which in turn is used to produce urea fertilizers, nitric acid, and many other important chemical products. During oil drilling, nitrogen is used to help force petroleum up from underground deposits. Due to its chemical stability, nitrogen is added to various manufacturing processes to prevent fires and explosions. For example, manufacturers often blanket highly flammable petroleum, chemicals, and paint in a protective layer of nitrogen during processing.
Nitrogen is used in the electronics industry to flush air from vacuum tubes before the tubes are sealed. Incandescent lamp bulbs are flushed with nitrogen gas before being filled with a nitrogen-argon gas mixture. In metalworking operations, nitrogen is used to control furnace atmospheres during annealing (heating and slowly cooling metal for strengthening). Metalworkers also use nitrogen to remove dissolved hydrogen from molten aluminum and to refine scrap aluminum.
C
Argon
In contrast to nitrogen, which reacts with certain metallic elements at higher temperatures, argon is completely unreactive (see Noble Gases). In addition to being extremely stable, argon conducts heat poorly and is therefore a good insulator. Because of these properties, argon gas (in combination with less expensive nitrogen gas) is used to fill incandescent lamp bulbs. The stable, insulating gas allows bulb filaments to reach higher temperatures and therefore produce more light without overheating the bulb.
Argon has the unusual ability to ionize, or become electrically conductive, at much lower voltages than most other gases can. When ionized, argon emits brightly colored light. As a result, argon is also used to make brightly colored “neon” display signs and fluorescent tubes used to light building interiors. Argon is also used in the electronics industry to produce the highly purified semiconductor materials silicon and germanium, both of which are used to make transistors (see also Metalloids).
D
Neon, Krypton, and Xenon
Like argon, the noble gases neon, krypton, and xenon have the ability to ionize at relatively low voltages. As a result, these gases are also used to light “neon” display signs. In addition, the atomic energy industry uses neon, krypton, and xenon as the “fill gas” for ionization chambers. Ionization chambers are containers filled with gas and grids of wires that scientists use for measuring radiation and for studying subatomic particles.
VII
COMPRESSED AIR
Not all industrial uses of air require it to be separated into its component gases. Compressed air—plain air that has been pressurized by squeezing it into a smaller-than-normal volume—is used in many industrial applications. When air is compressed, the gas molecules collide with each other more frequently and with more force, producing higher kinetic energy. The kinetic energy in compressed air can be converted into mechanical energy or it can be used to produce a powerful air flow or an air cushion. Compressed air is easily transmitted through pipes and hoses with little loss of energy, so it can be utilized at a considerable distance from the compressor or pressure tank.
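As a hedged illustration of this stored energy, the sketch below applies the textbook formula for ideal isothermal compression work, W = P0 V0 ln(P/P0), where V0 is the volume of free air drawn in at atmospheric pressure P0. The 8-bar tank pressure is an assumed example value, and real pneumatic equipment recovers less because of losses:

# Illustrative sketch (assumed values, not from the article): the ideal
# isothermal work needed to compress air, and at best recoverable from it
# later, is W = P0 * V0 * ln(P / P0).

import math

P_ATM = 101_325.0  # atmospheric pressure, Pa

def isothermal_work_joules(free_air_volume_m3, tank_pressure_pa, p0=P_ATM):
    """Ideal isothermal work to compress the given volume of free air."""
    return p0 * free_air_volume_m3 * math.log(tank_pressure_pa / p0)

if __name__ == "__main__":
    # Example: 1 cubic meter of free air compressed to about 8 bar
    # (a typical shop-air pressure, assumed for illustration).
    w = isothermal_work_joules(1.0, 8 * 100_000.0)
    print(f"about {w / 1000:.0f} kJ per cubic meter of free air")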
The first large-scale application of compressed-air energy occurred in 1871, during the excavation of the Mont Cenis railroad tunnel through the Alps. Engineers developed a water-wheel-driven air compressor that powered the rock drills used to dig the tunnel. Before the invention of air compressors, miners used steam-powered rock drills, but exhaust steam made working conditions in underground mines unbearable. After the development of the air compressor, the mining industry began using compressed-air energy to drill mines. Soon other industries were utilizing compressed-air energy for a variety of uses.
Modern compressors can pressurize air up to 1,025 kg/cm² (15,000 lb/in²). Modern pneumatic (compressed-air-driven) tools include nail guns, grinders, rotary drills, and jackhammers. Compressed air drives conveyors that transport grain, powdered coal, and other materials. Compressed air also powers pneumatic cylinders that apply the brakes on railroad trains. It is used to furnish the forced draft for blast furnaces and other combustion processes, to ventilate mines and buildings, and to operate control equipment in processing plants.

Contributed By:
Dennis L. Derr

Petroleum

Petroleum
I
INTRODUCTION
Petroleum, or crude oil, naturally occurring oily, bituminous liquid composed of various organic chemicals. It is found in large quantities below the surface of Earth and is used as a fuel and as a raw material in the chemical industry. Modern industrial societies use it primarily to achieve a degree of mobility—on land, at sea, and in the air—that was barely imaginable less than 100 years ago. In addition, petroleum and its derivatives are used in the manufacture of medicines and fertilizers, foodstuffs, plastics, building materials, paints, and cloth and to generate electricity.
In fact, modern industrial civilization depends on petroleum and its products; the physical structure and way of life of the suburban communities that surround the great cities are the result of an ample and inexpensive supply of petroleum. In addition, the goals of developing countries—to exploit their natural resources and to supply foodstuffs for the burgeoning populations—are based on the assumption of petroleum availability. In recent years, however, the worldwide availability of petroleum has steadily declined and its relative cost has increased. Many experts forecast that petroleum will no longer be a common commercial material by the mid-21st century. See also World Energy Supply.
II
CHARACTERISTICS
The chemical composition of all petroleum is principally hydrocarbons, although a few sulfur-containing and oxygen-containing compounds are usually present; the sulfur content varies from about 0.1 to 5 percent. Petroleum contains gaseous, liquid, and solid components. The consistency of petroleum varies from liquid as thin as gasoline to liquid so thick that it will barely pour. Small quantities of gaseous compounds are usually dissolved in the liquid; when larger quantities of these compounds are present, the petroleum deposit is associated with a deposit of natural gas (see Gases, Fuel).
Three broad classes of crude petroleum exist: the paraffin types, the asphaltic types, and the mixed-base types. The paraffin types are composed of molecules in which the number of hydrogen atoms is always two more than twice the number of carbon atoms. The characteristic molecules in the asphaltic types are naphthenes, composed of twice as many hydrogen atoms as carbon atoms. In the mixed-base group are both paraffin hydrocarbons and naphthenes.
See also Asphalt; Naphtha.
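A minimal sketch of the two molecular rules stated above (paraffins follow CnH2n+2, with two more hydrogen atoms than twice the carbon count; naphthenes follow CnH2n, with exactly twice as many hydrogens as carbons) is shown below. The example molecules are ordinary textbook cases, not compounds named in the article:

# Minimal sketch of the formula rules for the crude-oil classes described above.
# Note that the CnH2n formula is shared by other hydrocarbons (such as olefins),
# so this check identifies the formula type only, not the exact structure.

def classify_hydrocarbon(carbons, hydrogens):
    if hydrogens == 2 * carbons + 2:
        return "paraffin-type formula (CnH2n+2)"
    if hydrogens == 2 * carbons:
        return "naphthene-type formula (CnH2n)"
    return "other hydrocarbon formula"

if __name__ == "__main__":
    examples = {
        "methane CH4": (1, 4),
        "octane C8H18": (8, 18),
        "cyclohexane C6H12": (6, 12),
    }
    for name, (c, h) in examples.items():
        print(f"{name}: {classify_hydrocarbon(c, h)}")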
III
FORMATION
Petroleum is formed under Earth’s surface by the decomposition of marine organisms. The remains of tiny organisms that live in the sea—and, to a lesser extent, those of land organisms that are carried down to the sea in rivers and of plants that grow on the ocean bottoms—are enmeshed with the fine sands and silts that settle to the bottom in quiet sea basins. Such deposits, which are rich in organic materials, become the source rocks for the generation of crude oil. The process began many millions of years ago with the development of abundant life, and it continues to this day. The sediments grow thicker and sink into the seafloor under their own weight. As additional deposits pile up, the pressure on the ones below increases several thousand times, and the temperature rises by several hundred degrees. The mud and sand harden into shale and sandstone; carbonate precipitates and skeletal shells harden into limestone; and the remains of the dead organisms are transformed into crude oil and natural gas.
Once the petroleum forms, it flows upward in Earth’s crust because it has a lower density than the brines that saturate the interstices of the shales, sands, and carbonate rocks that constitute the crust of Earth. The crude oil and natural gas rise into the microscopic pores of the coarser sediments lying above. Frequently, the rising material encounters an impermeable shale or dense layer of rock that prevents further migration; the oil has become trapped, and a reservoir of petroleum is formed. A significant amount of the upward-migrating oil, however, does not encounter impermeable rock but instead flows out at the surface of Earth or onto the ocean floor. Surface deposits also include bituminous lakes and escaping natural gas.
IV
HISTORICAL DEVELOPMENT
These surface deposits of crude oil have been known to humans for thousands of years. In the areas where they occurred, they were long used for limited purposes, such as caulking boats, waterproofing cloth, and fueling torches. By the time the Renaissance began in the 14th century, some surface deposits were being distilled to obtain lubricants and medicinal products, but the real exploitation of crude oil did not begin until the 19th century. The Industrial Revolution had by then brought about a search for new fuels, and the social changes it effected had produced a need for good, cheap oil for lamps; people wished to be able to work and read after dark. Whale oil, however, was available only to the rich, tallow candles had an unpleasant odor, and gas jets were available only in then-modern houses and apartments in metropolitan areas.
The search for a better lamp fuel led to a great demand for “rock oil”—that is, crude oil—and various scientists in the mid-19th century were developing processes to make commercial use of it. Thus British entrepreneur James Young, with others, began to manufacture various products from crude oil, but he later turned to coal distillation and the exploitation of oil shales. In 1852 Canadian physician and geologist Abraham Gessner obtained a patent for producing from crude oil a relatively clean-burning, affordable lamp fuel called kerosene; and in 1855 an American chemist, Benjamin Silliman, published a report indicating the wide range of useful products that could be derived through the distillation of petroleum.
Thus the quest for greater supplies of crude oil began. For several years people had known that wells drilled for water and salt were occasionally infiltrated by petroleum, so the concept of drilling for crude oil itself soon followed. The first such wells were dug in Germany from 1857 to 1859, but the event that gained world fame was the drilling of an oil well near Oil Creek, Pennsylvania, by “Colonel” Edwin L. Drake in 1859. Drake, contracted by the American industrialist George H. Bissell—who had also supplied Silliman with rock-oil samples for producing his report—drilled to find the supposed “mother pool” from which the oil seeps of western Pennsylvania were assumed to be emanating. The reservoir Drake tapped was shallow—only 21.2 m (69.5 ft) deep—and the petroleum was a paraffin type that flowed readily and was easy to distill.
Drake’s success marked the beginning of the rapid growth of the modern petroleum industry. Soon petroleum received the attention of the scientific community, and coherent hypotheses were developed for its formation, migration upward through the earth, and entrapment. With the invention of the automobile and the energy needs brought on by World War I (1914-1918), the petroleum industry became one of the foundations of industrial society.
V
EXPLORATION
In order to find crude oil underground, geologists must search for a sedimentary basin in which shales rich in organic material have been buried for a sufficiently long time for petroleum to have formed. The petroleum must also have had an opportunity to migrate into porous traps that are capable of holding large amounts of fluid. The occurrence of crude oil in Earth’s crust is limited both by these conditions, which must be met simultaneously, and by the time span of tens of millions to a hundred million years required for the oil’s formation.
Petroleum geologists and geophysicists have many tools at their disposal to assist in identifying potential areas for drilling. Thus, surface mapping of outcrops of sedimentary beds makes possible the interpretation of subsurface features, which can then be supplemented with information obtained by drilling into the crust and retrieving cores or samples of the rock layers encountered. In addition, increasingly sophisticated seismic techniques—the reflection and refraction of sound waves propagated through Earth—reveal details of the structure and interrelationship of various layers in the subsurface. Ultimately, however, the only way to prove that oil is present in the subsurface is to drill a well. In fact, most of the oil provinces in the world have initially been identified by the presence of surface seeps, and most of the actual reservoirs have been discovered by so-called wildcatters who relied perhaps as much on intuition as on science. (The term wildcatter comes from West Texas, where in the early 1920s drilling crews encountered many wildcats as they cleared locations for exploratory wells. Shot wildcats were hung on the oil derricks, and the wells became known as wildcat wells.)
An oil field, once found, may comprise more than one reservoir—that is, more than one single, continuous, bounded accumulation of oil. Several reservoirs may be stacked one above the other, isolated by intervening shales and impervious rock strata. Such reservoirs may vary in size from a few tens of hectares to tens of square kilometers, and from a few meters in thickness to several hundred or more. Most of the oil that has been discovered and exploited in the world has been found in a relatively few large reservoirs. In the United States, for example, 60 of approximately 10,000 oil fields have accounted for half of the productive capacity and reserves.
VI
PRIMARY PRODUCTION
Most oil wells in the United States are drilled by the rotary method that was first described in a British patent in 1844 assigned to R. Beart. In rotary drilling, the drill string, a series of connected pipes, is supported by a derrick. The string is rotated by being coupled to the rotating table on the derrick floor. The drill bit at the end of the string is generally designed with three cone-shaped wheels tipped with hardened teeth. Drill cuttings are lifted continually to the surface by a circulating-fluid system driven by a pump.
Trapped crude oil is under pressure; had it not been trapped by impermeable rock, it would have continued to migrate upward, because of the pressure differential caused by its buoyancy, until it escaped at the surface of Earth. When a well bore is drilled into this pressured accumulation of oil, the oil expands into the low-pressure sink created by the well bore in communication with Earth’s surface. As the well fills up with fluid, however, a back pressure is exerted on the reservoir, and the flow of additional fluid into the well bore would soon stop, were no other conditions involved. Most crude oils, however, contain a significant amount of natural gas in solution, and this gas is kept in solution by the high pressure in the reservoir. The gas comes out of solution when the low pressure in the well bore is encountered, and the gas, once liberated, immediately begins to expand. This expansion, together with the dilution of the column of oil by the less dense gas, results in the propulsion of oil up to Earth’s surface.
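As a hedged illustration of the buoyancy-driven pressure differential mentioned above, the sketch below uses the simple hydrostatic relation (brine density minus oil density) times gravitational acceleration times the height of the oil column. The density values are assumed, typical textbook figures rather than values from the article:

# Hedged illustration: approximate excess pressure at the base of a continuous
# oil column surrounded by denser formation brine. All numeric inputs are
# assumed, representative values, not data from the article.

G = 9.81             # gravitational acceleration, m/s^2
RHO_BRINE = 1_100.0  # assumed formation brine density, kg/m^3
RHO_OIL = 850.0      # assumed crude oil density, kg/m^3

def buoyancy_pressure_pa(column_height_m, rho_brine=RHO_BRINE, rho_oil=RHO_OIL):
    """Approximate buoyant pressure differential for an oil column of given height."""
    return (rho_brine - rho_oil) * G * column_height_m

if __name__ == "__main__":
    for h in (10, 100):  # meters of continuous oil column
        print(f"{h:>3} m column: about {buoyancy_pressure_pa(h) / 1000:.0f} kPa of buoyant drive")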
Nevertheless, as fluid withdrawal continues from the reservoir, the pressure within the reservoir gradually decreases, and the amount of gas in solution decreases. As a result, the flow rate of fluid into the well bore decreases, and less gas is liberated. In time the fluid may no longer reach the surface on its own, and a pump (artificial lift) must be installed in the well bore to continue producing the crude oil.
Eventually, the flow rate of the crude oil becomes so small, and the cost of lifting the oil to the surface becomes so great, that the well costs more to operate than it earns from selling the crude oil (after the price is discounted for operating costs, taxes, insurance, and return on capital). The well’s economic limit has then been reached, and it is abandoned.
VII
ENHANCED OIL RECOVERY
In primary production, no extraneous energy is added to the reservoir other than that required for lifting fluids from the producing wells. Most reservoirs are developed by numerous wells; and as primary production approaches its economic limit, perhaps only a few percent and no more than about 25 percent of the crude oil has been withdrawn from a given reservoir.
The oil industry has developed methods for supplementing the production of crude oil beyond what can be obtained by relying on natural reservoir energy alone. These supplementary methods, collectively known as enhanced oil recovery technology, can increase the recovery of crude oil, but only at the additional cost of supplying extraneous energy to the reservoir. In this way, the recovery of crude oil has been increased to an overall average of 33 percent of the original oil. Two successful supplementary methods are in use at this time: water injection and steam injection.
A
Water Injection
In a completely developed oil field, the wells may be drilled anywhere from 60 to 600 m (200 to 2,000 ft) from one another, depending on the nature of the reservoir. If water is pumped into alternate wells in such a field, the pressure in the reservoir as a whole can be maintained or even increased. In this way the rate of production of the crude oil also can be increased; in addition, the water physically displaces the oil, thus increasing the recovery efficiency. In some reservoirs with a high degree of uniformity and little clay content, water flooding may increase the recovery efficiency to as much as 60 percent or more of the original oil in place. Water flooding was first introduced in the Pennsylvania oil fields, more or less accidentally, in the late 19th century, and it has since spread throughout the world.
B
Steam Injection
Steam injection is used in reservoirs that contain very viscous oils, those that are thick and flow slowly. The steam not only provides a source of energy to displace the oil, it also causes a marked reduction in viscosity (by raising the temperature of the reservoir), so that the crude oil flows faster under any given pressure differential. This scheme has been used extensively in the state of California in the United States and the state of Zulia in Venezuela, where large reservoirs of viscous oil exist. Experiments are also under way to prove the usefulness of this technology in recovering the vast accumulations of viscous crude oil (bitumens) along the Athabasca River in north central Alberta, Canada, and along the Orinoco River in eastern Venezuela.
VIII
OFFSHORE DRILLING
Another method to increase oil-field production has been the construction and operation of offshore drilling rigs. The drilling rigs are installed, operated, and serviced on an offshore platform in water up to a depth of several hundred meters; the platform may either float or sit on legs planted on the ocean floor, where it is capable of resisting waves, wind, and—in Arctic regions—ice floes.
As in traditional rigs, the derrick is basically a device for suspending and rotating the drill pipe, to the end of which is attached the drill bit. Additional lengths of drill pipe are added to the drill string as the bit penetrates farther and farther into Earth’s crust. The force required for cutting into the earth comes from the weight of the drill pipe itself. To facilitate the removal of the cuttings, mud is constantly circulated down through the drill pipe, out through nozzles in the drill bit, and then up to the surface through the space between the drill pipe and the bore through the earth (the diameter of the bit is somewhat greater than that of the pipe). Successful bore holes have been drilled right on target, in this way, to depths of more than 6.4 km (more than 4 mi) from the surface of the ocean. Offshore drilling has resulted in the development of a significant additional reserve of petroleum—in the United States, about 5 percent of the total reserves.
IX
REFINING
Once oil has been produced from an oil field, it is treated with chemicals and heat to remove water and solids, and the natural gas is separated. The oil is then stored in a tank, or battery of tanks, and later transported to a refinery by truck, railroad tank car, barge, or pipeline. Large oil fields all have direct outlets to major, common-carrier pipelines.
A
Basic Distillation
The basic refining tool is the distillation unit. In the United States after the Civil War (1861-1865), more than 100 still refineries were already in operation. Crude oil begins to vaporize at a temperature somewhat less than that required to boil water. Hydrocarbons with the lowest molecular weight vaporize at the lowest temperatures, whereas successively higher temperatures are required to distill larger molecules. The first material to be distilled from crude oil is the gasoline fraction, followed in turn by naphtha and then by kerosene. In the old still refineries, the residue in the kettle was then treated with caustic and sulfuric acid and finally steam distilled. Lubricants and distillate fuel oils were obtained from the upper regions and waxes and asphalt from the lower regions of the distillation apparatus.
In the later 19th century the gasoline and naphtha fractions were actually considered a nuisance because little need for them existed, and the demand for kerosene also began to decline because of the growing production of electricity and the use of electric lights. With the introduction of the automobile, however, the demand for gasoline suddenly burgeoned, and the need for greater supplies of crude oil increased accordingly.
B
Thermal Cracking
In an effort to increase the yield from distillation, the thermal cracking process was developed. In this process, the heavier portions of the crude oil were heated under pressure and at higher temperatures. This resulted in the large hydrocarbon molecules being split into smaller ones, so that the yield of gasoline from a barrel of crude oil was increased. The efficiency of the process was limited, however, because at the high temperatures and pressures that were used, a large amount of coke was deposited in the reactors. This in turn required the use of still higher temperatures and pressures to crack the crude oil. A coking process was then invented in which fluids were recirculated; the process ran for a much longer time, with far less buildup of coke. Many refiners quickly adopted the process of thermal cracking.
C
Alkylation and Catalytic Cracking
Two additional basic processes, alkylation and catalytic cracking, were introduced in the 1930s and further increased the gasoline yield from a barrel of crude oil. In alkylation small molecules produced by thermal cracking are recombined in the presence of a catalyst. This produces branched molecules in the gasoline boiling range that have superior properties—for example, higher antiknock ratings—as a fuel for high-powered engines such as those used in today’s commercial planes.
In the catalytic-cracking process, the crude oil is cracked in the presence of a finely divided catalyst. This permits the refiner to produce many diverse hydrocarbons that can then be recombined by alkylation, isomerization, and catalytic reforming to produce high antiknock engine fuels and specialty chemicals. The production of these chemicals has given birth to the gigantic petrochemical industry, which turns out alcohols, detergents, synthetic rubber, glycerin, fertilizers, sulfur, solvents, and the feedstocks for the manufacture of drugs, nylon, plastics, paints, polyesters, food additives and supplements, explosives, dyes, and insulating materials. The petrochemical industry uses about 5 percent of the total supply of oil and gas in the United States.
D
Product Percentages
In 1920 a U.S. barrel of crude oil, containing 42 gallons, yielded 11 gallons of gasoline, 5.3 gallons of kerosene, 20.4 gallons of gas oil and distillates, and 5.3 gallons of heavier distillates. In recent years, by contrast, a barrel of crude oil has yielded almost 21 gallons of gasoline, 3 gallons of jet fuel, 9 gallons of gas oil and distillates, somewhat less than 4 gallons of lubricants, and 3 gallons of heavier residues.
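The short sketch below simply converts the gallon figures above into percentages of a 42-gallon barrel; the recent-years figures do not sum to 42 gallons because other products and processing gains are not itemized in the text:

# Arithmetic companion to the barrel yields quoted above (42-gallon U.S. barrel).
# It converts the gallon figures into percentages of the barrel.

BARREL_GALLONS = 42.0

YIELDS_1920 = {"gasoline": 11.0, "kerosene": 5.3,
               "gas oil and distillates": 20.4, "heavier distillates": 5.3}
YIELDS_RECENT = {"gasoline": 21.0, "jet fuel": 3.0,
                 "gas oil and distillates": 9.0, "lubricants": 4.0,
                 "heavier residues": 3.0}

def as_percentages(yields_gal, barrel=BARREL_GALLONS):
    return {product: 100.0 * gal / barrel for product, gal in yields_gal.items()}

if __name__ == "__main__":
    for label, yields in (("1920", YIELDS_1920), ("recent years", YIELDS_RECENT)):
        print(label)
        for product, pct in as_percentages(yields).items():
            print(f"  {product}: {pct:.1f}%")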
X
PETROLEUM ENGINEERING
The disciplines employed by exploration and petroleum engineers are drawn from virtually every field of science and engineering. Thus the exploration staffs include geologists who specialize in surface mapping in order to try to reconstruct the subsurface configuration of the various sedimentary strata that will afford clues to the presence of petroleum traps. Subsurface specialists then study drill cuttings and interpret data on the subsurface formations that is relayed to surface recorders from electrical, sonic, and nuclear logging devices lowered into the bore hole on a wire line. Seismologists interpret sophisticated signals returning to the surface from sound waves that are propagated through Earth’s crust. Geochemists study the transformation of organic matter and the means for detecting and predicting the occurrence of such matter in subsurface strata. In addition, physicists, chemists, biologists, and mathematicians all support the basic research and development of sophisticated exploration techniques.
Petroleum engineers are responsible for the development of discovered oil accumulations. They usually specialize in one of the important categories of production operation: drilling and surface facilities, petrophysical and geological analysis of the reservoir, reserve estimation and specification of optimal development practices, or production control and surveillance. Although many of these specialists have formal training as petroleum engineers, many others are drawn from the ranks of chemical, mechanical, electrical, and civil engineers; physicists, chemists, and mathematicians; and geologists.
The drilling engineer specifies and supervises the actual program by which a well will be bored into the Earth, the kind of drilling mud to be used, the way in which the steel casing that isolates the productive strata from all other subsurface strata will be cemented, and how the productive strata will be exposed to the well bore. The facilities-engineering specialists specify and design the surface equipment that must be installed to support the production operation, the well-head pumps, the field measurement and collection of produced fluids and gas separation systems, the storage tankage, the dehydration system for removing water from the produced oil, and the facilities for enhanced recovery programs.
The petrophysical and geological engineer, after interpreting the data supplied by analysis of cores and by various logging devices, develops a description of the reservoir rock and its permeability, porosity, and continuity. The reservoir engineer then develops the plan for the number and location of the wells to be drilled into the reservoir, the rates of production that can be sustained for optimum recovery, and the need for supplementary recovery technology. The reservoir engineer also estimates the productivity and ultimate recovery (reserves) that can be achieved from the reservoir, in terms of time, operating costs, and value of the crude oil produced.
Finally, the production engineer monitors the performance of the wells. The engineer recommends and implements remedial tasks such as fracturing, acidizing, deepening, adjusting gas to oil and water to oil ratios, and any other measures that will improve the economic performance of the reservoir.
XI
PRODUCTION VOLUMES AND RESERVES
Crude oil is perhaps the most useful and versatile raw material that has become available for exploitation. By 2003, the United States was using 7 billion barrels of petroleum per year, and worldwide consumption of petroleum was 29.3 billion barrels per year.
A
Reserves
The world’s technically recoverable reserves of crude oil—the amount of oil that experts are certain can be extracted from Earth without regard to cost—add up to about 1,000 billion barrels, of which some 73 billion barrels are in North America. However, only a small fraction of this can be extracted at current prices. Of the known oil reserves that can be profitably extracted at current prices, more than half are in the Middle East; only a small fraction are in North America.
B
Projections
It is likely that some additional discoveries of new reserves will be made in coming years, and new technologies will be developed that permit the recovery efficiency from already known resources to be increased. The supply of crude oil will at any rate extend into the early decades of the 21st century. Virtually no expectation exists among experts, however, that discoveries and inventions will extend the availability of cheap crude oil much beyond that period. For example, the Prudhoe Bay field on the North Slope of Alaska is the largest field ever discovered in the Western Hemisphere. The ultimate recovery of crude oil from this field is anticipated to be about 10 billion barrels, which is sufficient to supply the current needs of the United States for less than two years, yet it is the only field of that size discovered there in more than a century of exploration. Furthermore, drilling activity has not halted the steady decline of North American crude oil reserves that began during the 1970s.
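A quick check of this comparison, using only the figures given in this article (about 10 billion barrels of anticipated ultimate recovery and the roughly 7 billion barrels per year of U.S. consumption quoted earlier), is sketched below:

# Quick arithmetic check of the "less than two years" comparison above.

PRUDHOE_BAY_RECOVERY_BBL = 10e9    # barrels, figure from the text
US_CONSUMPTION_BBL_PER_YR = 7e9    # barrels per year, figure quoted earlier in the article

years = PRUDHOE_BAY_RECOVERY_BBL / US_CONSUMPTION_BBL_PER_YR
print(f"Prudhoe Bay would supply U.S. needs for about {years:.1f} years")  # roughly 1.4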
C
Alternatives
In light of the reserves available and the dismal projections, it is apparent that alternative energy sources will be required to sustain the civilized societies of the world in the future. The options are indeed few, however, when the massive energy requirements of the industrial world come to be appreciated. Commercial oil shale recovery and the production of a synthetic crude oil have yet to be demonstrated successfully, and serious questions exist as to the competitiveness of production costs and production volumes that can be achieved by these potential new sources.
Although alternative energy sources, such as geothermal energy, solar energy, and nuclear energy, hold much promise, none has proved an economically viable replacement for petroleum products.
XII
ENVIRONMENTAL EFFECTS OF USING PETROLEUM
Adding to the urgency of finding alternatives to petroleum and other fossil fuels is the problem of global warming. Petroleum combustion releases carbon dioxide, a greenhouse gas, into the atmosphere, and most atmospheric scientists believe that rising levels of greenhouse gases are driving climate change. These changes could cause numerous environmental problems, including disrupted weather patterns and polar ice cap melting. Disrupted weather patterns could lead to extensive drought and desertification. Polar ice cap melting could cause flooding and profound changes in ocean circulation. Many environmental organizations are urging governments and individuals to reduce greenhouse gas emissions by conserving energy with fuel-efficient technologies and other measures. In the United States most environmental groups have urged the U.S. government to ratify the Kyōto Protocol, a global treaty that sets a specific timetable for reducing greenhouse gas emissions. See also Ocean and Oceanography.
Drilling oil wells also creates environmental problems because the petroleum pumped up from deep reservoir rocks is often accompanied by large volumes of salt water. This brine contains numerous impurities, so it must either be injected back into the reservoir rocks or treated for safe surface disposal.
Petroleum usually must also be transported long distances by tanker or pipeline to reach a refinery. Transport of petroleum occasionally leads to accidental spills. Oil spills, especially in large volumes, can be detrimental to wildlife and habitat.

Contributed By:
Todd M. Doscher