Super Easy Reading 2nd 1 - WJ Compass

Transcripts

Unit 1 The Piltdown Man
One of the most famous (or infamous) frauds in the history of science is known as the Piltdown Man, the remains of a supposed primitive hominid found in 1912 by an amateur paleontologist named Charles Dawson and a professional paleontologist named Arthur Smith Woodward. In fact, two sets of these fossils were discovered between the years 1912 and 1915. The first was found in the Piltdown gravel pit in Sussex, England. While digging in the pit, the paleontologists found a humanlike skull with a jawbone similar to that of an ape. This finding appeared to be the remains of a “missing link,” the evolutionary step that connects apes and humans. The discoverers named the remains Eoanthropus dawsoni, or “Dawson’s Dawn Man,” but it was later commonly known as the Piltdown Man after the location of its discovery.
The Piltdown Man was an immediate sensation. He seemed to fit all of the criteria expected in the missing link—a mixture of human and ape with the noble brow of Homo sapiens and a primitive jaw. Best of all, he was British! However, the reactions to the findings were mixed. On the whole, British paleontologists were enthusiastic, but French and American paleontologists tended to be skeptical of the origins of the Piltdown Man, some objecting to its credibility quite vociferously. The objectors held that the jawbone and the skull were obviously from two different animals, and the fact they were discovered together was simply an accident of placement. At first, fraud wasn’t suspected. After all, Dawson and Woodward had no expectation of financial gain from the discovery. In addition, there had been other European finds related to the missing links of modern-day man, such as the Neanderthal, Cro-Magnon Man, and Heidelberg Man. So the existence of another “missing link” in the history of man’s evolution was not so surprising to some researchers.
However, some investigators remained doubtful of the origins of the Piltdown Man, continuing to express strong doubt that the skull and the jaw were from the same species. The perpetrators of the hoax solved this problem by planting a second jaw and a second skull at another nearby location. The subsequent report in 1915 of the discovery of “Piltdown Man II” converted many of the skeptics. Their reasoning was that one accident of placement was plausible, but two were not. So after this second finding, most of the doubters were satisfied. Moreover, some prominent British scientists failed to perform tests that they should have done and obstructed other scientists’ access to the fossils. Some historians believe that the discoverers of the Piltdown Man and these scientists may have been coconspirators in the hoax.
The fame of the Piltdown Man continued for forty years. It was featured in professional articles and books, in newspaper reports, and even in high school biology textbooks. In the decades from 1915 to 1950, there was, of course, some opposition from scientific critics who claimed that the skull was human but the jaw was that of an ape.
During the 1950s, the validity of the Piltdown Man discovery was questioned further. Several researchers concluded that almost all, if not all, of the fossils had been planted in the pit in modern times and that several of these items had even been fabricated. These scientific detectives, among them Joseph Weiner and Kenneth Oakley, discredited the Piltdown Man fossils with technical evidence showing that the skull belonged to an English woman and the jaw to an Asian orangutan. Chemical tests in 1953 further proved that everything was in fact fake. The discovered pieces had been cleverly stained, filed, and broken to make the Piltdown Man appear genuine.
Despite this, the question still remains: Who did it? Even though it has been over a century since the discovery of the Piltdown Man, there is still no certainty about who created one of the greatest hoaxes in the history of science.

Unit 1 The Curse of the Mummy
“Death shall come on swift wings to him who disturbs the peace of the King.” These are the words Howard Carter was reported to have seen carved in stone as he entered the tomb of King Tutankhamun, the famous pharaoh who ruled Egypt from 1333 to 1325 BCE. Egyptian sepulchers like that of King Tutankhamun contain curses to frighten those who would violate the tombs, and in what has come to be known as “the curse of the mummy,” it is believed that tragedy and death befall those who disturb the graves of Egyptian kings.
In the 1920s, the belief in a mummy’s curse was rekindled after the deaths of Carter’s colleagues. Years earlier, in 1891, Howard Carter, then a young archaeologist from England, went to Egypt to study ancient Egyptian culture and to try to locate the unopened tomb of an ancient king. Because Egyptian kings were buried with gold and other valuable items, by the end of the 19th century, most tombs in the Valley of the Kings had been plundered. Therefore, many archaeologists believed that there was nothing left to excavate. Carter, however, believed there was at least one more undiscovered tomb, and he wanted to find it.
The great burial chambers in the Valley of the Kings contained the wrapped bodies of pharaohs, as well as items Egyptians believed would aid the kings in their next life. Before being buried for the afterlife, the bodies of the kings were carefully preserved by a process of embalming called mummification. When a body was mummified, the brain and other organs were removed and stored in large jars; then the skin, muscles, and bones were covered in a special salt for three months. At the end of three months, after the salt had absorbed the water from the body, the body was wrapped in pieces of cotton soaked in pine resin (the liquid from pine trees). Through this process, the bodies of Egyptian kings have been preserved for thousands of years, and bodies that undergo this embalming process are called mummies.
After years of working in Egypt studying various sites, and still convinced he would find an unopened tomb, Howard Carter approached wealthy British businessman Lord Carnarvon, who agreed to finance the search. After five unsuccessful years, however, Lord Carnarvon threatened to withdraw his patronage. Carnarvon gave Carter just one more year within which to make a discovery. Returning to Egypt, Carter brought with him a canary, which was later believed to have been the harbinger of both success and disaster. It was in that year, 1922, that Carter discovered the tomb of King Tutankhamun, but days before the discovery Carter’s canary was killed by a cobra—once symbolic of the pharaohs.
In November of 1922, Howard Carter cut a hole in the stone door that stood in front of Tutankhamun’s tomb. With him were Lord Carnarvon and twenty others, including archaeologists, workers, and servants. Upon opening the tomb, they found wonderful treasures, including a solid gold mask that covered the face and upper torso of King Tutankhamun’s body. Soon, however, their celebrations were dampened by a number of tragedies.
Lord Carnarvon died in Egypt a few months after the opening of the tomb. At first, doctors could not identify the cause of his death, but they finally determined it to be pneumonia and blood poisoning caused by an infected mosquito bite. The British press reported that at the exact moment of his death, back in England, Lord Carnarvon’s dog howled at the moon and then died. Rumors of a mummy’s curse shook the British public when the mummy of Tutankhamun was unwrapped and a wound was discovered on the pharaoh’s left cheek in the same spot as the insect bite that had caused Lord Carnarvon’s death.
By 1929, London newspapers reported that eleven people connected with the discovery of King Tutankhamun’s tomb had died of unnatural causes, including relatives of Lord Carnarvon, Carter’s personal secretary Richard Bethell, and Bethell’s father. The latter leapt to his death, leaving a suicide note alluding to “horrors” he had seen. Did the mummy’s curse cause these deaths? If so, why didn’t the man who opened the tomb succumb to the curse of the mummy? Howard Carter, who never believed in the curse, survived into his mid-60s, dying of apparently natural causes in 1939.

Unit 2 Is the Internet Ruining Your Memory?
A prominent educator once warned that a popular new technology was becoming a crutch, with a negative impact on his students’ memories. That educator was Socrates, and the new technology he disliked was writing—on wax tablets and papyrus scrolls, to be exact. The great orators of his time delivered memorized speeches without notes. Socrates saw writing as a threat to that tradition, and by extension, those mental faculties. Or so reported his student Plato in Phaedrus, anyway. True to his convictions, Socrates himself stubbornly refused to write his thoughts down.
It’s no great leap, therefore, to suppose Socrates would similarly disapprove of the Internet today. His attitude is echoed in growing concerns that the Internet is changing our brains. Many of these concerns center on the so-called “Google Effect,” which some researchers and a growing number of journalists believe may have an adverse effect on our memories.
At the heart of specific concerns about memory is a study authored by psychologist Betsy Sparrow. It was published in 2011 as “Google Effects on Memory” in the journal Science. In experiments at Harvard University, Sparrow’s team found that subjects exposed to detailed, trivial information were more likely to forget it if told they could look it up online later. Subjects also tended to include the Internet among their own cognitive tools. It was as if the computer were part of their intellectual abilities. Hence, Sparrow concluded, the Internet has become a central player in our “transactive memory.” This is the sharing of information retention among persons—or, in this case, digital networks—in a group. In short, Google has become everyone’s brainy friend, the “walking encyclopedia.” Sparrow hypothesized this may have far-reaching effects on the way we think, and perhaps even the physiology of our brains.
Critics of the study and of many of the other “Google Effect” articles that followed it point out what they see as significant flaws. The first is the questionable validity of the assumption that forgetting something because we can google it later is any different from forgetting a phone number immediately after writing it down. The same study showed frequent Internet users were adept at remembering where to find information, if not the information itself. Moreover, Sparrow herself admits that transactive memory is nothing new. Long before Google, we had libraries with librarians and card catalogues to direct our searches.
Is there proof that our memories are in fact getting worse because of Internet search engines, or that relying on them rather than the library has demonstrable physiological effects? So far, cognitive neuroscience has revealed no such data. And in the US, a country with one of the highest Internet usage rates, average IQ scores continue to steadily rise three points per decade. Standard tests of IQ measure fluid working memory and long-term retention. It may be too soon for a quantifiable negative effect to emerge, but until it does, the sticklers for evidence will likely remain unconvinced. “Such panics often fail basic reality checks,” the Harvard University research psychologist Steven Pinker states in a New York Times article on the subject. “The effects of consuming electronic media are … likely to be far more limited than the panic implies.”
What we do know about the plasticity—or changeability—of human memory should make us think twice about placing it in such high esteem. Recent findings in neuroscience have proven that we alter memories every time we access them. Therefore, even the most accurate memory is subject to plasticity; over time, connected memories will change each other. This highlights the difference between accessibility and accuracy: some humans may recall information well, but plasticity will affect the accuracy of that information over time. The Internet, by contrast, is not subject to plasticity. That does not mean the information is static, however. It’s constantly being fact-checked and updated, with a cumulative effect that’s mostly positive, low-quality Web content notwithstanding.
There may be sociological consequences of Google’s power as the main “keeper” of information. But Internet users aren’t really consulting Google. They’re using it as a tool to access the same scientific journals and respected news sources they might find in the library—if they had all sorts of spare time.

Unit 2 The Robot’s First Law
In his classic 1950 short story collection, I, Robot, Isaac Asimov introduced his Three Laws of Robotics. The first law is that robots may not harm or allow harm to a human. For some, imagining what might happen if a robot broke this law feeds our deepest fears about artificial intelligence. AI has not yet advanced to the point where this is an issue, and may never do so. But of more concern presently is the possibility of accidents; thus, in engineering humanoid robots with autonomy of movement, the number-one goal is safety.
Several basic safety challenges have to do with controlling robots’ physicality and gross motor skills. Weight and stability are two aspects that could have disastrous consequences in the wrong combinations: imagine a robot that weighs several hundred pounds falling over onto a small child! With metallic parts and movement systems using multiple motors and hydraulics, the first autonomous humanoid robots were extremely heavy. In recent decades, however, carbon fiber, plastic parts, and elastic systems have come into use, as have engineering solutions that use motors for more than one purpose, or harness more natural kinetic energy in place of motorized power. Overall weights have dropped considerably as a result.
In their early attempts at humanlike motion, mechanical engineers quickly learned that one of their biggest obstacles was something we take for granted: balance. Humans have highly sophisticated genetic wiring for equilibrium and 244 different degrees of freedom (possible directions of movement for all our joints). Early robotic engineers had to start from scratch. It was not difficult to make a machine take a step; the trick was keeping it from falling on its face as it did so.
To solve this problem, engineers needed a simplified mechanical equivalent of the human system for balance. They arrived at two devices. The first is the gyroscope, a wheel or disc that maintains its orientation to gravity when spinning rapidly. Robotic engineers now use the mechanical force of internal gyroscopes to add general stability. They also use gyroscopes to help robots sense the position of their own “bodies” in relation to the directional pull of gravity, allowing them to maintain balance.
The second device is an accelerometer. Accelerometers use sensors to detect the force of motion (g-force) in several directions. These devices orient the images in smart phones to always remain upright and are integral to missile guidance systems. In robots they can provide more detailed information about external forces, allowing them to compensate for a wide range of situations, like walking uphill or even being pushed. The latest generation of the French robot NAO, for instance, can extend its arms to protect itself when falling and then shift its weight to stand up again—actions that require a sophisticated knowledge of its body in relation to gravity and space.
In human terms, fine motor skills are the coordinated movements that make up our manual dexterity and hand-eye coordination. For robots to interact with humans in human environments—as caregivers, for instance—they must be able to perform certain tasks with skills comparable to humans’. You need only imagine a robot nurse wildly stabbing at a patient with a syringe to understand why. As with walking, the basic range of motion and actuation of movement was not particularly difficult. The first industrial robots that appeared in the 1960s were equipped with simple, claw-like gripping mechanisms with only two positions—open and closed—and a constant level of pressure that couldn’t be adjusted. Since that time, engineers have added joints and digits, eventually creating fully articulated hands based on human models. The problem has been varying the amount of pressure for specific tasks; the pressure necessary to grip a heavy tool would crush a Styrofoam cup of coffee, for instance.
But we’ve come a long way from the earliest robots. Dennis Hong leads a robotics research team at UCLA. One of their recently developed experimental prototypes is RAPHaEL (Robotic Air-Powered Hand with Elastic Ligaments). As the name suggests, the hand is operated by air pressure and works much the same way as a human hand. Using a sophisticated system for calibrating air pressure, RAPHaEL can crush an empty aluminum can or hold a light bulb without breaking it.

Unit 3 The Uncommon Cold
Catching colds is a common complaint for people all over the world. While even a nasty cold won’t kill you, no one enjoys the accompanying symptoms: a sore, scratchy throat, runny nose, constant sneezing, and headaches. Colds are uncomfortable and often inconvenient, usually lasting about seven days but often lingering for up to fourteen days. On average, human adults contract between two and five colds annually, while children catch as many as six to ten.
It’s no surprise that developing and selling cold medication has become big business for pharmaceutical companies. Each year, consumers spend billions of dollars on medicines to alleviate this recurring problem. From over-the-counter remedies to expensive prescription products, they are more than happy to hand over money for something that could help accelerate a recovery. The irony is that most available medicines are only palliatives, meaning they may help relieve cold symptoms but do not cure the illness itself.
The fact is that currently there is no cure available for the common cold. Not even a suitable vaccine has been developed. In the case of influenza, commonly known as the flu, vaccines do exist, and getting one yearly is recommended. By contrast, the reason that a cold is so hard to vaccinate against or cure is that it isn’t caused by any single virus. There are actually about 200 viruses responsible for our cold symptoms. In other words, a cold may not necessarily be as “common” as you imagine.
Each cold virus carries specific antigens, substances that trigger immune responses. Immune responses cause our bodies to create protective proteins called antibodies to fight off harmful diseases. So far it has proved impossible to create one vaccine that can produce the disparate antibodies required to fight so many different antigens. Another problem is that cold viruses have the ability to change their molecular structure—in other words, to undergo mutations. That means that even if a suitable vaccine were developed, cold viruses could alter in a relatively short space of time, making the vaccine obsolete. Even flu vaccines, which target a specific, known virus, must be updated frequently for this reason.
In the last two decades, medical research has concentrated on developing medicines to fight a family of viruses called the rhinoviruses, which are responsible for causing about thirty-five percent of all colds. In the late 1990s, researchers seemed to have some initial success with an anti-viral molecule called BIRR4. This substance appeared to prevent rhinoviruses from binding with cells in our noses, thus blocking an infection—if taken just before getting sick. Unfortunately, people don’t know when they are about to catch a cold, so they wouldn’t have known when to take BIRR4. As a result, research into the product was dropped in 2000.
Between 1997 and 2001, a company called ViroPharma tried to get approval to market an antiviral drug called pleconaril which worked in a similar way to BIRR4. Studies indicated that pleconaril prevented rhinoviruses from attaching themselves to human cells by binding with the outer shell of the viral molecules. An application to commercialize an oral form of pleconaril was turned down by the Food and Drug Administration in the USA. The reason given was that the safety and efficacy of the drug had not been proven in a convincing manner.
ViroPharma decided to carry on with research and developed a pleconaril nasal spray. They believed this to be an improved version, which could be used to combat colds and asthma. In 2003, the pharmaceutical giant Schering-Plough entered into an agreement with ViroPharma giving Schering-Plough an option to license and market the spray to prevent cold viruses from exacerbating asthma. By 2007, pleconaril spray had undergone its second phase of clinical trials. As of 2015, the results of these trials have not been released, but according to Schering-Plough, pleconaril is still under development.
For now, perhaps the safest way to fight a cold is simply to follow conventional wisdom: get plenty of bed rest, take over-the-counter remedies to combat symptoms, and drink plenty of fluids. If you do these things, your cold should be gone in seven days. Or do absolutely nothing, and it should be gone within a couple of weeks.

Unit 3 Gene Therapy
The field of molecular genetics is progressing at a rapid pace, with our ability to manipulate genes and understand the complex processes involved in genetics developing on almost a daily basis. Understandably, people have fears about this powerful technology and are worried that we may use it in ways that change our humanity. In particular, gene therapy is one aspect of molecular genetics that is causing a lot of concern. Gene therapy is defined as a way of curing or preventing disease by changing the behavior of a person’s genes. Currently, gene therapy is still in its early stages, with most of it still experimental. There are actually two types of gene therapy: somatic and germline. Somatic gene therapy targets genes in the soma, or body cells. In this way, the genome of the recipient is changed, but this change is not passed on to the next generation. For example, experimental trials in treating cystic fibrosis treat the genes only in the cells of the lungs, and, consequently, the patient’s children would still be at risk of the disease.
In germline gene therapy, genetic changes are made to reproductive cells. The egg or sperm cells of the patient are genetically changed with the goal of passing on these changes to his or her children. In practice, this would mean changing the fertilized egg, the embryo, so that the genetic changes would be reproduced in every cell of the future adult, including the reproductive cells. In fact, germline genetic engineering is not being actively investigated in humans or even large animals at this point. Thus far, the procedures are still too risky and undeveloped. Experimentation has occurred with mice in which genes were added or deleted and the effects have been observed to help better understand gene functions.
Many people falsely assume that germline genetic engineering is already performed all the time, due to news reports about genetic manipulation. But in fact, these reports are either of somatic gene therapy trials or of cloning, which in itself does not alter any genes but merely copies them. Furthermore, even in the field of somatic gene therapy, many factors have prevented researchers from developing successful techniques.
The first problem is in the gene delivery tool—that is, how a new gene is inserted into the body. Scientists have tried to remove the disease-causing genes and insert healthy genes for therapy instead. Most vehicles used these days are viruses. Although the viruses can be effective, other problems may arise. Often, the body reacts against the virus in an immune and inflammatory response. Additionally, the viruses don’t always target the right area.
Another obstacle to successful gene therapy is our limited understanding of gene function. Scientists don’t know all the functions of our genes and only know some of the genes involved in genetic diseases. Also, many of the genes involved in genetic diseases may have more than one function. For example, sickle cell anemia is a genetic disease that is caused by an error in the gene for hemoglobin, the oxygen-carrying protein in our blood. A child with two copies of this faulty gene will have this disease, but a child with only one copy of the faulty gene will not. The prevalence of this disease is greatest in Africa, where there is also a deadly form of malaria.
Studies have reported that in areas where malaria is endemic, children with a single copy of the sickle cell gene had a survival advantage over children who inherited two healthy genes. They went on to grow up and pass on their genes to their own children, conferring on them their resistance to malaria. Initial studies have suggested that the gene that causes the defect in sickle cell hemoglobin also produces an enzyme that repels plasmodium—the pathogen that causes malaria. The point is that this secondary gene function was discovered quite by accident.
Finally, environmental factors play a pivotal role in the expression of many diseases. This is illustrated in studies with identical twins—two people with identical genes—who have not developed the same diseases. Epigenetics is an entire sub-field that has developed to study how factors outside of our DNA can interact with genetic traits. But as environmental factors are much more difficult to pinpoint, progress in epigenetics tends to be slow.

Unit 4 Teenage Runaways
Mark Twain’s book The Adventures of Huckleberry Finn is considered one of the greatest works of American literature. It is the story of a boy who runs away from home, in part because of his abusive father. In keeping with the American concept of individualism, the boy’s experiences as a runaway, both good and bad, help him grow as a person and establish his independence and maturity. The plight of modern runaways, however, differs greatly from Twain’s narrative.
A runaway, or “youth in crisis,” is a child or teen who chooses to leave home without parental consent; most are unprepared for such independence. According to the Children’s Defense Fund, as many as 7,000 young Americans run away every day. Seventy-five percent of these youths depend on friends or relatives for food and shelter. For the remaining twenty-five percent, life on the street is anything but romantic. In fact, it is even prohibited by law in some parts of the United States and other countries. Habitual runaways who are under the age of 18 may be sent to a facility for wards of the state, or even juvenile detention centers if they are caught breaking other laws, such as those against vagrancy, trespassing, or petty theft. Many runaways become involved in crime as a result of their circumstances; often, the only ones willing to help them have predatory motives. The trauma that teenagers face in this situation would be difficult enough without these added troubles from people around them.
Regardless of whether they are caught for minor crimes, homeless life is unpleasant and dangerous. In the United States, for example, social services for runaways tend to be underfunded and understaffed. Runaways often become the victims of violence or theft at insufficiently monitored shelters—even more so on the streets. And homelessness is often accompanied by health threats, such as hygiene issues, poor nutrition, food poisoning, and exposure to cold.
The rates of substance abuse among runaways are far above national averages. Alcohol use, for instance, is at seventy-nine percent for US runaways, compared with thirty-five percent among their non-runaway peers. This is in part because many runaways began with addictions that preceded and sometimes precipitated their leaving home. Young girls are particularly at risk for rape, sexually transmitted diseases like AIDS, and pregnancy. And the longer a teenager remains on the streets, the less likely he or she will be to go to college or learn a trade later on. While running away may seem to be an escape from an intolerable situation, homeless life provides neither shelter nor relief.
For runaways, the motivation behind the act is usually less the assertion of free will than the urgent need to escape, as they are almost always escaping from something or someone. The most commonly cited reason for running away, at thirty percent of youths polled by the National Runaway Safeline (NRS), is family dynamics. One or both parents may suffer from alcoholism or some other addiction. Youths from families with one or more parents who have substance abuse problems are particularly at risk of neglect or abuse, whether physical or emotional. In situations of chronic abuse, running away may seem reasonable. While a teenager’s desire to flee an abusive home life is understandable, there are cases where the source of motivation is less obvious.
Teenagers occasionally run away from stable households, too. When contacted, youths in crisis also cite problems with peers, economic problems, or psychological problems. According to data collected by the US National Institutes of Health, homeless and runaway youth are six times more likely than their non-runaway peers of the same age to meet the diagnostic criteria for at least two mental disorders. They are seventeen times more likely to meet the criteria for one disorder.
Runaways who require psychiatric treatment, which in most countries the state is not obliged to provide, present a unique problem. If the family cannot provide this sort of treatment, it is likely to lead to a vicious circle. While improved social programs can help in keeping runaways physically safe, this alone does nothing to address psychological issues. Although there are more questions than answers about appropriate treatment options, one thing is certain: runaways need more help than they are receiving.

Unit 4 Tough on Drugs
The widespread sale and use of illegal drugs is a major challenge to governments throughout the world. A UN report estimated that the total value of the international illegal drug trade is $400 billion per year. This is larger than the value of international trade in iron and steel and motor vehicles. And the trade is growing. In the war on drugs, several countries, including Singapore, have adopted a “zero tolerance” law regarding drug possession and trafficking.
Certainly, part of Singapore’s approach toward dealing with the use of illegal drugs is related to the government’s intense concern over national security since gaining independence from Great Britain. The political system that has developed in Singapore depends on the continued use of powers established to deal with communist threats in the Southeast Asian peninsula in the 1950s.
A key instrument in wielding this power is the Internal Security Act (ISA). The ISA was created in 1960 and modeled on the British government’s Preservation of Public Security Ordinance of 1955. The ISA has remained part of Singapore’s domestic laws since that time. Though the country has been accused of denying basic human rights to its people, there has been little serious challenge to Singapore’s legal practices due to other instruments of state control. These measures include controls over the freedom of the press, restrictions on trade unions and associations, and the abolition of jury trials.
In addition to suppressing political dissent by defining it as a threat to Singapore’s national security, the ISA allows citizens to be arrested without warrant and detained without trial if they are “suspected of criminal activity.” Such criminal activity includes, of course, the sale or use of illegal drugs. The government agency in charge of dealing with drug users is the Central Narcotics Bureau (CNB), which employs Singapore’s Misuse of Drugs Act to require anyone to submit to a urine test for drugs. A positive drug test is sufficient justification for detention in a Drug Rehabilitation Center (DRC) for six months. Singapore’s DRCs are run by the Prisons Department, which does not subscribe to the idea that drug addiction is a medical problem. Rather, drug addiction is seen as a social and behavioral problem. Therefore, addicts are held responsible for the consequences of their own actions.
From 1975 to 2012, the penalty in Singapore for anyone caught trafficking in illegal drugs was death. As of 2012, the death penalty is no longer mandatory (but remains enforceable), and life sentences are now the norm. In addition to harsh penalties for drug trafficking, Singaporean law also imposes a “presumption of intent” to be a drug trafficker in all cases in which the amount of drugs in the possession of a person exceeds a certain limit, such as 100 grams of opium or three grams of cocaine.
Observers from other countries with common-law systems tend to take for granted that a person is “presumed innocent until proven guilty beyond a reasonable doubt.” In drug trafficking cases in countries where the presumption of innocence is mandated, the prosecution has to prove either the physical act of trafficking or the intent to traffic the drug. However, under Singaporean law, the prosecution only has to prove the possession of the drug by the accused. The burden of proof is on the accused to show there was no intent to distribute the drug. Putting this burden on the accused makes it much harder to successfully defend the case.
Singapore’s government justifies these harsh laws as one of the few ways to keep drugs out of the country. Singapore is in a unique geographical position as an air, land, and sea hub for Southeast Asia. This fact makes it particularly susceptible to becoming a transit point for drug traffickers. In addition, according to supporters of this law, it is extremely hard, if not impossible, to prove intent to traffic drugs without the presumption of intent followed in Singaporean law.

Unit 5 Deforestation
It would be difficult to imagine life without the beauty and richness of forests. But scientists warn we cannot take our forests for granted. By some estimates, deforestation has already resulted in the loss of as much as eighty percent of the natural forests of the world. Currently, deforestation is a global problem, affecting wilderness regions such as the temperate rainforests of the Pacific Northwest area of the US and Canada’s British Columbia, and more seriously, the tropical rainforests of Central and South America, Africa, Southeast Asia, and Australia.
Deforestation occurs for many reasons. In the temperate rainforests of the US and Canada, large areas of forest have been cleared for logging and urban expansion. In tropical rainforests, one of the most common reasons for deforestation aside from logging is agriculture. Because the soil in many tropical regions is often nutrient-poor, and since ninety percent of nutrients in tropical forests are found in the vegetation and not in the soil, many farmers practice an agricultural method known as slash and burn.
This method consists of cutting down the trees of an area in the rainforest and burning them to release their rich nutrients into the soil. This method is sustainable only if the population density does not exceed four people per square kilometer of land. When this is the case, each farm has enough land to let sections of it lie fallow for ten years or more, which is enough time for the land to renew itself. In recent years, however, the population density has often reached three times this sustainable limit. This results in land being used in a more intensive manner with no chance to recover. Under these conditions, slash-and-burn farming becomes only a temporary solution. Within two or three years, the soil becomes depleted and the farmer must repeat the slash-and-burn process elsewhere.
Deforestation causes changes in the earth’s atmosphere. For example, deforestation in tropical areas disrupts the cycle of rain and evaporation by removing the moist canopy of foliage that trees provide. Undisturbed, this canopy traps about twenty percent of the precipitation in the area; when this moisture evaporates, it causes clouds to form, promoting future precipitation. When trees are cleared away, the canopy is lost and the cycle is disturbed. Rainfall sinks into the earth rather than evaporating into the air, leading to a drier local environment. This can cause the creation of deserts, ultimately raising atmospheric temperatures.
Deforestation is also partially responsible for rising atmospheric levels of carbon dioxide (CO2). Forests normally decrease the amount of carbon dioxide because the trees consume it and release oxygen. Less forest, therefore, means more CO2 in the atmosphere, especially when trees are burned, which releases even more CO2. About 1.6 billion metric tons of CO2 enter the atmosphere this way every year. For comparison, the burning of fossil fuels releases approximately 6 billion metric tons of CO2 per year. These rising levels are a cause for concern because they are expected to be responsible for fifteen percent of the increase in global temperatures up through 2025.
In addition, deforestation causes the extinction of thousands of species of wildlife annually. It is estimated that as many as 80 million species of animals and plants exist worldwide, but only about 1.5 million have been studied and named by scientists. Tropical rainforests, which cover about seven percent of the earth's land, are home to over half of these plant and animal species. If the rainforests disappear, many of these species will become extinct. This means many species will vanish before we can discover them.
Is it possible to reverse the devastating effects of deforestation? Many experts think so, but it will require a concerted international effort to protect the remaining forests. It will also require increased awareness, more sustainable consumer habits, and solutions that offer local economies financial alternatives to cutting down their forests.

Unit 5 Genetically Modified Organisms (GMOs)
Genetically modified crops are plants that have been altered by adding genes from other organisms into their DNA. Such modifications might include the addition of genetic material from bacteria, animals, or other plants to enhance desired traits or eradicate negative qualities. Genetic modification can make crops more resistant to bad weather or less reliant on pesticides. Many agricultural scientists see genetic modification for helpful traits—especially, in the future, enhanced nutrition—as the most viable solution to the threat of global food supply problems.
Yet GM organisms, commonly labeled GMOs, have been controversial since their introduction to consumers. Many people are worried about potential health risks. Although tests confirm their safety, many remain doubtful that scientists can detect potential long-term effects. And many critics still believe GMOs have been proven to harm animals, without realizing that the studies they refer to have been discredited.
When it comes to GMOs, a significant divide exists between scientists and the public. Popular anti-GMO feeling has been fueled by environmental activist groups like Greenpeace, which opposes all genetic modification of crops. Many common but false claims about GMO safety issues began with a 2012 paper by the disgraced French biologist Gilles-Éric Séralini. His study reported a high incidence of tumors in rats that were fed GM corn engineered by Monsanto, the world’s largest agrochemical company and the center of anti-GMO criticism. Séralini chairs an anti-GMO organization known for organizing protests against Monsanto.
The popular media reported this news with sensationalized headlines linking GMOs to cancer. But the actual paper was immediately criticized within the scientific community. Among other serious errors, Séralini had used a strain of lab rat that is naturally prone to tumors: one separate, peer-reviewed study found that as much as eighty percent of this strain develops tumors no matter what it eats. Six French national academies of science published critical reviews of the study and of the journal that published it. The journal soon retracted the article, but not before Séralini announced the release of his book on the subject. Many of his peers suspected that his true goal, all along, was publicity.
Reviews of the scientific literature from the thirty years of GMOs in the food supply have found no adverse health effects; at most, they have found statistically insignificant or inconclusive reports of minor problems in lab animals. One meta-study, published in 2013, reviewed over 1,700 individual studies. The authors failed to find a single credible example of harm to humans or animals caused by GM crops. National regulatory agencies in the US, Japan, and throughout Western Europe, along with the World Health Organization, have concluded GMOs are safe.
And sometimes they are the safer alternative. Public opposition to GMOs, for instance, has forced GM potato brands off the market, to be replaced by conventional varieties requiring larger amounts of a dangerous pesticide. Contrary to public perception, even organic crops are grown with pesticides. Organic farmers are merely limited to naturally occurring substances. Yet many of these pesticides are potentially dangerous—such as rotenone, which has been linked to Parkinson’s disease. And to the frustration of many molecular biologists, many organic farmers spray crops with the organic pesticide Bt (a protein toxic to specific insects). Meanwhile, the same farmers oppose the addition of the same Bt protein through genetic modification, which greatly reduces the amount of spray necessary to protect the crop!
A 2015 poll of members of the American Association for the Advancement of Science (AAAS) found that eighty-eight percent of its members have concluded that GMOs are safe for humans—compared with just thirty-seven percent of the public. According to the AAAS, that is the largest gap in attitudes between scientists and the public on any major issue, including human-caused global warming. Why is there such a gap? The answer may be found in the philosophical questions surrounding GMOs. Many people think it is wrong for scientists to “play God,” and that large, unpopular corporations like Monsanto should not be allowed to own patents on living things. But most scientists would say that these are ethical and political positions which do not belong in a scientific debate about safety.

Unit 6 Lie Detectors
Polygraphs, or instruments used to discover whether a person is lying, are commonly called lie detectors. Polygraphs are utilized in courts, in the government, and in private businesses. However, they are controversial. Many people don’t believe that polygraphs can accurately identify whether an individual is lying, while others believe that polygraphs are simply tools to intimidate people into confessing guilt, regardless of whether they’re really lying.
The older, analog polygraph machine consists of three styluses—or pens—and a roll of paper that slides across the machine. The styluses, which draw lines on the paper and record changes in the subject’s condition, are connected to wires, which are in turn connected to the test subject. A mostly straight line indicates there is minimal variation in the subject’s body. A jagged line with multiple peaks and valleys illustrates a large amount of variation. Modern digital polygraphs are interpreted in a fashion identical to analog polygraphs; however, instead of paper, the lines are displayed on a computer monitor.
Polygraph tests are interviews. Examiners ask subjects questions, and when subjects answer, their body reactions are recorded by the polygraph. During the interviews, examiners document the behavior of the lines on the paper subsequent to each question. Later, examiners use these results to assess the likelihood that the subject is telling the truth. When lying, individuals are often apprehensive about being caught, and this uneasiness produces stress. Stress triggers elevated heart and breathing rates and an increase in perspiration, all of which are then detected by the polygraph.
To detect changes in respiration, rubber tubes filled with air are positioned around subjects’ torsos, and as subjects breathe, air in the tubes is compressed and the tubes expand. When the tubes expand, they push against a part of the polygraph called the bellows. The greater the expansion of the tubes, the greater the contraction of the bellows, which moves an arm on the polygraph. Rapid breathing results in an irregular line on the test.
To measure heart rate, subjects wear bands around their wrists. A tube connects the band to a second arm on the polygraph. As blood travels through the wrist, it creates very small sounds, and when subjects experience stress, the sounds become louder and faster. These sounds move the air in the tube, and the air pushes the bellows, moving the arm on the polygraph. A stronger, faster heartbeat results in a more jagged line.
Finally, the polygraph measures perspiration on the fingertips with metal plates called galvanometers, which are attached to two fingers. The galvanometers measure the skin’s conduction of electricity. When people perspire, the skin becomes wet, and wetter skin conducts more electricity. The galvanometers are connected to the third arm of the polygraph, and as the skin’s electrical conduction increases, the movement of the arm increases. By comparing the movements of all three arms, the examiner can identify increases in stress in response to certain questions.
Many people debate the reliability of polygraphs. They believe that although these tests measure variations in the body associated with stress, these variations could be the result of other kinds of emotions. While some people experience minimal to no stress when lying, some honest people may experience intense stress from the exam itself. Because of this, it is possible for the examiner to wrongly deduce that the subject is lying, and the subject might then be subjected to unfair punishment. Even the companies that produce polygraphs acknowledge that the machines cannot actually detect lies—only physical responses that suggest a person’s answers may not be trustworthy.
Because of these criticisms, polygraph data cannot be used as evidence in American courts. Also, American law prohibits private companies from forcing employees to take polygraph tests. Despite this, both the American government and businesses continue to utilize polygraphs. In fact, workers for government agencies can be fired if they fail—or refuse to take—a polygraph test. Various opponents of polygraphs have concluded from this fact that the tests are used to intimidate workers, to make them confess to wrongdoing or prevent them from complaining about company policies. Regardless, it’s clear that polygraph tests will continue to affect the lives of employees.

Unit 6 Patents
After years of backbreaking work and research, you have finally invented something that will solve many people’s problems and make the world a better place. Now what do you do? Your next step, if you are smart, is to obtain a patent. Patents are agreements between inventors and the government that give inventors ownership of their creations for a certain amount of time.
Patents also allow a person to own the idea for a not-yet-completed invention. US patent law states that an invention is “any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof.” Basically, anything that someone creates can be protected from theft by patenting.
Most things people use every day are or have been protected by patents. Examples include the first commercially viable electric light, which was patented by Thomas Edison in 1879; the drug aspirin, which was patented by Felix Hoffmann in 1899; and the rubber band, patented by Stephen Perry in 1845. Less obvious inventions have also been patented, such as special motors, gears, and machinery used in manufacturing settings. Strangely enough, living things can also be patented, as illustrated in 1988, when two Harvard University doctors were issued the first-ever patent for a new animal life form—a genetically altered mouse.
But not everything can be patented. One must first have an original idea for an invention. The invention does not have to be something that has never been thought of before, as most patents are for new adaptations or improvements on existing technology and not wholly new items. For example, the camcorder is a combination of the video camera and the tape recorder, but it was a new idea to combine them into one machine. Theorems regarding “natural laws” cannot be patented. Even though the principle of relativity was developed by Albert Einstein, he could not patent it because he did not invent it—it was already part of the natural world. Patents are important because they protect a person’s idea from others who might want to steal it. “If you work hard inventing a new machine or procedure, then you should be able to benefit from it financially,” says Timothy Elkins, a patent attorney. “You shouldn’t do the work and then have someone make money off of your idea.”
Patents also help to share ideas and technological information with other inventors and researchers. A description of all patented inventions is put into a database, which can be accessed by other people in the same field of interest. Patents also help to stimulate research by large companies. Finally, companies, especially large ones like IBM, Samsung, and Sony, can make a lot of money by holding patents on certain inventions and charging other companies money, called royalties, to use their ideas.
Patenting an idea can be quite time-consuming and expensive. After an inventor finishes creating a new invention, the next step is to fill out patent paperwork, which can be a long and complicated process. Usually, this work is handled by a patent attorney, a lawyer whose specialty is patents. Patent attorneys can be costly, but they are almost always necessary because they make sure the patents are filed correctly. After all the paperwork has been reviewed by the inventor and the inventor’s attorney, it is submitted to the patent office along with a registration fee. The patent is sometimes rejected and must be resubmitted, and in this way, the process can be drawn out for many years. In fact, it often takes up to five years to complete the patent process.
Once the patent process is completed, an inventor’s idea still may not be completely protected. Patents only apply in the country in which they are issued. So if a machine is patented in the US, the patent only applies to the US. Unless patents are applied for in other countries as well, there is nothing to stop imitators around the world from copying that idea.

Unit 7 Ever-Evolving English
Like all languages, English has undergone many fundamental changes over time. The first form of English, now referred to as Old English (OE) or Anglo-Saxon, was first spoken in England and parts of Scotland. It was spoken between the middle of the fifth and twelfth centuries and was characterized by a comparatively limited vocabulary, as well as numerous endings that marked the gender, number, and case of words.
When the Normans invaded the British Isles in 1066, English came under the influence of the French-speaking conquerors, who became the new aristocracy. The class differences of this period are still reflected in the language. The words “beef,” “pork,” and “poultry,” for instance, all come from French, yet the words “cow,” “pig,” and “chicken” all have OE origins. This reveals who was taking care of these expensive animals and who was eating them: as one popular saying puts it, “French for the table, English for the stable.” Besides adding many French words, the language also gradually lost many of its OE endings. This new form of English, referred to as Middle English (ME), developed from 1066 to around 1500.
Meanwhile, between 1200 and 1600 a major alteration occurred in the way people pronounced many vowel sounds, particularly the long vowels. The completion of this change, dubbed “the Great Vowel Shift,” marks the birth of the modern English language.
Modern English developed rapidly during the reign of Elizabeth I (1558-1603), which was also the period when the great playwright William Shakespeare lived and wrote. His work had a profound influence on the language, introducing many new words and phrases that we now take for granted— like “uncomfortable,” which comes from Romeo and Juliet. Modern English was characterized by more active attempts at standardization of English usage and spelling. During the 1600s and 1700s, many writers called for English to follow more regular patterns, as French and Latin did. They also proposed that an English academy be created. No such academy was ever established, but numerous grammar texts and dictionaries started appearing.
The first official book of grammar rules was written by William Loughton in 1734, and in 1761 Joseph Priestley wrote and published The Rudiments of English Grammar. These texts were based on what the writers considered “correct” grammar rather than on an analysis of how people actually spoke and wrote; thus, their approach is referred to as prescriptive grammar. In later years, linguists argued that grammar should describe how people really use language and not how writers think it should be used, adopting an approach called descriptive grammar.
Today, English is spoken by so many people in so many different countries across the globe that it has become even harder to standardize. The contemporary consensus lies somewhere between the prescriptive and descriptive approaches. Educators generally try to follow something called Standard English (SE), so as to avoid a complete lack of order in the use of English. However, as one can see in the differences in spelling and vocabulary between, say, British and American English, rules can be difficult to enforce. Think of examples such as the American spelling of words like “organization” or “color,” which are spelled “organisation” and “colour” in the UK. Both forms are seen as correct, as long as they are used in the right country.
To help ensure that Standard English stays up to date, respected dictionaries publish annual lists of new words that have been accepted into the lexicon. These may be words that started out as slang, and many are new technological terms: in 2014 the popular Merriam-Webster Collegiate Dictionary added “selfie” and “hashtag.”
There have also been some significant changes in grammar over the years. For decades, prescriptivists argued that English sentences should never end in prepositions and that splitting infinitives was incorrect. (An example of a split infinitive is “to really want” instead of “really to want.”) These days such rules are generally considered invalid because they come from Latin, which is no longer viewed as a model for English. Correct English grammar is now considered largely a question of functionality, style, and taste; and the context in which language functions is taken into account before passing judgment.
As long as people are able to communicate effectively and have a basic standard to guide them, English serves its linguistic purpose. After all, the only languages not in flux are those that are no longer in use.

Unit 7 Pride and Prejudice by Jane Austen [excerpted and adapted]
Elizabeth was suddenly roused by the sound of the doorbell, and her spirits were a little fluttered by the idea of its being Colonel Fitzwilliam himself. He had once before called late in the evening and might now again have come to inquire particularly after her. But this idea was soon banished, and her spirits were very differently affected when, to her utter amazement, she saw Mr. Darcy walk into the room. In a hurried manner, he immediately began an inquiry after her health, imputing his visit to a wish of hearing if she felt better. She answered him with cold civility. He sat down for a few moments, then got up and walked around the room, which surprised Elizabeth, who said nothing.
After a silence of several minutes, he came toward her in an agitated manner and said, “In vain I have struggled, and it will not do. My feelings will not be repressed, and you must allow me to tell you how ardently I admire and love you.”
Elizabeth’s astonishment at his admission was so great that she stared, colored, doubted, and was silent. Darcy considered this sufficient encouragement; and so, he continued to tell her all he felt and had long felt for her. He spoke well, but there were feelings besides those of the heart that he detailed. And he was not more eloquent on the subject of tenderness than of pride. He dwelt on his sense of her inferiority, and also on the family obstacles which stood in their way, and told her that his better judgment had wrestled with his feelings.
In spite of her deeply rooted dislike for Darcy, Elizabeth could not be insensible to the compliment of such a powerful man’s affection, and though her intentions did not vary for an instant, she was sorry for the pain he was about to receive until, roused to resentment by his insensitive words, she lost her compassion and became angry. She tried, however, to compose herself to answer him patiently, once he had finished talking.
He concluded by telling her of the strength of his love for her, which, in spite of all his endeavors, he had found impossible to conquer, and with his expression of hope that it would now be rewarded by her acceptance of his hand in marriage. As he said this, she could easily see that he had no doubt that she would answer affirmatively. He spoke of apprehension and anxiety, but his countenance expressed real security that she would accept him.
This further exasperated her, and when he ceased talking, the color rose into her cheeks and she said, “In such cases as this, it is, I believe, the established mode to express a sense of obligation for the sentiments avowed, however unequally they may be returned. It is natural that obligation should be felt, and if I could feel gratitude I would now thank you. But I cannot—I have never desired your good opinion, and you have certainly bestowed it most unwillingly. I am sorry to have caused you pain. It has been most unconsciously done and, I hope, will be of short duration. These feelings, which, you tell me, have long prevented the acknowledgment of your regard, will surely help you overcome your love for me, especially after what you have expressed as your true opinions of me and my family.”
Mr. Darcy, who was leaning against the mantelpiece with his eyes fixed on her face, seemed to catch her words with both resentment and surprise. His complexion became pale with anger, and the disturbance of his mind was visible in every feature. He was struggling for the appearance of composure and would not open his lips till he believed himself to have attained it.
The pause made Elizabeth feel dreadful, and at length, with a voice full of forced calmness, he said, “And this is all the reply which I am to have the honor of expecting! I might, perhaps, wish to be informed why, with so little endeavor at civility, I am thus rejected. But it is of small importance.”
“I might as well inquire,” replied she, “why with so evident a desire of offending and insulting me, you chose to tell me that you liked me against your will, against your reason, and even against your character.”

Unit 8 Hawking Radiation
Stephen Hawking is probably the best-known physicist of the late 20th century. The reasons for this include the brilliant accomplishments in his field, as well as the fact that he continues his work as a theorist despite suffering from a disability that leaves him with extremely limited speech and mobility. Hawking’s work has focused on black holes, his most famous theory stating that black holes must radiate energy and eventually disappear. This was such an original and unexpected idea that the phenomenon it describes has come to be known as “Hawking radiation.”
Hawking was born in 1942 in Oxford, England. As a child, he showed great ability in mathematics and physics. He graduated from Oxford University in 1962 and earned his PhD in cosmology from Cambridge University in 1966. During this time, Hawking was diagnosed with Amyotrophic Lateral Sclerosis (ALS), a rare degenerative disease which gradually destroys a person’s ability to move and speak. Rather than discouraging him, the news inspired Hawking to work even harder and make his mark on science while he still could.
In 1974 Hawking proposed his fascinating theory that black holes are not totally “black,” that they are not simply one-way “drains” of the universe which do nothing but consume everything around them. (A black hole is not literally a hole but rather an object in space with such a strong gravitational pull that nothing nearby can escape from it.) According to Hawking’s theory, a black hole also radiates energy, and gradually, it loses mass. The smaller the black hole becomes, the faster it loses mass, and eventually it disappears completely. This can only happen when it has nothing to consume.
Hawking’s theory has been very influential, though it is difficult to understand even for his colleagues. Hawking explained his theory with mathematical calculations, but it is much harder to articulate in everyday language. One way to try to understand it is by imagining pairs of opposite particles: one matter, the other antimatter. Normally, matter and antimatter particles annihilate each other and simply disappear. But this can change at the event horizon—the point of no return at which matter and energy are sucked by gravity into the black hole. It is possible for a particle of antimatter to be separated and sucked in before canceling out its matter counterpart. The antimatter particle then develops negative energy. This negative energy is added to the black hole, and because of this the black hole must lose some mass, which it does in the form of photons (light particles) and various kinds of other particles. Theoretically, these particles, called Hawking radiation, can be seen and measured. So if we can ascertain that particles are escaping from a black hole, we can deduce that the hole is losing mass at the same time.
According to the theory, Hawking radiation can only occur if a black hole is not actively consuming anything. Since all known black holes are surrounded by clouds of gas which they are pulling in, for many years it was impossible to prove Hawking’s theory. But in 2014, physicist Jeff Steinhauer of the Israel Institute of Technology observed Hawking radiation for the first time—being emitted from a model black hole in a laboratory. The physics community remains cautious about concluding that the model, produced with hyper-cooled rubidium atoms, reproduces conditions comparable to a real black hole. And the results still need to be replicated. But the consensus is that if the findings stand, the radiation observed is, in fact, exactly what Hawking predicted.
Hawking’s work on black holes made him a scientific celebrity, and in 1979 he attained the post of Lucasian Professor of Mathematics at Cambridge, the position held by Sir Isaac Newton 300 years earlier. Although ALS has left him restricted to a wheelchair, and he requires a voice synthesizer to communicate, Hawking has remained active in physics and continues to publish his research in scientific journals. He gives public lectures in many countries and appears on television. He has been very influential in presenting modern theories of the universe to ordinary people, particularly in the books A Brief History of Time (1988) and The Universe in a Nutshell (2001). Hawking’s genius, his love of his work, and his persistence despite an extremely difficult illness are both inspiring and humbling.

Unit 8 The ISS and the Future of Space
The year 2015 marked a major milestone for the International Space Station: a record fifteen years of continuous human presence in space. This is not an accomplishment to take for granted. The station’s age is a growing concern and has spurred some discussion of the eventual end of the ISS’s mission. The US and Russia have tentatively agreed to continue their cooperation in operating the station, with Russia independently announcing plans to keep it running until at least 2024. But at some point it will be necessary to retire the ISS. With the end in sight, the station’s groundbreaking research is all the more important, as it will lay the foundation for the next stage of humanity’s exploration of space.
Since 2011, the ISS has been home to the Alpha Magnetic Spectrometer, an experimental device that supports research in high-energy theoretical physics. The AMS is a tool that records the presence of cosmic rays in its search for such phenomena as antimatter and hypothetical dark matter. So far it has recorded more than 60 billion manifestations of cosmic energy. Nobel Laureate Samuel Ting of the Massachusetts Institute of Technology leads the physicists assigned to interpret the latest findings. Ting believes that thanks to AMS data, his team is on the brink of identifying the origins of dark matter.
Cosmic radiation can also be dangerous to humans in space. A better understanding of this radiation will be needed before astronauts can safely travel to destinations farther out into the universe, like Mars. The ISS is the ideal place for developing the technology to support a mission to Mars, the planning for which is already under way. One of the greatest challenges facing such missions is the unknown effects of long-term spaceflight on the human body. In 2015, astronaut Scott Kelly of NASA and Russian cosmonaut Mikhail Kornienko began a one-year stay aboard the ISS. Very few humans have spent such a long time in space, so this is an important opportunity for discovery. “We know a lot about six months,” says NASA scientist Julie Robinson. “But we know almost nothing about what happens between six and twelve months in space.” Kornienko and Kelly will be closely monitored for changes to their eyesight, hearing, and metabolism. Their year on the station will provide ample opportunities to refine the skills that will be needed by the astronauts who someday venture beyond Earth’s orbit.
In addition to helping explore the universe beyond our planet, the ISS also has important work to do closer to home. Some scientists have proposed equipping the station with a powerful laser—not for blasting asteroids or hostile aliens, but for removing space trash. An accumulation of as much as 3,000 tons of junk is already in Earth’s orbit, some traveling at speeds over 20,000 miles per hour, over ten times faster than the average bullet. As dramatized in the 2013 movie Gravity, collisions with even tiny chunks of space trash can pose a serious danger to satellites, the ISS itself, and other spacecraft. Telescopes on the station could detect tiny pieces of debris in orbit and target a laser to deflect the course of the junk down into the atmosphere, where it would burn up. Japanese researchers plan to test such a system on the ISS. “We may finally have a way to stop the headache of rapidly growing space debris that endangers space activities,” project leader Toshikazu Ebisuzaki said.
Though the exploration of space will surely continue, whether in low Earth orbit or beyond, the next step for space stations is hard to predict. One question many observers have posed is whether the next ISS will be operated by governments alone or will involve private companies. Elon Musk's SpaceX has already launched rockets to bring supplies to the ISS, and it is designing reusable vehicles for private travel into space. A private space station might serve tourists along with scientists and astronauts, as well as provide funding for continued research. In any case, for the near future, the ISS will remain an important player in space exploration.

Unit 9 Creatine’s Place in Sports and Fitness
Amid scandals involving sports stars like Lance Armstrong using steroids and other illicit performance-enhancing drugs, a safe, natural alternative to such drugs is the Holy Grail for athletes who want an edge. Since its appearance on the market in the 1990s, creatine has been welcomed as a valuable fitness tool, if not the Holy Grail. Reports on its effectiveness have varied considerably, but the overall consensus is that it does have some effect and, most importantly, that it is safe. As with most relatively new products, however, that safety claim continues to be questioned.
Creatine is an amino acid produced in the body, and it is also present in small amounts in meat and fish. According to the Food and Drug Administration (FDA), a healthy person requires only two grams of creatine per day, half of which is produced in the body by the liver, kidneys, and pancreas. Among other roles, creatine helps cells utilize energy. Increasing creatine levels has been shown to improve the body’s energy use, especially during short but intense bursts of strenuous exercise. This discovery led to the development of high-dose creatine supplements (up to twenty grams daily) as a performance enhancer and workout aid.
Creatine supplements are designed to enhance athletic performance by making more energy available to muscles during exercise. They can be effective for increasing short-term muscular stress endurance in contact sports like football and, especially, in weight training. In addition to facilitating cell metabolism, creatine draws water into muscle cells, which can help the production of muscle fiber. Creatine was first introduced to Olympic athletes to maximize muscle energy output. It has since enjoyed wide use among professional and amateur athletes. But the supplement attracted criticism shortly after its introduction. Early studies questioned creatine’s effects on endurance. The gains were only observed in bursts of activity of thirty seconds or less, and these findings curbed the enthusiasm of informed athletes hoping to increase their energy throughout long periods of exertion, as are typical in most sports. Nonetheless, the effect remained relevant for certain sports, such as power lifting. And studies confirmed that extra repetitions in workouts translated into greater muscle mass gains—provided those taking the supplements worked out regularly and ate an otherwise balanced diet. Another cause for concern has been water retention. Athletes “loading” creatine at high doses tend to take on extra water weight—as much as five pounds’ worth in the first week.
But the retention occurs inside the muscle cells themselves, so rather than the bloating normally associated with water retention, creatine retention actually just makes the muscles larger—without any added muscle mass or strength. This effect has been found to be neutral at worst, and some research suggests it has a positive effect on motivation. Creatine’s muscle-pumping effect can be a placebo of sorts, which provides an illusion of success that motivates athletes to work harder. Eventually, this translates into increases in muscle mass as well—but as a result of the extra exercise, not the water.
The debate on creatine supplement safety, however, has gone back and forth since the beginning. Mild side effects like abdominal discomfort and diarrhea are well established but not major concerns. Some early studies caused concern about links to kidney problems. Continued study, however, revealed no direct link between creatine and any known kidney disorder. Kidney disorders that involve tissue swelling, however, can be made worse by high doses of creatine, which increases tissue swelling even more. In simple terms, creatine won’t cause kidney problems, but it can worsen some of them. Early concerns about liver damage were not borne out by subsequent research. As with its effect on the kidneys, creatine won’t cause liver problems, but it can swell liver tissue. A 2015 study by the Harvard School of Public Health linked creatine use to testicular cancer in young adults, finding that those who used the supplement were more likely to be diagnosed with the cancer. However, there has been only one such study to date; many more will be necessary to rule out other possible causes.
Several public health institutions, like the National Institutes of Health, conclude creatine is possibly effective for certain groups (excluding the elderly and already highly trained athletes). But they advise against creatine use for those under the age of 18, in part because the effects on the endocrine—or hormonal—system are not yet well understood.

Unit 9 Scuba Safety
Have you ever wondered what entices people to learn how to scuba dive? Is scuba diving an extreme sport that only a few crazy people would ever actively pursue? In fact, scuba diving is, for the most part, a very safe activity.
While there are definitely some dangers associated with it, most can be avoided if divers act responsibly. The acronym “scuba” stands for “self-contained underwater breathing apparatus.” Scuba equipment allows a diver to breathe underwater for an extended period. Three-fourths of the planet’s surface is covered with water, which provides us with varied and fascinating new environments to discover. In addition, scuba diving is relatively easy to learn: a few days in a certification program is usually sufficient to make a first dive. It’s comparatively safe, too, given that many more people get hurt skiing than scuba diving. As with any other activity, though, it should be practiced responsibly as there are hazards involved with improper use and maintenance of equipment, poor situational judgment, and interacting with certain inhabitants of the underwater world.
Divers usually practice the buddy system, which means that one diver will pair up with another diver. These divers look out for each other before and during the dive and assist each other in the event of a problem. They can help each other get into their diving equipment, make sure their buddy is not disoriented after entering the water, and make sure they don’t get distracted and lose track of the other divers. If there is a problem with one diver’s breathing apparatus, his or her buddy will share air, actually taking the regulator out of his or her mouth and taking turns breathing with the buddy until they both can safely reach the surface and breathe normally again.
While much of what a diver observes and experiences underwater is harmless, there are some creatures and situations that can be dangerous. Amongst known dangers, sharks generally come to mind first. Of course, some sharks, such as the tiger, mako, and hammerhead, are considered quite aggressive. But many other sharks, including the ones most commonly seen during dives, such as the nurse shark and sand shark, are not usually aggressive at all and are more likely to swim away from a diver than to attack.
But one marine life form divers should only observe from a distance is the sea snake. This creature is not usually aggressive but may become so if disturbed. Its venom is as potent as a cobra’s, and there is no effective antidote for it. The jellyfish, sea wasp, and Portuguese man-of-war are also to be avoided at all costs. Even though they may look harmless and graceful floating in the water, they have nematocysts, which are small barbs that can deliver venomous and painful stings that require immediate medical attention. The eel can also be dangerous. One of the most common kinds is the moray. Morays don’t usually bother people, but if a diver disturbs one resting in a dark hole or crevice, it might bite. The stingray is another kind of marine animal that’s beautiful to watch, but divers must take care not to step on its barbed tail. This is a good reason for wearing diving boots; stingrays can burrow under the sand of the ocean floor and might not be easily spotted by divers. Although fatal attacks are almost unheard of, the danger is real: in 2006, TV wildlife expert Steve Irwin tragically died in a stingray attack.
Another danger involved with diving is nitrogen narcosis. This condition is caused by an increased concentration of nitrogen in the blood due to the high-pressure environment. It involves feelings of “drunkenness” and a slowing down of normal brain functions, which may cause a diver to have trouble communicating with a buddy, reading diving equipment, or even telling which way is up. No one knows why nitrogen has this effect on divers, but the cure is very easy: the diver should simply begin a slow ascent, since the closer a diver gets to the surface, the less pressure there is. Typical dives that go no deeper than eighteen to twenty-four meters rarely cause any nitrogen narcosis problems; however, diving to thirty meters or more noticeably affects most divers.
While there are some concerns about safety, people who are well educated about scuba diving, approach the activity responsibly with a competent buddy, and treat the marine environment with respect should have little trouble and lots of fun exploring the wide array of underwater attractions!

Unit 10 Crowdsourcing or Mob Rule?
In March of 1991, video footage of Rodney King being beaten by Los Angeles police shocked the world. Widely recognized as the first viral video, the footage sparked the LA riots of the following year. And it was shot by a passerby with a camcorder, not by a news reporter. Arguably, this video was a watershed in the way we consume media. We are no longer just spectators of the news; we are participants.
Citizen journalism is loosely defined as any amateur participation in the news process, which includes gathering, reporting, analysis, and circulation. The rise of social media has provided alternative outlets like YouTube and Facebook, allowing the public to not only gather news, but share it immediately and directly, surpassing mainstream media in speed and efficiency—if not skill and accuracy.
It’s important to differentiate between types of citizen journalism, as they have not enjoyed equal success. Photo and video journalism stand out in this regard. This is in part because the ubiquity of recording devices has far outstripped mainstream media’s ability to gather visual content, to the point where news teams now search social media sites for footage. Some organizations now “crowdsource” news images and employ platforms for the public to upload digital content directly. These include CNN’s iReport, which as of 2015 had 1.3 million contributors—up 600 percent since its 2008 launch. Other news organizations have been following suit, with some commentators noting failure to do so often directs Web traffic to user-content sites like YouTube and LiveLeak. With news organizations relying more and more on Web advertising for revenue, none can afford to ignore traffic trends. The Arab Spring and conflicts in Syria and Iraq are good examples of issues that could not possibly have been covered as effectively with traditional methods—in terms of visual evidence, anyway. It’s still incumbent upon reputable news organizations to monitor the sea of information and check for accuracy. But by and large, mainstream media has accepted the importance of the phone camera as fact.
Reporting and analysis have been less successful, for reasons that support points raised by skeptical professional journalists. One prominent journalist was speaking for many of his colleagues when he commented, “I would trust citizen journalism as much as I would trust citizen surgery.” Maybe some journalists are worried about their job security. But there are also legitimate concerns about core tenets of good journalism: professionalism, objectivity, and ethics. Professional reporters represent the integrity of the organizations that employ them and are held accountable for errors of fact and judgment. Individual Internet users are accountable to no one.
This deficiency in standards was clearly demonstrated in the aftermath of the terrorist bombing in Boston in April 2013. Two men planted bombs near the finish line at the Boston Marathon, killing three and wounding over 250. The bombers escaped but were spotted in security photos of the crowd, and as authorities began trying to hunt them down, the city’s residents were gripped by both outrage and fear. They were also equipped with laptops and iPhones.
The result was a low point for citizen journalism. The Internet exploded with unverified sightings of the bombers all over town, worsening the atmosphere of terror. Then some citizen journalists “reported” that a local teenager was a suspect in the bombing, posting a photo of the boy standing in the crowd at the marathon. Internet users quickly identified the teenager. They began propagating the rumor that he was being sought by the authorities, pulling photos from his Facebook page and circulating them widely. The New York Post picked up the story, even publishing the teenager’s picture on its front page as the prime suspect in a murderous terrorist act. That same day, the newspaper was forced to retract the story—authorities confirmed that the young man was entirely innocent. And while the Post apologized for not verifying its facts, none of the amateur investigative journalists ever did.
Crowdsourcing has become an invaluable tool for professional journalists and is not going away anytime soon. But stories like this one raise questions about journalistic standards. In the future, will citizen reporters create informed citizens or misinformed mobs?

Unit 10 Manchester’s Sherlock Holmes
Outside 221b Baker Street, London, a plaque proclaims Sherlock Holmes once lived there. According to local lore, many foreign tourists visit the site—which happens to be the Sherlock Holmes Museum—apparently unaware the world’s most famous Briton is a fictional character.
In fact, there was no such address when Sir Arthur Conan Doyle wrote his detective stories. (It was added when Baker Street was extended in the 1930s.) And accounts of droves of adoring but supposedly ignorant fans are also suspect. But such stories—and the canon of Holmes fiction that expands to this day—speak to the allure of the character. Now, one author claims, there is evidence Doyle based Holmes on a real police detective: Jerome Caminada, known in his own time as “the Sherlock Holmes of Manchester.”
Angela Buckley is the author of the 2014 book The Real Sherlock Holmes: The Hidden Story of Jerome Caminada. In it she examines Manchester detective Caminada’s autobiography and Doyle’s work to draw comparisons between the real detective—active around the time Doyle created Holmes—and the fictional sleuth. “There are so many parallels,” she concludes, “that it is clear Doyle was using parts of this real character for his.” Chief among these similarities, Buckley asserts, is intellect. Doyle’s fiction hinged on Holmes’ almost superhuman powers of deductive reasoning and observation. Caminada, Buckley reports, could identify a career criminal by his walk and had an “encyclopedic knowledge” of the criminal underworld.
As with Holmes, Caminada was a master of disguise. Holmes goes undercover in many stories, as an Italian priest, a sailor, and an opium addict, among many examples. On real cases, Caminada dressed as a drunkard, a laborer, and even an upper-class professional, imitating the accents of each in order to gather information and apprehend suspects.
Buckley goes on to draw parallels between Caminada’s real cases and the fictional capers Holmes solves. Like Holmes, she explains, Caminada apprehended an alluring femme fatale and had a brilliant arch-nemesis. The two were also formidable fighters in spite of their modest stature.
Is the case closed? Did Doyle in fact base Holmes on Caminada? Going straight to the source, Doyle very clearly identified the medical professor Dr. Joseph Bell as his inspiration for Holmes. Bell was renowned for his keen powers of reasoning in medical diagnoses. Doyle biographers have also suggested development of the character may have been influenced by Sir Henry Littlejohn, an acquaintance who was a forensic surgeon with intimate knowledge of hundreds of crime investigations.
Much of Buckley’s premise rests on timing. Caminada had risen to national prominence, she argues, when Doyle was developing Sherlock Holmes. But so had other detectives, such as Leicester’s Francis “Tanky” Smith, also known for clever disguises and for his vast knowledge of the criminal world. Even if there were only one candidate who fit the bill, this leaves the question of why Doyle would not credit a detective who inspired him, despite being perfectly willing to credit a physician.
Furthermore, most of Buckley’s work is based on unverified accounts from Jerome Caminada’s autobiography, published fifteen years after the first appearance of Sherlock Holmes. One reviewer has pointed out some contemporary police detectives resented Doyle’s portrayals of them as mediocre. Did Caminada use his book to show that real police were better than that? We cannot be sure, but it is safe to say he had clear motives to present himself in a flattering light.
Which brings us to the modern author, and, apparently, the only proponent of the Caminada theory. Although she bills herself as a “family historian,” Buckley is in fact a genealogist and has no academic background in history. She may be faulted for failing to verify claims from a single source—an autobiography at that—and for basing her claims on loose correlations. But no one can prove she is wrong, either. The question may be destined to remain a mystery.

Unit 11 Repatriation of Remains
In 1971, Maria Pearson, a Yankton Dakota tribe member, waited in the Iowa governor’s lobby in full traditional dress until he finally agreed to speak with her. He asked what he could do for her, and she told him, “You can give me back my people’s bones and you can quit digging them up.” She was referring to remains of Native Americans uncovered during a state highway construction project. The remains of white people, apparently early settlers, had been respectfully reburied. By contrast, the Native remains had been sent off to researchers, with obvious, if implicit, disrespect. Maria did not back down, and the meeting eventually led to NAGPRA, the Native American Graves Protection and Repatriation Act.
But the debate about repatriation of remains continues to divide scientists and indigenous peoples. Many indigenous groups strongly believe that they have the right to possess and protect the remains of their ancestors. On the other hand, researchers believe that the skeletons hold too much potential for scientific study to surrender them to indigenous groups. This division is seen in such places as North America, Australia, and New Zealand.
NAGPRA did not become law in the US until 1990. The main principles of NAGPRA are simple: burial sites are sacred and should not be disturbed, and remains that have been removed from graves should be returned to the person’s descendants. Museums and universities have returned thousands of remains to various indigenous groups for reburial. There are two main arguments in support of repatriation. One involves the need to make amends for past abuse; the second involves the ancestral line and rights to remains.
First, one must look at how most of these remains ended up in museums and universities. Most of these collections were gathered during times of colonization, under some of the most severely oppressive conditions indigenous groups have faced. Graves were looted for skeletons and grave goods, which were displayed in museums. Indigenous groups in favor of repatriation are finally able to reclaim the bones of people central to their identities, which were taken from them centuries ago.
Secondly, indigenous groups are laying claim to their ancestral lines. Many cultures feel there’s a direct link between people from their culture today and their ancestors that goes back thousands of years. They also believe that the treatment of their ancestors directly affects their own lives now. Many Native American groups believe that everything is born from the earth and that its Creator brought balance into the world in the form of a circle. If the ancestors’ remains are not put back into the earth, the circle is not complete and the balance is destroyed. Many even feel that the hardships First Nations groups have long experienced are a result of the theft of their ancestors’ remains.
On the other side of the repatriation debate is the scientific argument that these bones can be used to understand human history and diversity, human evolution, human migration, disease, health, and cultural practices. The Natural History Museum in London, for example, holds an extensive collection of about 19,500 items. Physicians have used this collection to develop new methods for knee replacements, and Japanese dentists have used it to study the impact of diet on dental disease. It has also been used as a training collection for forensic anthropology to help identify victims from mass graves. If this collection had been repatriated before it could be studied, this work would not have been possible.
A second argument for the study of remains involves changes in anthropological techniques and the questions being asked. When the remains were first collected, they were used to classify people into races and, oftentimes, to try to prove the superiority of one race over another. Opponents of repatriation argue that anthropologists now use these collections to show the universality of human traits rather than promoting theories of racial superiority. Moreover, they argue, skeletal remains are a record of the past, and if these remains disappear, a large part of history is lost. However, both positions in the debate are based on serious arguments that should be taken into consideration on a case-by-case basis.

Unit 11 Investigating Gender Roles
“Cultural imperialism,” a term first coined in the 1960s, refers to cultural hegemony, or the domination of other nations. The worldwide spread of consumerism, for instance, is cited as a prime example of American influence. Critics of the US point to the plethora of American cultural products available to people in other countries, in particular media such as music, television, movies, news, and technology. They argue that these products replace local ones, thereby threatening the cultures of other nations. With the growing popularity of the Internet, many countries worried about being taken over by US culture have approved laws to control the amount and types of information available to their people. Those who oppose such policies state that the leaders of these countries are going against freedom or progress. However, those in favor of these laws say that they are necessary because their cultures and very identities are under attack.
Herbert Schiller (1919-2000) was a communications scholar. He asserted that although innovations such as the Internet have been praised as democratic, both information and technology are in reality controlled by the rich. This is explained in terms of the core-versus-periphery argument. Core nations such as the United States have political power and economic advantages. Peripheral nations are poor, so-called Third World nations. According to this idea, information, and therefore influence, flows from the core to the periphery. Third World nations are thus unwilling consumers of core values, ideology, and assumptions embedded in the information they receive. Those who believe in the theory of cultural imperialism point to the US television shows and McDonald’s restaurants found worldwide as evidence that influence only flows one way.
But critics of this position consider it far too simple, as it does not account for internal dynamics within societies. Also, they argue, it views culture as deterministic and static. It assumes people are passive and that the dominated cultures will form no opposition. In fact, many believe that “other” groups are not being taken over by US culture and media. Rather, people in other cultures tend to transform the intended meanings to ones that better suit their own societies. Thus, rather than becoming “Americanized,” for example, Asian countries have “Asianized” US cultural exports such as McDonald’s. This transformation can easily be seen on McDonald’s menus in places like India and China.
Other critics of the traditional notion of cultural imperialism state that although cultural imperialism may very well be a factor in the export and consumption of certain US media products, the Internet is unique. The Internet, unlike other media, has no central authorities through which items are selected, written, and produced. Instead, information can be sent from anywhere and by anyone. The Internet allows people to participate in their own languages and to take part in preserving and celebrating their own cultures. Thus, it is argued that growing Internet usage, rather than promoting cultural imperialism, may in fact promote multiculturalism.
On the other hand, recent research on Internet language use casts some doubt on this last idea. A 2013 study published in the scientific journal PLOS One examined the question. The study determined that of the world’s roughly 7,000 living languages, only five percent have any chance of becoming viable on the Internet. And an even smaller number—just over 250—are currently established online. Linguists fear this could speed up the loss of endangered languages, and with them, important aspects of culture.
Maybe the question of whether the Internet will overrun your culture will be answered by Schiller’s theories, at least in part. It may indeed depend very much on whether your language belongs to the core or the periphery. If you speak one of the most common Asian Internet languages (Chinese, Japanese, or Korean), or one of the European ones (French, Italian, or Spanish), you are not likely at risk of such a fate. If you speak a sub-Saharan African language such as Yoruba, however, your culture may be in trouble on the Internet.

Unit 12 Opening a Small Business
Opening and managing a small business requires great motivation, desire, and talent. It also takes research and planning. To increase your chances for success, you should take the time to explore and evaluate your business and personal goals and use this information to build a comprehensive and well-considered business plan to help you reach these goals. The following tips for opening a small business may greatly increase your chances of success.
Before investing a lot of time, energy, and money, it is important to do some self-analysis. Ask yourself these important questions: Do I have management talent? Am I experienced enough in this industry? Studies show that entrepreneurs are persistent, able to succeed in a challenging environment, and have a great need to be in control. They are also risk-takers who take responsibility and are willing and able to make decisions. Successful entrepreneurs are patient and able to wait until the right time to begin a business; they also learn from their mistakes and trust their own judgment. Finally, successful entrepreneurs have positive attitudes. Be objective about yourself and your aptitude. If these traits match your personality, identify what you enjoy doing most and then find a business opportunity congruent with your personality, skills, and interests. This last idea is stressed by many successful entrepreneurs, such as Amazon billionaire CEO Jeff Bezos. “One of the huge mistakes people make is that they try to force an interest on themselves,” he explains. “You don’t choose your passions; your passions choose you.”
Next, an effective plan is an important management tool for setting goals and measuring performance. Although your plan will depend on the type and size of your business, all plans should be organized into individual sections: an executive summary, a description of the product or service, a marketing analysis and plan, a description of the management team, and a financial strategy. A high-quality plan demonstrates a careful analysis and realistic assessment of the future of your business. Putting your thoughts down on paper will help clarify the goals of your business, your customers and competitors, and your strengths, weaknesses, threats, and opportunities. This helps you set realistic targets, allocate funds, execute your plan, and direct your business toward achieving your goals.
It is essential to know your market. A marketing plan must include market research results, specify a business location, and define the targeted customer group. It should also name the competition and outline the four Ps: product, price, place, and promotion. Product information should include such things as packaging design, guarantees, and new product development, while pricing information should include setting competitive, profitable, and justifiable prices. Place information covers the physical distribution of goods, and promotion information includes personal selling, advertising, and sales promotion. The essential goal is targeted marketing—making sure your message reaches the people you envision as customers. Because the marketplace is fragmented and diffuse, reaching a large population often requires considerable investments in marketing and advertising, so identifying a specific customer profile is an important part of any marketing plan.
Where you set up shop is a strategic decision that should be made early. Select your location based on the type of goods or services to be provided and your target market, rather than on personal convenience. For retail businesses, consider a location that provides a lot of local traffic, as well as parking convenience, public transportation, compatibility with nearby businesses, and the building itself. For manufacturing and service businesses, consider your proximity to suppliers and customers, as well as customer convenience and space for future expansion.
Finally, just as your product, service, marketing, and location are critical to success, so is the quality of your employees. Your company’s reputation often depends on how employees are viewed by your customers. Before beginning the hiring process, define the job, the experience or education required, and the wage you expect to pay. Employee training is also an important step in making sure you've matched the right employee with the right position. The importance of picking a strong team was not lost on business icons like Steve Jobs. “Innovation has nothing to do with how many R&D dollars you have,” he once said. “When Apple came up with the Mac, IBM was spending at least 100 times more on R&D. It’s not about money. It’s about the people you have, and how you’re led.”

Unit 12 Brand Power / Brand Image
A brand is a name, term, sign, symbol, design, or a combination of these intended to identify the goods or services of a company or other business entity. Another purpose of a brand is to differentiate one company from another. One of the most important tasks of professional marketers is branding: creating, maintaining, protecting, and enhancing the brands of their products and services. This has become so important that today nearly all companies and products have a branding strategy.
Brand power refers to the relative strength of a company’s brand in the minds of consumers. It can influence consumer choice of products—even with impulse purchases. Brands are powerful to the extent that they confer high brand loyalty and strong brand associations. They also confer name recognition, perceived quality, and other assets such as patents and trademarks on a company. A strong brand can be one of a company’s most important assets. Market research firms measure brand power with brand equity metrics and other statistical research tools. Putting a numerical value on a brand name is difficult, but according to one estimate, the brands of companies like Coca-Cola and Microsoft are worth well over $60 billion.
High brand power provides a company with many competitive advantages. Because consumers expect stores to carry name brands, the companies have more bargaining power when negotiating with retailers. And because the brand name brings high credibility, a company with a strong brand can more easily launch new products with the same brand name. When a company introduces an additional item with a new flavor, form, color, or package size in a given product category and under the same brand name, it is called a line extension. Companies utilize this low-cost, low-risk strategy to introduce new products which will satisfy consumer desire for variety. For example, Coca-Cola used its well-known brand name to introduce Diet Coke, and the Johnson & Johnson brand, originally known for its baby shampoo, was later used to introduce products such as Johnson & Johnson Baby Oil, Cotton Swabs, and Dental Floss.
Another strategy is called brand extension, in which a company uses an established brand name to launch products in a new category. For example, Honda uses its company name for different products, including automobiles, motorcycles, snowmobiles, and marine engines. This allows Honda to advertise that their customers can fit “six Hondas in a two-car garage.”
Under a third strategy, multibranding, companies introduce additional brand names for products in the same category. This strategy is used to create separate brand images for individual products which may differ in some way from their other products. For example, Japan’s Matsushita used separate names for its different product families: Panasonic, National, and Technics.
Brand image refers to the ways in which consumers perceive the company and the brand. Because every customer has a different perception about brands, designing a new brand is not simply about designing a logo or a name. Rather, the image should send the consumer the correct visual, verbal, and conceptual message. Therefore, marketers must pay attention to every detail of their brand, even color. Beyond aesthetics, different colors have different implicit meanings, and those meanings are often associated with whatever bears the color. The colors used in a brand logo say a lot about the image of the brand. Financial institutions, such as Lloyd's of London and Goldman Sachs, use black or blue as their representative color precisely because these colors are perceived as sophisticated, wise, serious, and rich. Likewise, most hospitals’ logos include the color green because it is perceived as refreshing, restful, peaceful, and hopeful. Moreover, according to consumer research, people tend to prefer rounded brand logos as opposed to angular ones. Therefore, many logos, such as those of Coca-Cola and Tide, use curved lines.
Because consumers often hold long-standing perceptions about brands, brand power can make or break a company’s long-term success. In other words, a brand is like a reputation. As one CEO puts it, “Your brand is what other people say about you when you’re not in the room.”

Who is in charge of We Can Do It Consulting?