
Archive for the ‘Evolutionary cognition’ Category

Larry H Bernstein, MD, Reporter

Leaders in Pharmaceutical Intelligence

Lasker~Koshland
Special Achievement Award in Medical Science

Award Description

Mary-Claire King
For bold, imaginative, and diverse contributions to medical science and human rights — she discovered the BRCA1 gene locus that causes hereditary breast cancer and deployed DNA strategies that reunite missing persons or their remains with their families.

The 2014 Lasker~Koshland Award for Special Achievement in Medical Science honors a scientist who has made bold, imaginative, and diverse contributions to medical science and human rights. Mary-Claire King (University of Washington, Seattle) discovered the BRCA1 gene locus that causes hereditary breast cancer and deployed DNA strategies that reunite missing persons or their remains with their families. Her work has touched families around the world.

As a statistics graduate student in the late 1960s, King took the late Curt Stern’s genetics course just for fun. The puzzles she encountered there—problems posed by Stern—enchanted her. She was delighted to learn that people could be paid to solve such problems, and that mathematics holds their key. She decided to study genetics and never looked back.

During her Ph.D. work with the late Allan Wilson (University of California, Berkeley), King discovered that the sequences of human and chimpanzee proteins are, on average, more than 99 percent identical; DNA sequences that do not code for proteins differ only a little more. The two primates therefore are much closer cousins than suggested by fossil studies of the time. The genetic resemblance seemed to contradict obvious distinctions: Human brains outsize those of chimps; their limbs dwarf ours; and modes of communication, food gathering, and other lifestyle features diverge dramatically. King and Wilson proposed that these contrasts arise not from disparities in DNA sequences that encode proteins, but from a small number of differences in DNA sequences that turn the protein-coding genes on and off.

Just as genetic changes drive species in new directions, they also can propel cells toward malignancy. From an evolutionary perspective, the topic of breast cancer began to intrigue King. The illness runs in some families, where it is clearly inherited, yet many affected women have no close relatives with the disease. The disease is especially threatening for women whose mothers succumbed to it; risk increases for those who have a mother or sister with breast cancer, particularly if the cancer struck bilaterally or before menopause. Unlike the situation with lung cancer, no environmental exposure distinguishes sisters who get breast cancer from those who remain disease free.

By studying a rare familial cancer, Alfred Knudson (Lasker Clinical Medical Research Award, 1998) had shown in the early 1970s how an inherited genetic defect could increase vulnerability to cancer. In the model he advanced, some families harbor a damaged version of a gene that normally encourages proper cellular behavior. Genetic mishaps occur during a person’s lifetime, and a second “hit” in a cell that already carries the inherited liability nudges the injured cell toward malignancy. A similar story might play out in families with a high incidence of breast cancer, King reasoned. She began to hunt for the theoretical pernicious gene in 1974.

The hunt
Many geneticists doubted that susceptibility to breast cancer would map to a single gene; even if it did, finding the culprit seemed unlikely for numerous reasons. First, most cases are not familial and the disease is common—so common that inherited and non-inherited cases could occur in the same families. Furthermore, the malady might not strike all women who carry a high-risk gene, and different families might carry different high-risk genes. Prevailing views held that the ailment arises from the additive effects of multiple undefined genetic and environmental insults and from complicated interactions among them. No one had previously tackled such complexities, and an attempt to unearth a breast cancer gene seemed woefully naïve.

To test whether she could find evidence that particular genes increase the odds of getting breast cancer, King applied mathematical methods to data from more than 1500 families of women younger than 55 years old with newly diagnosed breast cancer. The analysis, published in 1988, suggested that four percent of the families carry a single gene that predisposes individuals to the illness.

The most convincing way to validate this idea was to track down the gene. Toward this end, King analyzed DNA from 329 participating relatives with 146 cases of invasive breast cancer. In many of the 23 families to which the participants belonged, the scourge struck young women, often in both breasts, and in some families, even men.

In late 1990, King (by then a professor at the University of California, Berkeley) hit her quarry. She had zeroed in on a suspicious section of chromosome 17 that carried particular genetic markers in women with breast cancer in the most severely affected families. Somewhere in that stretch of DNA lay the gene, which she named BRCA1.

This discovery spurred an international race to find the gene. Four years later, scientists at Myriad Genetics, Inc. isolated it. Alterations in either BRCA1 or a second breast-cancer susceptibility gene, BRCA2, found by Michael Stratton and colleagues (Institute of Cancer Research, UK), increase the risk of ovarian as well as breast cancer. The proteins encoded by these genes help maintain cellular health by repairing marred DNA. When the BRCA1 or BRCA2 proteins fail to perform their jobs, genetic integrity is compromised, setting the stage for cancer.

About 12 percent of women in the general population get breast cancer at some point in their lives. In contrast, 65 percent of women who inherit an abnormal version of BRCA1 and about 45 percent of women who inherit an abnormal version of BRCA2 develop breast cancer by the time they are 70 years old. Individuals with troublesome forms of BRCA1 and BRCA2 can now be identified, monitored, counseled, and treated appropriately.
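A quick back-of-envelope calculation using the figures above shows why these variants matter clinically. The numbers are the ones quoted in the text; the script itself is purely illustrative, not a risk model.

```python
# Compare lifetime breast cancer risk figures quoted above.
# Figures: ~12% general population; ~65% with a harmful BRCA1
# variant and ~45% with a harmful BRCA2 variant, by age 70.
baseline_risk = 0.12   # general-population lifetime risk
brca1_risk = 0.65      # risk by age 70, harmful BRCA1 variant
brca2_risk = 0.45      # risk by age 70, harmful BRCA2 variant

for label, risk in [("BRCA1", brca1_risk), ("BRCA2", brca2_risk)]:
    relative = risk / baseline_risk
    print(f"{label}: {risk:.0%} absolute risk, ~{relative:.1f}x the general population")
```

On these figures, a harmful BRCA1 variant corresponds to roughly five times the general-population risk, and BRCA2 to nearly four times.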

Harmful versions of other genes also predispose women to breast cancer, ovarian cancer, or both. Several years ago, King devised a scheme to screen for all of these genetic miscreants. This strategy allows genetic testing and risk determination for breast and ovarian cancer; it is already in clinical practice.

Genetic tools, human rights
King has applied her expertise to aid people who suffer from ills perpetrated by humans as well as genes. She helped find the “lost children” of Argentina—those who had been kidnapped as infants or born while their mothers were in prison during the military regime of the late 1970s and early 1980s. Some of these youngsters had been illegally adopted, many by military families. In 1983, King began identifying individuals, first with a technique that was originally designed to match potential organ transplant donors and recipients. She then developed an approach that relies on analysis of DNA from mitochondria—a cellular component that passes specifically from mother to child, and is powerful for connecting people to their female forebears. King helped prove genetic relationships and thus facilitated the reunion of more than 100 of the children with their families.

Later, the Argentinian government asked if she could help identify dead bodies of individuals thought to have been murdered. King harnessed the same method to figure out who had been buried in mass graves. She established that teeth, whose enamel coating protects DNA in the dental pulp from degradation, offer a valuable resource when attempting to trace remains in situations where long periods have elapsed since the time of death.

This and related approaches have been used to identify soldiers who went missing in action, including the remains of an American serviceman who was buried beneath the Tomb of the Unknowns in Arlington National Cemetery for 14 years, as well as victims of natural disasters and man-made tragedies such as 9/11.

Mary-Claire King has employed her intellect, dedication, and ethical sensibilities to generate knowledge that has catalyzed profound changes in health care, and she has applied her expertise to promote justice where nefarious governments have terrorized their citizens.

by Evelyn Strauss


Biochemical Insights of Dr. Jose Eduardo de Salles Roselino

Larry H. Bernstein, MD, FCAP, Interviewer, Curator

Leaders in Pharmaceutical Intelligence


http://pharmaceuticalintelligence.com/12/24/2014/larryhbern/Biochemical_Insights_of_Dr._Jose_Eduardo_de_Salles_Roselino/

Article ID #165: Biochemical Insights of Dr. Jose Eduardo de Salles Roselino. Published on 12/17/2014

WordCloud Image Produced by Adam Tubman


How did developments late in the 20th century divert attention from a dynamic view of biological processes, one of interacting chemical reactions under rapidly changing external conditions affecting tissue and cell function, toward a rigid construct determined unilaterally by the genome, drawing attention away from mechanisms essential for seeing the complete cellular picture?

Larry, if you have read Denis Noble’s article “Neo-Darwinism, the Modern Synthesis and Selfish Genes: are they of use in physiology?” (J Physiol 2011; 589(5): 1007–11), you may find it the key to understanding how physiology was dislodged as a foundation of medical reasoning. The near-unilateral emphasis on genomic activity as the determinant of cellular activity removed the general support required to follow my reasoning. The DNA-to-protein link runs from triplet sequence to amino acid sequence; that is the realm of genetics. Protein conformation, activity, and function, however, also require that environmental and micro-environmental factors be considered (biochemistry). If that were not the case, we would have no way to bridge the gap between the genetic code and the evolution of cells, tissues, organs, and organisms.

  • Consider this example of hormonal function. In the cAMP-dependent hormonal response, I would like to stress the transfer of information that occurs through conformational changes following protein interactions. This mechanism requires that proteins not have their conformation determined by sequence alone: a regulatory protein’s conformation is determined by its sequence plus the interactions it has with its micro-environment. For instance, consider what happens at the membrane before cAMP production is increased by hormone action. A dynamic scheme shows an initial effect on the hormone receptor (hormone binding changes its conformation), followed by a conformational change in the GTPase caused by receptor interaction, and finally a change in the conformation and activity of adenylate cyclase after binding of the GTPase protein, in a complex system that depends on self-assembly and on conformational changes in response to hormonal signals (see R. A. Kahn and A. G. Gilman, 1984, J. Biol. Chem. 259(10): 6235–6240; in this case, whether G is trimeric or dimeric does not matter). After the step of increased cAMP production we can also see changes in protein conformation: the effect of increased cAMP levels on the complex of inhibitor protein and protein kinase is likewise an effect on protein conformation. Increased cAMP levels lead to the separation of the inhibitor protein (R) from the cAMP-dependent protein kinase (C), removing the inhibitor and increasing C activity. R stands for the regulatory subunit and C for the catalytic subunit of the protein complex.
  • This cAMP effect on the quaternary structure of the enzyme complex (C, the protein kinase, plus R, the inhibitor) may be better understood as environmental information producing an effect in opposition to what might be considered a tendency toward a conformation “determined” by the genetic code. This “ideal” conformation “determined” by the genome would be seen only in crystalline protein. In carbohydrate metabolism, the hormonal signal causes a biochemical regulatory response that, in the liver, preserves homeostatic levels of glucose (one function) and, in muscle, preserves intracellular levels of ATP (another function).
  • Therefore, sequence alone does not explain the conformation, activity, and function of regulatory proteins. If this important regulatory mechanism were not ignored, the work of S. Prusiner (Prion diseases and the BSE crisis, Stanley B. Prusiner, Science 1997; 278: 245–251, 10 October) would be easily understood. We would be accustomed to reasoning about changes in protein conformation caused by protein interaction with other proteins, lipids, small molecules, and even ions.
  • When this flawed biochemical reasoning is applied to microorganisms, it is still wrong but causes only a minor error most of the time, since we may reduce almost all activity of a microorganism’s proteins to a single function: the production of another microorganism. Even so, microorganisms respond differently to their micro-environment despite having a single genome (see the work on the dimorphic fungus M. rouxii, later). The root of the error is that proteins are proteins and DNA is DNA, quite different in chemical terms: proteins must change their conformation to allow fast regulatory responses, while DNA must preserve its sequence to allow genetic inheritance.
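The cAMP-dependent relay described in the first bullet above can be sketched as a chain of conformational state changes. This is a minimal qualitative sketch; the function name and the boolean "active/inactive" states are illustrative assumptions, not a quantitative signalling model.

```python
# Qualitative sketch of the cAMP-dependent hormonal relay:
# hormone binding -> receptor conformation change -> GTPase (G protein)
# -> adenylate cyclase -> elevated cAMP -> R/C dissociation of PKA.
def hormone_cascade(hormone_bound: bool) -> dict:
    state = {
        "receptor_active": False,
        "g_protein_active": False,
        "adenylate_cyclase_active": False,
        "cAMP_elevated": False,
        "PKA_C_active": False,  # catalytic subunit freed from R inhibitor
    }
    if not hormone_bound:
        return state
    # Each step is a protein-protein interaction changing conformation:
    state["receptor_active"] = True                       # hormone binds receptor
    state["g_protein_active"] = state["receptor_active"]  # receptor activates GTPase
    state["adenylate_cyclase_active"] = state["g_protein_active"]
    state["cAMP_elevated"] = state["adenylate_cyclase_active"]
    # Elevated cAMP separates R (regulatory) from C (catalytic) subunits:
    state["PKA_C_active"] = state["cAMP_elevated"]
    return state

print(hormone_cascade(hormone_bound=True)["PKA_C_active"])   # True
print(hormone_cascade(hormone_bound=False)["PKA_C_active"])  # False
```

The point of the sketch is the one made in the text: every arrow in the chain is a conformational change driven by the micro-environment, not by sequence alone.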


The History of Infectious Diseases and Epidemiology in the late 19th and 20th Century

Curator: Larry H Bernstein, MD, FCAP

 

Infectious diseases are part of the history of the English, French, and Spanish colonization of the Americas and of the slave trade. The many plagues of the New and Old Worlds that have affected the course of history from ancient to modern times were known to the Egyptians, Greeks, Chinese, crusaders, and explorers, and to Napoleon, and carried familiar ties of war, pestilence, and epidemic. Our coverage is mainly concerned with the scientific and public health consequences of the events that preceded WWI and extended to the Vietnam War, highlighted by the development of a worldwide public health system.

The Armed Forces Institute of Pathology (AFIP) closed its doors on September 15, 2011. It was founded as the Army Medical Museum on May 21, 1862, to collect pathological specimens along with their case histories.

The information from the case files of the pathological specimens from the Civil War was compared with Army pensions records and compiled into the six-volume Medical and Surgical History of the War of the Rebellion, an early study of wartime medicine.

In 1900, museum curator Walter Reed led the commission which proved that a mosquito was the vector for Yellow Fever, beginning the mosquito eradication campaigns throughout most of the twentieth century.

Walter Reed

Another museum curator, Frederick Russell, conducted clinical trials of the typhoid vaccine in 1907, making the U.S. Army the first army to be vaccinated against typhoid.

Increased emphasis on pathology during the twentieth century turned the museum, renamed the Armed Forces Institute of Pathology in 1949, into an international resource for pathology and the study of disease. AFIP’s pathological collections have been used, for example, in the characterization of the 1918 influenza virus in 1997.

Prior to moving to the Walter Reed Army Medical Center, the AFIP was located at the Army Medical Museum and Library on the Mall (1887–1969) and, earlier, as the Army Medical Museum in Ford’s Theatre (1867–1886).

Army Medical Museum and Library on the Mall

This institution, originally the Library of the Surgeon General’s Office (U.S. Army), gained its present name and was transferred from the Army to the Public Health Service in 1956. In 1962, it moved to its own Bethesda site after sharing space for nearly 100 years with other Army units, first at the former Ford’s Theatre building and then at the Army Medical Museum and Library on the Mall. Rare books and other holdings that had been sent to Cleveland for safekeeping during World War II were also reunited with the main collection at that time.

The National Museum of Health and Medicine, established in 1862, inspires interest in and promotes understanding of medicine — past, present, and future — with a special emphasis on tri-service American military medicine. As a National Historic Landmark recognized for its ongoing value to the health of the military and the nation, the Museum identifies, collects, and preserves important and unique resources to support a broad agenda of innovative exhibits, educational programs, and scientific, historical, and medical research. NMHM is a headquarters element of the U.S. Army Medical Research and Materiel Command.

NMHM’s newest exhibit installations showcase the institution’s 25-million-object collection, focusing on topics as diverse as innovations in military medicine, traumatic brain injury, anatomy and pathology, military medicine during the Civil War, the assassination of Abraham Lincoln, human identification, and a special exhibition on the Museum’s own major milestone, the 150th anniversary of the founding of the Army Medical Museum. Objects on display include familiar artifacts and specimens, such as the bullet that killed Lincoln and a leg showing the effects of elephantiasis, as well as recent finds in the collection, all designed to astound visitors to the new Museum.

Today, the National Library of Medicine houses the largest collection of print and non-print materials in the history of the health sciences in the United States, and maintains an active program of exhibits and public lectures. Most of the archival and manuscript material dates from the 17th century; however, the Library owns about 200 pre-1601 Western and Islamic manuscripts. Holdings include pre-1914 books, pre-1871 journals, archives and modern manuscripts, medieval and Islamic manuscripts, a collection of printed books, manuscripts, and visual material in Japanese, Chinese, and Korean; historical prints, photographs, films, and videos; pamphlets, dissertations, theses, college catalogs, and government documents.

The oldest item in the Library is an Arabic manuscript on gastrointestinal diseases from al-Razi’s The Comprehensive Book on Medicine (Kitab al-Hawi fi al-tibb) dated 1094. Significant modern collections include the papers of U.S. Surgeons General, including C. Everett Koop, and the papers of Nobel Prize-winning scientists, particularly those connected with NIH.

As part of its Profiles in Science project, the National Library of Medicine has collaborated with the Churchill Archives Centre to digitize and make available over the World Wide Web a selection of the Rosalind Franklin Papers for use by educators and researchers. This site provides access to the portions of the Rosalind Franklin Papers, which range from 1920 to 1975. The collection contains photographs, correspondence, diaries, published articles, lectures, laboratory notebooks, and research notes.

Rosalind Franklin

“Science and everyday life cannot and should not be separated. Science, for me, gives a partial explanation of life. In so far as it goes, it is based on fact, experience, and experiment. . . . I agree that faith is essential to success in life, but I do not accept your definition of faith, i.e., belief in life after death. In my view, all that is necessary for faith is the belief that by doing our best we shall come nearer to success and that success in our aims (the improvement of the lot of mankind, present and future) is worth attaining.”

–Rosalind Franklin in a letter to Ellis Franklin, ca. summer 1940

Smallpox

Although some disliked mandatory smallpox vaccination measures, coordinated efforts against smallpox continued in the United States after 1867, and the disease kept diminishing in the wealthy countries. By 1897, smallpox had largely been eliminated from the United States. In Northern Europe a number of countries had eliminated smallpox by 1900, and by 1914 the incidence in most industrialized countries had decreased to comparatively low levels. Vaccination continued in industrialized countries until the mid-to-late 1970s as protection against reintroduction. Australia and New Zealand are two notable exceptions: neither experienced endemic smallpox, and neither vaccinated widely, relying instead on protection by distance and strict quarantines.

In 1966 an international team, the Smallpox Eradication Unit, was formed under the leadership of an American, Donald Henderson. In 1967, the World Health Organization intensified the global smallpox eradication effort, contributing $2.4 million annually and adopting the new disease surveillance method promoted by Czech epidemiologist Karel Raška. Two-year-old Rahima Banu of Bangladesh was the last person infected with naturally occurring Variola major, in 1975.

The global eradication of smallpox was certified, based on intense verification activities in countries, by a commission of eminent scientists on 9 December 1979 and subsequently endorsed by the World Health Assembly on 8 May 1980. The first two sentences of the resolution read:

Having considered the development and results of the global program on smallpox eradication initiated by WHO in 1958 and intensified since 1967 … Declares solemnly that the world and its peoples have won freedom from smallpox, which was a most devastating disease sweeping in epidemic form through many countries since earliest time, leaving death, blindness and disfigurement in its wake and which only a decade ago was rampant in Africa, Asia and South America.

—World Health Organization, Resolution WHA33.3

Anthrax

Anthrax is an acute disease caused by the bacterium Bacillus anthracis. Most forms of the disease are lethal, and it affects both humans and other animals. Effective vaccines against anthrax are now available, and some forms of the disease respond well to antibiotic treatment.

Like many other members of the genus Bacillus, B. anthracis can form dormant endospores (often referred to as “spores” for short, but not to be confused with fungal spores) that are able to survive in harsh conditions for decades or even centuries. Such spores can be found on all continents, even Antarctica. When spores are inhaled, ingested, or come into contact with a skin lesion on a host, they may become reactivated and multiply rapidly.

Anthrax commonly infects wild and domesticated herbivorous mammals that ingest or inhale the spores while grazing. Ingestion is thought to be the most common route by which herbivores contract anthrax. Carnivores living in the same environment may become infected by consuming infected animals. Diseased animals can spread anthrax to humans, either by direct contact (e.g., inoculation of infected blood to broken skin) or by consumption of a diseased animal’s flesh.

Anthrax does not spread directly from one infected animal or person to another; it is spread by spores. These spores can be transported by clothing or shoes. The body of an animal that had active anthrax at the time of death can also be a source of anthrax spores. Owing to the hardiness of anthrax spores, and their ease of production in vitro, they are extraordinarily well suited to use (in powdered and aerosol form) as biological weapons.

Bacillus anthracis is a rod-shaped, Gram-positive, aerobic bacterium about 1 by 9 μm in size. It was shown to cause disease by Robert Koch in 1876 when he took a blood sample from an infected cow, isolated the bacteria and put them into a mouse. The bacterium normally rests in endospore form in the soil, and can survive for decades in this state. Once ingested or placed in an open wound, the bacterium begins multiplying inside the animal or human and typically kills the host within a few days or weeks. The endospores germinate at the site of entry into the tissues and then spread by the circulation to the lymphatics, where the bacteria multiply.

Robert Koch

Veterinarians can often tell a possible anthrax-induced death by its sudden occurrence, and by the dark, nonclotting blood that oozes from the body orifices. Bacteria that escape the body via oozing blood or through the opening of the carcass may form hardy spores. One spore forms per one vegetative bacterium. Once formed, these spores are very hard to eradicate.

The lethality of the anthrax disease is due to the bacterium’s two principal virulence factors: the poly-D-glutamic acid capsule, which protects the bacterium from phagocytosis by host neutrophils, and the tripartite protein toxin, called anthrax toxin. Anthrax toxin is a mixture of three protein components: protective antigen (PA), edema factor (EF), and lethal factor (LF). PA plus LF produces lethal toxin, and PA plus EF produces edema toxin. These toxins cause death and tissue swelling (edema), respectively.

To enter the cells, the edema and lethal factors use another protein produced by B. anthracis called protective antigen, which binds to two surface receptors on the host cell. A cell protease then cleaves PA into two fragments: PA20 and PA63. PA20 dissociates into the extracellular medium, playing no further role in the toxic cycle. PA63 then oligomerizes with six other PA63 fragments forming a heptameric ring-shaped structure named a prepore.

Once in this shape, the complex can competitively bind up to three EFs or LFs, forming a resistant complex. Receptor-mediated endocytosis occurs next, providing the newly formed toxic complex access to the interior of the host cell. The acidified environment within the endosome triggers the heptamer to release the LF and/or EF into the cytosol.

Edema factor is a calmodulin-dependent adenylate cyclase, which catalyzes the conversion of ATP into cyclic AMP (cAMP) and pyrophosphate. The complexation of adenylate cyclase with calmodulin sequesters calmodulin, removing it from calcium-triggered signaling. LF inactivates neutrophils so they cannot phagocytose bacteria. Anthrax causes vascular leakage of fluid and cells, and ultimately hypovolemic shock and septic shock.
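The toxin-entry steps above (PA cleaved into PA20 and PA63, seven PA63 fragments assembling into a heptameric prepore, and the prepore competitively binding at most three EF/LF factors) can be sketched as a tiny data-structure model. The class and method names are illustrative assumptions, not from any real bioinformatics library.

```python
# Sketch of anthrax toxin assembly as described in the text:
# the PA63 heptamer ("prepore") has capacity for up to three
# EF and/or LF factors, bound competitively.
class Prepore:
    PA63_COPIES = 7   # heptameric ring of PA63 fragments
    MAX_FACTORS = 3   # competitive binding sites for EF and/or LF

    def __init__(self):
        self.factors = []

    def bind(self, factor: str) -> bool:
        """Try to bind an 'EF' or 'LF' factor; refuse once full."""
        if factor not in ("EF", "LF") or len(self.factors) >= self.MAX_FACTORS:
            return False
        self.factors.append(factor)
        return True

pore = Prepore()
results = [pore.bind(f) for f in ["LF", "EF", "LF", "LF"]]
print(results)  # fourth binding attempt fails: [True, True, True, False]
```

The capacity limit is the point: once three factors are bound, the complex is complete and proceeds to receptor-mediated endocytosis as described above.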

Occupational exposure to infected animals or their products (such as skin, wool, and meat) is the usual pathway of exposure for humans. Workers who are exposed to dead animals and animal products are at the highest risk, especially in countries where anthrax is more common. Anthrax in livestock grazing on open range where they mix with wild animals still occasionally occurs in the United States and elsewhere. Many workers who deal with wool and animal hides are routinely exposed to low levels of anthrax spores, but most exposure levels are not sufficient to cause anthrax infection; the body’s natural defenses presumably can destroy low levels of exposure. These people usually contract cutaneous anthrax if they become infected at all.

Throughout history, the most dangerous form of inhalational anthrax was called woolsorters’ disease because it was an occupational hazard for people who sorted wool. Today, this form of infection is extremely rare, as almost no infected animals remain. The last fatal case of natural inhalational anthrax in the United States occurred in California in 1976, when a home weaver died after working with infected wool imported from Pakistan. Gastrointestinal anthrax is exceedingly rare in the United States, with only one case on record, reported in 1942, according to the Centers for Disease Control and Prevention.

Various techniques are used for the direct identification of B. anthracis in clinical material. Firstly, specimens may be Gram stained. Bacillus spp. are quite large in size (3 to 4 μm long), they grow in long chains, and they stain Gram-positive. To confirm the organism is B. anthracis, rapid diagnostic techniques such as polymerase chain reaction-based assays and immunofluorescence microscopy may be used.

All Bacillus species grow well on 5% sheep blood agar and other routine culture media. Polymyxin-lysozyme-EDTA-thallous acetate can be used to isolate B. anthracis from contaminated specimens, and bicarbonate agar is used as an identification method to induce capsule formation. Bacillus spp. usually grow within 24 hours of incubation at 35 °C, in ambient air (room temperature) or in 5% CO2. If bicarbonate agar is used for identification, then the medium must be incubated in 5% CO2.

B. anthracis colonies are medium-large, gray, flat, and irregular, with swirling projections often described as a “medusa head” appearance, and are not hemolytic on 5% sheep blood agar. The bacteria are nonmotile, susceptible to penicillin, and produce a wide zone of lecithinase on egg yolk agar. Confirmatory testing to identify B. anthracis includes gamma bacteriophage testing, indirect hemagglutination, and enzyme-linked immunosorbent assay to detect antibodies. The best confirmatory precipitation test for anthrax is the Ascoli test.

Vaccines against anthrax for use in livestock and humans have had a prominent place in the history of medicine, from Pasteur’s pioneering 19th-century work with cattle (the second effective vaccine ever) to the controversial 20th century use of a modern product (BioThrax) to protect American troops against the use of anthrax in biological warfare. Human anthrax vaccines were developed by the Soviet Union in the late 1930s and in the US and UK in the 1950s. The current FDA-approved US vaccine was formulated in the 1960s.

If a person is suspected as having died from anthrax, every precaution should be taken to avoid skin contact with the potentially contaminated body and fluids exuded through natural body openings. The body should be put in strict quarantine and then incinerated. A blood sample should then be collected and sealed in a container and analyzed in an approved laboratory to ascertain if anthrax is the cause of death. Microscopic visualization of the encapsulated bacilli, usually in very large numbers, in a blood smear stained with polychrome methylene blue (McFadyean stain) is fully diagnostic, though culture of the organism is still the gold standard for diagnosis.

Full isolation of the body is important to prevent possible contamination of others. Protective, impermeable clothing and equipment such as rubber gloves, rubber apron, and rubber boots with no perforations should be used when handling the body. Disposable personal protective equipment and filters should be autoclaved, and/or burned and buried.

Anyone working with anthrax in a suspected or confirmed victim should wear respiratory equipment capable of filtering particles of anthrax-spore size or smaller. A high-efficiency respirator approved by the US National Institute for Occupational Safety and Health and the Mine Safety and Health Administration, such as a half-face disposable respirator with a high-efficiency particulate air filter, is recommended.

All possibly contaminated bedding or clothing should be isolated in double plastic bags and treated as possible biohazard waste. The victim should be sealed in an airtight body bag. Dead victims who are opened and not burned provide an ideal source of anthrax spores. Cremating victims is the preferred way of handling body disposal.

Until the 20th century, anthrax infections killed hundreds of thousands of animals and people worldwide each year. French scientist Louis Pasteur developed the first effective vaccine for anthrax in 1881.

louis-pasteur


As a result of over a century of animal vaccination programs, sterilization of raw animal waste materials, and anthrax eradication programs in the United States, Canada, Russia, Eastern Europe, Oceania, and parts of Africa and Asia, anthrax infection is now relatively rare in domestic animals. Anthrax is especially rare in dogs and cats, as evidenced by a single reported case in the United States in 2001.

Anthrax outbreaks occur in some wild animal populations with some regularity. The disease is more common in countries without widespread veterinary or human public health programs. In the 21st century, anthrax is still a problem in less developed countries.

B. anthracis bacterial spores are soil-borne. Because of their long lifespan, spores are present globally and remain at the burial sites of animals killed by anthrax for many decades. Disturbed grave sites of infected animals have caused reinfection over 70 years after the animal’s interment.

Cholera

This is an acute diarrheal infection that can kill within a matter of hours if untreated. The standard treatment is oral rehydration therapy — drinking water mixed with salts and sugar. But researchers at EPFL — the Swiss Federal Institute of Technology in Lausanne — say using rice starch instead of sugar with the rehydration salts could reduce bacterial toxicity by almost 75 percent, making the microbe less likely to infect a patient’s family and friends if they are exposed to any body fluids.

The World Health Organization says cholera, a water-borne bacterial disease, infects three to five million people every year, and the severe dehydration it causes leads to as many as 120,000 deaths.

Cholera is an acute diarrheal disease caused by the waterborne bacterium Vibrio cholerae, serogroup O1 or O139. Infection is mainly through ingestion of contaminated water or food. V. cholerae passes through the stomach, colonizes the upper part of the small intestine, penetrates the mucus layer, and secretes cholera toxin, which affects the small intestine.

Clinically, the majority of cholera episodes are characterized by a sudden onset of massive diarrhea and vomiting, accompanied by the loss of profuse amounts of protein-free fluid with electrolytes. The resulting dehydration produces tachycardia, hypotension, and vascular collapse, which can lead to sudden death. The diagnosis of cholera is commonly established by isolating the causative organism from the stools of infected individuals.

There are an estimated 3–5 million cholera cases and 100,000–120,000 deaths due to cholera every year.

Up to 80% of cases can be successfully treated with oral rehydration salts.

Effective control measures rely on prevention, preparedness and response.

Provision of safe water and sanitation is critical in reducing the impact of cholera and other waterborne diseases.

Oral cholera vaccines are considered an additional means to control cholera, but should not replace conventional control measures.
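The burden figures above pin down an implied overall case-fatality range. A quick back-of-the-envelope check, sketched in Python using only the numbers quoted in the text:

```python
# Annual cholera burden as cited above (WHO estimates).
cases = (3_000_000, 5_000_000)   # estimated cases per year (low, high)
deaths = (100_000, 120_000)      # estimated deaths per year (low, high)

# Bounds on the case-fatality ratio: the lowest ratio pairs the fewest
# deaths with the most cases; the highest pairs the most deaths with
# the fewest cases.
cfr_low = deaths[0] / cases[1]   # 2%
cfr_high = deaths[1] / cases[0]  # 4%

print(f"Implied case-fatality ratio: {cfr_low:.0%} to {cfr_high:.0%}")
```

Roughly 2–4% overall, which fits the statement that most cases can be treated successfully with oral rehydration salts.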

During the 19th century, cholera spread across the world from its original reservoir in the Ganges delta in India. Six subsequent pandemics killed millions of people across all continents. The current (seventh) pandemic started in South Asia in 1961, and reached Africa in 1971 and the Americas in 1991. Cholera is now endemic in many countries.

INDIA-ENVIRONMENT-POLUTION


In its extreme manifestation, cholera is one of the most rapidly fatal infectious illnesses known. Within 3–4 hours of onset of symptoms, a previously healthy person may become severely dehydrated and if not treated may die within 24 hours (WHO, 2010). The disease is one of the most researched in the world today; nevertheless, it is still an important public health problem despite more than a century of study, especially in developing tropical countries. Cholera is currently listed as one of three internationally quarantinable diseases by the World Health Organization (WHO), along with plague and yellow fever (WHO, 2000a).

Two serogroups of V. cholerae – O1 and O139 – cause outbreaks. V. cholerae O1 causes the majority of outbreaks, while O139 – first identified in Bangladesh in 1992 – is confined to South-East Asia.

Non-O1 and non-O139 V. cholerae can cause mild diarrhoea but do not generate epidemics.

The main reservoirs of V. cholerae are people and aquatic sources such as brackish water and estuaries, often associated with algal blooms. Recent studies indicate that global warming creates a favorable environment for the bacteria.

Socioeconomic and demographic factors enhance the vulnerability of a population to infection and contribute to epidemic spread; they determine the extent to which the disease reaches epidemic proportions and modulate the size of the epidemic. Known population-level (local-level) risk factors for cholera include poverty, lack of development, high population density, low education, and lack of previous exposure. Cholera diffuses rapidly in environments that lack basic infrastructure for access to safe water and proper sanitation. The cholera vibrios can survive and multiply outside the human body and can spread rapidly where living conditions are overcrowded and there is no safe disposal of solid waste, liquid waste, and human feces.

By mapping the locations of cholera victims in London’s 1854 outbreak, John Snow was able to trace the cause of the disease to a contaminated water source. Remarkably, this was done 20 years before Koch and Pasteur established the beginnings of microbiology (Koch, 1884).

John Snow’s map

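Snow’s method amounts to attributing each case to its nearest water source and looking for a cluster. A minimal sketch of that idea in Python (the pump names and coordinates below are hypothetical illustrations, not Snow’s actual data):

```python
from collections import Counter
from math import dist

# Hypothetical pump locations and case locations (arbitrary map units).
pumps = {
    "Pump A": (0.0, 0.0),
    "Pump B": (5.0, 1.0),
    "Pump C": (2.0, 6.0),
}
cases = [(0.4, 0.2), (-0.3, 0.5), (0.1, -0.6), (4.8, 1.1), (0.2, 0.3)]

def nearest_pump(point):
    """Name of the pump closest to a case location (Euclidean distance)."""
    return min(pumps, key=lambda name: dist(pumps[name], point))

# Tally cases by nearest pump; one dominant pump suggests a point source.
tally = Counter(nearest_pump(c) for c in cases)
print(tally.most_common())  # [('Pump A', 4), ('Pump B', 1)]
```

With real coordinates for the Broad Street area, the same tally is what points to the contaminated pump.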

Yellow Fever

Yellow fever virus was probably introduced into the New World via ships carrying slaves from West Africa. Throughout the 18th and 19th centuries, regular and devastating epidemics of yellow fever occurred across the Caribbean, Central and South America, the southern United States and Europe. The Yellow Fever Commission, founded as a consequence of excessive disease mortality during the Spanish–American War (1898), concluded that the best way to control the disease was to control the mosquito. William Gorgas successfully eradicated yellow fever from Havana by destroying larval breeding sites and this strategy of source reduction was then successfully used to reduce disease problems and thus finally permit the construction of the Panama Canal in 1904. Success was due largely to a top-down, military approach involving strict supervision and discipline (Gorgas, 1915). In 1946, an intensive Aedes aegypti eradication campaign was initiated in the Americas, which succeeded in reducing vector populations to undetectable levels throughout most of its range.

The production of an effective vaccine in the 1930s led to a change of emphasis from vector control to vaccination for the control of yellow fever. Vaccination campaigns almost eliminated urban yellow fever but incomplete coverage, as with incomplete anti-vectorial measures previously, meant the disease persisted, and outbreaks occurred in remote forest areas.

It was acknowledged by the Health Organization of the League of Nations (the forerunner to the World Health Organization (WHO)) that yellow fever was a severe burden on endemic countries. The work of Soper and the Brazilian Cooperative Yellow Fever Service (Soper, 1934, 1935a, b) began to determine the geographical extent of the disease, specifically in Brazil. Regional maps of disease outbreaks were published by Sawyer (1934), but it was not until after the formation of the WHO that a global map of yellow fever endemicity was first constructed (van Rooyen and Rhodes, 1948). This map was based on expert opinion (United Nations Relief and Rehabilitation Administration/Expert Commission on Quarantine) and serological surveys. The present-day distribution map for yellow fever is still essentially a modified version of this map.

global yellow fever risk map


Yellow fever is conspicuously absent from Asia. Although there is some evidence that other flaviviruses may offer cross-protection against yellow fever (Gordon-Smith et al., 1962), why yellow fever does not occur in Asia is still unexplained.

It has been estimated that the currently circulating strains of yellow fever virus (YFV) arose in Africa within the last 1,500 years and emerged in the Americas with the slave trade approximately 300–400 years ago. These viruses then spread westwards across the continent and persist to this day in the jungles of South America.

The 17D live-attenuated vaccine still in use today was developed in 1936, and a single dose confers immunity for at least ten years in 95% of the cases. In a bid to contain the spread of the disease, travellers to countries within endemic areas or those thought to be ‘at risk’ require a certificate of vaccination. The yellow fever certificate is the only internationally regulated certification supported by the WHO. The effectiveness of the vaccine reduces the need for anti-vectorial campaigns directed specifically against yellow fever. As the same major vector is involved, control of Aedes aegypti for dengue reduction will also reduce yellow fever transmission where both diseases co-occur, especially within urban settings.

Dengue

Probable epidemics of dengue fever have been recorded from Africa, Asia, Europe and the Americas since the early 19th century (Armstrong, 1923). Although it is rarely fatal, up to 90% of the population of an infected area can be incapacitated during the course of an epidemic (Armstrong, 1923; Siler et al., 1926). Widespread movements of troops and refugees during and after World War II introduced vectors and viruses into many new areas. Dengue fever has unsurprisingly been mistaken for yellow fever as well as other diseases including influenza, measles, typhoid and malaria. Survivors appear to have lifelong immunity to the homologous serotype.

Far more serious is dengue haemorrhagic fever (DHF), where additional symptoms develop, including haemorrhaging and shock. The mortality from DHF can exceed 30% if appropriate care is unavailable. The most significant risk factor for DHF is when secondary infection with a different serotype occurs in people who have already had, and recovered from, a primary dengue infection.

Dengue has adapted to changes in human demography very effectively. The main vector of dengue is the anthropophilic Aedes aegypti, which is found in close association with human settlements throughout the tropics, breeding mainly in containers in and around houses and feeding almost exclusively on humans. As a result, dengue is essentially a disease of tropical urban areas. Before 1970, only nine countries had experienced DHF epidemics, but by 1995 this number had increased fourfold (WHO, 2001). Dengue case numbers have increased considerably since the 1960s; by the end of the 20th century an estimated 50 million cases of dengue fever and 500,000 cases of DHF were occurring every year (WHO, 2001).

The appearance of DHF stimulated large amounts of dengue research, which established the existence of the four serotypes and the range of competent vectors, and led to the adoption of Aedes aegypti control programs in some areas (particularly South-East Asia) (Kilpatrick et al., 1970).

There have been several attempts to estimate the economic impact of dengue: the 1977 epidemic in Puerto Rico was thought to have cost between $6.1 million and $15.6 million ($26–$31 per clinical case) (Von Allmen et al., 1979), while the 1981 Cuban epidemic (with a total of 344,203 reported cases) cost about $103 million (around $299 per case) (Kouri et al., 1989).
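The per-case figures follow directly from the reported totals; for the Cuban epidemic, for example:

```python
# Figures as quoted in the text (Kouri et al., 1989).
total_cost_usd = 103_000_000
reported_cases = 344_203

cost_per_case = total_cost_usd / reported_cases
print(f"~${cost_per_case:.0f} per reported case")  # ~$299 per reported case
```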

There is no cure for dengue fever or for DHF. Currently, the only treatment is symptomatic, but this can reduce mortality from DHF to less than 1% (WHO, 2002). Unfortunately, the extent of dengue epidemics means that local public health services are often overwhelmed by the demands for treatment.

Malaria

Malaria is a serious and sometimes fatal disease caused by a parasite transmitted to people through the bites of infected mosquitoes. People who get malaria are typically very sick, with high fevers, shaking chills, and flu-like illness. About 1,500 cases of malaria are diagnosed in the United States each year. The vast majority of cases in the United States are in travelers and immigrants returning from countries where malaria transmission occurs, many from sub-Saharan Africa and South Asia. Malaria has been noted for more than 4,000 years. It became widely recognized in Greece by the 4th century BCE, and it was responsible for the decline of many of the city-state populations. Hippocrates noted the principal symptoms. In the Susruta, a Sanskrit medical treatise, the symptoms of malarial fever were described and attributed to the bites of certain insects. A number of Roman writers attributed malarial diseases to the swamps.

Following their arrival in the New World, Spanish Jesuit missionaries learned from indigenous Indian tribes of a medicinal bark used for the treatment of fevers. With this bark, the Countess of Chinchón, the wife of the Viceroy of Peru, was cured of her fever. The bark from the tree was then called Peruvian bark and the tree was named Cinchona after the countess. The medicine from the bark is now known as the antimalarial, quinine. Along with artemisinins, quinine is one of the most effective antimalarial drugs available today.

quinquina calisaya


Cinchona officinalis is a medicinal plant, one of several Cinchona species used for the production of quinine, which is an anti-fever agent. It is especially useful in the prevention and treatment of malaria. Cinchona calisaya is the tree most cultivated for quinine production.

There are a number of other alkaloids that are extracted from this tree. They include cinchonine, cinchonidine and quinidine (Wikipedia).

Charles Louis Alphonse Laveran, a French army surgeon stationed in Constantine, Algeria, was the first to notice parasites in the blood of a patient suffering from malaria in 1880. Laveran was awarded the Nobel Prize in 1907.

Alphonse Laveran


Camillo Golgi, an Italian neurophysiologist, established that there were at least two forms of the disease, one with tertian periodicity (fever every other day) and one with quartan periodicity (fever every third day). He also observed that the forms produced differing numbers of merozoites (new parasites) upon maturity and that fever coincided with the rupture and release of merozoites into the blood stream. He was awarded a Nobel Prize in Medicine for his discoveries in neurophysiology in 1906.

malaria_lifecycle.


Ookinete,_sporozoite,_merozoite


The Italian investigators Giovanni Battista Grassi and Raimondo Filetti first introduced the names Plasmodium vivax and P. malariae for two of the malaria parasites that affect humans in 1890. Laveran had believed that there was only one species, Oscillaria malariae. William H. Welch reviewed the subject and, in 1897, named the malignant tertian malaria parasite P. falciparum. In 1922, John William Watson Stephens described the fourth human malaria parasite, P. ovale. P. knowlesi was first described by Robert Knowles and Biraj Mohan Das Gupta in 1931 in a long-tailed macaque, but the first documented human infection with P. knowlesi was in 1965.

Anopheles mosquito


Ronald Ross, a British officer in the Indian Medical Service, was the first to demonstrate, in 1897, that malaria parasites could be transmitted from infected patients to mosquitoes. In further work with bird malaria, Ross showed that mosquitoes could transmit malaria parasites from bird to bird, establishing the sporogonic cycle (the time interval during which the parasite develops in the mosquito). Ross was awarded the Nobel Prize in 1902.

Ronald Ross_1899


A team of Italian investigators led by Giovanni Battista Grassi collected Anopheles claviger mosquitoes and fed them on malarial patients, demonstrating the complete sporogonic cycles of Plasmodium falciparum, P. vivax, and P. malariae. Mosquitoes infected by feeding on a patient in Rome were sent to London in 1900, where they fed on two volunteers, both of whom developed malaria.

The construction of the Panama Canal was made possible only after yellow fever and malaria were controlled in the area. These two diseases were a major cause of death and disease among workers in the area. In 1906, there were over 26,000 employees working on the Canal. Of these, over 21,000 were hospitalized for malaria at some time during their work. By 1912, there were over 50,000 employees, and the number of hospitalized workers had decreased to approximately 5,600. Through the leadership and efforts of William Crawford Gorgas, Joseph Augustin LePrince, and Samuel Taylor Darling, yellow fever was eliminated and malaria incidence markedly reduced through an integrated program of insect and malaria control.

Gorgas-William-Crawford, MD


During the U.S. military occupation of Cuba and the construction of the Panama Canal at the turn of the 20th century, U.S. officials made great strides in the control of malaria and yellow fever. In 1914 Henry Rose Carter and Rudolph H. von Ezdorf of the USPHS requested and received funds from the U.S. Congress to control malaria in the United States. Various activities to investigate and combat malaria in the United States followed from this initial request and reduced the number of malaria cases in the United States. USPHS established malaria control activities around military bases in the malarious regions of the southern United States to allow soldiers to train year round.

U.S. President Franklin D. Roosevelt signed a bill that created the Tennessee Valley Authority (TVA) on May 18, 1933. The law gave the federal government a centralized body to control the Tennessee River’s potential for hydroelectric power and improve the land and waterways for development of the region. An organized and effective malaria control program stemmed from this new authority in the Tennessee River valley. Malaria affected 30 percent of the population in the region when the TVA was incorporated in 1933. The Public Health Service played a vital role in the research and control operations, and by 1947 the disease was essentially eliminated: mosquito breeding sites were reduced by controlling water levels and applying insecticides.

Chloroquine was discovered by a German, Hans Andersag, in 1934 at the Bayer I.G. Farbenindustrie A.G. laboratories in Elberfeld, Germany. He named his compound resochin. Through a series of lapses and confusion brought about during the war, chloroquine was finally recognized and established as an effective and safe antimalarial in 1946 by British and U.S. scientists.

Felix Hoffmann, Gerhard Domagk, Hermann Schnell_BAYER


A German chemistry student, Othmer Zeidler, synthesized DDT in 1874, for his thesis. The insecticidal property of DDT was not discovered until 1939 by Paul Müller in Switzerland. Various militaries in WWII utilized the new insecticide initially for control of louse-borne typhus. DDT was used for malaria control at the end of WWII after it had proven effective against malaria-carrying mosquitoes by British, Italian, and American scientists. Müller won the Nobel Prize for Medicine in 1948.

Paul Muller


Malaria Control in War Areas (MCWA) was established to control malaria around military training bases in the southern United States and its territories, where malaria was still problematic. Many of the bases were established in areas where mosquitoes were abundant. MCWA aimed to prevent reintroduction of malaria into the civilian population by mosquitoes that would have fed on malaria-infected soldiers, in training or returning from endemic areas. During these activities, MCWA also trained state and local health department officials in malaria control techniques and strategies.

The National Malaria Eradication Program, a cooperative undertaking by state and local health agencies of 13 Southeastern states and the CDC, originally proposed by Louis Laval Williams, commenced operations on July 1, 1947. By the end of 1949, over 4,650,000 housespray applications had been made. In 1947, 15,000 malaria cases were reported. By 1950, only 2,000 cases were reported. By 1951, malaria was considered eliminated from the United States.

With the success of DDT, the advent of less toxic, more effective synthetic antimalarials, and the enthusiastic and urgent belief that time and money were of the essence, the World Health Organization (WHO) submitted at the World Health Assembly in 1955 an ambitious proposal for the eradication of malaria worldwide. Eradication efforts began and focused on house spraying with residual insecticides, antimalarial drug treatment, and surveillance, and would be carried out in 4 successive steps: preparation, attack, consolidation, and maintenance. Successes included elimination in nations with temperate climates and seasonal malaria transmission.

Some countries such as India and Sri Lanka had sharp reductions in the number of cases, followed by increases to substantial levels after efforts ceased, while other nations had negligible progress (such as Indonesia, Afghanistan, Haiti, and Nicaragua), and still others were excluded completely from the eradication campaign (sub-Saharan Africa). The emergence of drug resistance, widespread resistance to available insecticides, wars and massive population movements, difficulties in obtaining sustained funding from donor countries, and lack of community participation made the long-term maintenance of the effort untenable.

The goal of most current National Malaria Prevention and Control Programs, and of most malaria activities conducted in endemic countries, is to reduce the number of malaria-related cases and deaths. Reducing malaria transmission to a level at which it is no longer a public health problem is the goal of what is called malaria “control.”

The natural ecology of malaria involves malaria parasites infecting successively two types of hosts: humans and female Anopheles mosquitoes. In humans, the parasites grow and multiply first in the liver cells and then in the red cells of the blood. In the blood, successive broods of parasites grow inside the red cells and destroy them, releasing daughter parasites (“merozoites”) that continue the cycle by invading other red cells.

Anopheles mosquito


The blood stage parasites are those that cause the symptoms of malaria. When certain forms of blood stage parasites (“gametocytes”) are picked up by a female Anopheles mosquito during a blood meal, they start another, different cycle of growth and multiplication in the mosquito.

After 10-18 days, the parasites are found (as “sporozoites”) in the mosquito’s salivary glands. When the Anopheles mosquito takes a blood meal on another human, the sporozoites are injected with the mosquito’s saliva and start another human infection when they parasitize the liver cells.

Malaria. Wikipedia


A Plasmodium from the saliva of a female mosquito moving across a mosquito cell

Thus the mosquito carries the disease from one human to another (acting as a “vector”). Unlike the human host, the mosquito vector does not suffer from the presence of the parasites.
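The two-host cycle described above can be condensed into a stage-transition table. The sketch below is a pedagogical simplification using the stage names from this article: it collapses the repeated red-cell cycle and the mosquito’s sexual stages into single steps.

```python
# Each stage maps to (where it develops, next stage). Simplified: the
# repeating merozoite -> red cell -> merozoite loop and the mosquito's
# gamete/oocyst stages are each collapsed into one transition.
CYCLE = {
    "sporozoite": ("human liver cells", "merozoite"),
    "merozoite":  ("human red blood cells", "gametocyte"),
    "gametocyte": ("mosquito (taken up in a blood meal)", "ookinete"),
    "ookinete":   ("mosquito (develops over 10-18 days)", "sporozoite"),
}

def walk(start, steps):
    """Follow the transmission cycle for a given number of transitions."""
    stage, path = start, [start]
    for _ in range(steps):
        stage = CYCLE[stage][1]
        path.append(stage)
    return path

# One full loop returns to the infectious sporozoite stage.
print(walk("sporozoite", 4))
# ['sporozoite', 'merozoite', 'gametocyte', 'ookinete', 'sporozoite']
```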

All the clinical symptoms associated with malaria are caused by the asexual erythrocytic, or blood-stage, parasites. As the parasite develops in the erythrocyte, numerous known and unknown waste substances, such as hemozoin pigment and other toxic factors, accumulate in the infected red blood cell. These are dumped into the bloodstream when the infected cells lyse and release invasive merozoites. The hemozoin and other toxic factors such as glycosylphosphatidylinositol (GPI) stimulate macrophages and other cells to produce cytokines and other soluble factors, which act to produce the fever and rigors associated with malaria.

Ookinete,_sporozoite,_merozoite


Plasmodium falciparum-infected erythrocytes, particularly those with mature trophozoites, adhere to the vascular endothelium of venular blood vessel walls and when they become sequestered in the vessels of the brain it is a factor in causing the severe disease syndrome known as cerebral malaria, which is associated with high mortality.

Following the infective bite by the Anopheles mosquito, a period of time (the “incubation period”) goes by before the first symptoms appear. The incubation period in most cases varies from 7 to 30 days. The shorter periods are observed most frequently with P. falciparum and the longer ones with P. malariae.

malaria_lifecycle.


Antimalarial drugs taken for prophylaxis by travelers can delay the appearance of malaria symptoms by weeks or months, long after the traveler has left the malaria-endemic area. (This can happen particularly with P. vivax and P. ovale, both of which can produce dormant liver stage parasites; the liver stages may reactivate and cause disease months after the infective mosquito bite.)

The Influenza Pandemic of 1918

The Nation’s Health

If you had lived in the early twentieth century, your life expectancy would have been much shorter than it is today. Today, life expectancy for men is 75 years; for women, it is 80 years. In 1918, life expectancy for men was only 53 years; women’s life expectancy, at 54, was only marginally better.

Why was life expectancy so much shorter?

During the early twentieth century, communicable diseases (that is, diseases which can spread from person to person) were widespread. Influenza and pneumonia, along with tuberculosis and gastrointestinal infections such as diarrhea, killed Americans at an alarming rate, but non-communicable diseases such as cancer and heart disease also exacted a heavy toll. Accidents, especially in the nation’s unregulated factories and workshops, were also responsible for maiming and killing many workers.

High infant mortality further shortened life expectancy. In 1918, one in five American children did not live beyond their fifth birthday. In some cities, the situation was even worse, with thirty percent of all infants dying before their first birthday. Childhood diseases such as diphtheria, measles, scarlet fever and whooping cough contributed significantly to these high death rates.

osler_at_a_bedside


By 1900, an increasing number of physicians were receiving clinical training. This training provided doctors with new insights into disease and specific types of diseases. [Credit: National Library of Medicine]

scarlet_fever


Quarantine signs such as this one warned visitors away from homes with scarlet fever and other infectious diseases. [Credit: National Library of Medicine]

Rat Proofing

Cities often sponsored Clean-Up Days. Here, Public Health Service employees clean up San Francisco’s streets in a campaign to eradicate bubonic plague. [Credit: Office of the Public Health Service Historian]

cleanup days



nurse_helps_with_baby_formula


A public health nurse teaches a young mother how to sterilize a bottle. [Credit: National Library of Medicine]

Seeking Medical Care

Feeling Sick in 1918?

If you became sick in nineteenth-century America, you might consult a doctor, a druggist, a midwife, a folk healer, a nurse or even your neighbor. Most of these practitioners would visit you in your home.

By 1918, these attitudes toward health care were beginning to change. Some physicians had begun to set up offices where patients could receive medical care, and hospitals, which emphasized sterilization and isolation, were also becoming popular.

However, these changes were not yet universal, and many Americans still lived their entire lives without visiting a doctor.

How Did Ordinary People View Disease?

Folk Medicine:

In 1918, folk healers could be found all over America. Some of these healers believed that diseases had a physical cause, such as cold weather, but others believed they had a supernatural cause, such as a curse.

Treatments advocated by these healers ran the gamut. Herbal remedies were especially popular. Other popular remedies included cupping, which entailed attaching a heated cup to the surface of the skin, and acupuncture. Many people also wore magical objects which they believed protected the wearer from illness.

During the influenza pandemic of 1918, when scientific medicine failed to provide Americans with a cure or preventative, many people turned to folk remedies and treatments.

Scientific Medicine:

In the 1880s, building on developments which had been in the making since the 1830s, a growing number of scientists and physicians came to believe that disease was spread by minute pathogenic organisms, or germs.

Often called the bacteriological revolution, this new theory radically transformed the practice of medicine. But while this was a major step forward in understanding disease, doctors and scientists continued to have only a rudimentary understanding of the differences between types of microbes. Many practicing physicians did not understand the differences between bacteria and viruses, and this sharply limited their ability to understand disease causation and disease prevention.

Drugs and Druggists:

Although the early twentieth century witnessed growing attempts to regulate the practice of medicine, many druggists assumed duties we associate today with physicians. Some druggists, for example, diagnosed and prescribed treatments which they then sold to the patient. Some of these treatments included opiates; few actually cured diseases.

Desperate times called for desperate remedies, and during the influenza pandemic many patients turned to these and other drugs in the hopes that they would provide a cure.

Nurses:

Between 1890 and 1920, nursing schools multiplied and trained nurses began to replace practical nurses. Isolation, sterility, and strict routines, practices associated with professionally trained nurses, increasingly became standard during this period. In 1918, nurses served as the physician’s hand, assisting doctors as they made the rounds. During the pandemic, many nurses acted independently of doctors, treating and prescribing for patients.

Physicians:

Throughout the eighteenth and much of the nineteenth centuries, almost anyone could call himself a physician. By the late nineteenth century, growing calls for reform had begun to transform the profession.

In 1900, every state in the Union had some type of medical registration law, with about half of all states requiring physicians to possess a medical diploma and pass an exam before they received a license to practice. However, grandfather clauses which exempted many older physicians meant that many physicians who practiced in 1918 had been poorly trained.

quack_doctor


Poor training and loose regulations meant that some doctors were little more than quacks. [Credit: National Library of Medicine]

drug_ad


Drug advertisers routinely promised quick and painless cures. [Credit: National Library of Medicine]

While access to the profession was tightening, women and minorities, including African-Americans, entered the profession in growing numbers during the early twentieth century.

What Did Doctors Really Know?

Growing understanding of bacteriology enabled early twentieth-
century physicians to diagnose diseases more effectively than their
predecessors, but diagnosis continued to be difficult. Influenza was
especially tricky to diagnose, and many physicians may have incorrectly
diagnosed their patients, especially in the early stages of the pandemic.

Bacteriology did not revolutionize the treatment of disease. In the
pre-antibiotic era of 1918, physicians continued to rely heavily
on traditional therapeutics. During the pandemic, many physicians
used traditional treatments, such as sweating, which had their
roots in humoral medicine.

Reflecting the uneven structure of medical education, the level and
quality of care which physicians provided varied widely.

The Public Health Service

Founded in 1798, the Marine Hospital Service originally provided
health care for sick and disabled seamen. By the late nineteenth
century, the growth of trade, travel and immigration networks
had led the Service to expand its mission to include protecting
the health of all Americans.

In a nation where federal and state authorities had consistently
battled for supremacy, the powers of the Public Health Service
were limited. Viewed with suspicion by many state and local
authorities, PHS officers often found themselves fighting state
and local authorities as well as epidemics—even when they had
been called in by these authorities.

chelsea marine hospital in 1918

A network of hospitals in the nation’s ports provided seamen with
access to healthcare. [Credit: Office of the Public Health Service Historian]

In 1918, there were fewer than 700 commissioned officers in the PHS.
Charged with the daunting task of protecting the health of some
106 million Americans, PHS officers were stationed not only in
the United States but also abroad.

Because few diseases could be cured, the prevention of disease
was central to the PHS mission. Under the leadership of Surgeon
General Rupert Blue, the PHS advocated the use of scientific
research, domestic and foreign quarantine, marine hospitals
and statistics to accomplish this mission. When an epidemic emerged,
the Public Health Service’s epidemiologists tracked the disease,
house by house. The 1918 influenza pandemic, however, spread too
rapidly for the PHS to develop a detailed study of it.

typhoid_map

This map was used to trace a smaller typhoid epidemic which erupted in
Washington, DC in 1906. [Credit: Office of the Public Health Service Historian]

The spread of disease within the US was a serious concern. However,
PHS officers were most concerned about the importation of disease into
the United States. To prevent this, ships could be, and often were,
quarantined by the PHS.

fever-quaranteen-station-1880

Travelers and immigrants to the United States were also required
to undergo a medical exam when entering the country. In 1918 alone,
700,000 immigrants underwent a medical exam at the hands of PHS
officers. Within the United States, PHS officers worked directly with
state and local departments of health to track, prevent and arrest
epidemics as they emerged. During 1918, PHS officers found themselves
battling not only influenza but also polio, typhus, typhoid, smallpox
and a range of other diseases. In 1918, the PHS operated research
laboratories stretching from Hamilton, Montana, to Washington, DC.
Scientific researchers at these laboratories ultimately discovered
both the causes and cures of diseases ranging from Rocky Mountain
Spotted Fever to pellagra.

Sewers and Sanitation:

In the nineteenth century, most physicians and public health experts
believed that disease was caused not by microorganisms but rather by dirt itself.

Sanitarians, as these people were called, argued that cleaning dirt-
infested cities and building better sewage systems would both prevent
and end many epidemics. At their urging, cities and towns across the United
States built better sewage systems and provided citizens with access to
clean water. By 1918, these improved water and sewage systems had greatly
contributed to a decline in gastrointestinal infections and a significant
reduction in mortality rates among infants, children and young adults.

But because diseases are caused by microorganisms, not dirt, these
tactics were not completely effective in ending all epidemics.

Sanitation: Controlling problems at source

Box 1: Sharing toilets in Uganda

A recent survey by the Ministry of Health in Uganda suggested that there is only one toilet for every 700 Ugandan pupils, compared to one for every 328 pupils in 1995. Of the 8,000 schools surveyed, only 33% have separate latrines for girls. The deterioration in sanitary conditions was attributed to increased enrolment in schools. UNICEF surveyed 90 primary schools in crisis-affected districts of north and west Uganda: only 2% had adequate latrine facilities (IRIN, 1999).

Box 2: Sanitation and diarrhoeal disease

Gwatkin and Guillot (1999) have claimed that diarrhoea accounts for 11% of all deaths in the poorest 20% of all countries. This toll could be reduced by two key measures: better sanitation to reduce the causes of water-linked diarrhoea, and more widespread use of oral rehydration therapy (ORT) to treat its effects. Improving water supplies, sanitation facilities and hygiene practices reduces diarrhoea incidence by 26%. Even more impressive, deaths due to diarrhoea are reduced by 65% with these same improvements (Esrey et al., 1991). Of the 2.2 million people who die from diarrhoea each year, many are killed by a single bacterium, Shigella. Simple hand washing with soap and water reduces transmission of Shigella and other causes of diarrhoea by 35% (Kotloff et al., 1999; Khan, 1982). ORT is effective in reducing deaths due to diarrhoea but does not prevent it.

http://www.who.int/water_sanitation_health/sanitproblems/en/index1.html
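The magnitudes quoted in Box 2 can be checked with simple arithmetic. The sketch below applies the quoted percentages naively; since the trial-derived reductions cannot strictly be extrapolated to the global toll, the result is illustrative only:

```python
# Illustrative arithmetic using the figures quoted in Box 2.
# Naive extrapolation: the study percentages come from specific
# trial populations, not from the global death toll.

annual_diarrhoea_deaths = 2_200_000  # global deaths per year, as cited
mortality_reduction = 0.65           # water/sanitation/hygiene (Esrey et al., 1991)
incidence_reduction = 0.26           # same interventions, effect on incidence
handwashing_reduction = 0.35         # hand washing with soap (Kotloff; Khan)

averted = annual_diarrhoea_deaths * mortality_reduction
print(f"Deaths potentially averted per year: {averted:,.0f}")
# about 1.4 million under this naive reading of the cited figures
```

Even read this loosely, the cited percentages imply that sanitation and hygiene improvements could avert deaths on the order of a million per year.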

Garbage-A-polluted-creek

Garbage-A-polluted-creek

Influenza Strikes

Throughout history, influenza viruses have mutated and caused
pandemics or global epidemics. In 1890, an especially virulent influenza
pandemic struck, killing many Americans. Those who survived that
pandemic and lived to experience the 1918 pandemic tended to be
less susceptible to the disease.

Influenza ward

When it came to treating influenza patients, doctors, nurses and
druggists were at a loss. [Credit: Office of the Public Health Service Historian]

The influenza pandemic of 1918-1919 killed somewhere between 20
and 40 million people, more than the Great War, known today as
World War I (WWI). It has been cited as the most devastating
epidemic in recorded world history. More people died of influenza
in a single year than in the four years of the Black Death, the
bubonic plague of 1347 to 1351. Known as “Spanish Flu” or “La Grippe,”
the influenza of 1918-1919 was a global disaster.

Grim Reaper

The Grim Reaper by Louis Raemaekers

In the fall of 1918 the Great War in Europe was winding down and
peace was on the horizon. The Americans had joined in the fight,
bringing the Allies closer to victory against the Germans. Deep within
the trenches these men lived through some of the most brutal conditions
of life, which it seemed could not be any worse. Then, in pockets
across the globe, something erupted that seemed as benign as the
common cold. The influenza of that season, however, was far more
than a cold. In the two years that this scourge ravaged the earth,
a fifth of the world’s population was infected. The flu was most deadly
for people ages 20 to 40. This pattern of morbidity was unusual for
influenza which is usually a killer of the elderly and young children.
It infected 28% of all Americans (Tice). An estimated 675,000
Americans died of influenza during the pandemic, ten times as
many as in the world war. Of the U.S. soldiers who died in Europe,
half of them fell to the influenza virus and not to the enemy (Deseret
News). An estimated 43,000 servicemen mobilized for WWI died
of influenza (Crosby). 1918 would go down as an unforgettable year
of suffering and death, and yet of peace. As noted in the final 1918
edition of the Journal of the American Medical Association: “The 1918
has gone: a year momentous as the termination of the most cruel war
in the annals of the human race; a year which marked, the end at
least for a time, of man’s destruction of man; unfortunately a year in
which developed a most fatal infectious disease causing the death
of hundreds of thousands of human beings. Medical science for
four and one-half years devoted itself to putting men on the firing
line and keeping them there. Now it must turn with its whole might to
combating the greatest enemy of all–infectious disease,” (12/28/1918).

From Kansas to Europe and Back Again:

scourge ravaged the earth

Where did the 1918 influenza come from? And why was it so lethal?

In 1918, the Public Health Service had just begun to require state
and local health departments to provide them with reports about
diseases in their communities. The problem? Influenza wasn’t
a reportable disease.

But in early March of 1918, officials in Haskell County in Kansas
sent a worrisome report to the Public Health Service. Although
these officials knew that influenza was not a reportable disease,
they wanted the federal government to know that “18 cases
of influenza of a severe type” had been reported there.

By May, reports of severe influenza trickled in from Europe. Young
soldiers, men in the prime of life, were becoming ill in large
numbers. Most of these men recovered quickly but some developed
a secondary pneumonia of “a most virulent and deadly type.”

Within two months, influenza had spread from the military to the
civilian population in Europe. From there, the disease spread outward—to Asia, Africa, South America and, back again, to North America.

Wave After Wave:

In late August, the influenza virus probably mutated again and
epidemics now erupted in three port cities: Freetown, Sierra
Leone; Brest, France; and Boston, Massachusetts. In Boston,
dockworkers at Commonwealth Pier reported sick in massive
numbers during the last week in August. Suffering from fevers
as high as 105 degrees, these workers had severe muscle and
joint pains. For most of these men, recovery quickly followed. But
5 to 10% of these patients developed severe and massive
pneumonia. Death often followed.

Public health experts had little time to register their shock at the
severity of this outbreak. Within days, the disease had spread
outward to the city of Boston itself. By mid-September, the epidemic
had spread even further with states as far away as California, North
Dakota, Florida and Texas reporting severe epidemics.

The Unfolding of the Pandemic:

The pandemic of 1918-1919 occurred in three waves. The first
wave had occurred when mild influenza erupted in the late
spring and summer of 1918. The second wave occurred with an
outbreak of severe influenza in the fall of 1918 and the final wave
occurred in the spring of 1919.

In its wake, the pandemic would leave about twenty million dead
across the world. In America alone, about 675,000 people in
a population of 105 million would die from the disease.

Mobilizing to Fight Influenza:

Although taken unaware by the pandemic, federal, state and local
authorities quickly mobilized to fight the disease.

On September 27th, influenza became a reportable disease. However,
influenza had become so widespread by that time that most states
were unable to keep accurate records. Many simply failed to
report to the Public Health Service during the pandemic, leaving
epidemiologists to guess at the impact the disease may have
had in different areas.

World War I had left many communities with a shortage of trained
medical personnel. As influenza spread, local officials urgently
requested the Public Health Service to send nurses and doctors.
With fewer than 700 officers on duty, the Public Health Service was
unable to meet most of these requests. On the rare occasions when
the PHS was able to send physicians and nurses, they often became
ill en route. Those who did reach their destination safely often found
themselves both unprepared and unable to provide real assistance.

In October, Congress appropriated a million dollars for the Public
Health Service. The money enabled the PHS to recruit and pay
for additional doctors and nurses. The existing shortage of doctors
and nurses, caused by the war, made it difficult for the PHS to locate and hire qualified practitioners. The virulence of the disease also meant that many nurses and doctors contracted influenza
within days of being hired.

Confronted with a shortage of hospital beds, many local officials
ordered that community centers and local schools be transformed
into emergency hospitals. In some areas, the lack of doctors meant
that nursing and medical students were drafted to staff these
makeshift hospitals.

The Pandemic Hits:

Entire families became ill. In Philadelphia, a city especially hard hit,
so many children were orphaned that the Bureau of Child Hygiene
found itself overwhelmed and unable to care for them.

As the disease spread, schools and businesses emptied. Telegraph
and telephone services collapsed as operators took to their
beds. Garbage went uncollected as garbage men reported sick.
The mail piled up as postal carriers failed to come to work.

State and local departments of health also suffered from high
absentee rates. No one was left to record the pandemic’s spread
and the Public Health Service’s requests for information went
unanswered.

As the bodies accumulated, funeral parlors ran out of caskets
and bodies went uncollected in morgues.

Protecting Yourself From Influenza:

In the absence of a sure cure, fighting influenza seemed an
impossible task.

In many communities, quarantines were imposed to prevent
the spread of the disease. Schools, theaters, saloons, pool
halls and even churches were all closed. As the bodies
mounted, even funerals were held outdoors to protect mourners
against the spread of the disease.

An Emergency Hospital for Influenza Patients

The effect of the influenza epidemic was so severe that the
average life span in the US was depressed by 10 years.
The influenza virus had a profound virulence, with a mortality
rate of 2.5%, compared to less than 0.1% in previous influenza
epidemics. The death rate from influenza and pneumonia among
15- to 34-year-olds was 20 times higher in 1918 than in
previous years (Taubenberger). People were struck
with illness on the street and died rapid deaths.
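The fatality figures above can be cross-checked against numbers given elsewhere in this article (28% of roughly 105 million Americans infected, about 675,000 deaths). A quick calculation, for illustration only, lands near the cited 2.5% mortality rate:

```python
# Cross-check of the 1918 case-fatality figures quoted in the text.
us_population = 105_000_000  # approximate 1918 US population, as stated
attack_rate = 0.28           # share of Americans infected (Tice)
us_deaths = 675_000          # estimated US influenza deaths

infected = us_population * attack_rate
case_fatality = us_deaths / infected
print(f"Estimated infected: {infected:,.0f}")              # 29,400,000
print(f"Implied case-fatality rate: {case_fatality:.1%}")  # about 2.3%
```

The implied rate of roughly 2.3% is consistent with the 2.5% figure quoted above, given that both the attack rate and the death toll are rough estimates.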

One anecdote from 1918 tells of four women playing bridge
together late into the night. Overnight, three of the women died
from influenza (Hoagg). Others told stories of people on their way
to work suddenly developing the flu and dying within hours
(Henig). One physician writes that patients with seemingly
ordinary influenza would rapidly “develop the most vicious
type of pneumonia that has ever been seen” and later when
cyanosis appeared in the patients, “it is simply a struggle for air
until they suffocate,” (Grist, 1979). Another physician recalls
that the influenza patients “died struggling to clear their airways
of a blood-tinged froth that sometimes gushed from their nose
and mouth,” (Starr, 1976). The physicians of the time were
helpless against this powerful agent of influenza. In 1918 children
would skip rope to the rhyme (Crawford):

I had a little bird,

Its name was Enza.

I opened the window,

And in-flu-enza.

schools inspected

The influenza pandemic circled the globe. Most of humanity felt the
effects of this strain of the influenza virus. It spread following
the path of its human carriers, along trade routes and shipping lines.
Outbreaks swept through North America, Europe, Asia, Africa, Brazil
and the South Pacific (Taubenberger). In India the mortality rate was
extremely high at around 50 deaths from influenza per 1,000
people (Brown). The Great War, with its mass movements of men
in armies and aboard ships, probably aided in its rapid diffusion
and attack. The origins of the deadly flu disease were unknown but
widely speculated upon. Some of the Allies thought of the epidemic as a
biological warfare tool of the Germans. Many thought it was a result of
trench warfare, the use of mustard gas and the generated “smoke
and fumes” of the war. A national campaign began using the ready
rhetoric of war to fight the new enemy of microscopic proportions. One
study attempted to explain why the disease had been so devastating
in certain localized regions, looking at the climate, the weather and
the racial composition of cities. It found humidity to be linked with
more severe epidemics, as it “fosters the dissemination of the bacteria”
(Committee on Atmosphere and Man, 1923). Meanwhile, the new
sciences of infectious disease and immunology were
racing to come up with a vaccine or therapy to stop the epidemics.

The experiences of people in military camps encountering the
influenza pandemic: an excerpt from the memoirs of a survivor
of the pandemic at Camp Funston; a letter to a fellow physician
describing conditions during the influenza epidemic at Camp Devens;
and a collection of letters of a soldier stationed at Camp Funston.

The origin of this influenza variant is not precisely known. It is thought
to have originated in China in a rare genetic shift of the influenza virus.
The recombination of its surface proteins created a virus novel to
almost everyone and a loss of herd immunity. Recently the virus
has been reconstructed from the tissue of a dead soldier and is
now being genetically characterized.

The name of Spanish Flu came from the early affliction and large
mortalities in Spain (BMJ,10/19/1918) where it allegedly killed 8
million in May (BMJ, 7/13/1918). However, a first wave of influenza
appeared early in the spring of 1918 in Kansas and in military
camps throughout the US. Few noticed the epidemic in the midst of
the war. Wilson had just given his Fourteen Points address. There was
virtually no response to, or acknowledgment of, the epidemics in March
and April in the military camps. It was unfortunate that no steps were
taken to prepare for the usual recrudescence of the virulent influenza
strain in the winter. The lack of action was later criticized when the
epidemic could not be ignored in the winter of 1918 (BMJ, 1918).
These first epidemics at training camps were a sign of what was
coming in greater magnitude in the fall and winter of 1918 to the
entire world.

The war brought the virus back into the US for the second wave
of the epidemic. It first arrived in Boston in September of 1918
through the port busy with war shipments of machinery and supplies.
The war also enabled the virus to spread and diffuse. Men across
the nation were mobilizing to join the military and the cause. As they
came together, they brought the virus with them and to those they
contacted. The virus killed almost 200,000 in October of 1918
alone. On November 11, 1918, the end of the war enabled a resurgence.
As people celebrated Armistice Day with parades and large parties, a
complete disaster from the public health standpoint, a rebirth of
the epidemic occurred in some cities. The flu that winter was beyond
imagination as millions were infected and thousands died. Just as
the war had affected the course of influenza, influenza affected
the war. Entire fleets were ill with the disease and men on the front
were too sick to fight. The flu was devastating to both sides, killing
more men than their own weapons could.

With the military patients coming home from the war with battle wounds
and mustard gas burns, hospital facilities and staff were taxed
to the limit. This created a shortage of physicians, especially in the
civilian sector as many had been lost for service with the military.
Since the medical practitioners were away with the troops, only
the medical students were left to care for the sick. Third- and fourth-
year classes were closed and the students assigned jobs as
interns or nurses (Starr, 1976). One article noted that “depletion has
been carried to such an extent that the practitioners are brought
very near the breaking point” (BMJ, 11/2/1918). The shortage was
further compounded by the loss of physicians to the epidemic.
In the U.S., the Red Cross had to recruit more volunteers to contribute
to the new cause at home of fighting the influenza epidemic. To respond
with the fullest utilization of nurses, volunteers and medical supplies, the
Red Cross created a National Committee on Influenza. It was involved
in both military and civilian sectors to mobilize all forces to fight Spanish
influenza (Crosby, 1989). In some areas of the US, the nursing shortage
was so acute that the Red Cross had to ask local businesses to
allow workers the day off if they volunteered in the hospitals
at night (Deseret News). Emergency hospitals were created to
take in the patients from the US and those arriving sick from overseas.

chelsea marine hospital in 1918

red_cross_public_health_nurse

The pandemic affected everyone. With one-quarter of the US and
one-fifth of the world infected with the influenza, it was impossible
to escape from the illness. Even President Woodrow Wilson suffered
from the flu in early 1919 while negotiating the crucial Treaty of
Versailles to end the World War (Tice). Those who were
lucky enough to avoid infection had to deal with the public health
ordinances to restrain the spread of the disease.

The public health departments distributed gauze masks to be worn
in public. Stores could not hold sales, funerals were limited
to 15 minutes. Some towns required a signed certificate to
enter and railroads would not accept passengers without
them. Those who ignored the flu ordinances had to pay steep
fines enforced by extra officers (Deseret News). Bodies piled up
as the massive deaths of the epidemic ensued. Besides the
lack of health care workers and medical supplies, there was a shortage
of coffins, morticians and gravediggers (Knox). The conditions in 1918
were not so far removed from the Black Death in the era of the
bubonic plague of the Middle Ages.

iowa_flu

In 1918-19 this deadly influenza pandemic erupted during the final
stages of World War I. Nations were already attempting to deal with
the  effects and costs of the war. Propaganda campaigns and war
restrictions and rations had been implemented by governments.
Nationalism pervaded as people accepted government authority.
This allowed the public health departments to easily step in and
implement their restrictive measures. The war also gave science
greater importance as governments relied on scientists, now armed
with the new germ theory and the development of antiseptic surgery,
to design vaccines and reduce mortalities of disease and battle
wounds. Their new technologies could preserve the men on
the front and ultimately save the world. These conditions
created by World War I, together with the current social attitudes
and ideas, led to the relatively calm response of the public and
application of scientific ideas. People accepted strict measures
and loss of freedom during the war as they put the
needs of the nation ahead of their personal needs. They had
accepted the limitations imposed by rationing and the draft.
The responses of the public health officials reflected the new
allegiance to science and the wartime society. The medical
and scientific communities had developed new theories and
applied them to prevention, diagnostics and treatment of the
influenza patients.

The Medical and Scientific Conceptions of Influenza

Scientific ideas about influenza, the disease and its origins,
shaped the public health and medical responses. In 1918
infectious diseases were beginning to be unraveled. Pasteur
and Koch had solidified the germ theory of disease through
clear experiments and clever science. The bacilli responsible
for many infections, such as tuberculosis and anthrax, had
been visualized, isolated and identified. Koch’s postulates
had been developed to clearly link a disease to a specific
microbial agent.

Robert Koch

The petri dish was widely used to grow sterile cultures of bacteria
and investigate bacterial flora. Vaccines had been created for
bacterial infections and even the unseen rabies virus by
serial passage techniques. The immune system was explained by
Paul Ehrlich and his side-chain theory. Antibody tests such as the
Wassermann, along with coagulation experiments, were becoming commonplace.
Science and medicine were on their way to their complete entanglement
and fusion as scientific principles and methodologies made their way
into clinical practice, diagnostics and therapy.

The Clinical Descriptions of Influenza

Patients with the influenza disease of the epidemic were generally
characterized by common complaints associated with the flu. They had
body aches, muscle and joint pain, headache, a sore throat and an
unproductive cough with occasional harsh breathing (JAMA, 1/25/1919).

The most common sign of infection was the fever, which ranged from
100 to 104 F and lasted for a few days. The onset of the epidemic influenza
was peculiarly sudden, as people were struck down with dizziness, weakness
and pain while on duty or in the street (BMJ, 7/13/1918). After the
disease was established, the mucous membranes became reddened
with sneezing. In some cases there was a hemorrhage of the
mucous membranes of the nose and bloody noses were commonly
seen. Vomiting occurred on occasion, and also sometimes diarrhea
but more commonly there was constipation (JAMA, 10/3/1918).

The danger of an influenza infection was its tendency to progress into
the often fatal secondary bacterial infection of pneumonia. In
patients who did not rapidly recover after three or four days of fever, there
was an “irregular pyrexia” due to bronchitis or bronchopneumonia (BMJ,
7/13/1918). The pneumonia would often appear after a period of
normal temperature with a sharp spike and expectorant of bright
red blood. The lobes of the lung became speckled with “pneumonic
consolidations.” The fatal cases developed toxemia and vasomotor
depression (JAMA, 10/3/1918). It was this tendency for secondary
complications that made this influenza infection so deadly.

pneumonia

hospital ward in 1918

A military hospital ward in 1918

In the medical literature characterizing the influenza disease, new
diagnostic techniques are frequently used to describe the clinical
appearance. The most basic clinical guideline was the temperature,
a record of which was kept in a table over time. Also closely
monitored was the pulse rate. One clinical account said that
“the pulse was remarkably slow,” (JAMA, 4/12/1919) while others
noted that the pulse rate did not increase as expected. With the
pulse, the respiration rate was measured and reported to provide
clues of the clinical progression.
Patients were also occasionally “roentgenographed” or chest x-rayed,
(JAMA, 1/25/1919). The discussion of clinical influenza also often
included analysis of the blood. The number of white blood cells was
counted for many patients. Leukopenia was commonly associated
with influenza. The albumin was also measured, since it was noted that
transient albuminuria was frequent in influenza patients. This was
done by urine analysis. The Wassermann reaction was another
added new test of the blood for antibodies (JAMA, 10/3/1918).
These new measurements enabled physicians to project an
image of action and knowledge using scientific instruments. They
could precisely record the progress of the influenza infection and perhaps
were able to forecast its outcome.

The most novel of these tests were the blood and sputum cultures.
Building on the germ theory of disease, the physicians and their
associated research scientists attempted to find the culprit for this
deadly infection. Physicians would commonly order both blood and sputum
cultures of their influenza and pneumonia patients mostly for research
and investigative purposes. At the military training camp
Camp Lewis during an influenza epidemic, “in all cases of pneumonia,
a sputum study, white blood and differential count, blood culture
and urine examinations were made as routine” (JAMA, 1/25/1919).

The bacterial flora of the nasopharynx of some patients was also cultured,
since droplet infection was how the disease disseminated. The
collected swabs and specimens were inoculated onto blood agar in
petri dishes. The resulting bacterial colonies were closely studied to
find the causal organism. Commonly found were pneumococcus,
streptococcus, staphylococcus and Bacillus influenzae (JAMA, 4/12/1919).

pneumonia

These new laboratory tests used in the clinical setting brought in a solid
scientific, biological link to the practice of medicine. Medicine had
become fully scientific and technologic in its understanding and
characterization of the influenza epidemic.

Treatment and Therapy

The therapeutic remedies for influenza patients varied from the
newly developed drugs to oils and herbs. The therapy was much less
scientific than the diagnostics, as the drugs had no clear explanatory
theory of action. The treatment was largely symptomatic, aiming to
reduce fever or pain. Aspirin, or acetylsalicylic acid, was a common remedy.
For secondary pneumonia, doses of epinephrine were given. To
combat the cyanosis, physicians gave oxygen by mask or some
injected it under the skin (JAMA, 10/3/1918). Others used salicin, which
reduced pain, discomfort and fever and was claimed to reduce the infectivity
of the patient. Another popular remedy was cinnamon in powder or oil form
with milk to reduce temperature (BMJ, 10/19/1918). Finally, salt of quinine
was suggested as a treatment. Most physicians agreed that the patient should
be kept in bed (BMJ, 7/13/1918). With that came advice of plenty of
fluids and nourishment. The application of cold to the head, with
warm packs or warm drinks was also advised. Warm baths were used
as a hydrotherapeutic method in hospitals but were discarded for
lack of success (JAMA, 10/3/1918). These treatments, like the
suggested prophylactic measures of the public health officials, seemed to
originate in the common social practices and not in the growing field of
scientific medicine. It seems that as science was entering the medical
field, it served only for explanatory, diagnostic and preventative
measures such as vaccines and technical tests. This science had
little use once a person was ill.

However, a few proposed treatments did incorporate scientific ideas
of germ theory and the immune system. O’Malley and Hartman
suggested treating influenza patients with the serum of convalescent
patients, utilizing the theorized antibodies to boost the immune
system of sick patients. Other treatments were “digitalis,” the
administration of isotonic glucose and sodium bicarbonate intravenously
which was done in military camps (JAMA, 1/4/1919). Ross and
Hund likewise drew on ideas about the immune system and the properties of the
blood to neutralize toxins and circulate white blood cells. They believed
that the best treatment for influenza should aim to: “…neutralize or render
the intoxicant inert…and prevent the blood destruction with its destructive
leukopenia and lessened coagulability,” (JAMA, 3/1/1919). They tried
to create a therapeutic immune serum to fight infection. These therapies
built on current scientific ideas and represented the height of
biomedical, technological treatment, like the diphtheria antitoxin.

In July, an American soldier said that while influenza caused a heavy
fever, it “usually only confines the patient to bed for a few days.” The
mutation of the virus changed all that. [Credit: National Library of Medicine]

An old cliché maintained that influenza was a wonderful disease as
it killed no one but provided doctors with lots of patients. The 1918
pandemic turned this saying on its head. [Credit: The Etiology of
Influenza in 1918]

During the 1890 influenza epidemic, Pfeiffer found what he
determined to be the microbial agent that caused influenza.
In the sputum and respiratory tract of influenza patients in 1892,
he isolated the bacterium Bacillus influenzae, which was
accepted as the true “virus” though it was not found in localized
outbreaks (BMJ, 11/2/1918). However, in studies of the 1907-8
epidemic in the US, Lord had found the bacillus in only 3 of 20 cases.
He also found the bacillus in 30% of cultures of sputum from TB patients.
Rosenthal further refuted the finding when he found the bacillus in 1 of 6
healthy people in 1900 (JAMA, 1/18/1919). The bacillus was also
found to be present in all cases of whooping cough and many cases
of measles, chronic bronchitis and scarlet fever (JAMA, 10/5/1918).
The influenza pandemic provided scientists the opportunity to confirm
or refute this contested microbe as the cause of influenza. The sputum
studies from the Camp Lewis epidemic found only a few influenza cases
harboring the influenza bacilli, and mostly type IV pneumococcus. They
concluded that “the recent epidemic at Camp Lewis was an acute
respiratory infection and not an epidemic due to Bacillus influenzae,”
(JAMA, 1/25/1919). This finding, along with others, suggested to most
scientists that Pfeiffer’s bacillus was not the cause of influenza.

In the 1918-19 influenza pandemic, there was a great drive to find the
etiological agent responsible for the deadly scourge. Scientists in their
labs were working hard, using the cultures obtained from physician clinics,
to isolate the etiological agent for influenza. As a report early in the
epidemic said, “the ‘influence’ of influenza is still veiled in mystery, ”
(JAMA, 10/5/1918). The nominated Bacillus influenzae now
seemed to be incorrect, and scientists scrambled to isolate the true cause.
In the journals, many authors speculated on the type of agent: was
it a new microbe, a bacterium, or a virus? One journal offered
that “the severity of the present pandemic, the suddenness of onset…
led to the suggestion that the disease cannot be influenza but some other
and more lethal infection,” (BMJ, 11/2/1918). However, most accepted that
the epidemic disease was influenza based on the familiar symptoms
and known pattern of disease. The respiratory disease of influenza was
understood to give warning in the late spring of its potential effects
upon its recrudescence once the weather turned cold in the winter
(BMJ, 10/19/1918). One article with foresight stated that “there can
be no question that the virus of influenza is a living organism…
it is possibly beyond the range of microscopic vision,” (BMJ, 11/16/1918). Another
article confirmed the idea of an “undiscovered virus” and noted that pneumococci
and streptococci were responsible for “the gravity of the secondary pulmonary
complications,” (BMJ, 11/2/1918). The article went on to offer the idea of a
symbiosis of virus and secondary bacterial infection combining to make it
such a severe disease.

As they attempted to find the agent responsible for the influenza
pandemic, investigators were developing ideas of infectious microbes and the
concept of the virus. The idea of the virus as an infectious agent had been
around for years. The articles of the period refer to the “virus” in their
discussions but do not consistently use it to mean an infectious microbe
distinct from bacteria; the term virus had the same usage and application as
bacillus. In 1918, a virus was defined scientifically as a submicroscopic
infectious entity which could be filtered but not grown in vitro. In the
1880s Pasteur, well ahead of his time, developed an attenuated vaccine for
the rabies virus by serial passage. Ivanoski’s work on the tobacco mosaic
virus in the 1890s led to the discovery of the virus.
He found an infectious agent that acted as a micro-organism as it multiplied
yet which passed through the sterilizing filter as a nonmicrobe. By the 1910s
several viruses, defined as filterable infectious microbes, had been identified
as causing infectious disease (Hughes). However, the scientists were still
conceptually behind in defining a virus; they distinguished it only by size
from bacteria and not as an obligate parasite with a distinct life cycle
dependent on infecting a host cell.

The influenza epidemic afforded the opportunity to research the etiological
agent and develop the idea of the virus. Experiments by Nicolle and Le Bailly in
Paris were the earliest suggestions that influenza was caused by a “filter-passing
virus,” (BMJ, 11/2/1918). They filtered out the bacteria from bronchial expectoration
of an influenza patient and injected the filtrate into the eyes and nose of two monkeys.
The monkeys developed a fever and a marked depression. The filtrate was later
administered subcutaneously to a volunteer, who developed typical signs of influenza.
They reasoned that the inoculated person developed influenza from the filtrate since
no one else in their quarters developed influenza (JAMA, 12/28/1918). These scientists
followed Koch’s postulates as they isolated the causal agent from patients with the
illness and used it to reproduce the same illness in animals. Through these studies,
the scientists proved that influenza was due to a submicroscopic infectious agent
and not a bacterium, refuting the claims of Pfeiffer and advancing virology. They were
on their way to discerning the virus and characterizing the orthomyxoviruses that
cause the disease of influenza.

These scientific experiments, which unraveled the cause of influenza, had immediate
preventative applications. They would assist in the effort to create an effective
vaccine to prevent influenza. This was the ultimate goal of most studies, since
vaccines were thought to be the best preventative solution in the early 20th century.
Several experiments attempted to produce vaccines, each with a different
understanding of the etiology of fatal influenza infection. A Dr. Rosenow devised
a vaccine, made from the serum of patients, to target the multiple bacterial agents involved.
He aimed to raise immunity against the bacteria, the “common causes of death,”
and not against the cause of the initial symptoms, by inoculating with the proportions found
in the lungs and sputum (JAMA, 1/4/1919). The vaccines made for the British forces
took a similar approach and were “mixed vaccines” of pneumococcus and
lethal streptococcus. The vaccine development therefore focused on the culture
results of what could be isolated from the sickest patients and lagged behind the
scientific progress.

Fading of the Pandemic:

In November, two months after the pandemic had erupted, the Public Health Service
began reporting that influenza cases were declining.

Communities slowly lifted their quarantines. Masks were discarded. Schools were
re-opened and citizens flocked to celebrate the end of World War I.

The disease, however, continued to be a threat throughout the spring of 1919.

By the time the pandemic had ended, in the summer of 1919, nearly 675,000
Americans were dead from influenza. Hundreds of thousands more were orphaned
or widowed.

The Legacy of the Pandemic

No one knows exactly how many people died during the 1918-1919 influenza
pandemic. During the 1920s, researchers estimated that 21.5 million people died
as a result of the 1918-1919 pandemic. More recent studies place global
mortality from the 1918-1919 pandemic at anywhere between 30 and 50
million. An estimated 675,000 Americans were among the dead.

Twentieth-Century Influenza Pandemics or Global Epidemics:

The pandemic which occurred in 1918-1919 was not the only influenza pandemic
of the twentieth century. Influenza returned in a pandemic form in 1957-1958
and, again, in 1968-1969. These two later pandemics were much less severe than the 1918-1919 pandemic.
Estimated deaths within the United States for these two later pandemics were 70,000 excess deaths (1957-1958) and 33,000 excess deaths (1968-1969).

The Influenza Pandemic occurred in three waves in the United States throughout
1918 and 1919.

More Americans died from influenza than died in World War I. [Credit: National Library of Medicine]

All of these deaths caused a severe disruption in the economy. Claims against life
insurance policies skyrocketed, with one insurance company reporting a 745 percent
rise in the number of claims made. Small businesses, many of which had been unable to operate during the pandemic, went bankrupt.

Joseph Goldberger, one of the leading researchers in the PHS, studied influenza
during the pandemic. But Goldberger had multiple interests and influenza research
became less important to him in the years following 1918. [Credit: Office of the Public
Health Service Historian]

In the summer and fall of 1919, Americans called for the government to research
both the causes and impact of the pandemic. In response, both the federal government
and private companies, such as Metropolitan Life Insurance, dedicated money
specifically for flu research.

In an attempt to determine the effect influenza had on different communities, the Public
Health Service conducted several small epidemiological studies. These studies,
however, were conducted after the pandemic and most PHS officers
admitted that the data which was collected was probably inaccurate.

PHS scientists continued to search for the causative agent of influenza in their
laboratories as did their fellow scientists in and outside the United States.

But while there was a burst of enthusiasm for funding flu research in
1918-1919, the funds allocated for this research were actually fairly meager.
As time passed, Americans became less interested in the pandemic and its
causes. And even when funding for medical research dramatically increased
after World War II, funding for research on the 1918-1919 pandemic remained
limited.

Forgetting the 1918-1919 Pandemic:

In the years following 1919, Americans seemed eager to forget the pandemic.
Given the devastating impact of the pandemic, the reasons for this forgetfulness
are puzzling.

It is possible, however, that the pandemic’s close association with World War I
may have caused this amnesia. While more people died from the pandemic than
from World War I, the war had lasted longer than the pandemic and caused
greater and more immediate changes in American society.

Influenza also hit communities quickly. Often it disappeared within a few weeks of
its arrival. As one historian put it, “the disease moved too fast, arrived, flourished
and was gone before…many people had time to fully realize just how great
was the danger.” Small wonder, then, that many Americans forgot about the
pandemic in the years which followed.

Scientific Milestones in Understanding and Preventing Influenza:

In the early stages of the pandemic, many scientists believed that the agent
responsible for influenza was Pfeiffer’s bacillus. Autopsies and research conducted
during the pandemic ultimately led many scientists to discard this theory.

In late October of 1918, some researchers began to argue that influenza was
caused by a virus. Although scientists had understood that viruses could cause
diseases for more than two decades, virology was still very much in its infancy at
this time.

It was not until 1933 that the influenza A virus, which causes almost every type
of endemic and pandemic influenza, was isolated. Seven years later, in 1940,
the influenza B virus was isolated. The influenza C virus was finally isolated in 1950.

Influenza vaccine was first introduced as a licensed product in the United States in
1944. Because of the rapid rate of mutation of the influenza virus, the
effectiveness of a given vaccine usually lasts for only a year or two.

By the 1950s, vaccine makers were able to prepare and routinely release vaccines
which could be used in the prevention or control of future pandemics. During the
1960s, increased understanding of the virus enabled scientists to develop both
more potent and purer vaccines.

Mass production of influenza vaccines continued, however, to require several
months’ lead time.


Tuberculosis

Mycobacterium tuberculosis was first discovered in 1882 by Robert Koch and is one of almost 200 mycobacterial species which have been detected by molecular techniques. The genus Mycobacterium (given its own family, the Mycobacteriaceae, within the phylum Actinobacteria) includes pathogens known to cause serious diseases in mammals, including tuberculosis (the M. tuberculosis complex, MTBC) and leprosy (M. leprae). Mycobacteria are grouped neither as Gram-positive nor Gram-negative bacteria. MTBC consists of M. tuberculosis, M. bovis, M. bovis BCG (bacillus Calmette-Guérin), M. africanum, M. caprae, M. microti, M. canettii and M. pinnipedii, all of which share genetic homology, with very little variation between sequences (∼0.01 to 0.03%), although differences in phenotypes are present. Cells in the genus have a typical rod or slightly curved shape, with dimensions of 0.2 to 0.6 μm by 1 to 10 μm.

Mycobacterium tuberculosis has a waxy mycolic acid lipid complex coating on its cell surface. The cells are impervious to Gram staining, so a common staining procedure used is Ziehl-Neelsen (ZN) staining. The outer compartment of the cell wall contains lipid-linked polysaccharides, is water-soluble, and interacts with the immune system. The inner wall is impermeable. Mycobacteria have some unique qualities that are divergent from members of the Gram-positive group, such as the presence of mycolic acids in the cell wall.

MTBC and M. leprae replication occurs in the tissues of warm-blooded human hosts. This air-borne pathogen is transmitted from an active pulmonary tuberculosis patient by coughing. Droplet nuclei, approximately 1 to 5 μm in size, “meander” in the air and are transmitted to susceptible individuals by inhalation. Mycobacteria are incapable of replicating in or on inanimate objects. The risk of infection depends on the load of bacilli inhaled, the level of infectiousness of the source, the proximity of contact and the immune competency of potential hosts. Due to the size of the droplets inhaled into the lungs, the infection penetrates the defense systems of the bronchi and enters the terminal alveoli. Invading bacteria are then engulfed by alveolar macrophages and dendritic cells.

The cell-mediated immune response limits the multiplication of M. tuberculosis and halts infection. Infected individuals with strong immune systems are generally able to combat the infection within 2 to 8 weeks post-infection, when the active cell-mediated immune response stops further multiplication of M. tuberculosis. Tuberculosis infection shows several significant clinical manifestations in pulmonary and extra-pulmonary sites. Prolonged coughing, severe weight loss, night sweats, low-grade fever, dyspnoea and chest pain are clinical symptoms indicative of pulmonary infection.

Fort Bayard, N.M., T.B. service assignment


Fort Bayard, NM Post Hospital circa 1890

U.S. Army, General Hospital, Fort Bayard, New Mexico, General View

Tuberculosis, (Pvt.) Richard Johnson said, was “regarded as a much dreaded disease that was easily contracted by association.” In fact, so many hospital corpsmen requested transfers out that the Surgeon General established a policy that no such requests would be considered until after two years of service. Consequently, Johnson noted, “During my time there we had a high percentage of desertions.” For example, all four of the men who arrived with Johnson deserted within a year—“two of them,” he dryly observed, “owing me money.”

Four years later another young man arrived at Fort Bayard. He, too, remarked on the long journey by rail through the “desert waste of New Mexico,” and then the wagon ride over “dry desolate foothills,” to the post. But his reaction was different from Johnson’s. Capt. Earl Bruns moved from being a patient to a physician at the hospital. For Bruns Fort Bayard was “a veritable oasis in the desert, studded with shade trees, green lawns, shrubbery, and flowers.” He credited the hospital commander, Colonel (Col.) George E. Bushnell, writing that, “[i]n this one spot one man had made the desert bloom like a rose.”

Johnson’s and Bruns’ different views from 1904 and 1908, respectively, may reflect the fact that Johnson was healthy and assigned grudgingly to work at the tuberculosis hospital, whereas Bruns had few other options and came in hopes of regaining his health—or it may reflect the improvements Bushnell made during his first years in command. But every week for the more than twenty years that Fort Bayard was an Army tuberculosis hospital, workers and patients arrived with dread and foreboding, or joy and relief—or a mix of them all.

The approach Fort Bayard and George Bushnell took to tuberculosis was similar to how physicians manage the disease today in that it involved isolating the patient, treating the disease, and educating the patient and his family on how to maintain their health. The hospital offered patients sanctuary from the demands, fears, and prejudices regarding tuberculosis in the outside world. Fort Bayard treated tuberculosis patients with prolonged bed rest, fresh air, and a healthy diet, but undertaking this “rest treatment”—confining oneself to bed for months—proved difficult if not impossible for many patients. Fort Bayard also guided patients’ adaptation to new lifestyles as people with tuberculosis. Finally, Fort Bayard managed patients’ transition back to the outside world.

One of the most striking aspects of Fort Bayard was that many of the medical staff had tuberculosis themselves, including George Bushnell. Tuberculosis weakened Bushnell’s lungs and shaped his life in numerous ways. He tired easily, had to carefully monitor his health, and as Earl Bruns observed, “was never a well man.” Bushnell had active tuberculosis five times in his life: the fourth time in 1919 with a breakdown from the strain of wartime work; and the fifth and final illness in 1924 that led to his death at age 70. In 1911 he advised his superiors that, “I did not consider myself strong enough to carry on the work of commanding this Hospital and keeping myself in condition for active duty.” The War Department generally required officers in poor physical condition to retire, but the Surgeon General secured a waiver for Bushnell, because “the interests of the service would suffer by his retirement.” After a leave of absence in 1909–10, Bushnell’s annual reports on the competency of his officers included his own name on the list of those competent for hospital duty, but “unfit for active field service.”

“What would our sanatorium movement and our anti-tuberculosis crusade amount to,” wrote tuberculosis expert Adolphus Knopf, “were it not for the labors of tuberculous physicians, or one-time tuberculous physicians, who, because of their infirmity, had become interested in tuberculosis?” Well-known leaders in the antituberculosis movement such as Edward Trudeau and Lawrence Flick established their sanatoriums after they recovered from tuberculosis in order to offer others the treatment. Twenty-one of the first thirty recipients of the Trudeau Medal, established in 1926 for outstanding work in tuberculosis, had the disease. James Waring, a tuberculosis physician who arrived at a Colorado Springs sanatorium on a stretcher in 1908, later wrote, “It has been my good fortune to serve three separate and extended ‘hitches’ as a ‘bed patient,’ the time so spent numbering in all about nine years.” He, like many physicians, saw his personal experience as an asset in his practice. The three key figures in the Army tuberculosis program during World War I were Bushnell, Bruns, and Gerald Webb of Colorado Springs who started a tuberculosis sanatorium after his wife died of the disease.

Bushnell turned tuberculosis into an asset for the Army Medical Department, making Fort Bayard a center of national expertise on the disease. His personal experience with chronic pulmonary tuberculosis gave him good rapport and credibility with many of his patients. Medical officer Earl Bruns wrote that, “[H]e went among the patients and talked to them individually” and thereby provided “a living example of a cure due to rational treatment.” Bruns described how Bushnell spent his days attending to patients, carrying out administrative duties, and devoting hours to supervising the work in the gardens and grounds of Fort Bayard.

(Who’s Who in America, 1924-25. E. H. Bruns in American Review of Tuberculosis, June 1925. G. B. Webb in Outdoor Life, Sept. 1924. Lancet, Lond., 1924. Jour. Am. Med. Ass’n., 1924, p. 374.)

General George M. Sternberg

In addition to being an Army surgeon, Sternberg was also a noted bacteriologist who, in 1880, had translated Antoine Magnin’s The Bacteria, which presented the latest research in germ theory. Sternberg’s work contributed to preparing American understanding of Robert Koch’s pronouncement in 1882 of the existence of the tubercle bacillus (Ott 1996:55). Over the next two decades Koch’s analysis gained converts, leading to the universally accepted belief that tuberculosis represented a bacterial infection that could be diagnosed and then monitored by microscopic inspection of patients’ sputum.

Sternberg was no doubt aware of the efforts of Edward Livingston Trudeau. Beginning in the 1870s, when he undertook his own recovery from consumption by withdrawing to the Adirondack Mountains, Trudeau had become an advocate of extended bed rest in remote, healthful environments. Quickly accepting Koch’s research, Trudeau argued that those afflicted by the tubercle bacillus could best be healed when removed from cities and placed under the care of physicians who carefully monitored their weight and sputum and who prescribed constant bed rest with exposure to fresh air. Preferring the term “sanatorium,” derived from the Latin word “to heal,” to “sanitarium,” derived from the Latin term for health, Trudeau founded his Adirondack Cottage Sanatorium at Saranac, New York, in 1885. This spawned the opening of hundreds of similar institutions throughout the country (Caldwell 1988:70).

In 1899, Fort Bayard remained within the Army under the auspices of the Army Medical Department. The Army’s decision to retain the fort, even after it had outlived its military usefulness, grew from the strong interest that General George M. Sternberg, Surgeon General of the Army, had in pulmonary tuberculosis and its treatment.
Sternberg was also aware of the relatively good health that the Army’s soldiers had enjoyed serving in the higher elevations of the American West. Members of Zebulon Pike’s expedition of 1810 and of Fremont’s exploratory parties of the 1840s had witnessed their health improve while in the Rocky Mountains.

………………………………………………………………………………………………………………………………………………..

Upon assuming command in 1904, Bushnell, who had studied botany for years, immediately began to plant flowers, shrubs, and trees. When President Theodore Roosevelt created the Gila Forest Reserve in 1905, Bushnell ensured that Fort Bayard, which adjoined the Reserve, was part of a government reforestation project. The first year alone the Forest Service gave the hospital 250 seedlings of Himalayan cedar and yellow pine. Bushnell also got approval to fence in land for pasturing dairy cattle and arranged to recultivate long-neglected garden plots. The first year he predicted that the garden would generate “about 1300 dollars worth of produce.” After the quartermaster located an underground water source, Bushnell redoubled his cultivation efforts, planting trees, flowers, and grass to mitigate the wind and dust, and “to beautify the Post.” In later years Bushnell successfully grew beans from ancient cave dwellers (Anasazi beans), and made a less successful effort to grow Giant Sequoia from California. By 1910 Fort Bayard had four acres of vegetable gardens, a greenhouse, an orchard of 200 fruit trees, and alfalfa fields and hay fields for the dairy herd of 115 Holsteins, which the Silver City Enterprise proclaimed “one of the finest in the west.” The hospital also raised all of the beef consumed at the hospital (thereby avoiding Daniel Appel’s purchasing problems) and consumed pork at small expense by feeding the pigs the waste food. The hospital laboratory raised its own Belgian hares and guinea pigs for experiments.

Bushnell oversaw years of construction at Fort Bayard. In the wake of Florence Nightingale’s writings, nineteenth-century sanitation practices stressed cleanliness and ventilation, giving rise to pavilion style hospitals, narrow one- or two-story buildings lined with windows to provide patients with ample ventilation. In March 1904, Bushnell sent the Surgeon General plans for an “open court building” in modified pavilion style (Figure 2-1).

Plan for tuberculosis patient ward, as designed by George E. Bushnell, providing fresh air porches for each patient, United States Army Tuberculosis Hospital in New Mexico.

The building consisted of a quadrangle of long, narrow dressing rooms around an open court with porches along both the exterior and interior of the building. The rooms could be used for sleeping in inclement weather and the porches allowed patients to seek sun or shade as they wished. Wide doors enabled the easy movement of beds between the rooms and the porches. “The object of this style of building is to facilitate sleeping out of doors, which is now considered so important in modern sanatoria for the treatment of tuberculosis,” Bushnell explained.

The United States escaped the cauldron of WWI until April 1917. But after years of trying to maintain neutrality, President Woodrow Wilson’s administration mobilized the nation to fight in the most deadly enterprise the world had ever seen. Modern industrialized warfare would kill millions of soldiers, sailors, and civilians and unleash disease and famine across the globe. Typhus flourished in Eastern Europe and a lethal strain of influenza exploded out of the Western Front in 1918, producing one of the worst pandemics in history. Although eclipsed by such fierce epidemics, tuberculosis also fed on the war.

He was ordered to the office of The Surgeon General on June 2, 1917, and placed in charge of the Division of Internal Medicine and on June 13 there appeared S. G. O. Circular No. 20, Examinations for pulmonary tuberculosis in the military service, establishing a standard method of examination of the lungs for tuberculosis. Through his efforts a reexamination of all personnel already in the service was made by tuberculosis examiners and about 24,000 were rejected on that score. He had charge of the location, construction, and administration of all army tuberculosis hospitals, of which eight were built with a capacity of 8,000 patients.

With his relief from service in 1919 he took up his residence on a small farm at Bedford, Mass., where he prepared his Study of the Epidemiology of Tuberculosis (1920) and later Diseases of the Chest (1925) in collaboration with Dr. Joseph H. Pratt of Boston. As chief delegate of the National Tuberculosis Association he attended the first meeting of the International Union Against Tuberculosis in London in 1921. During the winter of 1922-23 he delivered a series of lectures on military medicine at Harvard University. In the summer of 1923 he moved to California and took up his residence at Pasadena.

………………………………………………………………………………………………………………………………………………..

In eighteen months the Selective Service registered twenty-five million men for the draft, examined ten million for military service, and enlisted more than four million soldiers, sailors, and Marines. To the dismay of many people, medical screening boards across the nation soon discovered that American men were not as strong and healthy as they had assumed. Of those eligible for military service, 30 percent were physically unfit; a number of them deemed ineligible to serve had tuberculosis. Therefore, in 1917 Surgeon General William Gorgas called George Bushnell to Washington, DC, to establish the Office of Tuberculosis in the Division of Internal Medicine, leaving Bushnell’s protégé, Earl Bruns, in charge of Fort Bayard. Given the Medical Department’s mission to maintain a strong and healthy fighting force, Bushnell’s new job was to minimize the incidence of tuberculosis among active-duty soldiers and avoid the high cost of disability pensions for men who incurred the disease during military service. It was a tall order.

Wartime tuberculosis had already received attention in 1916, when reports circulated that the French army had sent home 86,000 men with the disease, raising the specter that life in the trenches would generate hundreds of thousands of cases. One investigator found that tuberculosis rates in the British army were double those in peacetime, reversing the prewar downward trend. The head of the New York City Public Health Department, Hermann Biggs, declared that “tuberculosis
offers a problem of stupendous magnitude in France.” Subsequent studies revealed that only 20 percent or less of the French soldiers sent home with tuberculosis actually had the disease; others were either misdiagnosed or had had tuberculosis prior to entering the military and therefore had not contracted it in the trenches. The reports nevertheless galvanized public health officials to address the tuberculosis problem. The Rockefeller Foundation, for example, in cooperation with the American Red Cross, established a Commission for the Prevention of Tuberculosis in France to help the French and protect any Americans from contracting tuberculosis “over there.”

Bushnell established four “tuberculosis screens” by (1) examining all volunteers and draftees before enlistment, (2) checking recruits again in the training camps, (3) examining soldiers already in the Army for tuberculosis, and (4) screening military personnel at discharge to ensure they returned to civil life in sound condition. To implement these activities, Bushnell developed a protocol under which physicians could quickly examine men for tuberculosis as part of the larger physical examination process. He standardized the procedures for examinations throughout the Army, and crafted a narrow definition of what constituted a tuberculosis diagnosis to enable the Army to enlist as many young men as possible. Despite these efforts, soldiers developed active cases of tuberculosis throughout the war. Bushnell’s office also created eight more tuberculosis hospitals in the United States and designated three hospitals with the American Expeditionary Forces (AEF) in France to care for soldiers who developed active tuberculosis in the camps and trenches. Short of resources and knowledge, however, the Army Medical Department at times struggled just to provide beds for tuberculosis patients, let alone deliver the individual care Bushnell and his staff had provided at Fort Bayard before the war.

Overburdened medical personnel worked long hours, often in poor conditions. Thousands of tuberculosis patients resented the diagnosis and protested the conditions in which at times they were virtually warehoused. The draft, which brought millions of young men under government control and responsibility, also exposed the Army Medical Department to public scrutiny. Congress launched an investigation in 1919. World War I, which so dramatically changed the world, profoundly altered the Army’s tuberculosis program as well. It also challenged George Bushnell’s expertise. The Army’s tuberculosis expert had founded his policies on assumptions that, although widely held at the time, proved to be inaccurate and costly in lives and treasure. Wartime tuberculosis, therefore, shows the power of disease to overwhelm both knowledge and institutions.

Bushnell and his contemporaries were familiar with the concept of immunity and the power of vaccination, and the Army Medical Department vaccinated soldiers for smallpox and typhoid. Extending this concept of immunity to tuberculosis, medical officers differentiated between primary infection in childhood and secondary infection later in life. Observing that tuberculosis was often fatal for infants and young children, they reasoned that for survivors, an early infection of tuberculosis bacilli immunized a person against the disease later in life.
A “primary infection,” wrote Bushnell, gave a person some immunity, which “while not sufficient in many cases to prevent extension of disease [within the body]…is sufficient to counteract new infections from without.”8 In an article on “The Tuberculous Soldier,” the revered physician William Osler agreed. For years autopsies had uncovered healed tuberculosis lesions in people who had died in accidents or of other diseases. Although it was not known how many men between the ages of eighteen and forty harbored the tubercle bacillus, Osler wrote, “We do know that it is exceptional not to find a few [lesions] in the bodies of men between these ages dead of other diseases.” Thus, he argued, “In a majority of cases the germ enlists with the soldier. A few, very few, catch the disease in infected billets or barracks.”9 Bushnell reasoned that if adults developed tuberculosis, “they do it on account of failure of their resistance.”

At one point Bushnell told the chief surgeon of the AEF, “Personally I have no fear of the contagion of tuberculosis between adults and see no reason why patients of this kind should not be treated in the ordinary hospital.” He asserted that the “really cruel persecution of the consumptive…through the fear that he will infect others, is based on what I must characterize as highly exaggerated notions of the danger of such infection.” This, too, was the prevailing view. Boston bacteriologist Edward O. Otis, who served as a medical officer during the war, wrote that “Undue fear of the communicability of pulmonary tuberculosis from one adult to another is unwarranted in the present state of our knowledge.”
Bushnell reasoned that if men infected with tuberculosis could indeed easily spread it to others, there would be much more tuberculosis in the Army than there was. British physician Leslie Murry reasoned that although the crowded and damp conditions of trench warfare would have unfavorable effects on soldiers’ health, living outside with plenty of fresh air, good food, and hygienic practices would improve their resistance to tuberculosis. Public health specialist George Thomas Palmer countered that although reactivation might not be higher in the military than in civil life, the United States had enough men without tuberculosis to bar anyone suspected of it from the military and thereby avoid an “added financial burden to the nation.” The challenge was to keep tuberculosis out of the Army and tuberculars off the disability rolls, but not to exclude so many men as to impair the nation’s ability to amass an army.

Bushnell’s views of tuberculosis immunity, contagion, interaction with military life, and the risk of overdiagnosis shaped the Army Medical Department programs for screening recruits. He knew he could not guarantee that all tuberculosis could be eliminated from the Army, but asserted that, “a sufficiently rigid selection of promising material in itself practically excludes tuberculosis.” In addition to enlisting the strongest men, Bushnell believed that a massive screening program would pay for itself by eliminating those who would later cost the government in medical services and disability benefits.

But the nation at war did not have the time or resources for the meticulous one-hour examination practiced at Fort Bayard, so Bushnell developed a protocol for civilian and military physicians to examine volunteers, draftees, trainees, and soldiers for tuberculosis in a matter of minutes. Circular No. 20 detailed how physicians should examine recruits, and became the single most important Army tuberculosis document during the war. The circular explained that the apices, or the tops of lungs, were the most common location for tuberculosis lesions, and that “the only trustworthy sign of activity in apical tuberculosis is the presence of persistent moist rales.” Circular No. 20 directed that “the presence of tubercle bacilli in the sputum is a cause for rejection,” and that “no examination for tuberculosis is complete without auscultation following a cough.” It recommended that a sputum sample “be coughed up in [the examiner’s] presence,” to ensure that it was actually from the examinee.

The last third of the document detailed X-ray examinations, summarizing eight kinds of conditions that might appear and specifying which would be grounds for rejection and which would not. By 1915, a Fort Bayard medical officer stated that X-ray technology “has become one of the most valued procedures in the diagnosis of pulmonary tuberculosis.” Medical officers F. E. Diemer and R. D. MacRae at Camp Lewis, Washington, argued in the pages of JAMA that X-rays should be the primary diagnostic tool, not an “adjunct.” Ultimately, however, World War I did encourage X-ray technology by revealing its power to thousands of physicians, stimulating the search for technical advances, and demonstrating the importance of specialization in reading X-rays. By the end of the war, the Army Medical Department had shipped hundreds of X-ray machines to France for use in Army hospitals and at the bedside, and had developed various models of X-ray equipment, including X-ray ambulances.

Calculating that it would require 600 examiners for the screening process, the Medical Department turned to training general practitioners from civil life who knew little about tuberculosis. Bushnell’s office established a six-week tuberculosis course to prepare physicians. The first course at the Army Medical School in Washington, DC, was so popular that instructors offered it at several other training camps in the country. General Hospital No. 16, operating in conjunction with Yale Medical School, also offered a course on hospital administration to train medical officers to run tuberculosis hospitals.

Public health officials and the National Tuberculosis Association asked to be informed of any tuberculous individuals being sent to their communities, including the name and address of the “party assuming responsibility for such continued treatment and care.” The journal American Medicine published an article by British tuberculosis specialist Halliday Sutherland, who expressed concern that if men declined treatment and returned home they could spread tuberculosis to their families. He suggested that the U.S. Army retain men diagnosed with tuberculosis so that the government could provide treatment and discipline them if they resisted. Members of Congress opposed simply discharging men with tuberculosis. Representative Carl Hayden of Arizona argued that such men had given up their civilian lives upon induction into the Army, only to discover “that they were afflicted with a dread disease which prevents them from earning a livelihood.” He suggested that “some provision should be made for the care of such men until they are able to provide for themselves.”

While Bushnell’s policies succeeded in suppressing tuberculosis rates in the Army, the narrow definition of a tuberculosis diagnosis explicitly allowed men with healed lesions in their lungs to serve, and the rapid screening system caused some examiners to miss cases of active disease. Bushnell recognized that “a standard, though imperfect, is believed to be an indispensable adjunct in Army tuberculosis work not only to support the examiner but also to secure the necessary uniformity of practice in the matter of discharge for tuberculosis.” Nationwide, local draft boards and training camps rejected more than 88,000 men for tuberculosis, about 2.3 percent of the 3.8 million men examined. Postwar assessments calculated that of the more than two million soldiers who went to France to serve in the AEF, only 8,717 were evacuated with a diagnosis of tuberculosis, an incidence of only 0.4 percent.
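The quoted screening rates follow directly from the raw counts; a quick arithmetic check (illustrative only, not part of the original record):

```python
# Verify the World War I screening percentages quoted in the text.
rejected = 88_000          # men rejected for tuberculosis by draft boards and camps
examined = 3_800_000       # men examined nationwide
print(round(rejected / examined * 100, 1))      # 2.3 (percent rejected)

evacuated = 8_717          # AEF soldiers evacuated with a tuberculosis diagnosis
aef_strength = 2_000_000   # soldiers who served with the AEF in France
print(round(evacuated / aef_strength * 100, 1)) # 0.4 (percent evacuated)
```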

In early 1918 a strep infection in the training camps in the United States caused medical officers to send hundreds of trainees to Army hospitals misdiagnosed with tuberculosis, crowding hospitals and generating paperwork and confusion. For a time, therefore, the Office of The Surgeon General ordered that no one should be discharged for tuberculosis from the training camps unless he had bacilli in his sputum—meaning only the very severe cases. More than 50 percent of the patients being sent back to the United States from France with a diagnosis of tuberculosis did not actually have the disease. Bushnell viewed such overdiagnoses as “evil,” because they took men out of the AEF and overburdened tuberculosis hospitals and naval transports, which had to segregate suspected tuberculosis cases in isolation rooms or on open decks.

Faced with what he called “leaking” of soldiers from the AEF due to erroneous tuberculosis diagnoses, Bushnell turned to a specialist for assistance, Gerald B. Webb (Figure 4-3), from Colorado Springs.61 An Englishman by birth, Webb had married an American, and when she developed tuberculosis the couple traveled to Colorado Springs, Colorado, for treatment. His wife struggled with the disease for ten years until her death in 1903, and afterward Webb stayed on in Colorado Springs, remarrying and building a medical practice specializing in tuberculosis. In addition to his medical practice, Webb pioneered research into the body’s immune function, searched for a tuberculosis vaccine, and was a founder of the American Association of Immunologists (1913). Still somewhat bored in Colorado Springs, Webb volunteered for the Medical Corps soon after the United States declared war and helped organize and run tuberculosis screening boards at Camp Russell, Wyoming, and Camp Bowie, Texas. Bushnell
appointed him senior tuberculosis consultant for the AEF. After meeting with Bushnell in Washington and attending the Army War Course for senior officers at Columbia University, Webb sailed to France in March 1918.

Figure 4-3. Gerald B. Webb, World War I. Gerald B. Webb Papers. Photograph courtesy of Special Collections, Tutt Library, Colorado College, Colorado Springs, Colorado.


Webb instituted a screening process similar to that in the United States, distributing Circular No. 20 and preparing an illustrated version for medical officers in the field. He established a policy directing that only patients with sputum positive for tuberculosis should be sent back to the United States. Others would be tagged “tuberculosis observation” and sent to one of three hospitals designated as tuberculosis observation centers. There, specialists—Bushnell’s “good tuberculosis men”—would distinguish tuberculosis from other lung problems such as bronchitis and pneumonia, clearing men who were free of disease and sending back to the homeland only patients who were indeed positive for tuberculosis.

Webb traveled to field and base hospitals throughout France. He would typically spend three days at a hospital, examining patients, leading conferences, giving lectures, and, according to his biographer, Helen Clapesattle, “preaching his gospel of fresh air and absolute rest.” He recruited a radiologist to teach the proper reading of X-ray plates, and advocated the early detection of tuberculosis, explaining, “Just as the wounded do better if they are got to the surgeons quickly, so the tuberculosis-wounded are more likely to recover if they are spotted and sent to the doctors early.”

In the 1930s, as Webb had concluded in 1919, scientists came to recognize that early tuberculosis infections did not provide protection and that adults could be reinfected with tuberculosis and develop active disease. In the meantime, with his AEF work done, in January 1919 Webb returned to his family and medical practice in Colorado Springs. The National Tuberculosis Association recognized Webb’s war work by electing him president in 1920, and Webb set the Association on a course of tuberculosis research on the immunity question and the standardization of X-ray diagnostics. He did not return to military service, but was a mentor for young physicians Esmond Long and James Waring, who would be leaders in the Army Medical Department’s tuberculosis program during the next war.

In May 1941, as the United States stood on the brink of another world war, Benjamin Goldberg, president of the American College of Chest Physicians, recited some stunning figures at the association’s annual meeting in Cleveland, Ohio. He calculated that from 1919 to 1940 the Veterans Administration had admitted 293,761 tuberculosis patients to its hospitals. These patients had received government care and benefits for a total of 1,085,245 patient-years, at a cost of $1,185,914,489.56. Goldberg’s remarks reveal that although tuberculosis rates in the United States were declining 3 to 4 percent annually during the interwar years, the government’s burden of caring for tuberculosis patients remained heavy. The Army was only three-quarters the size it had been before World War I (131,000 versus 175,000 strength) and experienced no major epidemics, so that suicide and automobile accidents became the leading causes of death in the peacetime Army. Although hospital admissions of active duty personnel for tuberculosis declined during the decade, tuberculosis admissions at Fitzsimons Hospital in Denver remained constant due to a steady stream of patients who were veterans of the war. Tuberculosis, in fact, became a leading cause of disability discharges from the Army and, with nervous and mental disorders, generated the greatest amount of veterans’ benefits between the wars.

The story of tuberculosis in the Army after World War I, then, is one of increasing demand and decreasing resources, a dynamic that left Fitzsimons financially strapped even before the country entered the Great Depression. An examination of Fitzsimons’ postwar environment—the modern hospital and technology, the ever-changing landscape of veterans’ benefits, and new, invasive treatments for tuberculosis—illuminates these stresses.

President Franklin Delano Roosevelt proclaimed a “limited national emergency” on 8 September 1939, a week after Germany invaded Poland. But due to underfunding during the interwar period, one observer wrote that “to prepare for war the Medical Department had to start almost from scratch.”1 Given the lean years of the 1920s and 1930s and the Army Medical Department’s policy of discharging officers with tuberculosis from duty, Surgeon General James C. Magee had to turn to the civilian sector for a tuberculosis expert. He recruited Esmond R. Long, M.D., Ph.D., director of the Henry Phipps Institute for the Study, Prevention and Treatment of Tuberculosis in Philadelphia. He could not have made a better choice. Long was also professor of pathology at the University of Pennsylvania, director of medical research for the National Tuberculosis Association, and, at age forty-two (in 1932), the youngest person to be awarded the Trudeau Medal for his tuberculosis research.2 He would now become the Army’s point man on the disease and stand at the front lines of the Medical Department’s struggle with tuberculosis from before Pearl Harbor to well after V-J (Victory-Japan) Day.

His mission to reduce the effect of tuberculosis on the Army differed from that of Colonel (Col.) George Bushnell in the previous war because disease was less of a threat. In fact, World War II would be the first war in which more American personnel died of battle wounds than of disease. Of 405,399 recorded fatalities, battle deaths outnumbered those from disease and nonbattle injuries more than two to one: 291,557 to 113,842.3 Malaria, sexually transmitted diseases, and respiratory infections did sicken millions of soldiers, sailors, Marines, and airmen, but most survived. Thanks in part to sulfa drugs and, beginning in 1943, penicillin to treat bacterial infections, the Army Medical Department had only 14,904 deaths of 14,998,369 disease admissions worldwide, a 0.1 percent death rate.4 Tuberculosis declined, too, representing only 1 percent of Army hospital admissions for diseases—1.2 per 1,000 cases per year—a rate much lower than the 12 per 1,000 cases per year during World War I. The Medical Department concluded that “tuberculosis was not a major cause of non-effectiveness during the war.”

But Sir Arthur S. McNalty, chief medical officer of the British Ministry of Health (1935–40), called tuberculosis “one of the camp followers of war.” War abetted tuberculosis, he explained, because of the “lowering of bodily resistance and increased physical or mental strain or both.”6 It also found fertile ground in crowded barracks and camps, and ran rampant in the World War II prison camps and Nazi concentration camps. And just one active case of tuberculosis per thousand in the Army meant thousands of tuberculosis sufferers among the 11 million Americans in uniform, each of whom consumed Medical Department resources: the average hospital stay per case during the war was 113 days.7

But if tuberculosis was a camp follower, Esmond Long (Figure 8-1) was a tuberculosis follower.8 He tracked it down, studied it, and tried to prevent its spread at every stage of American involvement in the war. With war looming in 1940, the National Research Council asked Long to chair the Division of Medical Sciences, Subcommittee for Tuberculosis, to advise the government on preventing and controlling tuberculosis in both civilian and military populations during war mobilization. Once the United States entered the war, Long received a commission as a colonel in the Medical Corps and moved his family from Philadelphia to Washington, DC. Working out of the Office of The Surgeon General, Long set up a screening process with the Selective Service to keep tuberculosis out of the Army and then traveled to more than ninety induction camps to ensure adherence to the procedures. He also oversaw the expansion of tuberculosis treatment facilities in the United States, inspected Fitzsimons and other Army tuberculosis hospitals, advised medical officers on treating patients, kept abreast of research developments in the labs, monitored outbreaks of tuberculosis in the theaters of war, and wrote articles for medical and lay periodicals to publicize the Army’s antituberculosis program.

In 1945 Long traveled to the European theater to inspect hospitals caring for tubercular refugees and liberated prisoners of war (POWs). There he saw the horrors of the concentration camps at Buchenwald and Dachau where Army medical personnel cared for thousands of former prisoners sick and dying of typhus, starvation, and tuberculosis. After the war Long organized the tuberculosis control program for the Allied occupation of Germany, and returned annually in the 1950s to assess its progress. He split his time between the Army Medical Department and the Veterans Administration (VA) to supervise the transition of the federal tuberculosis treatment program from the War Department to the VA. He also helped organize and evaluate the antibiotic trials, which ultimately led to an effective cure for tuberculosis. After returning to civilian life Long continued to study tuberculosis in the Army, and he wrote the key tuberculosis chapters for the Army Medical Department’s official history of the war.

With Long as a guide, this chapter shows how war once again served as handmaiden to disease around the globe. This time the Army Medical Department assumed not only national but international responsibilities for the control of tuberculosis in military and civilian populations, among friend and foe. Long and the Army Medical Department did succeed in demoting tuberculosis from the leading cause of disability discharge for American World War I personnel (13.5 percent of discharges), to thirteenth position during the years 1942–45 (1.9 percent of all discharges), behind conditions such as psychoneuroses, ulcers, respiratory diseases, arthritis, and other diseases.9 But this achievement required continued vigilance, an Army-wide surveillance program, and dedicated personnel and resources. The first step was to keep tuberculosis out of the Army.

After war broke out in Europe, Congress passed the Selective Training and Service Act of 1940, which established the first peacetime military draft in U.S. history, increasing Army strength eightfold from 210,000 in September 1939 to almost 1.7 million (1,686,403) by December 1941. This resulted in a 75 percent rise in the number of patients in military hospitals, straining the Medical Department, which had only seven general hospitals and 119 station hospitals in 1939.

Figure 8-1. Esmond R. Long, who directed the Army tuberculosis program during World War II. Photograph courtesy of the National Library of Medicine, Image #B017302.

“Good Tuberculosis Men”

With the government soon appropriating freely and pledging “all of the resources of the country” to meet the crisis, the War Department was constantly readjusting to the escalating emergency.

The National Research Council Committee on Medicine, Subcommittee on Tuberculosis, chaired by Long, met for the first time on 24 July 1940 and prioritized its responsibilities: first, develop recommendations on how to screen draft registrants for tuberculosis; second, screen civilians in federal service and wartime industries; third, figure out how to care for people rejected by the draft for the disease; and finally, help civilian and military agencies prepare for tuberculosis in war refugee populations. In its first nine-hour meeting, the subcommittee decided on centralized tuberculosis screening centers at 200 recruiting stations and generated a list of tuberculosis specialists nationwide to evaluate recruits and interpret X-rays at those centers. Subcommittee members stressed the importance of maintaining good records for processing any subsequent benefits claims and, most importantly, called for X-ray screening of all inductees—not just those who looked like they might have tuberculosis.

The War Department leadership initially rejected such comprehensive screening of inductees as expensive and time-consuming. The fact that tuberculosis death rates had fallen by two-thirds in the country, from 140 per 100,000 people in 1917 to 45 per 100,000 in 1941, and in the Army from 4.6 per 1,000 in 1922 to 1.4 per 1,000 in 1940, may have led to complacency. But Long, his colleagues, and the national tuberculosis community, mindful of the cost to the nation in sickness, death, and disability benefits in the previous war, persisted. The American College of Chest Physicians asked in July 1940, “Shall We Spread or Eliminate Tuberculosis in the Army?” and their president, Benjamin Goldberg, reported that the VA had spent almost $1.2 billion on tuberculosis patients through 1940. One medical officer calculated that 31 percent of all veterans who had died as a result of World War I service, and whose dependents received benefits, had died of tuberculosis. Even the lay press chimed in with a TIME magazine article, “TB Warning,” that stressed the importance of chest X-rays.16 Advocates pointed out that X-ray technology was more available and less expensive than in the previous war, and radiologists were more plentiful and skillful. They were also confident that new technology, such as a lens that allowed the direct and rapid photography of a fluoroscopic image and new 4 x 5 inch films that were easier to store and transport than the 11 x 14 inch films, rendered screening more practical than in 1917–18.

The Army Medical Department agreed with the National Research Council subcommittee. Since 1934 it had required X-rays for all Army personnel assigned overseas, but it had not yet convinced the War Department on universal screening. In June 1941, Brigadier General (Brig. Gen.) Charles Hillman, Chief, Office of The Surgeon General Professional Service Division, told the National Tuberculosis Association chairman, C. M. Hendricks, that “the desirability of routine X-rays had long been recognized by the Surgeon General’s Office,” but “considerations other than medical entered the picture and the character of induction examinations had to be adapted to the limitations of time, place, and available equipment.” When Fitzsimons informed Hillman later that new recruits were arriving at the hospital with tuberculosis, he responded almost plaintively. “I am working with the Adjutant General to devise some method by which every volunteer for enlistment in the Regular Army will have a chest X-ray and serological test before acceptance.” He asked for all available evidence of sick recruits, explaining that “data on Regular Army men of short service now in Fitzsimons with tuberculosis will help me get the thing across.” As the data and advice accumulated, in January 1942 the Adjutant General required that all voluntary applicants and reenlisting men be given chest X-rays. Finally, on 15 March 1942, mobilization regulations made chest X-rays mandatory in all induction physicals.

With universal screening in place, Long, as chief of the tuberculosis branch in the Office of The Surgeon General, oversaw the screening process and faced a task similar to that of George Bushnell in 1917–18: walking the fine line of excluding as much tuberculosis as possible from the Army without rejecting too many men. Conscious of his predecessor’s miscalculations, Long was careful not to criticize Bushnell’s tuberculosis program, at one point noting that World War I medical officers were “not to be reproached for not having knowledge that came into existence only later, any more than the chief of the Army air service in 1917 is to be reproached because more efficient airplanes are available now than then.”

The wartime emergency produced a public health campaign regarding tuberculosis and other disease threats. A War Department pamphlet, What Every Citizen Should Know about Wartime Medicine, presented the issue as one of maintaining troop health and limiting public costs. “The strenuous activity of soldiering is likely to cause extension of an incipient (early) tuberculous invasion of the lungs, or to precipitate the breakdown and reactivation of arrested cases,” it explained. Such illness could result in disability “and the necessity of providing long care of these patients in military hospitals where they must remain isolated from nontuberculous patients.” The Public Health Service also created a tuberculosis office to handle the expected increase in tuberculosis, and, as the National Research Council Subcommittee recommended, gave war industry workers chest examinations.

As military and civilian screening boards found thousands of people with active tuberculosis and sent many of them to tuberculosis sanatoriums and hospitals, they generated what a public health nurse referred to as “potentially the greatest case finding program that workers in tuberculosis control have ever known.” At the same time, however, war mobilization drew civilian medical personnel into the military, reducing staffing in home front institutions. Army medical personnel ultimately numbered more than 688,000, including 48,000 physicians in the Medical Corps, 14,000 dentists in the Dental Corps, and 56,000 nurses in the Army Nurse Corps—a large portion of the nation’s medical professionals.27 To maintain his nursing staff, VA Director Frank Hynes even asked the Army Nurse Corps in May 1942 not to hire VA nurses away from his hospitals.

Army tuberculosis rates during World War II, while lower than during World War I, showed a similar “U”-shaped curve (Figure 8-2). Rates were high at the beginning of the war, as the Selective Service built up the military forces and cases that had eluded screening became active during training or combat; they fell as radiologists became more proficient at identifying tuberculosis infections; and they rose sharply, to a higher peak, at the end of the war, when discharge examinations found people who had developed active tuberculosis during their service. Postwar studies also revealed a seemingly paradoxical phenomenon: during the war, military personnel serving overseas had lower tuberculosis rates than those serving in the United States, yet higher rates when they returned home.


Chart comparing the incidence curves of tuberculosis in the Army during World War I and World War II. From Esmond R. Long, “Tuberculosis,” in John Boyd Coates, Robert S. Anderson, and W. Paul Havens, eds., Internal Medicine in World War II, Medical Department, U.S. Army in World War II, vol. 2, Infectious Diseases (Washington, DC: Office of The Surgeon General, Department of the Army, 1961), chart 17, p. 335. Available at http://history.amedd.army.mil/booksdocs/wwii/infectiousdisvolii/chapter11chart17.pdf.

The Medical Department of the United States Army in the World War. Communicable and Other Diseases. Washington: U.S. Government Printing Office, 1928, vol. IX, pp. 171–202.
Letter, The Adjutant General, to Commanding Generals of all Corps Areas and Departments, 25 Oct. 1940, subject: Chest X-rays on Induction Examinations.
M.R. No. 1-9, Standards of Physical Examination During Mobilization, 31 Aug. 1940 and 15 Mar. 1942.
Long, E. R.: Exclusion of Tuberculosis. Physical Standards for Induction and Appointment. [Official record.]
Long, E. R., and Stearns, W. H.: Physical Examination at Induction; Standards With Respect to Tuberculosis Induction and Their Application as Illustrated by a Review of 53,400 X-ray Films of Men in the Army of the United States. Radiology 41: 144–150, August 1943.
Long, Esmond R., and Jablon, Seymour: Tuberculosis in the Army of the United States in World War II. An Epidemiological Study with an Evaluation of X-ray Screening. Washington: U.S. Government Printing Office, 1955.

It is estimated that, before roentgen examination became mandatory (MR No. 1-9, 15 March 1942), one million men had been accepted without this form of examination. Where roentgen examination was practiced, it resulted in a rejection rate of about 1 percent for tuberculosis. Applying this figure, it can be estimated that some 10,000 men were accepted who would have been rejected if they had been subjected to chest roentgen-ray study. Various studies have shown that approximately one-half of these would have been cases of active tuberculosis.

http://history.amedd.army.mil/booksdocs/wwii/PM4/CH14.Tuberculosis.htm

Troops who developed tuberculosis were not discovered until their separation examinations, conducted when they were once again in the United States.

In the end, the screening process rejected 171,300 men with tuberculosis as the primary cause (thousands more had tuberculosis in addition to another disqualifying condition), and Long calculated that this saved the government millions of dollars in hospitalization costs. After the war, however, Long identified two factors that had allowed tuberculous men into the Army: the failure to screen all inductees until March 1942, and the 4 x 5 inch stereoscopic (fluorographic) films, which were used in the interest of speed but which Long believed caused examiners to miss about 10 percent of minimal tuberculosis lesions in recruits. To better understand the latter problem he had two radiologists read the same X-rays and found substantial disagreement between their findings. Long therefore concluded that “if the induction films had each been read by two different radiologists, undoubtedly many more of the men who had tuberculosis at entry could have been excluded from service.” The Army ultimately discharged 15,387 enlisted men for tuberculosis during the war, making it the thirteenth leading cause of disability discharge.

American military forces fought in nine theaters of war—five in the Pacific and Asia, the other four in North Africa, the Mediterranean, Europe, and the Middle East. The Allies gave priority to defeating Germany and Italy in Europe beginning with operations in North Africa and the Mediterranean. After fighting in Tunisia in 1942–43, the Allies invaded Sicily on 10 July 1943, and moved up the Italian peninsula. By April 1944—in preparation for the D-Day invasion on 6 June 1944—the United States had more than 3 million soldiers in Europe, supported by 258,000 medical personnel managing a total of 318 hospitals with 252,050 beds. The war against Japan got off to a slower start as U.S. military forces developed the means to execute an island war across vast expanses of ocean. After fighting began in the Southwest Pacific, military forces grew from 62,500 troops in March 1942 to 670,000 in the summer of 1944 with 60,140 medical personnel. Even though military personnel developed tuberculosis in all of the nine theaters, the numbers were not high and tuberculosis was not a major military problem. In the Southwest Pacific theater, for example, only sixty-four of more than 40,000 hospital admissions were for the disease.

Tuberculosis was of the greatest consequence in the North Africa and Mediterranean theaters, in part due to poor screening early in the war, but also because, according to historian Charles Wiltse, it was the theater “in which the lessons of ground combat were learned by the Medical Department as much as by the line troops.” In general, medical personnel learned the importance of treating battle casualties as promptly as possible and keeping hospitals and clearing stations mobile and far forward to shorten evacuation and turnaround times. With regard to tuberculosis, the Medical Department had to relearn the World War I lesson of the importance of having skilled practitioners—or “good tuberculosis men”—in theater. They also ascertained which treatments were appropriate close to the battle lines and which were not, and when and how best to evacuate tubercular patients to the United States.

When soldiers with tuberculosis began to appear at Army medical stations in North Africa in late 1942, Major General (Maj. Gen.) Paul R. Hawley, chief of medical services for the European theater of operations, called for a tuberculosis specialist. On Long’s recommendation, Hawley appointed Col. Theodore Badger (Figure 8-3) as senior consultant in tuberculosis on 2 January 1943. A professor of medicine at Harvard Medical School, Badger had served in the Navy during World War I and then attended Yale and Harvard, where he earned his medical degree. As chief of medical service of the 5th General Hospital (GH), organized out of Harvard, Badger would play a role similar to that of Gerald Webb during World War I: medical specialist, teacher, and troubleshooter.

Assessing the tuberculosis situation in the Mediterranean theater, Badger identified five hazards: (1) the development of active disease in American troops who had not been X-rayed upon induction; (2) association with British troops and civilians who had not been screened for tuberculosis; (3) the drinking of nonpasteurized and possibly infected milk that could transmit tuberculosis; (4) battlefield conditions that could activate soldiers’ latent infections; and (5) the undetermined effects of other respiratory infections. Badger soon got the Army to use pasteurized milk and to establish X-ray centers with proper equipment and trained staff, but he was not able to examine the thousands of American soldiers in the war zone. To gauge the extent of the tuberculosis problem he therefore arranged for a mobile X-ray unit to conduct spot surveys of troops in the field. Three examinations of some 3,000 troops each found only about 1 percent with signs of tuberculosis. To avoid losing manpower, Badger reported in mid-1943 that “up to the present time no individual has been removed from duty because of X-ray findings, and follow-up study has, so far, not indicated the necessity for it.” He planned to recheck those with suspicious films every few months to see whether the signs had advanced. Badger recommended that patients with pleural effusion (an accumulation of fluid between the membranes lining the lungs and chest cavity that often indicates tuberculosis) be evacuated to the United States. He also ended the practice of transporting some tuberculosis patients sitting up.

As the first true air war, World War II saw the introduction of air evacuation when Army aeromedical squadrons deployed in early 1943. After successful trials in the Pacific and North Africa, air evacuation expanded so that during the Battle of the Bulge (1944–45) some patients arrived in U.S. hospitals within three days of being wounded. Some medical officers were concerned about the effects of transporting tuberculosis patients by air, where they would be exposed to high speeds, jolting, and reduced air pressure. Tuberculosis specialists in New Mexico and Colorado therefore studied 143 white male military patients, twenty-two to twenty-eight years old, with active tuberculosis who were flown to Army hospitals in nonpressurized air ambulances, watching for any signs of trouble. Fearing the worst, they instead found that “severe discomfort, pulmonary hemorrhage, and spontaneous pneumothorax did not occur in the series either during or following the flight,” and concluded that air transport up to 10,000 feet was safe and preferable to time-consuming travel by water. By the end of the war the consensus was that rapid air evacuation to the United States also reduced the need to give a tuberculosis patient a pneumothorax in the field.

From the roof of Fitzsimons’ new building in April 1943, Rocky Mountain News reporter John Stephenson could see the Rocky Mountain Arsenal, the Denver Ordnance Plant, and Lowry Field, “places where the Army studies how to kill people.” But, he wrote, “The Army is merciful. It lets the right hand of justice know what the left hand of mercy is doing at Fitzsimons General Hospital.” The largest Army hospital in the world, Fitzsimons had 322 buildings on 600 acres, paved streets with traffic lights, a post office, barbershop, pharmacy school, dental school, print shop, bakery, fire department, and chapel. It was, wrote Stephenson, “a city of 10,000.” No longer a liability, Fitzsimons was the pride of the Army Medical Department. One Army inspector reported that “it is apparent that no expense has been spared in this extraordinary building or in the general equipment and maintenance of the whole hospital plant.” As Congressman Lawrence Lewis had hoped, Fitzsimons’ mission now extended beyond caring for tuberculosis patients to meeting the general medical and surgical needs of the wider military community in the Denver region.

During the war the hospital maintained about 3,500 beds, reaching its highest daily patient population after the war: 3,719 on 3 February 1946. Annual occupancy, measured in patient days, increased from 603,683 in 1942 to a high of 1,097,760 in 1945, about 85 percent of capacity.

With the reduction of tuberculosis in the Army over the years, the percentage of tuberculosis patients among all those at Fitzsimons had declined from 80 to 90 percent in the 1920s to 40 to 50 percent in the late 1930s. As the Army grew, it rose again. During the war Fitzsimons admitted more than 8,100 patients with tuberculosis. In fact, in 1943 only eighteen patients had battle injuries; the rest were in the hospital for illness and noncombat injuries. Unlike during the previous war, however, the Medical Department now had a network of more than fifty veterans’ hospitals to which it could transfer patients too disabled by tuberculosis or other disease or injury to return to duty. Instead of allowing patients to stay in the service and receive the benefit of hospitalization in the hope that they would recover and return to duty, the Medical Department discharged patients to VA hospitals as soon as they were determined to be unfit for military service, thereby reserving capacity for active-duty personnel. Maj. D. P. Greenlee had returned from a training course in penicillin therapy at Bushnell General Hospital in Utah to supervise the administration of the new drug for a variety of infections. He soon reported a cure rate of 93 percent. There were fewer victories in tuberculosis treatment.

During the war about one-quarter of all tuberculosis patients were treated with pneumothorax. Fitzsimons surgeon Col. John B. Grow and other surgeons also tried lung resection to treat tuberculosis, with few patient deaths. In 1946, however, when Grow’s staff contacted thirty patients who had had such surgery, they found that half were doing well, three had died, seven were seriously ill, and the rest were still under treatment. “It was felt that pulmonary resection in the presence of positive sputum was extremely hazardous and the indications were consequently narrowed down.”

Outside the operating rooms, the “City of 10,000” had a rich social life, with people arriving at the post from all corners of the country. With Congressman Lewis’s acquisition of the School for Medical Technicians, Fitzsimons assumed the role of medical trainer, offering six- to twelve-week courses in technical training for dental, laboratory, X-ray, surgical, clinical, and pharmacy assistants. By 1946 the School had graduated more than 28,000 such technicians to serve around the world. The Women’s Army Corps arrived at Fitzsimons in February 1944 when 165 women attended the medical technicians school as part of the first coeducational class. Members of the Women’s Army Corps, rehabilitation aides, Education Department staff, dietitians, and nurses increased the female presence at Fitzsimons, as did the activities of welfare organizations such as the Gold Star Mothers, the Red Cross, and the Junior League. Fitzsimons’ patients and staff also enjoyed visits from celebrities, including Jack Benny, Miss America, Gary Cooper, Dorothy Lamour, and other entertainers such as big band leader Fred Waring and his Pennsylvanians, the Denver Symphony Orchestra, and an African American Methodist Church children’s choir from Denver. Like communities across the country, the hospital participated in war bond campaigns and kept a huge war garden that produced thousands of ears of sweet corn and bushels of other vegetables.

Despite national mobilization and generous congressional funding, the Army could not escape the strain on its hospitals. By July 1944, Fitzsimons had reached capacity so the Medical Department designated two more hospitals as specialty centers for tuberculosis. Earl Bruns’ widow Caroline, who lived in Denver at the time, was no doubt pleased when the department named Bruns General Hospital in Santa Fe, New Mexico, in honor of her husband. Bruns along with Moore General Hospital in Swannanoa, North Carolina, cared for enlisted patients with minimal or suspected tuberculosis.

As Allied troops liberated France in 1944 and crossed into Germany they encountered thousands of refugees or “displaced persons”—escaped prisoners from Nazi concentration camps, exhausted and terrified Jews, slave laborers, political prisoners, Allied POWs, and other victims. The Nazi camps that held these people served as incubators for diseases such as tuberculosis and typhus, and the frightened, sick, and starved refugees inundated Army hospitals in late 1944 and early 1945. Theodore Badger reported one of the first waves that arrived on 18 December 1944 when 304 men, most of them Russians, came to the 50th GH in Commercy, France. They had been in the Nazi labor camps for the mines and heavy industries, where thousands died and survivors were malnourished and sick. All of the 304 had tuberculosis, 90 percent with moderate or advanced disease. Four were dead on arrival, eight more died in the first week, and one-third of the patients would die by May. Alarmed, Gen. Hawley, Chief Surgeon of the European Theater of Operations, ordered that all displaced civilians and recovered military personnel be examined for signs of tuberculosis “to establish the gravity of the situation.” The situation was dire. At one time the 46th GH had more than 1,000 tuberculosis patients, all recovered Allied POWs, causing Esmond Long to remark that the hospital “had the largest number of tuberculosis patients of any Army hospital in the world.”

The 46th GH from Portland, Oregon, which had cared for tuberculosis patients in the Mediterranean theater, also stood on the front lines of the tuberculosis problem in Europe. Serving at Besancon, France, the hospital would receive the Meritorious Service Unit Plaque and Col. J. G. Strohm, the commanding officer, the Bronze Star Medal for service during the liberation of France. During the spring of 1945, the 46th GH admitted 2,472 Russians, forty-one Poles, and 128 Yugoslav POWs and former slave laborers freed by American forces. The influx began on 12 March and within four days the 46th GH had admitted 1,200 such patients.

“The hospital staff was agast [sic] at the terrible physical condition of these people,” reported the hospital commander. When Badger visited the 46th GH in March 1945 he said the patients “constitute one of the most seriously affected groups with tuberculosis and malnutrition that I have ever seen,” explaining that most of them suffered “acute fulminating, rapidly fatal disease, mixed with chronic, slowly progressive, fibrotic tuberculosis.” Medical personnel (Figure 8-4) cared for these patients as best they could, comforting many of them as they died. They began the rest treatment with some men but, as Badger reported, convincing Allied POWs to submit to absolute bed rest after months of confinement was “practically impossible.” Badger was able to report that after a month “those men who did not die of acute tuberculosis showed marked improvement.”

Figure 8-4. 46th General Hospital nurses who cared for former prisoners of war. Photograph courtesy of Oregon Health Sciences University, Historical Collections and Archives, Portland, Oregon.

26th Gen Hospital WWII, North Africa

In late 1944 Hawley requested 100,000 additional hospital beds for the displaced persons and POWs he expected to encounter after the German surrender, but Gen. George Marshall and Secretary of War Henry L. Stimson denied the request, believing they could not spare resources of that magnitude. The European Theater, they decided, must use German medical personnel and hospitals to care for the prisoners. Only after the war did American hospital units transfer their equipment and supplies to German civilians and Allies for their use.

The liberation of Europe also freed American POWs, who, not surprisingly, had higher rates of tuberculosis than other American military personnel. Captured British medical officer Capt. A. L. Cochrane cared for some of them in the prison where he was confined and noted sardonically that imprisonment was “an excellent place to study tuberculosis; [and] to learn the vast importance of food in human health and happiness.” German prison guards gave POWs only 1,000 to 1,500 calories per day, so Red Cross food parcels, which provided an additional 1,500 daily calories per person, were critical to preventing malnutrition and physical breakdown. Cochrane observed that the American and British POWs received the most parcels and had the lowest tuberculosis rates in the camp, while the Russians received nothing at all and had the highest rates. During the eighteen months that French POWs received the Red Cross parcels, he noted, just two men of 1,200 developed tuberculosis but when parcels for the French ceased to arrive in 1945, their tuberculosis rate rose to equal that of the Russians. The situation, he concluded, showed the “vast importance of nutrition in the incidence of tuberculosis.” Not all Americans got their parcels, though. William H. Balzer, with an American artillery unit, was captured in February 1943, and remembered how German guards stole the Americans’ packages.
Balzer survived imprisonment but never recovered from the ordeal. Severely disabled (70 percent), he died in 1960 on his forty-sixth birthday.

Exact tuberculosis rates among American POWs are not known because the rush of events surrounding the liberation of prisoners from German and Japanese control prevented a systematic X-ray survey. Rates did appear to be higher, though, for prisoners of the Japanese than for prisoners of the Germans. Long reported that about 0.6 percent of recovered troops from European POW camps had tuberculosis, whereas data from the Pacific theater suggested that 1 percent of recovered prisoners had tuberculosis. Moreover, an analysis of the chest X-rays done at West Coast debarkation hospitals revealed that 101 (or 2.7 percent) of 3,742 former POWs of the Japanese showed evidence of active tuberculosis. John R. Bumgarner was a tuberculosis ward officer at Sternberg General Hospital in Manila, the Philippines, before the war. A POW for forty-two months after the Japanese invasion, he described his experience in Parade of the Dead. Bumgarner did what he could to care for many of the 13,000 prisoners in the camp, but knew that “my patients were poorly diagnosed and poorly treated.” The narrow cots were so close together, he wrote, “the crowding and the breathing of air loaded with this bacillary miasma from coughing ensured that those mistakenly segregated would be infected.”

Bumgarner managed to stay relatively healthy throughout his imprisonment. His luck ended, however: “on my way home across the Pacific I had the first symptoms of tuberculosis.” Severe chest pain and subsequent X-rays at Letterman Hospital in San Francisco revealed active disease. “I had gone through more than four years of hell—now this!” Discharged on disability for tuberculosis in September 1946, he began to work at the Medical College of Virginia but soon had a lung hemorrhage. This time it took eight years of rest, surgery, and the new antibiotic treatment for him to recover. By 1956, however, Bumgarner had married his sweetheart, Evelyn, and begun a medical career in cardiology that lasted thirty years.

Tuberculosis continued to take its toll on POWs for years after the war. The VA followed POWs as a special group because, explained Long, of “the hardships that many of these men endured, and the notorious tendency for tuberculosis to make its appearance years after the acquisition of infection.” A follow-up study published in 1954 reported that for American POWs during the six years after liberation tuberculosis was the second highest cause of death, after accidents.

If the challenges Army medical personnel faced in caring for sick and starving POWs and refugees were unprecedented, the scale of disease and suffering they encountered in the Nazi concentration camps was almost unimaginable. Allied troops had heard about secret and deadly camps but were not prepared for what they found. As the Allies converged on Berlin from the East and the West, the Nazis evacuated thousands of prisoners—most of them Jews seized from across Europe, as well as POWs—to interior camps to hide their crimes and prevent the inmates from falling into Allied hands. These evacuations became death marches as SS (abbreviation of Schutzstaffel, which stood for “defense squadron”) guards beat and murdered people, and failed to feed them for days on end. Survivors were crowded into camps such as Buchenwald and Dachau, making them even more chaotic and deadly. Americans therefore liberated camps rife with typhus, tuberculosis, and malnutrition.

The Allies liberated Buchenwald on 11 April 1945. The following day the world learned that Franklin Roosevelt had died. Americans then liberated Dachau on 29 April, the day Italian partisans executed Mussolini in Milan, and the next day Hitler killed himself in his bunker. Dachau (Figure 8-5) had been the first of hundreds of concentration camps in the German Reich to which the Nazis sent political enemies, the disabled, people accused of socially deviant behavior, and, increasingly after the Kristallnacht pogroms of 1938, Jewish men, women, and children. In January 1945 Dachau held 67,000 prisoners, but with troops of the Seventh U.S. Army approaching, the SS began evacuating and killing prisoners. Capt. Marcus J. Smith, a medical officer in his thirties, arrived at Dachau on 30 April 1945, the day after liberation, part of a small team trained to treat persons displaced by the war. Horror greeted him outside the camp in a train of forty boxcars loaded with more than two thousand corpses. Smith called the frost that had formed on the bodies in the intense cold “Nature’s shroud.” Inside Dachau he encountered more grotesque piles of naked, skeletal bodies of prisoners and scattered, mutilated bodies of German guards.

Figure 8-5. Dachau survivors gather by the moat to greet American liberators, 29 April 1945. Photograph courtesy of the United States Holocaust Memorial Museum, Washington, DC.
Smith found more than 30,000 prisoners, mostly Jews of forty nationalities, and all men except for about 300 women the SS had kept in a brothel. They were in desperate condition. Typhus and dysentery raged, at least half of the prisoners were starving, and hundreds had advanced tuberculosis. “The well, the sick, the dying, and the dead lie next to each other in these poorly ventilated, unheated, dark, stinking buildings,” Smith told his wife. The men were “malnourished and emaciated, their diseases in all stages of development: early, late, and terminal.” He wondered, “What am I going to write in my notebook?” and then started a list of needed supplies: clothes, shoes, socks, towels, bedding, beds, soap, toilet paper, more latrines, and new quarters. He almost despaired. “What are we going to do with the starving patients? How will we care for them without sterile bandages, gloves, bedpans, urinals, thermometers, and all the basic material? How do we manage without an organization? No interns, no nursing staff, no ambulances, no bathtubs, no laboratories, no charts, and no orderlies, no administrator, and no doctors.… I feel helpless and empty. I cannot think of anything like this in modern medical history.”

American efforts did prevent a deadly typhus epidemic from sweeping postwar Europe and helped contain tuberculosis rates in Germany, but the Nazis had created a human catastrophe so immense that even the most dedicated efforts would at times fall short.

Faced with horror on such a scale, Smith and other Army Medical Department personnel assigned to the concentration camps threw themselves into the work of cleansing, comforting, treating, and nurturing their patients. American commanders called in at least six Army evacuation hospitals (EH) to care for the sick and dying in the liberated camps. EH No. 116 and EH No. 127 began arriving at Dachau on 2 May with some forty medical officers, forty nurses, and 220 enlisted men. Consulting with Smith and his team, the units set up in the former SS guard barracks. They tore out partitions to create larger wards, scrubbed the walls and floors with Cresol solution, sprayed them with dichloro-diphenyl-trichloroethane (DDT), and then set up cots to create two hospitals of 1,200 beds each. Medical staff also discovered physician-prisoners who had cared for the sick and injured as well as they could, and could now advise and assist, and in some cases translate for the medical staff. In two days the hospitals were ready to admit patients by triage, segregating them by disease and prognosis. Laurence Ball, the EH No. 116 commander, noted that more than 900 patients had “two or more diseases, such as malnutrition, typhus, diarrhea, and tuberculosis.” Staff bathed and deloused them, gave them clean pajamas, and put them to bed.

Death by overeating was but one of the dangers that the prisoners faced. During May 1945, American hospitals at Dachau had more than 4,000 typhus patients and lost 2,226 to typhus and other diseases. Typhus, a rickettsial disease transmitted by body lice, had a mortality rate as high as 40 percent. With no medical cure, treatment consisted of supportive care—keeping patients clean and nourished—to mitigate effects of prolonged fever, such as the breakdown of tissue into gangrene. The Americans knew that typhus had taken three million lives in Eastern Europe after World War I, but now they had a means of prevention and better weapons—a typhus vaccine and DDT. On 2 May, the day the evacuation hospitals arrived, the commander of the Seventh Army imposed quarantines for typhus and tuberculosis, and summoned the U.S. Typhus Commission, which had controlled a typhus outbreak in Naples, Italy. A typhus team arrived the next day and began to immunize American personnel and dust them with DDT. On 7 May staff began to vaccinate inmates but kept typhus patients isolated for at least twenty-one days from the onset of illness to prevent transmission to others. This meant that the Americans did not immediately enter the inner camp barracks—the worst, most typhus-infested part of the camp—nor did they quickly relieve crowding there for fear of spreading typhus-bearing lice. It took over a week for personnel to prepare more spacious and clean quarters.

Smith wrote his lists, reported to his wife, and kept track of the daily death toll, finding comfort as the number of people who died daily fell from 200 during the first week to twenty by the end of May. Another medical officer performed autopsies. He chose ten of the dead bodies, five from the death train and five from the camp yard, to see what had caused their deaths. All had typhus and extreme malnutrition, eight had advanced tuberculosis, and some bodies had signs of fractures and head injuries.

Survivors in Dachau, 1 May 1945

By the end of May, conditions at Dachau had improved. Typhus was abating and American officials began to release groups of inmates by nationality. Beyond Dachau, the U.S. Typhus Commission tracked down new cases of typhus in civilian and military populations, deloused one million people, sprayed fifteen tons of DDT, and created a cordon sanitaire on the Rhine requiring all who crossed from Germany to be vaccinated and dusted to prevent the spread of disease. Thus the Army averted a broader typhus epidemic. The tuberculosis situation was more complicated and presented the Americans with a conundrum. What to do with thousands of people suffering from a long-term, infectious, and deadly disease?

As with the American POWs, tuberculosis continued to follow Dachau survivors into their new lives. Thousands of Jewish survivors emigrated to what would become the state of Israel. Fifteen years after liberation, the Israeli Minister of Health reported that although concentration camp survivors comprised only 25 percent of the population, they accounted for 65 percent of the tuberculosis cases in the country. Tuberculosis continued to thrive in Europe as well.

Historian Albert Cowdrey has credited the American actions with preventing a number of postwar scourges: “No one can prove that a great typhus epidemic, mass deaths of prisoners of war, or widespread outbreaks of disease among the German population would have taken place without the efforts of Army doctors of the field forces and the military government.” But, he continued, “conditions were ripe for such tragedies to occur, and Army medics brought both professional knowledge and military discipline to forestalling what might have been the last calamities of the war in Europe.” Thus, as usual, in public health the good news is no news at all.

Thousands of men survived the Vietnam War because of the quality of their hospital care. US hospitals in Vietnam were the best that could be deployed, incorporating several improvements over previous field hospitals. Army doctors were better trained, and they had good facilities at the semi-permanent base camps. As a result, more advanced surgical procedures were possible: more laparotomies, thoracotomies, vascular repairs (including even aortic and carotid repairs), advanced neurosurgery for head wounds, and other procedures. Massive quantities of blood were available for transfusing seriously wounded patients; some received as many as 50 units. Advances in equipment led to the development of intensive care units with mechanical ventilators, and far more medications were available for particular diseases than in earlier conflicts.

With about 30 physicians assigned, the 12th could keep four or five operating tables going all day, and two or three all night. A common practice was delayed primary closure for wounds with a high likelihood of infection. Instead of stitching the wound closed immediately, dirt and contaminants were flushed out, bleeding was controlled, dead flesh was removed (debrided), the wound was packed with sterile gauze, and antibiotics were administered. For a few days the patient healed, while nurses changed the bandages and made sure the wound did not get worse. Then doctors removed any remaining contaminants or dead flesh and stitched up the wound. This procedure reduced the incidence of infection compared to immediate wound closure, at the risk of a larger scar.

In any given year in Vietnam, about one soldier in three was hospitalized for disease. The main causes for hospitalization were malaria, psychiatric problems, and ordinary fevers. Although many men fell sick, competent care was available and most recovered quickly and returned to duty.

The war spurred advances in surgery and medical trauma research. New surgical techniques allowed limbs that previously would have been amputated to remain functional. Nurse anesthetist Rosemary Sue Smith recalled the development of new blood-handling procedures:

We started separating blood into its components, because we were getting a lot of aggregates that were causing a lot of disseminated intravascular coagulopathy in patients, and causing a lot of blood clots, and pulmonary thrombosis, and a lot of ARDS, Adult Respiratory Distress Syndrome, which started in Da Nang and was called Da Nang Lung initially. It has developed into today being called Adult Respiratory Distress Syndrome, and they did a lot of research on this, and they were having us separate our blood into its components, into fresh frozen plasma and into platelets, and then we started doing blood tests to see which the patients would need. If their platelets were low, or if their blood clotting factors were low, we would just give them the particular products. We actually started breaking these products down and administering them in the Vietnam War, and it’s carried over into civilian life now. They’re used today in acute trauma to prevent disseminated intravascular coagulopathy and prevent Adult Respiratory Distress Syndrome on massive traumas that have to be naturally resuscitated with blood and blood products.

In the 1960s, intensive care was still quite new and the 12th had only one (later two) intensive care wards fully equipped and staffed. A key piece of equipment was the ventilator, then called “respirator.” Ventilators worked on pure oxygen until 1969, when research revealed physiological problems from prolonged breathing of pure oxygen. Early ventilators required considerable maintenance; valves needed frequent cleaning or the machines broke down.

Antibiotics were important because of the wide variety of bacteria and large number of penetrating wounds; in the face of a possible systemic infection (the development of sepsis), antibiotics were delivered through an IV. Nurse Rosie Parmeter recalled having to prepare antibiotics to be delivered through an IV several times a day for each patient, a necessary but time-consuming task.

About two-thirds of patients cared for by the 12th were US military; the other third were mainly Vietnamese but also included nonmilitary Americans and Free World Military Assistance Forces personnel. Staff regularly dealt with the Vietnamese, both military and civilian, enemy and friendly. There were wards set aside for enemy prisoners (who were stabilized, then transferred to hospitals at POW camps) and civilians. Wounded South Vietnamese Army soldiers were also stabilized and transferred to hospitals run by the Army of the Republic of Vietnam (ARVN). Civilian patients often stayed longer because the war swamped the available hospitals for Vietnamese civilians.

Through the years of the Vietnam War, US forces sustained 313,616 wounded in action; at peak strength, there were 26 American hospitals. The 12th Evacuation Hospital was at Cu Chi for 4 years and treated just over 37,000 patients. Records for the 12th are incomplete, but the average died-of-wounds rate in Vietnam was about 2.8% of patients who reached a hospital alive. Applied to the 12th, that rate amounted to about 1,036 patients, including prisoners and Vietnamese as well as Americans. But over 36,000 people survived and could return home because of the treatment they received at the 12th Evac.
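The rate arithmetic quoted above is easy to reproduce (a quick sketch using the figures given in the text; note that the 2.8% died-of-wounds rate is the Vietnam-wide average, not a measured rate for the 12th):

```python
# Apply the Vietnam-wide died-of-wounds rate (~2.8%) to the
# ~37,000 patients treated at the 12th Evacuation Hospital.
patients_treated = 37_000
died_of_wounds_rate = 0.028

estimated_deaths = round(patients_treated * died_of_wounds_rate)
estimated_survivors = patients_treated - estimated_deaths

print(estimated_deaths)     # 1036
print(estimated_survivors)  # 35964
```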

Sources:

Fort Bayard, by David Kammer, Establishment of Fort Bayard Army Post
http://newmexicohistory.org/places/fort-bayard
George Ensign Bushnell, Colonel, Medical Corps, U. S. Army
THE ARMY MEDICAL BULLETIN, NUMBER 50 (OCTOBER 1939)
http://history.amedd.army.mil/biographies/bushnell
Chapter One, The Early Years: Fort Bayard, New Mexico
http://www.cs.amedd.army.mil/borden/FileDownloadpublic.aspx?docid=332041d7-dbd2-4edf-823f-29a66c0b65ef
Dachau concentration camp (Wikipedia)
http://en.wikipedia.org/wiki/Dachau_concentration_camp
Office of Medical History – United States Army
Esmond R. Long, M. D., TUBERCULOSIS IN WORLD WAR I
Chapter 14 – Tuberculosis
http://history.amedd.army.mil/booksdocs/wwii/PM4/CH14.Tuberculosis.htm

Chapter Four, Tuberculosis in World War I
Chapter Five, “A Gigantic Task”: Treating and Paying for Tuberculosis in the Interwar Period
Chapter Six, “Good Tuberculosis Women”: Tuberculosis Nursing during the Interwar Period
Chapter Seven, Surviving the Great Depression: Fitzsimons and the New Deal
Chapter Eight, Camp Follower: Tuberculosis in World War II
http://www.cs.amedd.army.mil/FileDownload aspx?
“Good Tuberculosis Men”: The Army Medical Department’s Struggle with Tuberculosis, Carol R. Byerly
http://www.cs.amedd.army.mil/borden/FileDownloadpublic.aspx?docid=986faf8a-b833-46a8-a251-00f72c91da2f

The Global Distribution of Yellow Fever and Dengue
D.J. Rogers, A.J. Wilson, S.I. Hay, and A.J. Graham
Adv Parasitol. 2006; 62: 181–220. http://dx.doi.org/10.1016/S0065-308X(05)62006-4

http://www.historyofvaccines.org/content/timelines/yellow-fever

History of yellow fever
http://en.wikipedia.org/wiki/History_of_yellow_fever

Additional Reading:

Open Wound: The Tragic Obsession of William Beaumont.
Jason Karlawish.
Univ Mich Press. 2011.

The Great Influenza.
John M. Barry.
Penguin. 2004.

Flu. The story of the great influenza pandemic of 1918 and
the search for the virus that caused it.
Gina Kolata.
Touchstone. 1999

Pestilence. A Medieval Tale of Plague.
Jeani Rector
The Horror Zine. 2012

The Knife Man: The Extraordinary Life of John Hunter, Father of Modern Surgery
Wendy Moore.
Broadway Books. 2005

Hospital.
Julie Salamon.
Penguin Press. 2008.

Overdosed America.

John Abramson.
Harper. 2004.

Sick.
Jonathan Cohn.
Harper Collins. 2007.

Read Full Post »

The History of Hematology and Related Sciences

Curator: Larry H. Bernstein, MD, FCAP

 

The History of Hematology and Related Sciences: A Historical Review of Hematological Diagnosis from 1880 -1980

 

Blood Description: The Analysis of Blood Elements as a Window into Diseases

Diagnosing bacterial infection (BI) remains a challenge for the attending physician. An ex vivo infection model based on human fixed polymorphonuclear neutrophils (PMNs) gives an autofluorescence signal that differs significantly between stimulated and unstimulated cells. We took advantage of this property for use in an in vivo pneumonia mouse model and in patients hospitalized with bacterial pneumonia. A 2-fold decrease was observed in autofluorescence intensity for cytospined PMNs from broncho-alveolar lavage (BAL) in the pneumonia mouse model and a 2.7-fold decrease was observed in patients with pneumonia when compared with control mice or patients without pneumonia, respectively. This optical method provided an autofluorescence mean intensity cut-off, allowing for easy diagnosis of BI. Originally set up on a confocal microscope, the assay was also effective using a standard epifluorescence microscope. Assessing the autofluorescence of PMNs provides a fast, simple, cheap and reliable method optimizing the efficiency and the time needed for early diagnosis of severe infections. Rationalized therapeutic decisions supported by the results from this method can improve the outcome of patients suspected of having an infection.

Monsel A, Lécart S, Roquilly A, Broquet A, Jacqueline C, et al. (2014) Analysis of Autofluorescence in Polymorphonuclear Neutrophils: A New Tool for Early Infection Diagnosis. PLoS ONE 9(3): e92564.
http://dx.doi.org/10.1371/journal.pone.0092564

This study was designed to validate or refute the reliability of total lymphocyte count (TLC) and other hematological parameters as a substitute for CD4 cell counts. Participants consisted of two groups, including 416 antiretroviral-naive (G1) and 328 antiretroviral-experienced (G2) patients. CD4+ T cell counts were performed using a Cyflow machine. Hematological parameters were analyzed using a hematology analyzer. The median ± SEM CD4 count (range) of participants in G1 was 199 ± 10.9 (5–1840 cells/μL) and the median ± SEM TLC (range) was 1.61 ± 0.05 (0.07–6.63 × 10³/μL). The corresponding values among G2 were 421 ± 15.8 (13–1801) and 2.13 ± 0.04 (0.06–5.58), respectively. Using a threshold value of 1.2 × 10³/μL for TLC alone, the sensitivity for G1 was 88.4% (specificity (SP) 67.4%, positive predictive value (PPV) 53.5% and negative predictive value (NPV) 93.2%) for CD4 < 200 cells/μL; the sensitivity for G2 was 83.3% (SP 85.3%, PPV 23.8%, NPV 93.2%). Using multiple parameters, including TLC < 1.2 × 10³/μL, hemoglobin < 10 g/dL, and platelets < 150 × 10³/μL, the sensitivity increased to 96.0% (SP 82.7%; PPV 80%; NPV 96.7%) among G1, while no change was observed in the G2 cohort. TLC < 1.2 × 10³/μL alone is an insensitive predictor of a CD4 count < 200 cells/μL. Incorporating hemoglobin < 10 g/dL and platelets < 150 × 10³/μL enhances the ability of TLC < 1.2 × 10³/μL to predict a CD4 count < 200 cells/μL among the antiretroviral-naïve cohort. We recommend the use of multiple, inexpensively measured hematological parameters in the form of an algorithm for predicting CD4 count level.

Evaluating Total Lymphocyte Counts and Other Hematological Parameters as a Substitute for CD4 Counts in the Management of HIV Patients in Northeastern Nigeria. BA Denue, AU Abja, IM Kida, AH Gabdo, AA Bukar and CB Akawu.
Retrovirology: Research and Treatment 2013; 5: 9–16. http://dx.doi.org/10.4137/RRT.S11562
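As an illustration of the algorithm the authors recommend, the screen might be sketched as below. This is a sketch only: the abstract does not state exactly how the three thresholds are combined, so any-of (OR) logic is assumed here (consistent with the reported rise in sensitivity), and the function and parameter names are hypothetical.

```python
def flag_low_cd4(tlc, hb, platelets):
    """Hypothetical screen for a probable CD4 count < 200 cells/uL,
    using the thresholds reported by Denue et al.:
      tlc       -- total lymphocyte count, 10^3 cells/uL (< 1.2)
      hb        -- hemoglobin, g/dL (< 10)
      platelets -- platelet count, 10^3/uL (< 150)
    Assumption: the thresholds are combined with any-of (OR) logic.
    """
    return tlc < 1.2 or hb < 10.0 or platelets < 150

print(flag_low_cd4(tlc=1.0, hb=11.5, platelets=180))  # True
print(flag_low_cd4(tlc=1.8, hb=12.0, platelets=220))  # False
```

In a resource-limited setting such a rule would triage patients for confirmatory CD4 testing rather than replace it.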

Sepsis is a syndrome that results in high morbidity and mortality. In this retrospective study, we investigated the delta neutrophil index (DN) as a predictive marker of early mortality in patients with gram-negative bacteremia. The DN was measured at onset of bacteremia and 24 hours and 72 hours later, and was calculated using an automatic hematology analyzer. Factors associated with 10-day mortality were assessed using logistic regression. A total of 172 patients with gram-negative bacteremia were included in the analysis; of these, 17 patients died within 10 days of bacteremia onset. In multivariate analysis, Sequential Organ Failure Assessment (SOFA) scores (odds ratio [OR]: 2.24, 95% confidence interval [CI]: 1.31 to 3.84; P = 0.003), DN-day 1 ≥ 7.6% (OR: 305.18, 95% CI: 1.73 to 53983.52; P = 0.030) and DN-day 3 ≥ DN-day 1 (OR: 77.77, 95% CI: 1.90 to 3188.05; P = 0.022) were independent factors associated with early mortality in gram-negative bacteremia. Of four multivariate models developed and tested using various factors, the model using both DN-day 1 ≥ 7.6% and DN-day 3 ≥ DN-day 1 was the most predictive of early mortality. DN may be a useful marker of early mortality in patients with gram-negative bacteremia; both DN-day 1 and the DN trend were significantly associated with early mortality.

Delta Neutrophil Index as a Prognostic Marker of Early Mortality in Gram Negative Bacteremia. HW Kim, JH Yoon, SJ Jin, SB Kim, NS Ku, SJ Jeong,
et al. Infect Chemother 2014;46(2):94-102. pISSN 2093-2340·eISSN 2092-6448
http://dx.doi.org/10.3947/ic.2014.46.2.94
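The two DN criteria that predicted early mortality can be written as a one-line rule (a sketch; the 7.6% cutoff and the rising-trend criterion are those reported by Kim et al., while the function name and the choice to require both criteria, as in their best multivariate model, are illustrative):

```python
def dn_high_risk(dn_day1, dn_day3):
    """Flag high risk of 10-day mortality in gram-negative bacteremia
    from the delta neutrophil index (DN, in %):
      - DN on day 1 at or above 7.6%, and
      - DN not falling by day 3 (DN-day 3 >= DN-day 1).
    """
    return dn_day1 >= 7.6 and dn_day3 >= dn_day1

print(dn_high_risk(8.2, 9.5))  # True: high DN that keeps rising
print(dn_high_risk(8.2, 5.0))  # False: DN falling by day 3
```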
Various indices derived from red blood cell (RBC) parameters have been described for distinguishing thalassemia from iron deficiency. We studied the microcytic to hypochromic RBC ratio as a discriminant index in microcytic anemia and compared it to traditional indices in a learning set, then confirmed our findings in a validation set. The learning set comprised samples from 371 patients with microcytic anemia (mean cell volume, MCV < 80 fL), which were measured on a CELL-DYN Sapphire analyzer, and various discriminant functions were calculated. Optimal cutoff values were established using ROC analysis. These values were used in the validation set of 338 patients. In the learning set, a microcytic to hypochromic RBC ratio > 6.4 was strongly indicative of thalassemia (area under the curve 0.948). The Green-King and England-Fraser indices showed comparable areas under the ROC curve. However, the microcytic to hypochromic ratio had the highest sensitivity (0.964). In the validation set, 91.1% of microcytic patients were correctly classified using the M/H ratio. Overall, the microcytic to hypochromic ratio as measured on the CELL-DYN Sapphire performed as well as the Green-King index in identifying thalassemia carriers, but with higher sensitivity, making it a quick and inexpensive screening tool.
Differential diagnosis of microcytic anemia: the role of microcytic and hypochromic erythrocytes. E. Urrechaga, J.J.M.L. Hoffmann, S. Izquierdo, J.F. Escanero. Intl J Lab Hematology Aug 2014. http://dx.doi.org/10.1111/ijlh.12290
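The index itself is a single division followed by a cutoff comparison (a sketch; the 6.4 cutoff is the one reported for the CELL-DYN Sapphire, and the function and variable names are illustrative):

```python
def mh_ratio(pct_microcytic, pct_hypochromic):
    """Microcytic-to-hypochromic (M/H) RBC ratio, computed from the
    analyzer's percentages of microcytic and hypochromic red cells."""
    return pct_microcytic / pct_hypochromic

def suggests_thalassemia(pct_microcytic, pct_hypochromic, cutoff=6.4):
    """An M/H ratio above the cutoff was strongly indicative of a
    thalassemia carrier rather than iron deficiency (AUC 0.948)."""
    return mh_ratio(pct_microcytic, pct_hypochromic) > cutoff

# Many microcytic but few hypochromic cells: the thalassemia pattern.
print(suggests_thalassemia(32.0, 4.0))   # True  (ratio 8.0)
print(suggests_thalassemia(20.0, 10.0))  # False (ratio 2.0)
```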

Achievement of complete response (CR) to therapy in chronic lymphocytic leukemia (CLL) has become a feasible goal, directly correlating with prolonged survival. It has been established that the classic definition of CR actually encompasses a variety of disease loads, and more sensitive multiparameter flow cytometry [and polymerase chain reaction methods] can detect the disease burden with a much higher sensitivity. Detection of malignant cells with a sensitivity of 1 tumor cell in 10,000 cells (10⁻⁴), using the above-mentioned sophisticated techniques, is the current cutoff for minimal residual disease (MRD). Tumor burdens lower than 10⁻⁴ are defined as MRD-negative. Several studies in CLL have determined the achievement of MRD negativity as an independent favorable prognostic factor, leading to prolonged disease-free and overall survival, regardless of the treatment protocol or the presence of other pre-existing prognostic indicators. Minimal residual disease evaluation using flow cytometry is a sensitive and applicable approach which is expected to become an integral part of future prospective trials in CLL designed to assess the role of MRD surveillance in treatment tailoring.

Minimal Residual Disease Surveillance in Chronic Lymphocytic Leukemia by Fluorescence-Activated Cell Sorting. S Ringelstein-Harlev, R Fineman.
Rambam Maimonides Med J. Oct 2014; 5(4): e0027. http://dx.doi.org/10.5041/RMMJ.10161

Natural killer cells (CD3-CD16+CD56+) are major players in innate immunity, both as direct cytotoxic effectors and as regulators of other innate immunity cell types. We have shown that, using the FlowCellect™ human NK cell characterization kit, one can achieve accurate phenotyping on a variety of sample types, including whole blood samples. Using the same kit to perform an NK cell cytotoxicity test, we demonstrate that unbound K562 target cells can be clearly distinguished from those that have been engaged by CD56+ NK cells, and each of these populations can be further investigated for viability using the eFluor 660® dye.

Analysis of NK cell subpopulations in whole blood

Proportion of K562 target cells bound to NK cells

In a 5:1 effector cell:target cell population, 8% of the K562 cells were bound to NK cells (Figure 3B). 84% of the bound K562 cells were viable (Figure 3C; stained with fixable viability dye), while 96% of the unbound K562 cells were viable (Figure 3D). (B, C, D not shown)

Characterization of Natural Killer Cells Using Flow Cytometry.
EMD Millipore is a division of Merck KGaA, Darmstadt, Germany.

Red blood cell distribution width (RDW) is increased in liver disease. Its clinical significance, however, remains largely unknown. The aim of this retrospective study was to identify whether RDW is a prognostic index for liver disease. The cohort comprised 33 patients with non-cirrhotic chronic HBV hepatitis, 125 patients with liver cirrhosis after HBV infection, 81 newly diagnosed primary hepatocellular carcinoma (pHCC) patients, 17 alcoholic liver cirrhosis patients and 42 patients with primary biliary cirrhosis (PBC); 66 healthy individuals represented the control cohort. The relationship between RDW on admission and clinical features was examined, and the association between RDW and hospitalization outcome was estimated by receiver operating characteristic (ROC) analysis and a multivariable logistic regression model. Increased RDW was observed in liver disease patients. RDW was positively correlated with serum bilirubin and creatinine levels and prothrombin time, and negatively correlated with platelet counts and serum albumin concentration. A subgroup analysis, considering the different etiologies, revealed similar findings. Among the patients with liver cirrhosis, RDW increased with worsening Child-Pugh grade. In patients with PBC, RDW positively correlated with the Mayo risk score. Increased RDW was associated with worse hospital outcome, as shown by an AUC [95% confidence interval (CI)] of 0.76 (0.67–0.84). RDW above 15.15% was independently associated with poor hospital outcome after adjustment for serum bilirubin, platelet count, prothrombin time, albumin and age, with an odds ratio (95% CI) of 13.29 (1.67–105.68). RDW is a potential prognostic index for liver disease.

Red blood cell distribution width is a potential prognostic index for liver disease
Z Hu, Y Sun, Q Wang, Z Han, Y Huang, X Liu, C Ding, et al.
Clin Chem Lab Med 2013; 51(7): 1403–1408.
http://dx.doi.org/10.1515/cclm-2012-0704

Blood Plasma and Red Blood Cells

Whole blood consists of red and white blood cells, as well as platelets suspended in a liquid referred to as blood plasma. According to the American Red Cross, plasma is 92% water and makes up 55% of blood volume. The relative magnetic permeability of blood plasma is approximately 1.

Red blood cells make up a slightly smaller share of blood volume than blood plasma: about 45% of whole blood. As you probably already know, these cells contain hemoglobin, whose iron helps transport oxygen throughout the body. The relative magnetic permeability of red blood cells is slightly less than 1 (1 − 3.9 × 10⁻⁶); in other words, red blood cells are diamagnetic.

Due to their magnetic properties, red blood cells may be separated from the plasma via a magnetophoretic approach. If the blood flowed in a channel subject to a magnetophoretic force, we could control where the red blood cells and the plasma go within the channel. In other words, because the red blood cells have a different permeability, they can be separated within the flow channel. However, such methodology postdates 1980, the endpoint of this review.
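The force driving such a separation can be estimated with the standard point-dipole approximation for a weakly magnetic particle in a field gradient, F ≈ V·Δχ·B·(dB/dx)/μ0 (a sketch; the cell volume, field, and gradient values below are illustrative, and Δχ is taken from the permeability figure quoted above):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def magnetophoretic_force(volume_m3, delta_chi, b_tesla, grad_b_t_per_m):
    """Magnetophoretic force on a small particle (point-dipole
    approximation): F = V * delta_chi * B * (dB/dx) / mu0.
    A negative result means the particle is pushed toward the
    weaker field, as expected for a diamagnetic red blood cell."""
    return volume_m3 * delta_chi * b_tesla * grad_b_t_per_m / MU0

# Illustrative numbers: RBC volume ~90 fL (90e-18 m^3), susceptibility
# difference vs plasma ~ -3.9e-6, a 1 T field with a 100 T/m gradient.
force = magnetophoretic_force(90e-18, -3.9e-6, 1.0, 100.0)
print(force < 0)  # True: pushed toward the weaker field
```

The resulting force is tiny (on the order of 10⁻¹⁴ N for these numbers), which is why practical devices rely on strong field gradients in narrow microfluidic channels.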

Timeline of Major Hematology Landmarks

1877 Paul Ehrlich develops techniques to stain blood cells to improve microscopic visualization.

1897 The Diseases of Infancy and Childhood contains a 20-page chapter on diseases of the blood and is the first American pediatric medical textbook to provide significant hematologic information.

1821–1902 Rudolph Virchow, during a long and illustrious career, demonstrates the importance of fibrin in the blood coagulation process, coins the terms embolism and thrombosis, identifies the disease leukemia, and theorizes that leukocytes are made in response to inflammation.

1901 Karl Landsteiner and colleagues identify blood groups of A, B, AB, and O.

1907 Ludvig Hektoen suggests that the safety of transfusion might be improved by crossmatching blood between donors and patients to exclude incompatible mixtures. Reuben Ottenberg performs the first blood transfusion using blood typing and crossmatching in New York. Ottenberg also observes the Mendelian inheritance of blood groups and recognizes the “universal” utility of group O donors.

1910 The first clinical description of sickle cell disease is published in the medical literature.

1914 Sodium citrate is found to prevent blood from clotting, allowing blood to be stored between collection and transfusion.

1924 Pediatrics is the first comprehensive American publication on pediatric hematology.

1925 Alfred P. Hart performs the first exchange transfusion.

1925 Thomas Cooley describes a Mediterranean hematologic syndrome of anemia, erythroblastosis, skeletal disorders, and splenomegaly that is later called Cooley’s anemia and now thalassemia.

1936 Chicago’s Cook County Hospital establishes the first true “blood bank” in the United States.

1938 Dr. Louis Diamond (known as the “father of American pediatric hematology”) along with Dr. Kenneth Blackfan describes the anemia still known as Diamond-Blackfan anemia.

1941 The Atlas of the Blood of Children is published by Blackfan, Diamond, and Leister.

1945 Coombs, Mourant, and Race describe the use of antihuman globulin (later known as the “Coombs Test”) to identify “incomplete” antibodies.

1954 The blood product cryoprecipitate is developed to treat bleeds in people with hemophilia.

1950s The “butterfly” needle and intercath are developed, making IV access easier and safer.

1961 The role of platelet concentrates in reducing mortality from hemorrhage in cancer patients is recognized.

1962 The first antihemophilic factor concentrate to treat coagulation disorders in hemophilia patients is developed through fractionation.

1969 S. Murphy and F. Gardner demonstrate the feasibility of storing platelets at room temperature, revolutionizing platelet transfusion therapy.

1971 Hepatitis B surface antigen testing of blood begins in the United States.

1972 Apheresis is used to extract one cellular component, returning the rest of the blood to the donor.

1974 Hematology of Infancy and Childhood is published by Nathan and Oski.

As I write today my hospital celebrates its 150th anniversary. Great Ormond Street Children’s Hospital was founded on 14 February 1852 by the visionary Dr Charles West, who believed that hospital care allied to research in children’s diseases would reduce child mortality, then above 50% by the age of 15 years. It is foolish to believe that we can progress in medicine without a knowledge of the past, and much of life is based upon experience. When putting together a series of articles on the history of haematology, initially published in the BJH, this was the main raison d’être, along with the belief that the practice of medicine has become increasingly serious but should also be fun and interesting and even occasionally uplifting to the spirit.

The central problem of any survey of the history of haematology is usually the question of balance. Achieving a degree of balance among themes and topics that will be satisfactory to practising haematologists and physicians with an interest in blood diseases is essentially impossible. Our preference has been for themes of general interest rather than a purely scientific view of a field that has led the way in understanding the molecular basis of human disease.

  1. M. Hann, London, 2002; O. P. Smith, Dublin, 2002.

Origins of the Discipline `Neonatal Haematology’, 1925-75

In every modern neonatal intensive care unit (NICU), haematological problems are encountered daily. Many of these problems involve varieties of anaemia, neutropenia or thrombocytopenia that are unique to NICU patients. A characteristic aspect of these unique problems is that, if the neonate survives, the haematological problem will remit and will not recur later in life, nor will it evolve into a chronic illness (although the problem might occur in a future newborn sibling). This characteristic comes about because the common haematological problems of NICU patients are not genetic defects but are environmental stresses (such as infection, alloimmunization or a variety of maternal illnesses) that are imposed on a developmentally immature haematopoietic system.

In the USA, and in some parts of Europe, the unique haematological problems that occur among NICU patients are diagnosed and treated by neonatologists, not by paediatric haematologists. Although these haematological conditions were generally first described by haematologists, the conditions occur, obviously, in neonates. Thus, the neonatologist, who is familiar with intensive care management of neonates, has also become familiar with the diagnosis and management of the neonate’s common haematological disorders. A growing number of neonatologists have sought specific additional training in haematology, with the goals of discovering the mechanisms underlying the unique haematological problems of NICU patients and improving the management and outcome of the patients who have these conditions. These physicians have remained as neonatologists and they do not practice paediatric haematology, although their research contributions certainly come under the purview of haematology, or more precisely under the discipline of `neonatal haematology’. In many places in Europe, it is the haematologists rather than the neonatologists who have an academic and clinical interest in neonatal haematology.

The roots of the discipline of neonatal haematology can be traced to the early application of haematological methods to animal and human embryos and fetuses, such as found in the reports of Maximow (1924) and Wintrobe & Schumacker (1936). The clinical underpinnings of this discipline include reports of anaemia (Finkelstein, 1911) and jaundice (Blomfeld, 1901; Ylppö, 1913) among neonates.

Before the 1930s, very few studies and very few published clinical case reports originated from premature nurseries. Such nurseries had dubious beginnings, which were criticized by some physicians as more resembling circus exhibitions than medical care wards (Bonar, 1932). These units generally had mortality rates greatly exceeding 50% on the day of admission, with the majority of the first-day survivors having late deaths or serious long-term morbidity.

It was not until publication of the review of premature nursery care at the Children’s Hospital of Michigan, in 1932, that it was clear that some units had instituted systematic attempts to monitor and improve outcomes. A special care nursery had been established at the Children’s Hospital in 1926 and, in 1932, Drs Marsh Poole and Thomas Cooley reported their experience in that unit (Poole & Cooley, 1932). The report included incubator design with temperature and humidity control, growth curves of patients on various feeding practices, mortality statistics and attempts to determine causes of death.

At the time premature nursery care was beginning to merit academic credentials, reports were published of haematological problems that were unique to the neonate. These papers included the seminal publication on erythroblastosis fetalis by Drs Diamond (Fig 1), Blackfan and Baty (Diamond et al, 1932), and the report of sepsis neonatorum at the Yale New Haven Hospital by Ethel C. Dunham (Fig 2) (Dunham, 1933).

The first major textbook devoted to clinical haematology, as well as the first textbook of neonatology, contained very little information about what are today’s common NICU haematological problems. For instance, in the first edition of Clinical Hematology by Dr Maxwell M. Wintrobe (Fig 3), of the Johns Hopkins University Hospital (Wintrobe, 1942), several topics related to paediatric haematology were reviewed, but discussions of the haematological problems of neonates were limited to three: erythroblastosis fetalis, haemorrhagic disease of the newborn and the `anaemia of prematurity’. Similarly, Premature Infants: A Manual for Physicians, the original neonatology textbook, published in 1948 by Dr Ethel C. Dunham (Fig 2; Dunham, 1948), had only a few pages devoted to haematological problems – the same three discussed by Dr Wintrobe. Also, the classic neonatology textbook, The Physiology of the Newborn Infant, published in 1945 by Dr Clement A. Smith, contained almost no discussion of haematological problems (Smith, 1945). Thrombocytopenia, which is now diagnosed among 25–30% of NICU patients, and neutropenia, now diagnosed in 8–10% of NICU patients, were not mentioned.

The first article published in Paediatrics (1948) dealing with a neonatal haematological problem was in volume two, in which Dr Diamond detailed his technique for performing a replacement transfusion (which later became known as an `exchange’ transfusion) as a treatment for erythroblastosis fetalis (Diamond, 1949). The second paper published by Paediatrics containing aspects of neonatal haematology came 1 year later, when Silverman & Homan (1949) described leucopenia among neonates with sepsis. Most of the 25 infants they described, who were treated at Babies Hospital in New York over an 11-year period, had `late-onset’ sepsis, beginning after 3 days of life. They reported 14 neonates with Escherichia coli sepsis and four with streptococcal or staphylococcal sepsis, and observed that leucopenia occurred occasionally among these patients but was uncommon. (Indeed, today neutropenia remains uncommon in `late-onset’ sepsis, but common in congenital or `early-onset’ sepsis.)

Louis K. Diamond, MD, at Children’s Hospital, Boston, MA, date unknown (obtained with the kind assistance of Charles F. Simmons, MD, Harvard University).

Diagnosing neutropenia, anaemia or thrombocytopenia in a neonate obviously requires knowledge of the expected normal range for neutrophil concentration, haematocrit and platelet concentration in the appropriate reference population. Early contributions to neonatal haematology included the publications of these reference ranges. The landmark studies included the range of blood leucocyte and neutrophil concentrations in neonates published in 1935 by Dr Katsuji Kato from the Department of Paediatrics at the University of Chicago (Kato, 1935). He tabulated the leucocyte concentrations and differential counts of 1081 children, ranging from birth to 15 years of age. A striking finding of his report (Fig 4) was the very high neutrophil counts during the first hours and days of life. Blood neutrophil concentrations among neonates with infections were published during the early and mid-1970s by Dr Marietta Xanthou (Fig 5) at the Hammersmith Hospital in London (Xanthou, 1970, 1972), and by Drs Barbara Manroe and Charles Rosenfeld (Fig 6) at the University of Texas Southwestern Medical Center in Dallas, Texas (Manroe et al, 1977).

Normal values for haemoglobin, haematocrit, erythrocyte indices and leucocyte concentrations were refined by DeMarsh et al (1942, 1948), and in a series of publications in the early 1950s in Archives of Disease in Childhood by Gairdner et al (1952a, b). These were followed by observations on human fetal haematopoiesis by Thomas and Yoffey in the British Journal of Haematology (Thomas & Yoffey, 1962, 1964), and by the work on blood volume during the 1960s (Usher et al, 1963; Usher & Lind, 1965; Yao et al, 1967, 1968). Normal ranges for blood platelet counts in ill and well preterm and term infants were published in the early 1970s (Sell et al, 1973; Corrigan, 1974).

The first publication addressing the problem of neutropenia accompanying fatal early-onset bacterial sepsis was that of Tygstrup et al (1968). This was a report of a near-term male with congenital Listeria sepsis who lived for only 4 h. The platelet count was 80 × 10⁹/l and the leucocyte count was 13.7 × 10⁹/l, but no granulocytes were observed on the differential count, which consisted of 84% lymphocytes, 8% monocytes and 8% leucocyte precursors. A sternal marrow aspirate taken from the infant shortly before death revealed myeloblasts, promyelocytes and myelocytes, but no band or segmented neutrophils.

An important advance in understanding the blood neutrophil count during neonatal sepsis occurred with the back-to-back papers in Archives of Disease in Childhood in 1972 by Dr Marietta Xanthou of Hammersmith Hospital, London (Xanthou, 1972), and Drs Gregory and Hey of Babies’ Hospital, Newcastle upon Tyne (Gregory & Hey, 1972). Both papers reported that neonates who had life-threatening (or indeed fatal) infections became neutropenic prior to death. Dr Xanthou reported on 35 ill preterm and term babies within their first 28 d of life. Twenty-four were ill but not infected, and these had normal blood neutrophil concentrations and morphology. However, among the 11 who were ill with a bacterial infection, neutrophilia was observed in the survivors, but neutropenia, a `left shift’ and toxic granulation were observed in the non-survivors. Consistent with this observation, Gregory and Hey reported three neonates who died with overwhelming bacterial sepsis and noted that all had profound neutropenia. Neutrophilia was common among the survivors, and neutropenia, a `left shift’ and specific neutrophil morphological changes were seen among those who subsequently died.

A pivotal publication that launched the search for mechanistic information and successful treatments was that of Dr Barbara Manroe, a fellow in Neonatal Medicine, and her mentor Dr Charles Rosenfeld (Fig 6) from the University of Texas Southwestern, Parkland Hospital in Dallas, Texas (Manroe et al, 1977). They evaluated 45 neonates who had culture-proven group B streptococcal (GBS) infection and found that 39 had abnormal leucocyte counts (25 with neutrophilia and 14 with neutropenia) and that 41 had a `left shift’. This paper was the first to quantify the `left shift’ using a method that has since become popular in neonatology: the ratio of immature neutrophils to total neutrophils on the differential cell count.

From these beginnings, hundreds of studies using experimental models and clinical observations and trials were published, detailing the kinetic and molecular mechanisms accounting for this common variety of neutropenia. Marked improvements in the survival of neonates with this condition have come about through combined efforts, including early maternal screening for GBS carriage, early antimicrobial administration to ill neonates, non-specific antibody administration and a variety of measures to improve the supportive care of neonates with early-onset sepsis.

In the early 1930s, Dr Helen MacKay worked as a paediatrician at the Mothers' Hospital, a maternity hospital in the north-east section of London. Acting on the observation of Lichtenstein (1921) that infants of subnormal birth weight regularly became anaemic in the first months of life, she measured and reported serial heel-stick haemoglobin levels on 150 infants during their first 6 months. Thirty-nine of these infants weighed under five pounds at birth (six were under four pounds), 52 weighed five to six pounds, and 59 weighed six pounds or more. She showed that the babies of lightest birth weight had the most rapid fall in haemoglobin, and that their levels fell lower than those of babies of heavier birth weight (MacKay, 1935). Figure 7 contrasts this fall in babies weighing '3-4 lbs odd at birth' with those weighing '5 lbs odd at birth'.

Her attempts to prevent the anaemia of prematurity failed, but her work constituted the first clear definition of the 'anaemia of prematurity' and showed that iron administration did not prevent the condition. In the early 1950s, Douglas Gairdner, John Marks and Janet D. Roscoe, of the Department of Pathology of Cambridge Maternity Hospital, published pioneering studies of blood formation in infancy (Gairdner et al, 1952a, b). Studying 105 blood samples and 102 bone marrow samples, they concluded that 'erythropoiesis ceases when the oxygen saturation increases from about 65% in the umbilical vein to >95% just after birth'. Publications by Dr Irving Schulman in the mid- to late 1950s defined three phases of the anaemia of prematurity and provided a mechanistic explanation for the anaemia (Schulman & Smith, 1954; Schulman, 1959). His work illustrated that the early and intermediate phases of this anaemia occur in the face of relative iron excess and are unaffected by prophylactic iron administration.

Fig 7. Haemoglobin levels during the first 25 weeks of life among neonates in London [by permission; Archives of Disease in Childhood (MacKay, 1935)].

In 1963, Dr Sverre Halvorsen of the Department of Paediatrics at Rikshospitalet in Oslo, Norway (Fig 9), provided an underlying explanation for the observations made by MacKay, Gairdner and Schulman (Halvorsen, 1963). He observed that, compared with the blood of healthy adults, umbilical cord blood of healthy neonates had a high erythropoietin concentration, and that the concentration was considerably higher still in the plasma of severely erythroblastotic, anaemic infants. Among the healthy infants, erythropoietin levels fell to unmeasurably low concentrations after delivery, but levels remained elevated in hypoxic and cyanotic infants. Dr Per Haavardsholm Finne, of the Children's Department, Paediatric Research Institute and Department of Obstetrics and Gynaecology at Rikshospitalet in Oslo, observed high concentrations of erythropoietin in the amniotic fluid and the umbilical cord blood after fetal hypoxia (Finne, 1964, 1967).

In subsequent studies, Dr Halvorsen observed lower plasma erythropoietin concentrations in the cord blood of preterm infants at delivery than in that of term neonates (Halvorsen & Finne, 1968). These observations supported the concept of Gairdner et al (1952a, b) that the postnatal fall in erythropoiesis (the 'physiologic anaemia' of neonates) is a result of the increase in oxygen delivery to tissues following birth and is mediated by a fall in circulating erythropoietin concentration. The observations gave rise to the postulate that the 'anaemia of prematurity' is an exaggeration of this physiological anaemia, involving a limited ability of preterm infants to increase erythropoietin production appropriately.

Many landmark reports of haematological findings of neonates that were published between 1925 and 1975 were not detailed in this review because they were outside the restricted topics selected.

Robert D. Christensen, MD, Gainesville, FL
Brit J Haem 2001; 113: 853-860

Towards Molecular Medicine; Reminiscences of the Haemoglobin Field

When historians of medicine in the twentieth century start to piece together the complex web of events that shifted the emphasis of medical research from studies of patients and their organs to disease at the level of cells and molecules, they will undoubtedly have their attention drawn to the haemoglobin field, particularly to the years that followed Linus Pauling's seminal paper of 1949, which described sickle-cell anaemia as a 'molecular disease'. These are personal reminiscences of some of the highlights of those exciting times, and of those who made them happen.

One of my first patients while serving in the RAMC was a Nepalese Gurkha child who had been kept alive from the first few months of life with regular blood transfusions, without a diagnosis. Henry Kunkel had published a paper describing how, using electrophoresis in slabs of starch, he had found a minor component of human haemoglobin (Hb), Hb A2, the proportion of which was elevated in some carriers of thalassaemia. After several weeks spent knee-deep in potato starch, we found that the Gurkha child's parents had increased Hb A2 levels and, hence, that she was likely to be homozygous for thalassaemia. I was hauled up before the Director General of Medical Services for the Far East Land Forces and told that I could be court-martialled for not getting permission from the War House (Office) to publish information about military personnel. 'And, in any case', he added, 'it is bad form to tell the world that one of our pukka regiments has bad genes; don't do it again'.

Just before the end of my National Service I arranged to go to Johns Hopkins Hospital in Baltimore to train in genetics and haematology. I was told that I was wasting my time working on haemoglobin because there was 'nothing left to do'; 'start exploring red cell enzymes', it was suggested. On arriving in Baltimore in 1960, it turned out that human genetics, and the haemoglobin field in particular, were bubbling with excitement and potential. The only lesson for those contemplating careers in medical research from this chapter of academic and military gaffes is that, regardless of the working conditions, where there are sick people there are always interesting research questions to be asked.

The excitement of the haemoglobin field in 1960 reflected the chance amalgamation of several disciplines in the 1950s, particularly X-ray crystallography, protein chemistry, human genetics and haematology.

From the early 1930s the structure of proteins became one of the central problems of biochemistry. At that time, the only way of tackling this problem was by X-ray crystallography. In 1937 Felix Haurowitz suggested to Max Perutz (Fig 1) that an X-ray study of haemoglobin might be a good subject for his doctoral thesis. He was given some large crystals of horse methaemoglobin which gave excellent X-ray diffraction patterns.

Max Perutz

However, there was a major snag: an X-ray diffraction pattern provided only half the information required to solve the structure of a protein, namely the amplitudes of the diffracted rays; the other half, their phases, could not be determined. In 1953, however, Perutz and his colleagues discovered that the phase problem could be solved in two dimensions by comparing the diffraction pattern of a crystal of native haemoglobin with that of haemoglobin reacted with mercuribenzoate, which combines with its two reactive sulphydryl groups. To solve the structure in three dimensions required the comparison of the diffraction patterns of at least three crystals: one native and two with heavy atoms bound at different sites on the haemoglobin molecule. In 1959 this approach yielded the first three-dimensional model of haemoglobin, at 5.5 Å resolution.

Protein chemistry evolved side-by-side with X-ray crystallography during the 1950s. In 1951 Fred Sanger solved the structure of insulin, a remarkable tour de force which showed that proteins have unique chemical structures and amino acid sequences. Sanger had perfected methods for the fractionation and characterization of small peptides by paper chromatography or electrophoresis. In 1956 Vernon Ingram (Fig 2), who, like Max Perutz, was a refugee from Germany, was set the task of studying the structure of haemoglobin from patients with sickle-cell anaemia. Ingram separated the peptides produced after globin had been hydrolysed with the enzyme trypsin, which cuts only at lysine and arginine residues. Although these amino acids accounted for 60 residues per mol of haemoglobin, only 30 tryptic peptides were obtained, indicating that haemoglobin consists of two identical half molecules. Re-examination of the amino-terminal sequences of haemoglobin by groups in the United States and Germany showed 2 mols of valine-leucine and 2 mols of valine-histidine-leucine per mol of globin. These findings, in perfect agreement with the X-ray crystallographic results, suggested that haemoglobin is a tetramer composed of two pairs of unlike peptide chains, which were called α and β.

A seminal advance, and one which was to mark the beginning of molecular medicine, was the chance result of an overnight conversation on a train journey between Denver and Chicago. Linus Pauling, the protein chemist, and William Castle (Fig 3), one of the founding fathers of experimental haematology, were returning from a meeting in Denver and Castle mentioned to Pauling that he and his colleagues had noticed that when red cells from patients with sickle-cell anaemia are deoxygenated and sickle they show birefringence in polarized light.

Five generations of Boston haematology. Seated is William Castle. Standing (left to right) are Stuart Orkin, David Nathan and Alan Michelson. The picture on the left is of Dean David Edsall of Harvard Medical School, who established the Thorndike Laboratory at the Boston City Hospital. He was succeeded by Dean Peabody, who recruited both George Minot, who won the Nobel Prize for his work on pernicious anaemia, and William Castle, who should also have received it.

Pauling guessed that this might reflect a structural difference between normal and sickle-cell haemoglobin which could be detected as a change in charge. He gave this problem to one of his postdoctoral students, a young medical graduate called Harvey Itano. At that time they knew that a Swede, Arne Tiselius, had invented a machine for separating proteins according to their charge by electrophoresis. As there was no machine of this kind in Pauling's laboratory, Itano and his colleagues set to and built one. Eventually they found that the haemoglobin of patients with sickle-cell anaemia behaves differently from that of normal people in an electric field, indicating that it must have a different amino acid composition. Even better, the haemoglobin of sickle-cell carriers was a mixture of both types of haemoglobin. This work was published in Science in 1949, under the title 'Sickle-cell anaemia: a molecular disease'.

Perutz and Crick suggested to Ingram that he should apply Sanger’s techniques of peptide analysis to see if he could find any difference between normal and sickle cell haemoglobin. After digesting haemoglobin with trypsin, Ingram separated the peptides by electrophoresis and chromatography in two dimensions to produce what he later called `fingerprints’. He recalls that his first efforts looked like a watercolour that had been left out in the rain. But gradually things improved and he was able to show that the fingerprints of Hbs A and S were identical except for the position of one peptide. Using a method that had been developed a few years earlier by Pehr Edman, which allowed a peptide to be degraded one amino acid at a time in a stepwise fashion, Ingram found that this difference was due to the substitution of valine for glutamic acid at position 6 in the β chain of Hb S.

As well as demonstrating how a crippling disease can result from only a single amino acid difference in the haemoglobin molecule, this beautiful work had broader implications for molecular genetics. Although nothing was known about the nature of the genetic code at the time, the findings were compatible with the notion that the primary product of the β-globin gene is a peptide chain, a further development of the one-gene-one-enzyme concept, suggested earlier by Beadle and Tatum from their studies of Neurospora, and a prelude to the later studies of Yanofsky on Escherichia coli, which were to confirm this principle.

With the advent of simple filter paper electrophoresis, haemoglobin analysis became the province of clinical research laboratories during the 1950s and 'new' abnormal haemoglobins appeared almost by the week. Although many scientists were involved it was Hermann Lehmann (Fig 4) who became the father figure. Like Handel, Hermann was born in Halle and, also like the composer, made his home in Great Britain. He came to England as a refugee and at the beginning of the Second World War had a short period of internment as a 'friendly alien' at Huyton, close to Liverpool, an experience shared with many others, including Max Perutz. He travelled widely during his later war service in the RAMC and developed a wide international network which enabled him to discover 81 haemoglobin variants during his career.

Harvey Itano and Elizabeth Robinson showed that Hb Hopkins 2 is an α chain variant. Hence, it was now clear that there must be at least two unlinked loci involved in regulating haemoglobin production, α and β. The discovery of the γ and δ chains of Hbs F and A2, respectively, meant that there must be at least four loci involved. Subsequent family studies and analyses of unusual variants resulting from the production of δβ or γβ fusion chains led to the ordering of the non-α globin genes.

It had been known for some years that children with severe forms of thalassaemia might have persistent production of Hb F, and it was found later that some carriers might have elevated levels of Hb A2. The seminal observation in favour of this notion came from the study of patients who had inherited the sickle-cell gene from one parent and thalassaemia from the other. Sickle-cell thalassaemia was first described by Ezio Silvestroni and his wife Ida Bianco in 1946, although at the time they could not have known the full significance of their finding. Phillip Sturgeon and his colleagues in the USA found that the pattern of haemoglobin production in patients with sickle-cell thalassaemia is quite different from that of heterozygotes for the sickle-cell gene: the effect of the thalassaemia gene is to reduce the amount of Hb A to below that of Hb S, i.e. exactly the opposite of the ratio observed in sickle-cell carriers. As it was known that the sickle-cell mutation occurs in the β globin gene, it could be inferred that the action of the thalassaemia gene was to reduce the amount of β globin production from the normal allele. Indeed, from the few family studies available in 1960 there was a hint that this form of thalassaemia might be an allele of the β globin gene. Another major observation, made in the mid-1950s, was the association of unusual tetramer haemoglobins, β4 (Hb H) and γ4 (Hb Bart's), with a thalassaemia phenotype. In 1959 Vernon Ingram and Tony Stretton proposed in a seminal article that there are two major classes of thalassaemia, α and β, just as there are two major types of structural haemoglobin variants. They extended the ideas of Linus Pauling and Harvey Itano, who had suggested that defective globin synthesis in thalassaemia might be due to 'silent' mutations of the β globin genes, and postulated that the defects might lie outside the structural gene, in the area of DNA in the connecting unit.
Work on the interactions of thalassaemia and haemoglobin variants in the late 1950s had moved the field to a considerably higher level of understanding than is apparent in the earlier papers of Pauling and Itano. In any case, in their paper Ingram and Stretton generously acknowledged the ideas of other workers, including Lehmann, Gerald, Neel and Ceppellini, which had allowed them to develop their conceptual framework of the general nature of thalassaemia. This interpretation of events, and the input of scientists from many different disciplines into these concepts, is supported by the published discussions of several conferences on haemoglobin held in the late 1950s.

Historical Review. Towards Molecular Medicine; Reminiscences of the Haemoglobin Field. D. J. Weatherall, Weatherall Institute of Molecular Medicine, University of Oxford. Brit J Haem 2001; 115: 729-738.

The Emerging Understanding of Sickle Cell Disease

The first indisputable case of sickle cell disease in the literature was described in a dental student studying in Chicago between 1904 and 1907 (Herrick, 1910). Coming from the north of the island of Grenada in the eastern Caribbean, he was first admitted to the Presbyterian Hospital, Chicago, in late December 1904, and a blood test showed the features characteristic of homozygous sickle cell (SS) disease. It was a happy coincidence that he was under the care of Dr James Herrick (Fig 1) and his intern Dr Ernest Irons, because both had an interest in laboratory investigation and Herrick had previously presented a paper on the value of blood examination in reaching a diagnosis (Herrick, 1904-05). The resulting blood test report by Dr Irons described and contained drawings of the abnormal red cells (Fig 2), and photomicrographs showed irreversibly sickled cells.

People with positive sickle tests were divided into asymptomatic cases, 'latent sicklers', and those with features of the disease, 'active sicklers'; it was Dr Lemuel Diggs of Memphis who first clearly distinguished the symptomatic cases, called sickle cell anaemia, from the latent asymptomatic cases, which were termed the sickle cell trait (Diggs et al, 1933).

Prospective data collection in 29 cases of the disease showed sickling in all 42 parents tested (Neel, 1949), providing strong support for the theory of homozygous inheritance. A Colonial Medical Officer working in Northern Rhodesia (Beet, 1949) reached similar conclusions at the same time with a study of one large family (the Kapokoso-Chuni pedigree). The implication that sickle cell anaemia should occur in all communities in which the sickle cell trait was common, and that its frequency would be determined by the prevalence of the trait, did not appear to fit the observations from Africa. Despite a sickle cell trait prevalence of 27% in Angola, Texeira (1944) noted the active form of the disease to be 'extremely rare', and similar observations were made from East Africa. Lehmann and Raper (1949, 1956) found a positive sickling test in 45% of one community, from which homozygous inheritance would have predicted that nearly 10% of children had SS disease, yet not a single case was found. The discrepancy led to a hypothesis that some factor inherited from non-black ancestors in America might be necessary for expression of the disease (Raper, 1950).
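The size of the predicted discrepancy follows from simple Hardy-Weinberg arithmetic (a back-of-envelope reconstruction, not a calculation given in the original papers): if the sickling test identifies AS heterozygotes, a 45% trait frequency fixes the HbS allele frequency, and squaring it gives the expected proportion of SS births.

```latex
% Let q be the HbS allele frequency; the heterozygote (trait) frequency is 2q(1-q).
2q(1-q) = 0.45
\;\Rightarrow\; q^2 - q + 0.225 = 0
\;\Rightarrow\; q = \frac{1 - \sqrt{1 - 0.9}}{2} \approx 0.34
% Expected frequency of SS (sickle cell anaemia) among births:
q^2 \approx (0.34)^2 \approx 0.12
```

An expected SS frequency of roughly one birth in ten is what made the complete absence of cases in the surveyed community so striking.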

The explanation for this apparent discrepancy gradually emerged. Working with the Jaluo tribe in Kenya, Foy et al (1951) found five cases of sickle cell anaemia among very young children and suggested that cases might be dying before the ages sampled in surveys. A similar hypothesis was advanced by Jelliffe (1952) and was supported by data from the then Belgian Congo (Lambotte-Legrand & Lambotte-Legrand, 1951, 1952; Vandepitte, 1952). Although most cases were consistent with the concept of homozygous inheritance, exceptions continued to occur. Patients with a non-sickling parent of Mediterranean ancestry were later recognized to have sickle cell-β thalassaemia (Powell et al, 1950; Silvestroni & Bianco, 1952; Sturgeon et al, 1952; Neel et al, 1953a), a condition also widespread in African and Indian subjects that presents a variable syndrome depending on the molecular basis of the β thalassaemia mutation and the amount of HbA produced.

Phenotypically, there are two major groups in subjects of African origin: sickle cell-β+ thalassaemia, manifesting 20-30% HbA, with mutations at −29 (A→G) or −88 (C→T); and sickle cell-β0 thalassaemia, with no HbA and mutations at IVS2-849 (A→G) or IVS2-1 (G→A). In Indian subjects, a more severe β thalassaemia mutation, IVS1-5 (G→C), results in a sickle cell-β+ thalassaemia condition with 3-5% HbA and a relatively severe clinical course.

Other double heterozygote conditions causing sickle cell disease include sickle cell-haemoglobin C (SC) disease (Kaplan et al, 1951; Neel et al, 1953b), sickle cell-haemoglobin O Arab (Ramot et al, 1960), sickle cell-haemoglobin Lepore Boston (Stamatoyannopoulos & Fessas, 1963) and sickle cell-haemoglobin D Punjab (Cooke & Mack, 1934). The latter condition was first described in siblings in 1934; the family was reinvestigated for confirmation of HbD (Itano, 1951), the clinical features were reported (Sturgeon et al, 1955) and the variant was finally identified as HbD Punjab (Babin et al, 1964), representing a remarkable example of longitudinal observation and investigation in the same family over 30 years.

The maintenance of high frequencies of the sickle cell trait, in the face of almost obligatory losses of homozygotes in Equatorial Africa, implied either a very high frequency of HbS arising by fresh mutation or that the sickle cell trait conveyed a survival advantage in the African environment. There followed a remarkable period in the 1950s when three prominent scientists were each addressing this problem in East Africa: Dr Alan Raper and Dr Hermann Lehmann in Uganda, and Dr Anthony Allison in Kenya. It was quickly calculated that mutation rates were far too low to balance the loss of HbS genes from deaths of homozygotes (Allison, 1954a). An increased fertility of heterozygotes was proposed (Foy et al, 1954; Allison, 1956a) but never convincingly demonstrated. Raper (1949) was the first to suggest that the sickle cell trait might have a survival advantage against some adverse condition in the tropics, and Mackey & Vivarelli (1952) suggested that this factor might be malaria. The close geographical association between the distributions of malaria and the sickle cell gene supported this concept (Allison, 1954b) and led to an exciting period in the history of research in sickle cell disease.

The first observations on malaria and the sickle cell trait were from Northern Rhodesia, where Beet (1946, 1947) noted that malarial parasites were less frequent in blood films from subjects with the sickle cell trait. Allison (1954c) drew attention to this association, concluding that persons with the sickle cell trait developed malaria less frequently and less severely than those without the trait. This communication marked the beginning of a considerable controversy. Two studies failed to document differences in parasite densities between 'sicklers' and 'non-sicklers' (Moore et al, 1954; Archibald & Bruce-Chwatt, 1955), and Beutler et al (1955) were unable to reproduce the inoculation experiments of Allison (1954c). Raper (1955) speculated that some feature of Allison's observations had accentuated a difference of lesser magnitude and postulated that the sickle cell trait might inhibit the establishment of malaria in non-immune subjects. The conflicting results in these and other studies appear to have occurred because the protective effect of the sickle cell trait was overshadowed by the role of acquired immunity. Examination of young children before the development of acquired immunity confirmed both lower parasite rates and densities in children with the sickle cell trait (Colbourne & Edington, 1956; Edington & Laing, 1957; Gilles et al, 1967), and it is now generally accepted that the sickle cell trait confers some protection against falciparum malaria during a critical period of early childhood between the loss of passively acquired immunity and the development of active immunity (Allison, 1957; Rucknagel & Neel, 1961; Motulsky, 1964).
The mechanism of such an effect is still debated, although possible factors include selective sickling of parasitized red cells (Miller et al, 1956; Luzzatto et al, 1970) resulting in their more effective removal by the reticulo-endothelial system, inhibition of parasite growth by the greater potassium loss and low pH of sickled red cells (Friedman et al, 1979), and greater endothelial adherence of parasitized red cells (Kaul et al, 1994).

The occurrence of the sickle cell mutation and the survival advantage conferred by malaria together determine the primary distribution of the sickle cell gene. Equatorial Africa is highly malarial and the sickle cell mutation appears to have arisen independently on at least three and probably four separate occasions in the African continent, and the mutations were subsequently named after the areas where they were first described and designated the Senegal, Benin, Bantu and Cameroon haplotypes of the disease (Kulozik et al, 1986; Chebloune et al, 1988; Lapoumeroulie et al, 1992). The disease seen in North and South America, the Caribbean and the UK is predominantly of African origin and mostly of the Benin haplotype, although the Bantu is proportionately more frequent in Brazil (Zago et al, 1992). It is therefore easy to understand the common misconception held in these areas that the disease is of African origin.

However, the sickle cell gene is widespread around the Mediterranean, occurring in Sicily, southern Italy, northern Greece and the south coast of Turkey, although these genes are all of the Benin haplotype and so, ultimately, of African origin. In the Eastern Province of Saudi Arabia and in central India, there is a separate, independent occurrence of the HbS gene, the Asian haplotype. The Shiite population of the Eastern Province traditionally marry first cousins, tending to increase the prevalence of SS disease above that expected from the gene frequency (Al-Awamy et al, 1984). Furthermore, extensive surveys performed by the Anthropological Survey of India estimate an average sickle cell trait frequency of 15% across the states of Orissa, Madhya Pradesh and Maharashtra which, with an estimated population of 300 million people, implies that there may be more cases of sickle cell disease born in India than in Africa. The Asian haplotype of sickle cell disease is generally associated with very high frequencies of α thalassaemia and high levels of fetal haemoglobin, both factors believed to ameliorate the severity of the disease.

The promotion of sickling by low oxygen tension and acid conditions was first recognized by Hahn & Gillespie (1927) and further investigated by others (Lange et al, 1951; Allison, 1956b; Harris et al, 1956). The morphological and some functional characteristics of irreversibly sickled cells were described (Diggs & Bibb, 1939; Shen et al, 1949), but elucidation of the essential features of the polymerization of deoxygenated HbS molecules had to await the development of electron microscopy (Murayama, 1966; Dobler & Bertles, 1968; Bertles & Dobler, 1969; White & Heagan, 1970) and X-ray diffraction (Perutz & Mitchison, 1950; Perutz et al, 1951). The early observations on the induction of sickling by hypoxia led to the first diagnostic tests, utilizing sealed chambers in which oxygen was removed by white cells (Emmel, 1917), reducing agents such as sodium metabisulphite (Daland & Castle, 1948) or bacteria such as Escherichia coli (Raper, 1969). These slide sickling tests are very reliable with careful sealing and the use of positive controls, but require a microscope and some expertise in its use. An alternative method of detecting HbS utilizes its relative insolubility in high-molarity phosphate buffers (Huntsman et al, 1970), known as the solubility test. Both the slide sickle test and the solubility test detect the presence of HbS, but fail to make the vital distinction between the sickle cell trait and forms of sickle cell disease. This distinction requires haemoglobin electrophoresis, which detects the abnormal mobility of HbS, HbC and many other abnormal haemoglobins within an electric field.

The contributions of several workers on the determinants of sickling (Daland & Castle, 1948), the birefringence of deoxygenated sickled cells (Sherman, 1940), and the lesser degree of sickling in very young children, which implied that sickling was a feature of adult haemoglobin (Watson, 1948), led Pauling to perform Tiselius moving-boundary electrophoresis on haemoglobin solutions from subjects with sickle cell anaemia and the sickle cell trait. The demonstration of electrophoretic, and hence implied chemical, differences between normal, sickle cell trait and sickle cell disease haemoglobins led to the proposal that this was a molecular disease (Pauling et al, 1949). The chance encounter between Castle and Pauling, who shared a train compartment returning from a meeting in Denver in 1945, its background and implications, has passed into the folklore of medical research (Conley, 1980; Feldman & Tauber, 1997).

The nature of this difference was soon elucidated. The haem groups appeared identical, suggesting that the difference resided in the globin, but early chemical analyses revealed no distinctive differences (Schroeder et al, 1950; Huisman et al, 1955). Analyses of terminal amino acids also failed to reveal differences, although an excess of valine in HbS was noted but considered an experimental error (Havinga, 1953). The development of more sensitive methods of fingerprinting, combining high-voltage electrophoresis and chromatography, allowed the identification of the essential difference between HbA and HbS. This method enabled the separation of constituent peptides and demonstrated that a peptide in HbS was more positively charged than in HbA (Ingram, 1956). This peptide was found to contain less glutamic acid and more valine, suggesting that valine had replaced glutamic acid (Ingram, 1957). The sequence of this peptide was shown to be Val-His-Leu-Thr-Pro-Val-Glu-Lys in HbS instead of the Val-His-Leu-Thr-Pro-Glu-Glu-Lys in HbA (Hunt & Ingram, 1958), a sequence which was subsequently identified as the amino-terminus of the β chain (Hunt & Ingram, 1959). This amino acid substitution was consistent with the genetic code and was subsequently found to be attributable to the nucleotide change from GAG to GTG (Marotta et al, 1977).

Haemolysis and anaemia. The presence of anaemia and jaundice in the first four cases suggested accelerated haemolysis, which was supported by elevated reticulocyte counts (Sydenstricker et al, 1923) and expansion of the bone marrow (Sydenstricker et al, 1923; Graham, 1924). The bone changes of medullary expansion and cortical thinning were noted in early radiological reports (Vogt & Diamond, 1930; LeWald, 1932; Grinnan, 1935). Drawing on a comparison of sickle cell disease and hereditary spherocytosis, Sydenstricker (1924) introduced the term 'haemolytic crisis' that has persisted in the literature to this day, despite the lack of evidence for such an entity in sickle cell disease. The increased requirement for folic acid, and the megaloblastic change consequent upon deficiency, was not noted until much later (Zuelzer & Rutzky, 1953; Jonsson et al, 1959; MacIver & Went, 1960).

The haemoglobin level in SS disease of African origin is typically between 6 and 9 g/dl and is well tolerated, partly because of a marked rightward shift in the oxygen dissociation curve (Scriver & Waugh, 1930; Seakins et al, 1973), so that HbS within the red cell behaves with a low oxygen affinity. This explains why patients at their steady-state haemoglobin levels rarely show classic symptoms of anaemia and fail to benefit clinically from blood transfusions intended to improve oxygen delivery.

Graham R. Serjeant
Sickle Cell Trust, Kingston, Jamaica
Brit J Haem 2001; 112: 3-18

The Immune Haemolytic Anaemias

The growth in knowledge of the scientific basis of haemolytic anaemias, which have been a main interest of the author, has been remarkable, as have consequent advances in the practice of medicine since the mid-1930s. At that time, the cause and mechanism of important disorders such as the acquired antibody-determined (immune) haemolytic anaemias, haemolytic disease of the newborn, hereditary spherocytosis and paroxysmal nocturnal haemoglobinuria were unknown or but partially understood.

According to Crosby (1952), William Hunter of London, in an article on pernicious anaemia published in 1888, was the first to use the term ‘haemolytic’ to denote an anaemia caused by excessive blood destruction. By the turn of the century, the term was being widely used in clinical literature. Peyton Rous published a comprehensive review, ‘Destruction of the red blood corpuscles in health and disease’ (Rous, 1923), and the generally held view in the early 1930s was that about one-fifteenth of the erythrocyte mass was destroyed daily. Rous was aware of the pioneer work of Winifred Ashby (1919), who, by following the survival of serologically distinct but compatible transfused erythrocytes, had found that normal erythrocytes might live for up to 100 d in the recipients’ circulation. Subsequent work using radioactive chromium (51Cr) as an erythrocyte label showed that Ashby’s data and conclusions were in fact correct, i.e. that normal erythrocytes in health circulate in the peripheral blood for approximately 110 d. Erythrocyte labelling with 51Cr had a further advantage over the Ashby method: in addition to enabling the life-span of the patients’ erythrocytes in the circulation to be assessed, surface counting could detect and measure the accumulation of radioactivity in the spleen and liver, and thereby assess those organs’ role in haemolysis.

In the first decade of the twentieth century, Widal et al (1908a) and Le Gendre & Brulea (1909) reported that autohaemagglutination was a striking finding in some cases of ictère hémolytique acquis, while Chauffard & Troisier (1908) and Chauffard & Vincent (1909) described the presence of haemolysins in the serum of patients suffering from intense haemolysis. The conclusion was that abnormal immune processes, i.e. the development of auto-antibodies damaging the patients’ own erythrocytes, might play a part in the genesis of some cases of acquired haemolytic anaemia. This was antedated by the classic observations of Donath & Landsteiner (1904) and Eason (1906) on the mechanism of haemolysis in paroxysmal cold haemoglobinuria.

That blood might auto-agglutinate when chilled had been described by Landsteiner (1903), and that an unusual degree of the phenomenon might complicate some types of respiratory disease was reported by Clough & Richter (1918) and later by Wheeler et al (1939). A few years later, Peterson et al (1943) and Horstmann & Tatlock (1943) reported that cold auto-agglutinins at high titres were frequently found in the serum of patients who had suffered from the then so-called primary atypical pneumonia.

Stats & Wasserman’s (1943) review on cold haemagglutination was a valuable contribution to contemporary knowledge. They listed in a table as many as 94 references to papers published between 1890 and 1943 in which cold haemagglutination had been described. In 32 of the papers the patients referred to had suffered from increased haemolysis.

Recognition that cold auto-antibodies played an important role in the pathogenesis of some cases of haemolytic anaemia led to the concept that auto-immune haemolytic anaemia (AIHA) might usefully be classified into warm-antibody or cold-antibody types, according to whether the patient is forming (warm) antibodies which react (perhaps optimally) at body temperature or (cold) antibodies which react strongly at low temperatures (e.g. 4°C) but progressively less well as the temperature is raised and are perhaps inactive at 37°C. The clinical syndrome suffered by the patient would depend not only on the amount of antibody produced but also on its temperature requirement. Another important advance in understanding has been the realization that both types of AIHA could develop in association with a wide range of underlying disorders (secondary AIHA) as well as ‘idiopathically’, i.e. for no obvious cause (primary AIHA). The author’s own experience was summarized in a review (Dacie & Worlledge, 1969): 99 out of 210 cases of warm AIHA were judged to be secondary, as were 39 out of 85 cases of cold AIHA. Petz & Garratty (1980) summarized the data from six centres: 55% of a total of 656 cases had been reported as secondary. They listed the disorders with which warm-antibody AIHA had been associated as chronic lymphocytic leukaemia, Hodgkin’s disease, non-Hodgkin’s lymphomas, thymomas, multiple myeloma, Waldenström’s macroglobulinaemia, systemic lupus erythematosus, scleroderma, rheumatoid arthritis, infectious disease/childhood viral disorders, hypogammaglobulinaemia, dysglobulinaemias, other immune deficiency syndromes, and ulcerative colitis.

Conley (1981), in an interesting review of warm-antibody AIHA patients seen at the Johns Hopkins Hospital, emphasized how important it was to carry out a careful enquiry into the patient’s past history and also to undertake a prolonged follow-up. He stated that a retrospective review of 33 patients whose illnesses had in the past been designated ‘idiopathic’ revealed an associated immunologically related disorder in 19 of them. An additional three patients had developed a lymphoma 2-10 years after they had developed AIHA. As already mentioned, warm-antibody AIHA is now known to complicate a wide range of underlying diseases, particularly malignant lymphoproliferative disorders, other auto-immune disorders and immune deficiency syndromes. What proportion of patients suffering from a lymphoproliferative disorder develop AIHA is an interesting question. Duehrsen et al (1987) stated that this had occurred in 12 out of 637 patients. Early data on the incidence of a positive DAT in SLE were provided by Harvey et al (1954): in six out of 34 patients tested the DAT had been positive. Later, Mongan et al (1967), who had studied a large number of patients suffering from a variety of connective tissue disorders, reported that the DAT had been positive in 15 out of 23 patients with SLE, none of whom, however, had suffered from overt haemolytic anaemia. It has also been realized since the 1960s that warm-antibody AIHA may develop in patients suffering from a variety of immune deficiency syndromes, both congenital and acquired.

It was in the mid-1960s that it was realized that, in a significant proportion of patients thought to have ‘idiopathic’ warm-antibody AIHA, the development of the causal auto-antibodies had been triggered in some way by a drug the patient was taking. The first drug implicated was the antihypertensive α-methyldopa (Aldomet) (Carstairs et al, 1966a,b). Following the finding that treating hypertensive patients with α-methyldopa led to the formation of anti-erythrocyte auto-antibodies in a significant percentage of patients, renewed interest was taken in the possibility that other drugs might have the same effect. Two main hypotheses have been advanced to explain how certain drugs in some patients appear to cause the development of anti-erythrocyte auto-antibodies. One hypothesis was that the drug or its metabolites act on the immune system so as to impair immune tolerance; the other was that the drug affects antigens at the erythrocyte surface in such a way that a normally active immune system responds by developing anti-erythrocyte antibodies. Clearly, too, the patient’s individuality must be an important factor, for only a proportion of patients receiving the same dosage of the offending drug for the same period of time develop a positive DAT, and only a small percentage develop overt AIHA.

An interesting development in the history of the immune haemolytic anaemias was the realization in the mid-1950s that, rather rarely, haemolysis was brought about by the patient developing antibodies that were directed against a drug the patient had been taking and that the erythrocytes were in some way secondarily involved. The first drug to be implicated was Fuadin (stibophen), which had been used to treat a patient with schistosomiasis (Harris, 1954, 1956). The patient’s serum contained an antibody that agglutinated his own or normal erythrocytes and/or sensitized them to agglutination by antiglobulin sera; however, this occurred only in the presence of the drug.

In the late 1940s, several accounts of patients with AIHA who had persistently low platelet counts were published, e.g. Fisher (1947) and Evans & Duane (1949), and it was suggested that the patients might have been forming autoantibodies directed against platelets. This concept was further developed by Evans et al (1951). Eight out of their 18 patients with AIHA were thrombocytopenic; four had clinically obvious purpura. Evans et al (1951) suggested that there exists ‘a spectrum-like relationship between acquired haemolytic anaemia and thrombocytopenic purpura’; also that ‘on the one hand, acquired haemolytic anaemia with sensitization of the red cells is often accompanied with thrombocytopenia, while, on the other hand, primary thrombocytopenic purpura is frequently accompanied with red cell sensitization with or without haemolytic anaemia’. Many further case reports of AIHA accompanied by severe thrombocytopenia have since been published.

There are two features in the blood film of a patient with an acquired haemolytic anaemia which indicate that he or she is suffering from AIHA; one is auto-agglutination, the other is erythrophagocytosis. Spherocytosis, although often present to a marked degree, is of course found in other types of haemolytic anaemia.

The pioneer French observations on auto-agglutination already referred to were generally overlooked until the late 1930s, and serological studies seem seldom to have been undertaken until the publication of Dameshek & Schwartz’s (1938b) report, in which they described the presence of ‘haemolysins’ in cases of acute, apparently acquired, haemolytic anaemia. Dameshek & Schwartz (1940) summarized contemporary knowledge in an extensive review. They concluded that it was not improbable that haemolysins of various types and ‘dosages’ were in fact responsible for many cases of human haemolytic anaemia, including congenital haemolytic anaemia, which they suggested might be caused by the ‘more or less continued action of an haemolysin’.

Six years were to pass before the concept that an abnormal immune mechanism played a decisive role in some cases of acquired haemolytic anaemia was clearly demonstrated by Boorman et al (1946), who reported that the erythrocytes of five patients with acquired acholuric jaundice had been agglutinated by an antiglobulin serum, i.e. that the newly described antiglobulin reaction or Coombs test (Coombs et al, 1945) was positive, while the test had been negative in 28 patients suffering from congenital acholuric jaundice. This work aroused great interest and was soon confirmed.

Until the 1950s, the auto-antibodies responsible for AIHA were generally concluded to be ‘non-specific’. According to Wiener et al (1953), ‘Red cell auto-antibodies react not only with the individual’s own red cells but also with the erythrocytes of all other human beings. The substances on the red blood cell envelope with which the auto-antibodies combine are agglutinogens like the ABO, MN and Rh-Hr systems, except that, in the former case, the blood factors with which the auto-antibodies react are not type specific but are shared by all human beings.’ They suggested that the auto-antibodies might be directed to the ‘nucleus of the Rh-Hr substance’. Earlier work had, however, indicated that the sensitivity of normal group-compatible erythrocytes to a patient’s auto-antibody might vary considerably (Denys & van den Broucke, 1947; Kuhns & Wagley, 1949). That auto-antibodies might have a clearly defined Rh specificity, e.g. anti-e, was described by Race & Sanger (1954) in the second edition of their book. Referring to Wiener et al (1953), they wrote: ‘This beautifully clear investigation made the present authors realize that a curious result obtained by one of them (Ruth Sanger) in 1953 in Australia had after all been true; the serum of a man who had died of a haemolytic anaemia 3000 miles away contained anti-e; his cells were clearly CDe/cde’. A similar finding, i.e. an auto-anti-e, was described by Weiner et al (1953).

A further development in the unravelling of a complicated story was the realization that some of the antibodies which appeared to be specific were reacting with more basic antigens, although showing a preference for specific antigens, i.e. some specific auto-antibodies appeared to be less specific than their allo-antibody counterparts. Moreover, some antibodies, reacting with specific antigens, have been shown to be partially or completely absorbable by antigen negative cells.

Many apparently ‘non-specific’ (anti-dl) antibodies have been shown to be not strictly non-specific but to react with antigens of very high frequency, e.g. to be anti-Wrb, anti-Ena, anti-LW or anti-U. Issitt et al (1980) listed six additional very common antigens that had been identified as targets for anti-dl auto-antibodies, i.e. Hr, Hro, Rh34, Rh29, Kpb and K13.

In relation to human acquired haemolytic anaemia, the discovery in the late 1940s and 1950s that many cases were apparently brought about by the development of damaging anti-erythrocyte antibodies led to intense interest and speculation into the why and how of auto-antibody formation. Of seminal importance at the time were the experiments and theoretical arguments of Burnet (Burnet & Fenner, 1949; Burnet, 1957, 1959, 1972) and the studies on transplantation immunity of Medawar (Billingham et al, 1953; Medawar, 1961). Of particular interest, too, was the report by Bielschowsky et al (1959) of the occurrence of AIHA in an inbred strain of mice, the NZB/BL strain. Remarkably, by the time the mice were 9 months old the DAT was positive in almost every mouse. Burnet (1963) referred to the gift of the mice to the Walter and Eliza Hall Institute of Medical Research, Melbourne as ‘the finest gift the Institute has ever received’.

Exactly how is it that auto-antibodies reacting with an erythrocyte surface antigen result in the cell’s premature destruction? The possible role of auto-agglutination in bringing about haemolysis was emphasized by Castle and colleagues as the result of a series of studies carried out in the 1940s and 1950s. As summarized by Castle et al (1950), an antibody which appears to be incapable of causing lysis in vitro might bring about the following sequence of events in vivo: ‘(1) Red cell agglutination in the peripheral blood; (2) red cell sequestration and separation from plasma in tissue capillaries; (3) ischaemic injury of tissue cells with release of substances that increase the osmotic and mechanical fragilities of red cells locally; (4) local osmotic lysis of red cells or subsequent escape of mechanically fragile red cells into the blood stream where the traumatic motion of the circulation causes their destruction’.

We can expect, as the years pass, that more and more will be known as to the intricate mechanisms that bring about self-tolerance and the mechanisms underlying the occurrence of auto-immune disorders in general, including the role of infectious agents, drugs and genetic factors. Patients with immune haemolytic anaemias can be expected to benefit from the new knowledge; for in parallel with a better understanding as to how immune self-tolerance breaks down will hopefully be the development of more effective drugs and therapies aimed at controlling the breakdown.

The Immune Haemolytic Anaemias: A Century of Exciting Progress in Understanding.  Sir John Dacie, Emeritus Professor of Haematology.
Brit J Haem 2001; 114: 770-785.

A History of Pernicious Anaemia

This is a review of the ideas and observations that have led to our current understanding of pernicious anaemia (PA). PA is a megaloblastic anaemia (MA) due to atrophy of the mucosa of the body of the stomach which, in turn, is brought about by autoimmune factors.

A case report by Osler & Gardner (1877) in Montreal could be that of PA. This anaemic patient had numbness of the fingers, hands and forearms; the red blood cells were large; at autopsy the gastric mucosa appeared atrophic and the marrow had large numbers of erythroblasts with finely granular nuclei. The increased marrow cellularity had also been noted by Cohnheim (1876).

Ehrlich (1880) (Fig 1) distinguished between cells he termed megaloblasts present in the blood in PA from normoblasts present in anaemia as a result of blood loss. Not only were large red blood cells noted in PA, but irregular red cells, ? poikilocytes, were reported in wet blood preparations by Quincke (1877). Megaloblasts in the marrow during life were first noted by Zadek (1921). Hypersegmented neutrophils in peripheral blood in PA were described by Naegeli (1923) and came to be widely recognized after Cooke’s study (Cooke, 1927). The giant metamyelocytes in the marrow were described by Tempka & Braun (1932).

Fig 1. Paul Ehrlich (Wellcome Institute Library, London).

The association between PA and spinal cord lesions was described by Lichtheim (1887) and a full account was published by Russell et al (1900), who coined the term `subacute combined degeneration of the spinal cord’ (SCDC) although they were not convinced of its relation to PA. Arthur Hurst at Guy’s Hospital, London, confirmed the association of the neuropathy with PA and added, too, the association of loss of hydrochloric acid in the gastric juice (Hurst & Bell, 1922). Cabot (1908) found that numbness and tingling of the extremities were present in almost all of his 1200 patients and 10% had ataxia. William Hunter (1901) noted the prevalence of a sore tongue in PA, which was present in 40% of Cabot’s series.

In 1934, the Nobel Prize in medicine and physiology was awarded to Whipple, Minot and Murphy. Was there ever an award more deserved? They saved the lives of their patients and pointed the way forward for further research. What was there in liver that was lacking in patients with PA? The effect of liver in correcting the anaemia in Whipple’s iron-deficient dogs was due to its supplying iron, which is abundant in liver.

Liver given by mouth also provides Cbl and folic acid. But patients with PA cannot absorb Cbl normally, although some 1% of an oral dose can cross the intestinal mucosa by passive diffusion; this, presumably, is what happened when large amounts of liver were eaten. Beef liver contains about 110 µg of Cbl per 100 g and about 140 µg of folate per 100 g. Cbl is stable and generally resistant to heat; folate is labile unless preserved with reducing agents. The daily requirement of Cbl by man is 1-2 µg. The liver diet, if consumed, had enough of these haematinics to provide a response in most MAs.
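The arithmetic behind this presumption can be written out. This is an illustrative back-of-the-envelope sketch only; it assumes the 200-300 g of liver per day that the original Minot-Murphy regimen supplied, a figure quoted later in this account.

```python
# Figures from the text: liver Cbl content, passive diffusion in PA,
# and the daily requirement.
cbl_per_100g = 110.0            # µg of Cbl per 100 g of beef liver
passive_absorption = 0.01       # ~1% crosses the mucosa by diffusion in PA
daily_requirement = (1.0, 2.0)  # µg/d in man

for grams in (200, 300):
    absorbed = grams / 100.0 * cbl_per_100g * passive_absorption
    print(f"{grams} g liver/d -> ~{absorbed:.1f} µg Cbl absorbed")
# 200 g/d -> ~2.2 µg, 300 g/d -> ~3.3 µg: at or above the 1-2 µg requirement,
# which is why the liver diet could work even without intrinsic factor.
```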

George Richard Minot (Wellcome Institute Library, London).

The availability of liver extracts brought about interest in the nature of the haematological response. An optimal response required a peak rise of reticulocytes 5-7 d after the injection of liver extract, and the height of the peak was greatest in those with severe anaemia; the flood of reticulocytes was the result of a synchronous maturation of a vast number of megaloblasts into red cells. There is a steady rise in the red cell count to reach 3 x 10^12/l in the 3rd week (Minot & Castle, 1935). Many liver extracts did not have enough antianaemic factor to achieve this, and some assayed by the author had only 1-2 µg of Cbl. It took another 22 years for a pure antianaemic factor to be isolated, although, admittedly, the Second World War intervened; in 1948, an American group led by Karl Folkers and an English group led by E. Lester-Smith published, within weeks of each other, the isolation of a red crystalline substance termed vitamin B12, subsequently renamed cobalamin.

The structure of this red crystalline compound was studied by the nature of its degradation products and by X-ray crystallography. It soon became apparent that there was a cobalt atom at the heart of the structure and this heavy atom was of great aid to the crystallographers, so much so that, with additional information from the chemists, they were the first to come up with the complete structure. To quote Dorothy Hodgkin: `To be able to write down a chemical structure very largely from purely crystallographic evidence on the arrangement of atoms in space – and the chemical structure of a quite formidably large molecule at that – is for any crystallographer, something of a dream-like situation’. As Lester-Smith (1965) pointed out, it also required some 10 million calculations. In 1964, Dorothy Hodgkin was awarded the Nobel Prize for chemistry.

Barker et al (1958) published an account of the metabolism of glutamate by a Clostridium. The glutamate underwent an isomerization and an orange-coloured co-enzyme was involved that turned out to be Cbl with a deoxyadenosyl group attached to the cobalt.

This Cbl co-enzyme, deoxyadenosylCbl, is the major form of Cbl in tissues; it is also extremely sensitive to light, being changed rapidly to hydroxoCbl. DeoxyadenosylCbl is concerned with the metabolism of methylmalonic acid in man (Flavin & Ochoa, 1957). The other functional form of Cbl is methylCbl involved in conversion of homocysteine to methionine (Sakami & Welch, 1950). Both these pathways are impaired in PA in relapse.

Cbl consists of a ring of four pyrrole units very similar to that present in haem. These, however, have the cobalt atom in the centre instead of iron, and the ring is called the corrin nucleus. The cobalamins have a further structure, a base termed benzimidazole, set at right angles to the corrin nucleus, and this may have a link to the cobalt atom (the ‘base-on’ position).

By the time Cbl had been isolated from liver it was already known that it was also present in fermentation flasks growing bacteria such as Streptomyces species. Other organisms gave higher yields, so that kilogram quantities of pure Cbl were obtained; these sources have replaced liver in the production of Cbl. By adding a radioactive form of cobalt to the fermentation flasks instead of ordinary cobalt, labelled Cbl became available (Chaiet et al, 1950). The importance of labelled Cbl is that it made it possible to carry out Cbl absorption tests in patients, to design isotope dilution assays for serum Cbl, to design ways of assaying intrinsic factor (IF), to detect antibodies to IF and even to measure glomerular filtration rate, as free Cbl is excreted by the glomerulus without any reabsorption by the renal tubules.

William Castle at the Thorndike Memorial Laboratory, Boston City Hospital, devised experiments to explore the relationship between gastric juice, the anti-anaemic factor that Castle assumed, correctly, was also present in beef, and the response in PA. The question Castle asked was: ‘Was it possible that the stomach of the normal person could derive something from ordinary food that for him was equivalent to eating liver?’

The experiment in untreated patients with PA consisted of two consecutive periods of 10 d or more during which daily reticulocyte counts were made. During the first period of 10 d, the PA patient received 200 g of lean beef muscle (steak) each day. There was no reticulocyte response. During the second period, the contents of the stomach of a healthy man were recovered 1 h after the ingestion of 300 g of steak; about 100 g could not be recovered. The gastric contents were incubated for a few hours until liquefied and then given to the PA patient through a tube. This was done daily. On day 6 there was a rise in reticulocytes reaching a peak on day 10, followed by a rise in the red cell count. The response was similar to that obtained with large amounts of oral liver.

Thus, Castle concluded that a reaction was taking place between an unknown intrinsic factor (IF) in the gastric juice and an unknown extrinsic factor in beef muscle. Whereas Minot & Murphy (1926) found that 200-300 g of liver daily was needed to get a response in PA, 10 g liver was adequate when incubated with 10-20 ml normal gastric juice (Reiman & Fritsch, 1934). Castle’s extrinsic factor is the same as the anti-anaemic factor that is Cbl, and IF is needed for its absorption. Presumably the gastric juice in PA lacks IF.

The elegant studies of Hoedemaeker et al (1964) in Holland, using autoradiography of frozen sections of human stomach incubated with [57Co]-Cbl, showed that IF was produced in the gastric parietal cell. The binding of Cbl to the parietal cell was abolished by first incubating the section with a serum containing antibodies to IF. The parietal cell in man is thus the source of both hydrochloric acid and IF. The parietal cell is the only source of IF in man, as a total gastrectomy is invariably followed by a MA due to Cbl deficiency. IF is a glycoprotein with a molecular weight of 45 000.

Assay of protein fractions of serum after electrophoresis showed that endogenous Cbl is in the position of α-1 globulin. Chromatography of serum on Sephadex G-200 after addition of [57Co]-Cbl showed that Cbl was attached to two proteins, one eluting before the albumin, termed transcobalamin I (TCI), and the other after the albumin, termed transcobalamin II (TCII). Charles Hall showed that, when labelled Cbl given by mouth is absorbed, it first appears in the position of TCII and later in the position of TCI as well (Hall & Finkler, 1965). They concluded that TCII is the prime Cbl transport protein, carrying Cbl from the gut into the blood and then to the liver, from where it is redistributed by both new TCII and TCI. Congenital absence of a functional TCII causes a severe MA in the first few months of life owing to an inability to transport Cbl. Most of the Cbl in serum is on TCI because it has a relatively long half-life of 9-10 d, whereas the half-life of TCII is about 1.5 h. Thus, in assaying the serum Cbl level, it is mainly TCI-Cbl that is being assayed.
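The half-life argument can be made quantitative with one added assumption that is not in the source: that Cbl is loaded onto the two transport proteins at broadly similar rates, so the circulating amounts scale roughly with how long each protein-Cbl complex survives. A rough steady-state sketch:

```python
# Half-lives quoted in the text, used here as proxies for residence time.
tci_half_life_h = 9.5 * 24   # TCI: 9-10 d, taken as ~9.5 d, in hours
tcii_half_life_h = 1.5       # TCII: ~1.5 h

# Assumption (illustrative, not from the source): similar loading rates,
# so steady-state amounts are proportional to residence time.
ratio = tci_half_life_h / tcii_half_life_h
frac_tci = ratio / (ratio + 1)
print(f"TCI:TCII ~{ratio:.0f}:1 -> ~{frac_tci:.1%} of serum Cbl on TCI")
# ~152:1, i.e. ~99.3% on TCI: why serum Cbl assays mainly measure TCI-Cbl.
```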

With the availability of labelled Cbl, Cbl absorption tests began to be widely used in the 1950s. The commonest method was the urinary excretion test described by Schilling (1953). Here, an oral dose of radioactive Cbl is followed by an injection of 1000 µg of cyano-Cbl. The free cyano-Cbl is largely excreted into the urine over the next 24 h and carries with it about one third of the absorbed labelled Cbl.

Parietal cell antibodies (Taylor et al, 1962) are present in serum in 76-93% of different series of PA patients and in the serum of 36% of the relatives of PA patients. The antibody is present in sera from 32% of patients with myxoedema, 28% of patients with Graves’ disease, 20% of relatives of thyroid patients and 23% of patients with Addison’s disease. Parietal cell antibodies are found in 2-16% of controls, the high 16% figure being in elderly women. There is a higher frequency of PA in women, the female to male ratio being 1.7 to 1.0. The parietal cell antibody is probably important in the production of gastric atrophy. Thyroid antibodies are present in sera from 55% of PA patients, in sera from 50% of PA relatives, in 87% of sera from myxoedema patients, in 53% of sera in Graves’ disease and in 46% of relatives of patients with thyroid disease.

There is a high frequency of PA among those disorders that have antibodies against the target organ. Thus, among 286 patients with myxoedema, 9.0% also had PA (Chanarin, 1979), as compared with a frequency of PA of about 1 per 1000 (0.1%) in the general population. Of 102 consecutive patients with vitiligo, eight also had PA.
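The comparison amounts to a simple ratio. A sketch of the arithmetic, illustrative only and using the figures quoted above:

```python
# Frequencies quoted above: PA in myxoedema patients vs the general population.
pa_in_myxoedema = 0.09   # 9.0% of 286 myxoedema patients (Chanarin, 1979)
pa_general = 1 / 1000    # about 1 per 1000 in the general population

print(f"general population: {pa_general:.1%}")                  # 0.1%
print(f"enrichment: ~{pa_in_myxoedema / pa_general:.0f}-fold")  # ~90-fold
```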

Patients with acquired hypogammaglobulinaemia are unable to make humoral antibodies; nevertheless, one third have PA as well. This cannot be the result of the action of IF antibodies and must be because of specific cell-mediated immunity. Tai & McGuigan (1969) demonstrated lymphocyte transformation in the presence of IF in six out of 16 PA patients, and Chanarin & James (1974) found 10 out of 51 tests were positive.

Twenty-five patients with PA were tested for the presence of humoral IF antibody in serum and gastric juice and for cell-mediated immunity against IF. All but one gave positive results in one or more tests. It was concluded that these findings establish the autoimmune nature of PA and that the immunity is not merely an interesting byproduct.

Patients with PA treated with steroids show a reversal of the abnormal findings characterizing the disease. If they are still megaloblastic, the anaemia will respond in the first instance (Doig et al, 1957), but in the longer term Cbl neuropathy may be precipitated. The absorption of Cbl improves and may become ‘normal’ (Frost & Goldwein, 1958). There is a return of IF in the gastric juice (Kristensen & Friis, 1960) and a decline in the amount of IF antibody in serum (Taylor, 1959). In some patients there is a return of acid in the gastric juice. Gastric biopsy shows a return of parietal and chief cells (Ardeman & Chanarin, 1965b; Jeffries, 1965). All this is the result of suppression of cell-mediated immunity against the parietal cell and against IF. Withdrawal of steroids leads to a slow return to the status quo.

The author has dipped freely into the two volumes by the late M. M. Wintrobe. These are: Wintrobe, M.M. (1985) Hematology, the Blossoming of a Science. Lea & Febiger.

A History of Pernicious Anaemia
I. Chanarin, Richmond, Surrey
Brit J Haem 2000; 111: 407-415
History of Folic Acid

1928 Lucy Wills studied macrocytic anaemia in pregnancy in Bombay, India

1932 Janet Vaughan studied macrocytic anaemia associated with coeliac disease and idiopathic steatorrhoea; showed a response to Marmite

1941 Folic acid extracted from spinach; a growth factor for S. faecalis

1941 Pteroylglutamic acid (PGA) synthesized at American Cyanamid – pteridine ring, para-aminobenzoic acid, glutamic acid; PGA differed from the natural compound in some respects

1945 PGA resolved the macrocytic anaemia, but not the neuropathy

1979 Stokstad and associates at Berkeley obtained the first purified mammalian enzymes involved in synthesis

Folate antagonists inhibit tumour growth (Hitchings and Elion; Nobel Prize)

  • Misincorporation of uracil instead of thymine into DNA

Sidney Farber introduced aminopterin and later methotrexate for treatment of childhood lymphoblastic leukaemia

  • MTX inhibits dihydrofolate reductase (DHFR), the enzyme necessary for regeneration of THF

Wellcome introduced trimethoprim (antibacterial) and pyrimethamine (antimalarial)

Homocysteine was isolated by du Vigneaud, but its significance was not appreciated at the time

Finkelstein and Mudd demonstrated the importance of remethylation of homocysteine and worked out the transsulfuration pathway

  1. Function of methyl THF is remethylation of homocysteine
  2. Synthesized by MTHFR
Metabolism of folate

Allosterically regulated by S-adenosyl methionine (Stokstad)

MTHF also inhibits glycine methyltransferase, controlling excess SAM (transmethylation)

James D Finkelstein

  • Homocystinuria – mental retardation, skeletal malformation, thromboembolic disease; deficiency of cystathionine synthase (controls trans-sulfuration)
  • Neural tube defects (NTDs) in pregnancy
  • Hyperhomocysteinaemia and vascular disease

AV Hoffbrand and DG Weir
Brit J Haem 2001; 113: 579-589

The History of Haemophilia in the Royal Families of Europe Queen Victoria.

On 17 July 1998 a historic ceremony of mourning and commemoration took place in the ancestral church of the Peter and Paul Fortress in St Petersburg. President Boris Yeltsin, in a dramatic eleventh-hour change of heart, decided to represent his country when the bones of the last emperor, Tsar Nicholas II, and his family were laid to rest 80 years to the day after their assassination in Yekaterinburg (Binyon, 1998). He described it as ‘ironic that the Orthodox Church, for so long the bedrock of the people’s faith, should find it difficult to give this blessing the country had expected’. ‘I have studied the results of DNA testing carried out in England and abroad and am convinced that the remains are those of the Tsar and his family’ (The Times, 1998a). Unfortunately, politicians and the hierarchy of the Russian Orthodox Church had argued about what to do with the bones, previously stored in plastic bags in a provincial city mortuary. Politics, ecclesiastical intrigue, secular ambition and emotion had fuelled the debate. Yeltsin and the Church wanted to honour a man many consider to be a saint, but many of the older generation are opposed to the rehabilitation of a family which symbolizes the old autocracy.

Our story starts, almost inevitably, with Queen Victoria of England who had nine children by Albert, Prince of Saxe-Coburg-Gotha. Victoria was certainly an obligate carrier for haemophilia as over 20 individuals subsequently inherited the condition (Figs 1 and 2). Princess Alice (1843–78) was Victoria’s third child and second daughter. Having married the Duke of Hesse at an early age, Alice went on to have seven children, one of whom, Frederick (‘Frittie’) was a haemophiliac who died at the age of 3 following a fall from a window.

Prince Leopold with Sir William Jenner at Balmoral in 1877. (Hulton Deutsch Collection Ltd.)

Alexandra was the sixth child and was only 6 years old when her mother and youngest sister died. ‘Sunny’, as she became known, was a favourite of Queen Victoria, who as far as possible directed her upbringing from across the channel: Alexandra (Alix) was forced to eat her baked apples and rice pudding with the same regularity as her English cousins. Alix visited her older sister Elizabeth (Ella) on her marriage to Grand Duke Serge and met Tsarevich Nicholas for the first time: she was 12 and not impressed. Five years later they met again and Alix fell in love, but by now she had been confirmed in the Lutheran Church and religion became the solemn core of her life.

Victoria had other aspirations for Alix. She hoped that she would marry her grandson Albert Victor (the Duke of Clarence), eldest son of the Prince of Wales (later Edward VII). The Duke was an unimpressive young man who was somewhat deaf and had limited intellectual abilities. Had this arrangement proceeded, Alix’s haemophilia carrier status would have been introduced into the British Royal Family, and a British monarch with haemophilia might have become a reality; however, the Duke died in 1892.

Nicholas and Alexandra.

Alix and Nicholas were married in 1894, one week after the death of Nicholas’s father (Alexander III). In the same way that Victoria, with her personal aspirations of a marriage between Alix and the Duke of Clarence, had not considered the possibility of haemophilia, neither did the St Petersburg hierarchy consider a marriage to Nicholas undesirable. Haemophilia was already well recognized in Victoria’s descendants: her youngest son, Leopold, had already died, as had her grandson Frittie. The inheritance of haemophilia had been known for some time since its description by John Conrad Otto (Otto, 1803). However, it was as late as 1913 before the first royal marriage was declined because of the risk of haemophilia, when the Queen of Rumania decided against an association between her son, Crown Prince Ferdinand, and Olga, the eldest daughter of Nicholas and Alexandra. The Queen of Rumania was herself a granddaughter of Queen Victoria and therefore a potential haemophilia carrier!

Alix was received into the Russian Orthodox Church, taking the name of Alexandra Fedorova. The first duty of a Tsarina was to maintain the dynasty and produce a male heir, but between 1895 and 1901 Alix produced four princesses, Olga, Tatiana, Maria and Anastasia. Failure to produce a son made Alix increasingly neurotic and she had at least one false pregnancy. However, in early 1904 she was definitely pregnant.

For a month or so all seemed well with little Alexis, but it was then noticed that the Tsarevitch was bleeding excessively from the umbilicus (a relatively uncommon feature of haemophilia). At first the diagnosis was not admitted by the parents, but eventually the truth had to be faced, although even then only by the doctors and immediate family. Alix was grief-stricken: ‘she hardly knew a day’s happiness after she realized her boy’s fate’. As a newly identified haemophilia carrier she dwelt morbidly on the fact that she had transmitted the disease. These feelings are well known to mothers of haemophiliacs, but the situation was different in Russia in the early twentieth century. The people regarded any defect as divine intervention. The Tsar, as head of the Church and leader of the people, must be free of any physical defect, so the Tsarevich’s haemophilia was concealed. The family retreated into greater isolation and were increasingly dominated by the young heir’s affliction (Fig 3).

Up to a third of haemophiliac males do not have a family history of the condition. This is usually thought to be the result of a relatively high mutation rate occurring in either affected males or female carriers. None of Queen Victoria’s ancestors, for many generations, showed any evidence of haemophilia. Victoria was therefore either the victim of a new mutation, or the Duke of Kent was not her father. The mutation is unlikely to have been in her mother, Victoire, who had a son and daughter by her first marriage, and there is no sign of haemophilia in their numerous descendants.
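The ‘up to a third’ figure has a classical quantitative basis, Haldane’s mutation-selection balance argument, sketched here as an illustrative aside (not part of the original article):

```latex
% X-linked recessive disorder in which affected males historically had
% fitness close to zero (untreated haemophilia).
% Let \mu = mutation rate to the haemophilia allele per X chromosome per
% generation, and q = its equilibrium frequency. Males carry one third of
% all X chromosomes, and every allele copy in a male is expressed and lost,
% so a fraction q/3 of copies is lost per generation; at equilibrium this
% loss is balanced by new mutation:
\mu = \frac{q}{3} \quad\Longrightarrow\quad q = 3\mu
% An affected male's allele is a fresh mutation with probability \mu out of
% the total frequency q = 3\mu, so the expected fraction of isolated
% (no-family-history) cases is
\frac{\mu}{3\mu} = \frac{1}{3}
```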

Victoire was under considerable pressure to produce an heir. The year before Victoria was born, Princess Charlotte, the only close heir to the throne, had died and the Duke of Kent had somewhat reluctantly agreed to marry Victoire with the aim of producing an heir. The postulate that the Queen’s gardener had a limp has not been substantiated!

The Duke of Kent had no evidence of haemophilia (he was 51 when Victoria was born) but did inherit another condition from his father (George III): porphyria. While a young man in Gibraltar he suffered bilious attacks which were recognized as being similar to his father’s complaint.

Had Queen Victoria carried the gene for porphyria we might expect that she would have at least as many descendants with this condition as had haemophilia. Until recently only two possible cases of porphyria have been suggested amongst Victoria’s descendants: Kaiser Wilhelm’s sister and niece (MacAlpine & Hunter, 1969), but they could have inherited it from their Hohenzollern ancestor, Frederick the Great. A recent television programme (Secret History, 1998) claims to have identified two more cases in Victoria’s descendants, Princess Victoria, the Queen’s eldest daughter, and Prince William of Gloucester, nephew of George V. If these two cases are correct then they would tend to confirm that Victoria was indeed the daughter of the Duke of Kent, but the apparent lack of more cases in Victoria’s extended family is difficult to understand. The gene for acute intermittent porphyria has been isolated on chromosome 11. There is still plenty of scope for further genetic analysis on the European Royal Families!

We can only speculate as to the impact on European events over the last 150 years if the marriages within the Royal houses had been different. What is evident is the dramatic effect of haemophilia on the Royal Princes and their families.

Empress Alexandra at the Tsarevich’s bedside during a haemophiliac crisis in 1912. (Radio Times Hulton Picture Library.)

Richard F. Stevens
Royal Manchester Children’s Hospital
Brit J Haem 1999, 105, 25–32

‘The longer you can look back, the further you can look forward’: Winston Churchill in an address to The Royal College of Physicians, London, 1944. At the time Churchill was speaking, leukaemia was a fatal disease that had been identified 100 years before; it was still regarded as dreaded, sinister, and poorly understood.

Thomas Hodgkin chose a career in medicine and enrolled as a pupil at Guy’s Hospital in London. Being a Quaker, however, he could not enter the English universities of Oxford and Cambridge and decided to follow the medical courses at Edinburgh. At that time, Aristotelian and Hippocratic medicine greatly influenced British physicians. Hodgkin, still a medical student, wrote a paper ‘On the Uses of the Spleen’ in which he reported his beliefs on the purposes of the spleen: to regulate fluid volume, cleanse impurities from the body, and supply expandability to the portal system. The subject was a presage of the disease that bears his name.

Hodgkin interrupted his studies at Edinburgh to spend a year in Paris where he met many people who had a great influence in his life and future activities. Among them, were Laennec (Hodgkin played an important role in bringing the stethoscope to Great Britain); Baron von Humboldt who introduced Hodgkin to the field of anthropology; Baron Cuvier, a distinguished anatomist and palaeontologist; and Thomas A. Bowditch, whose expeditions to Africa had a great impact on Hodgkin’s future activities.

In 1825, Thomas Hodgkin returned to London to join the staff at Guy’s Hospital, and in 1826 he was made ‘Inspector of the Dead’ and ‘Curator of the Museum of Morbid Anatomy’. In developing the museum he had accumulated, by 1829, over 1600 specimens demonstrating the effects of disease. The correlation of clinical disease to pathological material was quite new: from analyses of pathological specimens Hodgkin was able to describe appendicitis with perforation and peritonitis, the local spread of cancer to draining lymph nodes (noting that the tumour had similar characteristics at both sites), and features of other diseases.

In his historic paper ‘On Some Morbid Appearances of the Absorbent Glands and Spleen’ (Hodgkin, 1832), he briefly described the clinical histories and gross postmortem findings on six patients from the experience at Guy’s Hospital and included another case sent to him in a detailed drawing by his friend Carswell (Fig 2). In the very first paragraph he wrote: ‘The morbid alterations of structure which I am about to describe are probably familiar to many practical morbid anatomists, since they can scarcely have failed to have fallen under their observation in the course of cadaveric inspection’. Hodgkin’s studies had convinced him that he was dealing with a primary disease of the absorbent (lymphatic) glands. ‘This enlargement of the glands appeared to be a primitive affection of those bodies, rather than the result of an irritation propagated to them from some ulcerated surface or other inflamed texture – Unless the word inflammation be allowed to have a more indefinite and loose meaning, this affection – can hardly be attributed to that cause’, he stated on pages 85 and 86 of his 1832 paper. Hodgkin also mentioned that the first reference that he could find to this or a similar disease was in fact by Malpighi in 1666.

Wilks (1865) described the disease in detail and, made aware by Bright that the first observations were done by Hodgkin, linked his name permanently to this new entity in a paper entitled ‘Cases of Enlargement of the Lymphatic Glands and Spleen (or Hodgkin’s Disease) with Remarks’ (Fig 3).

In 1837 Thomas Hodgkin was the outstanding candidate for the position of Assistant Physician at Guy’s Hospital in succession to Thomas Addison, who had been promoted to Physician. After 10 years spent as Inspector of the Dead, he had published a great deal, including a two-volume work entitled The Morbid Anatomy of Serous and Mucous Membranes.

Hodgkin, acting in his other capacity, had sent Benjamin Harrison a report on the terrible consequences to native Indians of monopoly trading and on the inhuman treatment they received from officials of the Hudson Bay Company, of which Harrison was the financier. When the opportunity to appoint an Assistant Physician occurred, Harrison, who exercised an autocratic rule over the hospital, presided at the appointment made by the General Court. Thomas Hodgkin did not get the job, and the next day he resigned all his appointments at Guy’s Hospital. Social medicine, medical problems associated with poverty, antislavery, and concern for underprivileged groups such as American Indians and Africans, as well as a strong sense of responsibility, defined his life after this separation.

Sternberg (1898) and Reed (1902) are generally credited with the first definitive and thorough descriptions of the histopathology of Hodgkin’s disease. Based on the findings observed in her case series, Dorothy Reed concluded ‘We believe then, from the descriptions in the literature and the findings in 8 cases examined, that Hodgkin’s disease has a peculiar and typical histological picture and could thus rightly be considered a histopathological disease entity’.

During the successive decades, pathologists began to describe a broader spectrum of histological features. However, it was Jackson and Parker who, in scientific papers and in their well-known book Hodgkin’s Disease and Allied Disorders (Jackson & Parker, 1947), presented the first serious effort at a histopathological classification. They assigned the name ‘Hodgkin’s granuloma’ to the main body of typical cases. A much more malignant variant, usually characterized by a great abundance of pleomorphic and anaplastic Reed-Sternberg cells and seen in a relatively small number of cases, was named ‘Hodgkin’s sarcoma’. A third, similarly infrequent, variant, characterized by an extremely slow clinical evolution, a relative paucity of Reed-Sternberg cells, and a great abundance of lymphocytes, was termed ‘Hodgkin’s paragranuloma’. It was only approximately 20 years later that Lukes & Butler (1966) reported a characteristic subtype of the heterogeneous ‘granuloma’ category, to which they assigned the name ‘nodular sclerosis’. They also proposed a new histopathological classification, still in use to date, with an appreciably greater prognostic relevance and usefulness than the previous Jackson-Parker classification.

The first human bone marrow transfusion was given to a patient with aplastic anemia in 1939. This patient received daily blood transfusions, and an attempt to raise her leukocyte and platelet counts was made using intravenous injection of bone marrow. After World War II and the use of the atomic bomb, researchers tried to find ways to restore bone marrow function in aplasia caused by radiation exposure. In the 1950s, it was proven in a mouse model that marrow aplasia secondary to radiation can be overcome by a syngeneic marrow graft. In 1956, Barnes and colleagues published their experiment on two groups of mice with acute leukemia: both groups were irradiated as anti-leukemic therapy and both were salvaged from marrow aplasia by bone marrow transplantation.

The topics of leukaemias and lymphomas will not be discussed further here.

The related references are:

Leukaemia – A Brief Historical Review from Ancient Times to 1950
British Journal of Haematology, 2001, 112, 282-292

The Story of Chronic Myeloid Leukaemia
British Journal of Haematology, 2000, 110, 2-11

Historical Review of Lymphomas
British Journal of Haematology 2000, 109, 466-476

Historical Review of Hodgkin’s Disease
British Journal of Haematology, 2000, 110, 504-511

Multiple Myeloma: an Odyssey of Discovery
British Journal of Haematology, 2000, 111, 1035-1044

The History of Blood Transfusion
British Journal of Haematology, 2000, 110, 758-767

Hematopoietic Stem Cell Transplantation—50 Years of Evolution and Future Perspectives. Henig I, Zuckerman T.
Rambam Maimonides Med J 2014;5 (4):e0028.
http://dx.doi.org/10.5041/RMMJ.10162

Landmarks in the history of blood transfusion.

1666 Richard Lower (Oxford) conducts experiments involving transfusion of blood from one animal to another

1667 Jean Denis (Paris) transfuses blood from animals to humans

1818 James Blundell (London) is credited with being the first person to transfuse blood from one human to another

1901 Karl Landsteiner (Vienna) discovers ABO blood groups. Awarded Nobel Prize for Medicine in 1930

1908 Alexis Carrel (New York) develops a surgical technique for transfusion, involving anastomosis of vein in the recipient with artery in the donor. Awarded Nobel Prize for Medicine in 1912

1915 Richard Lewisohn (New York) develops 0.2% sodium citrate as anticoagulant

1921 The first blood donor service in the world was established in London by Percy Oliver

1937 Blood bank established in a Chicago hospital by Bernard Fantus

1940 Landsteiner and Wiener (New York) identify Rhesus antigens in man

1940 Edwin Cohn (Boston) develops a method for fractionation of plasma proteins. The following year, albumin produced by this method was used for the first time to treat victims of the Japanese attack on Pearl Harbor

1945 Antiglobulin test devised by Coombs (Cambridge), which also facilitated identification of several other antigenic systems such as Kell (Coombs et al, 1946), Duffy (Cutbush et al, 1950) and Kidd (Cutbush et al, 1950)

1948 National Blood Transfusion Service (NBTS) established in the UK

1951 Edwin Cohn (Boston) and colleagues develop the first blood cell separator

1964 Judith Pool (Palo Alto, California) develops cryoprecipitate for the treatment of haemophilia

1966 Cyril Clarke (Liverpool) reports the use of anti-Rh antibody to prevent haemolytic disease of the newborn
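As an illustrative aside not drawn from the article: Landsteiner’s ABO groups and the Rhesus antigen together define the basic red-cell matching rule used ever since. A minimal sketch in Python, deliberately simplified, ignoring the minor antigen systems (Kell, Duffy, Kidd) named in the timeline above:

```python
# Simplified teaching sketch of red-cell ABO/Rh compatibility.
# Real cross-matching also considers minor antigen systems (Kell, Duffy,
# Kidd, ...); this encodes only the two systems discovered in 1901 and 1940.
def compatible(donor: str, recipient: str) -> bool:
    """True if red cells of blood type `donor` (e.g. 'O-', 'AB+')
    may be given to a recipient of type `recipient`."""
    d_abo, d_rh = donor[:-1], donor[-1]
    r_abo, r_rh = recipient[:-1], recipient[-1]
    # ABO: the recipient must already carry every antigen on the donor
    # cells; otherwise anti-A/anti-B antibodies cause haemolysis.
    antigens = lambda abo: set(abo) - {"O"}   # type 'O' carries no antigen
    if not antigens(d_abo) <= antigens(r_abo):
        return False
    # Rh: an Rh-negative recipient should not receive Rh-positive cells.
    return not (d_rh == "+" and r_rh == "-")
```

Under this rule `compatible('O-', t)` holds for every type `t`, reflecting O-negative’s role as the ‘universal donor’, while AB-positive recipients can accept cells of any type.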


Outline of Medical Discoveries between 1880 and 1980

Curator: Larry H Bernstein, MD, FCAP

This is the first of a two-part series tracing the developments in medical diagnosis and treatment. This part traces the scientific events of the nineteenth and early twentieth centuries that brought together physics, organic and physical chemistry, electronics, and computational biology.

Part I. Anatomy and Physiology

The Nobel Prize in Physiology or Medicine 1904 was awarded to Ivan Pavlov for work on digestion. The presentation speech refers to the groundbreaking work of Vesalius and Harvey, citing their passionate pursuit of knowledge, and credits the work of a young American physician, William Beaumont, who served as an army doctor on Michigan’s Mackinac Island and who, from 1822, observed gastric secretion through the gastric fistula of a wounded fur trader (see Jason Karlawish, Open Wound, University of Michigan Press, 2011). This was the basis for the work by Pavlov on dogs that extended our understanding of the relationship of the central nervous system to the digestive processes.

The Nobel Prize in Physiology or Medicine 1906 was awarded jointly to Camillo Golgi and Santiago Ramón y Cajal “in recognition of their work on the structure of the nervous system”. Golgi first opened the field of neuroanatomy with the silver staining method, and Cajal contributed equally to establishing the foundation for this research of great complexity.

The Nobel Prize in Physiology or Medicine 1909 was awarded to Emil Theodor Kocher for his work on the physiology, pathology, and surgery of the thyroid gland. It had already been established that enlargement of the thyroid compresses the trachea, and that complete removal has morbid effects. Kocher recognized as early as 1883 that surgical removal of the thyroid must leave behind a functioning portion of the gland.

This was later followed by the establishment of a great medical institution, the Mayo Clinic, by Dr. William Worrall Mayo, a frontier doctor, and his two sons, Dr. William J. Mayo and Dr. Charles H. Mayo.

The elder Dr. Mayo emigrated from his native England to the United States in 1846. He became a doctor in 1850. In 1863 he was appointed a surgeon for the enrollment board in southern Minnesota, to examine recruits for the Union Army, and settled in Rochester, Minn. His dedication to medicine became a family tradition when his sons, Drs. William James Mayo and Charles Horace Mayo, joined his practice in 1883 and 1888, respectively.

In 1883, a tornado swept through Rochester leaving in its wake many deaths and injuries. Temporary hospital quarters were set up in offices and hotels. Nuns from the Sisters of St. Francis, a teaching order, were recruited as nurses. The experience inspired Mother Alfred Moes to request that the Drs. Mayo join with the Sisters to build the first general hospital in southeastern Minnesota. The 27-bed Saint Mary’s Hospital opened in 1889 as a result of this partnership.

As the demand for their services increased, they asked other doctors and basic science researchers to join them in the world’s first private integrated group practice. In 1919, the Mayo brothers dissolved their partnership and turned the clinic’s name and assets, including the bulk of their life savings, over to a private, not-for-profit, charitable organization now known as Mayo Foundation. It is worth noting that the Mayo Clinic became a favored place to have thyroid surgery, as its location is in the “goiter belt”.

Patients discovered the advantages to a “pooled resource” of knowledge and skills among doctors. In fact, the group practice concept that the Mayo family originated has influenced the structure and function of medical practice throughout the world.

The Nobel Prize in Physiology or Medicine 1912 was awarded to Alexis Carrel “in recognition of his work on vascular suture and the transplantation of blood vessels and organs”. He demonstrated the technique used to suture together open vessels, and even to transplant whole organs from one animal to another with excellent results.

The Nobel Prize in Physiology or Medicine 1920 was awarded to August Krogh “for his discovery of the capillary motor regulating mechanism”. Harvey had shown in 1628 that the blood traverses the circulation, returning to the heart in one minute. Malpighi showed in 1661 that blood passes from artery to vein through capillaries. Krogh demonstrated by very elegant experiments that the quantity of gas that diffuses across the pulmonary alveoli is the same as the amount of gas released to the alveolar space. The importance of this work lies in the investigations it opened into the process by which the oxygen requirement of the tissues is satisfied.

The Nobel Prize in Physiology or Medicine 1922 was divided equally between Archibald Vivian Hill “for his discovery relating to the production of heat in the muscle” and Otto Fritz Meyerhof “for his discovery of the fixed relationship between the consumption of oxygen and the metabolism of lactic acid in the muscle”. One need not be a physiologist to recognize that muscular activity is essentially bound up with the development of heat, or even with combustion. AV Hill determined the time relationships of heat production in muscle contraction measured galvanometrically, and Otto Meyerhof determined the oxygen consumption in the production of lactic acid. The muscle is regarded as a machine that converts chemical energy to mechanical energy (tension) with the production of heat. The delayed (recovery) development of heat entirely fails to appear if the supply of oxygen to the muscle is cut off, while the development of heat during the actual twitch is independent of the presence of oxygen (consistent with Meyerhof’s glycolysis). The relaxation phase is consistent with oxygen uptake during recovery.

Fletcher and Hopkins had shown earlier that muscle not only forms, but also uses, lactic acid in the presence of oxygen. Meyerhof determined, by parallel measurement of the lactic acid metabolism and the oxygen consumption during the recovery of the muscle, that the oxygen consumption does not account for more than 1/3–1/4 of the lactic acid formed. When lactic acid is formed an equivalent amount of glycogen in muscle disappears, and when lactic acid disappears, the quantity of carbohydrate increases by the difference between the lactic acid formed and the quantity accounted for by oxygen consumption.

The Nobel Prize in Physiology or Medicine 1923 was awarded jointly to Frederick Grant Banting and John James Rickard Macleod “for the discovery of insulin”. In 1857, Claude Bernard discovered that the liver contains glycogen, which, converted to glucose, enters the blood stream (and thereby, the urine). Glycosuria became a starting point for the study of diabetes. It is of interest that he could not produce glycosuria by ligation of the pancreatic duct. But in 1889 von Mering and Minkowski performed an operation on dogs that removed the pancreas, resulting in glycosuria and creating a disease comparable to diabetes in humans. If part of the pancreas was left behind, it failed to produce diabetes. Brown-Séquard had called attention in the 1880s to ductless organs that are glands. These were endocrine glands secreting hormones. Langerhans had shown in 1869 that the pancreas has glands that have no secretion into the pancreatic ducts, and in the beginning of the 1890s Laguesse surmised that these glands were involved in diabetes mellitus. Schulze and Sobolev had shown that ligation of the duct resulted in atrophy of the pancreas sparing the islets. Frederick Banting postulated that trypsin degraded the hormone, and with Best and Collip, under Macleod’s guidance, Banting pursued his idea; the effective extract was obtained in 1921 and demonstrated in 1922.

Arch Anat Histol Embryol. 1993-1994;75:151-82.

[History of histology in Strasbourg].

Le Minor JM.

Since the cellular theory was formulated in 1839, the University of Strasbourg has held a pioneer place in histology. This new morphological science has had, since its origin, close relations with physiology, and from 1846 to 1871, an original histophysiological school was organized in Strasbourg. The microscope and the study of tissues were considered as a fundamental approach for the progress of biological and medical knowledge. After the German annexation of Alsace, the scientists from this school participated in the renewal of histology in Nancy, Montpellier, and Paris. In 1872, when the new German university was created, an anatomical institute regrouped all aspects of normal morphology: anatomy, histology, and embryology. This was the case until 1918. In 1919, when the Faculty of Medicine was reorganized after Alsace was restored to France, a specific chair and institute of histology were created. This was the beginning of a school of histophysiology which was internationally renowned in the rise of experimental endocrinology. Great discoveries followed one after another: folliculin in 1924 and demonstration of the duality of ovarian hormones, the prominent place of the anterior part of the hypophysis and the demonstration of prolactin in 1928, thyreostimulin in 1929, then study of the other stimulins. In 1946 a chair and institute of medical biology were created. In 1948, a service of electron microscopy was opened.
P. Bouin (1870-1962), M. Aron (1892-1974), J. Benoit (1896-1982), R. Courrier (1895-1986) and M. Klein (1905-1975) were among the famous scientists who worked in histology in Strasbourg in the period after the French restoration.

The Nobel Prize in Physiology or Medicine 1947

Bernardo Alberto Houssay

“for his discovery of the part played by the hormone of the anterior pituitary lobe in the metabolism of sugar”

He had already begun studying medicine and, in 1907, before completing his studies, he took up a post in the Department of Physiology. He began here his research on the hypophysis which resulted in his M.D.-thesis (1911), a thesis which earned him a University prize.

In 1919 he became Professor of Physiology in the Medical School at Buenos Aires University. He also organized the Institute of Physiology at the Medical School, making it a center with an international reputation. He remained Professor and Director of the Institute until 1943.  He made a lifelong study of the hypophysis and his most important discovery concerns the role of the anterior lobe of the hypophysis in carbohydrate metabolism and the onset of diabetes.

The Nobel Prize in Physiology or Medicine 1950

Edward Calvin Kendall, Tadeus Reichstein and Philip Showalter Hench

“for their discoveries relating to the hormones of the adrenal cortex, their structure and biological effects”

As late as 1854 the German anatomist Kölliker was able to claim, in a review of the subject, that although the function of the adrenals was still unknown, in certain respects great advances had been made. Two quite different parts were now distinguished: an outer part, a fairly firm cortex, and an inner, softer medulla. Kölliker classified the adrenal cortices as ductless glands, which we now call the endocrine organs.

Thomas Addison, the English doctor, observed a rare disease with a fatal course, which was characterized chiefly by anemia, general weakness and fatigue, disturbances in the digestive apparatus, enfeebled heart activity, and a peculiar dark pigmentation of the skin. He published a paper in 1855, suggesting that this morbid picture made its appearance in persons the greater part of whose adrenals was destroyed. Subsequent experiments in animals showed that removal of the adrenals led to speedy death, the symptoms recalling those known from Addison’s disease.

In 1894 Oliver and Schäfer proved that the injection of a watery extract from the adrenals had extremely pronounced effects. Within a few years adrenaline had been produced from the extract, its composition had been ascertained, and its artificial production accomplished. The more detailed analysis showed effects of the same kind as those resulting from increased activity of the so-called sympathetic nervous system, which innervates internal organs such as the heart and vessels, the intestinal canal, etc. Attempts to prevent, by means of adrenaline, the deficiency symptoms following the removal of the adrenals failed completely. The explanation of this was given when Biedl and others showed that it is the cortex which is of vital importance, not the medulla.

The isolation of the cortin proved to be a difficult task, calling for the combined efforts of a number of research workers. Particularly important contributions were made in this field by Wintersteiner and Pfiffner, and also by Edward Kendall at the Mayo Clinic in Rochester, and Tadeus Reichstein in Basel, and their co-workers. As early as 1934, Kendall and his group succeeded in preparing from cortex extract what was at first assumed to be pure cortin in crystalline form. They found that it contained carbon, hydrogen, and oxygen, and indicated its empirical formula. But that was only a beginning. There was, however, reason to suspect that the cortin was not homogeneous, as further experiments proved. In reality Kendall and his co-workers had produced a mixture of different substances closely related to one another, and their work represents the early steps in the crystallization of a whole series of cortin substances. There is at least one active cortical substance – the best known of them all, first named Compound E and now called cortisone or cortone – which was isolated at four different laboratories, among them Kendall’s and Reichstein’s.

As all the cortin substances are closely related to one another, Reichstein’s finding implies that, like the sex hormones, they belong to the large and important group of steroids. The D vitamins and the bile acids, like our most important heart remedies, the active substances in Digitalis leaves and Strophanthus seeds, are also intimately associated with the steroids.

The six definitely active cortical hormones are characterized, inter alia, by a double bond in the steroid skeleton; if this double bond disappears, inactive substances are obtained. They differ very inconsiderably from each other chemically. They are built up of 21 carbon atoms, but the number of oxygen atoms in the molecule is three, four, or five. The position of the additional oxygen atoms in the molecule was first established by Reichstein and Kendall, and thus a way was opened for semisynthetic production e.g. from the more easily obtainable bile acids or material from a certain species of Strophanthus. This is of particular importance, since the yield from the adrenals is very poor, at most about 1:1,000,000.

Thanks to the work of Kendall and his school, it has emerged that the comparatively inconsiderable dissimilarities in the matter of the structure of the cortical hormones are accompanied by material differences in respect of the effect. Thus some act especially strongly on the metabolism of sugar, others on the salt and fluid balances, and there are also several other differences. This was illustrated when Compound E was first tested. Pfiffner and Wintersteiner, like the Reichstein group, found that the substance had no, or extremely inconsiderable, life-prolonging effects on animals deprived of the adrenals. On the other hand, Ingle, Kendall’s coworker, observed that it stimulated the muscular work of such animals very strongly.

In April 1949, Hench, Kendall, Slocumb, and Polley published their experiences in respect of the dramatic effects of cortisone in cases of chronic rheumatoid arthritis. A rapid improvement set in, pains and tenderness in the joints abated or disappeared, mobility increased, so that patients who had previously been complete invalids could walk about freely, and their general condition was also favourably affected. Similar results were obtained with a preparation from the anterior lobe of the pituitary, the so-called ACTH (Adreno-Cortico-Tropic Hormone), which, as the name indicates, stimulates the adrenal cortex to increased activity.

The value of a discovery lies not only in the immediate practical results, but equally much in the fact that it points out new lines of research. This is strikingly illustrated by the research during the last few decades into the cortical hormones, which has already led to unexpected and important new results within widely different spheres.

Nobel Prize in Physiology or Medicine 1966

Charles Huggins

Endocrine-Induced Regression of Cancers

The net increment of mass of a cancer is a function of the interaction of the tumor and its soil. Self-control of cancers results from a highly advantageous competition of host with his tumor. There are multiple factors which restrain cancer – enzymatic, nutritional, immunologic, the genotype and others. Prominent among them is the endocrine status, both of tumor and host – the subjects of this discourse.

The second quarter of our century found the biological sciences much pre-occupied with two noble topics:

  • chemistry and physiology of steroids and
  • biochemistry of organo-phosphorus compounds.

The key to the puzzle of the steroid hormones in cancer was the isolation of crystalline estrone by Doisy et al.2 from extracts of urine of pregnant women. In the phosphorus field there were magnificent findings of hexose phosphates, nucleotides, coenzymes and high-energy phosphate intermediates. These wonderful discoveries provided the Zeitgeist for our work.

Through the portal of phosphorus metabolism we entered on a series of interconnected observations in steroid endocrinology. A program was not prepared in advance for this basic physiologic study. The work was fascinating and informative so that it provided its own momentum and served as an end in itself.

The prostatic cell does not die in the absence of testosterone, it merely shrivels. But the hormone-dependent cancer cell is entirely different. It grows in the presence of supporting hormones but it dies in their absence and for this reason it cannot participate in growth cycles.

A remarkable effect of testosterone is the promotion of growth of its target cells during complete deprival of food. Androstane derivatives conferred on the prostate of puppies a selective nutritional advantage during starvation of 3 weeks whereby abundant growth of this gland occurred while there was serious cell breakdown in most of the tissues of the body.

At first it was vexatious to encounter a dog with a prostatic tumor during a metabolic study but before long such dogs were sought. It was soon observed that orchiectomy or the administration of restricted amounts of phenolic estrogens caused a rapid shrinkage of canine prostatic tumors.

The experiments on canine neoplasia proved relevant to human prostate cancer; there had been no earlier reports indicating any relationship of hormones to this malignant growth.

Kutscher and Wolbergs9 discovered that acid phosphatase is rich in concentration in the prostate of adult human males. Gutman and Gutman10 found that many patients with metastatic prostate cancer have significant increases of acid phosphatase in their blood serum. Cancer of the prostate frequently metastasizes to bone.

Human prostate cancer which had metastasized to bone was studied at first. The activities of acid and alkaline phosphatases in the blood were measured concurrently at frequent intervals. The methods are reproducible and not costly in time or materials; both enzymes were measured in duplicate in a small quantity (0.5 ml) of serum. The level of acid phosphatase indicated activity of the disseminated cancer cells in all metastatic loci. The titer of alkaline phosphatase revealed the function of the osteoblasts as influenced by the presence of the prostatic cancer cells that were their near neighbors. By periodic measurement of the two enzymes one obtains a view of overall activity of the cancer and the reaction of non-malignant cells of the host to the presence of that cancer. Thereby the great but opposing influences of, respectively, the administration or deprival of androgenic hormones upon prostate cancer cells were revealed with precision and simplicity. Orchiectomy or the administration of phenolic estrogens resulted in regression of cancer of the human prostate whereas, in untreated cases, testosterone enhanced the rate of growth of the neoplasm.

The first indication that advanced cancer can be induced to regress was the beneficial effect of oöphorectomy on cancer of the breast of two women. This empirical observation17 of Beatson in 1896 was remarkable since it was made before the concept of hormones had been developed. The beneficial action of removal of ovaries was not understood until steroid hormones had been isolated 4 decades later.

But why does breast cancer thrive in folks who do not possess ovarian function – in men, old women, and females who have had oöphorectomy?

Farrow and Adair observed that benefits of great magnitude frequently follow orchiectomy in mammary cancer in the human male. Thereby, they established that testis function can sustain mammary cancer.

A half century after the classic observation of Beatson it was found out that adrenal function can maintain and promote growth of human mammary cancer. The adrenal factor supporting growth of cancer was identified when it was shown that bilateral adrenalectomy (with glucocorticoids as substitution therapy) can result in profound and prolonged regression of mammary carcinoma in men and women who do not possess gonadal function. In developing the idea of adrenalectomy for treatment of advanced cancer in man we were considerably influenced by the discovery of Woolley et al. that adrenals can evoke cancer of the breast in the mouse.

Mammary cancers induced in the male rat by aromatics were not influenced by orchiectomy and hypophysectomy; by definition, these neoplasms are hormone-independent. In contrast to male rat, most mammary cancers of men wither impressively after deprival of supporting hormones.

The hormone-responsiveness of established mammary cancers induced in female rat by aromatics or ionizing radiation is identical; it was a newly recognized property of experimental breast cancers. Prior to this finding, clinical study of patients with mammary cancer was the only material available for investigation of hormonal-restraint of neoplasms of the breast.

In female rat, many but far from all of the induced mammary cancers vanished after removal of ovaries or the pituitary. In our experiments hypophysectomy was the most efficient of all methods to cure rat’s mammary cancer.

Malignant cells which succumb to hormone-deprival, by definition, are hormone-dependent. The quality of hormone-dependence resides in the tumor cells whereas their growth is determined by the host’s endocrine status.

Both man and the animals can have some of their cancer cells which are hormone-dependent while other neoplastic cells in the same organism are not endocrine-responsive.

The cure of a cancer after hormone-deprival results from death of the cancer cells whereas their normal analogues in the same animal shrivel but survive. It is a basic proposition in endocrine-restraint of malignant disease that cancer cells can differ in a crucial way from ancestral normal cells in response to modification of the hormonal milieu intérieur of the body.

Cancer is not necessarily autonomous and intrinsically self-perpetuating. Its growth can be sustained and propagated by hormonal function in the host which is not unusual in kind or exaggerated in rate but which is operating at normal or even subnormal levels.

The control of cancer by endocrine methods can be described in three propositions:

  • Some types of cancer cells differ in a cardinal way from the cells from which they arose in their response to change in their hormonal environment.
  • Certain cancers are hormone-dependent and these cells die when supporting hormones are eliminated.
  • Certain cancers succumb when large amounts of hormones are administered.

The Nobel Prize in Physiology or Medicine 1971

Earl W. Sutherland, Jr.

“for his discoveries concerning the mechanisms of the action of hormones”

Part II. Vitamins

The Nobel Prize in Physiology or Medicine 1929

Christiaan Eijkman “for his discovery of the antineuritic vitamin”

Sir Frederick Gowland Hopkins “for his discovery of the growth-stimulating vitamins”

When the 20th century began, the prevailing thought about nutrition rested on the importance of energy requirements, as elucidated by Rubner, and by Benedict and others in the United States; this entailed the quantitative measurement of the food value of carbohydrates, fats, and proteins. But this view misconceived the process in its detail. Quantitative studies of energetics and of respiratory exchange were not sufficient to explain the problems that arise as a result of deficiencies of micronutrients in the food intake. The complexity of these nutritional needs as we now view them is indeed astonishing.

There is a need for indispensable organic substances specific in nature and function of which the quantitative supply is so small as to contribute little or nothing to the energy factor in nutrition. These substances, following the suggestion of Casimir Funk, we have agreed to call vitamins.

In 1881, Lunin, an associate of Bunge, noted that a diet of milk alone was not sufficient to sustain the life of mice, even when the caloric nutrients were adequate. At the time the missing factor could not be identified; studies of the inorganic nutrients did not answer the question. A decade later, Socin, also in Bunge’s group, concluded that the deficiency was in the quality of the protein. In 1905, Professor Pekelharing published an astonishing paper following on the work in Bunge’s laboratory: he noted that milk contains, in small quantities, a substance that he was unable to identify but that is essential for life. It is noteworthy that Pekelharing records prolonged endeavours towards the isolation of a vitamin.

Eijkman’s work came in the 1880s. He did not at first visualize beriberi clearly as a deficiency disease. The view that the cortical substance of rice supplied a need rather than neutralized a poison was soon after put forward by Grijns and ultimately accepted by Professor Eijkman himself. Thinking about nutritional requirements at the turn of the century was preoccupied with the methods of calorimetry, and the idea of “deficiency diseases” was obscured as a result. There was no concept of an indispensable portion of the food supply other than calories, proteins, and minerals until 1911-1912. Hopkins was convinced that the science of nutrition had to come to terms with an explanation for scurvy and rickets, and to do so he needed the new science of biochemistry, then developing at Cambridge.

In 1906-1907, he carried out studies of feeding rats casein, along the lines of Bunge’s experiments, and he found variability in the results with different casein preparations. He next washed the casein so that any soluble substance was extracted; the rats died, but if he added the extract back they grew. He also used butter, with results more favorable than casein, and lard, with unfavorable results. At the same time he was studying polyneuritis in birds, which took up much time. He knew that he had to extract the substance, but was unaware of its fat solubility in 1910. He published his work in 1912. Soon after the publication of his work, and during WWI, much research was done in the United States, by Osborne and Mendel at Yale and by McCollum at Johns Hopkins, and the vitamins were separated into “water soluble” and “fat soluble”.

The Nobel Prize in Physiology or Medicine 1937

Albert von Szent-Györgyi Nagyrápolt

“for his discoveries in connection with the biological combustion processes, with special reference to vitamin C and the catalysis of fumaric acid”

http://pharmaceuticalintelligence.com/2014/08/18/studies-of-respiration-lead-to-acetyl-coa/

Szent-Györgyi was a biochemist who worked with Otto Warburg and others, and had a special interest in muscle metabolism. He delineated a portion of what became the Krebs cycle (Krebs was also associated with Warburg): the step involving the interconversion of fumaric acid and succinate. He also purified vitamin C (ascorbic acid) from paprika in his native region of Hungary. He later turned his interest to cancer research, for which he was honored by the MD Anderson Cancer Center.

The Nobel Prize in Physiology or Medicine 1934

George Hoyt Whipple, George Richards Minot and William Parry Murphy

“for their discoveries concerning liver therapy in cases of anaemia”

The Nobel Prize in Physiology or Medicine 1943

Henrik Carl Peter Dam “for his discovery of vitamin K”

Edward Adelbert Doisy “for his discovery of the chemical nature of vitamin K”

To further his studies of the metabolism of sterols, Dam obtained a Rockefeller Fellowship and worked in Rudolf Schoenheimer’s laboratory in Freiburg, Germany, during 1932-1933, and later worked with P. Karrer, of Zurich, in 1935. He discovered vitamin K while studying the sterol metabolism of chicks in Copenhagen. When he returned to Denmark after WWII in 1946, Dam’s main research subjects were vitamin K, vitamin E, fats, and cholesterol.

Part III.  Microbiology and Plague

The Nobel Prize in Physiology or Medicine 1901

Emil Adolf von Behring

“for his work on serum therapy, especially its application against diphtheria, by which he has opened a new road in the domain of medical science and thereby placed in the hands of the physician a victorious weapon against illness and deaths”

The Nobel Prize in Physiology or Medicine 1902

Ronald Ross

“for his work on malaria, by which he has shown how it enters the organism and thereby has laid the foundation for successful research on this disease and methods of combating it”

The Nobel Prize in Physiology or Medicine 1905

Robert Koch

“for his investigations and discoveries in relation to tuberculosis”

The Nobel Prize in Physiology or Medicine 1928

Charles Jules Henri Nicolle

“for his work on typhus”

The Nobel Prize in Physiology or Medicine 1939

Gerhard Domagk

“for the discovery of the antibacterial effects of prontosil”

The Nobel Prize in Physiology or Medicine 1945

Sir Alexander Fleming, Ernst Boris Chain and Sir Howard Walter Florey

“for the discovery of penicillin and its curative effect in various infectious diseases”

The Nobel Prize in Physiology or Medicine 1951

Max Theiler

“for his discoveries concerning yellow fever and how to combat it”

The Nobel Prize in Physiology or Medicine 1952

Selman Abraham Waksman

“for his discovery of streptomycin, the first antibiotic effective against tuberculosis”

The Nobel Prize in Physiology or Medicine 1954

John Franklin Enders, Thomas Huckle Weller and Frederick Chapman Robbins

“for their discovery of the ability of poliomyelitis viruses to grow in cultures of various types of tissue”

The Nobel Prize in Physiology or Medicine 1976

Baruch S. Blumberg and D. Carleton Gajdusek

“for their discoveries concerning new mechanisms for the origin and dissemination of infectious diseases”

Part IV. Immunology

The Nobel Prize in Physiology or Medicine 1908

Ilya Ilyich Mechnikov and Paul Ehrlich

“in recognition of their work on immunity”

The Nobel Prize in Physiology or Medicine 1919

Jules Bordet

“for his discoveries relating to immunity”

The Nobel Prize in Physiology or Medicine 1930

Karl Landsteiner

“for his discovery of human blood groups”

In 1900, in the course of his serological studies, Landsteiner observed that when, under normal physiological conditions, the blood serum of one human was added to the blood of another, the red corpuscles in some cases coalesced into larger or smaller clusters. This observation was the starting-point of his discovery of the human blood groups. In the following year, 1901, Landsteiner published his discovery that human blood could be classified into three groups according to its different agglutinating properties. These agglutinating properties were identified more closely by two specific blood-cell structures, which can occur either singly or simultaneously in the same individual.

Landsteiner’s discovery of the blood groups was immediately confirmed, but it was a long time before anyone began to realize the great importance of the discovery. The first incentive to pay greater attention to it was provided by von Dungern and Hirszfeld when, in 1910, they published their investigations into the hereditary transmission of blood groups. Thereafter the blood groups became the subject of exhaustive studies, on a scale increasing year by year, in more or less all civilized countries. In order to avoid the detailed descriptions of the four blood groups and their corresponding cell structures that would otherwise be necessary in published research, short designations for the blood groups and their specific cell structures have been introduced. Thus, one of the two specific cell structures characterizing the agglutinating properties of human blood is designated by the letter A and the other by B, and accordingly we speak of «blood group A» and «blood group B». These two cell structures can also occur simultaneously in the same individual, and this structure, as well as the corresponding blood group, is described as AB.

The fourth blood-cell structure and the corresponding blood group is known as O, which is intended to indicate that people belonging to this group lack the specific blood characteristics typical of each of the other blood groups. Landsteiner had shown that under normal physiological conditions the blood serum will not agglutinate the erythrocytes of the same individual or those of other individuals with the same structure. Thus, the blood serum of people whose erythrocytes have group structure A will not agglutinate erythrocytes of this structure but it will agglutinate those of group structure B, and where the erythrocytes have group structure B the corresponding serum does not agglutinate these erythrocytes but it does agglutinate those with group structure A. Blood serum of persons whose erythrocytes have structures A as well as B, i.e. who have structure AB, does not agglutinate erythrocytes having structures A, B, or AB. Blood serum of persons belonging to blood group O agglutinates erythrocytes of persons belonging to any of the other groups.
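The agglutination pattern described above follows a simple rule, restated here in the modern antigen/antibody terms that grew out of Landsteiner’s observations. The following is an illustrative sketch only, not code from any source; the function names are our own:

```python
# Each group's red cells carry A and/or B antigens; group O cells carry neither.
ANTIGENS = {"A": {"A"}, "B": {"B"}, "AB": {"A", "B"}, "O": set()}

def serum_antibodies(group):
    """Serum carries agglutinins against whichever antigens its own cells lack."""
    return {"A", "B"} - ANTIGENS[group]

def agglutinates(serum_group, cell_group):
    """True if serum from `serum_group` clumps red cells from `cell_group`."""
    return bool(serum_antibodies(serum_group) & ANTIGENS[cell_group])

print(agglutinates("A", "B"))    # True: group A serum clumps group B cells
print(agglutinates("AB", "A"))   # False: AB serum agglutinates no other group
print(agglutinates("O", "AB"))   # True: group O serum clumps cells of all other groups
```

This reproduces each case in the text: serum never agglutinates cells of its own group, AB serum agglutinates none, and O serum agglutinates all the others.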

The group characteristics are handed down in accordance with Mendel’s laws. The characteristics of blood groups A, B, and AB are dominant, and opposing these dominant characteristics are the recessive ones which characterize blood group O. An individual cannot belong to blood group A, B, or AB, unless the specific characteristics of these groups are present in the parents, whereas the recessive characteristics of blood group O can occur if the parents belong to any one of the four groups. If both parents belong to group O, then the children never have the characteristics of A, B, or AB. The children must then likewise belong to blood group O. If one of the parents belongs to group A and the other to group B, then the child may belong to group A or B or it may possess both characteristics and therefore belong to group AB. If one of the parents belongs to group AB and the other to group O, then in accordance with Mendel’s law of segregation the AB characteristic can be segregated and the components can occur as separate characteristics in the children.
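The inheritance rules above can be enumerated mechanically: each parent contributes one allele, and A and B are dominant over the recessive O. This is an illustrative sketch of that Mendelian logic, not from the source; genotypes are written as two-letter strings such as "AO":

```python
from itertools import product

def phenotype(genotype):
    """Blood group expressed by a two-allele genotype such as 'AO' or 'BB'."""
    alleles = set(genotype)
    if {"A", "B"} <= alleles:
        return "AB"      # codominant: both antigens expressed
    if "A" in alleles:
        return "A"       # A dominant over O
    if "B" in alleles:
        return "B"       # B dominant over O
    return "O"           # recessive: only expressed as OO

def possible_children(parent1, parent2):
    """Possible child blood groups: each parent contributes one allele."""
    return sorted({phenotype(a + b) for a, b in product(parent1, parent2)})

print(possible_children("OO", "OO"))  # ['O'] - two group O parents: children only O
print(possible_children("AO", "BO"))  # ['A', 'AB', 'B', 'O'] - all four groups possible
```

The two printed cases match the text: O parents can have only O children, while an A parent and a B parent (each carrying a recessive O allele) can produce children of any group.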

Even while he was a student he had begun to do biochemical research and in 1891 he published a paper on the influence of diet on the composition of blood ash. To gain further knowledge of chemistry he spent the next five years in the laboratories of Hantzsch at Zurich, Emil Fischer at Würzburg, and E. Bamberger at Munich.

In 1896 he became an assistant under Max von Gruber in the Hygiene Institute at Vienna. Even at this time he was interested in the mechanisms of immunity and in the nature of antibodies. From 1898 till 1908 he held the post of assistant in the University Department of Pathological Anatomy in Vienna, the Head of which was Professor A. Weichselbaum, who had discovered the bacterial cause of meningitis, and with Fraenckel had discovered the pneumococcus. Here Landsteiner worked on morbid physiology rather than on morbid anatomy. In this he was encouraged by Weichselbaum, in spite of the criticism of others in this Institute.

Up to the year 1919, after twenty years of work on pathological anatomy, Landsteiner with a number of collaborators had published many papers on his findings in morbid anatomy and on immunology. He discovered new facts about the immunology of syphilis, added to the knowledge of the Wassermann reaction, and discovered the immunological factors which he named haptens (it then became clear that the active substances in the extracts of normal organs used in this reaction were, in fact, haptens). He made fundamental contributions to our knowledge of paroxysmal haemoglobinuria.

He also showed that the cause of poliomyelitis could be transmitted to monkeys by injecting into them material prepared by grinding up the spinal cords of children who had died from this disease, and, lacking in Vienna monkeys for further experiments, he went to the Pasteur Institute in Paris, where monkeys were available. His work there, together with that independently done by Flexner and Lewis, laid the foundations of our knowledge of the cause and immunology of poliomyelitis.

http://www.nobelprize.org/nobel_prizes/medicine/laureates/1930/landsteiner-bio.html

His discovery of the differences and identification of the groups that were alike made it possible for blood transfusions to become a routine procedure.  This paved the way for many other medical procedures that we don’t even think twice about today, such as surgery, blood banks, and transplants.

While in medical school, Landsteiner began experimental work in chemistry, as he was greatly inspired by Ernst Ludwig, one of his professors. After receiving his medical degree, Landsteiner spent the next five years doing advanced research in organic chemistry for Emil Fischer, although medicine remained his chief interest. During 1896-1897, he combined these interests at the Institute of Hygiene at the University of Vienna, where he researched immunology and serology. These fields were developing rapidly in the late 1800s as scientists explored numerous physiological changes associated with bacterial infection. Immunology and serology then became Landsteiner’s lifelong focus. Landsteiner was primarily interested in the lack of safety and effectiveness of blood transfusions.

Landsteiner is known as the “melancholy genius” because he was so sad and intense, yet he was so systematic, thorough, and dedicated. He wrote 346 papers during his long career contributing to many areas of scientific knowledge. He is considered the father of Hematology (the study of blood), Immunology (the study of the immune system), Polio research, and Allergy research.

The fundamental contribution of Robert A. Good to the discovery of the crucial role of thymus in mammalian immunity

Domenico Ribatti

Immunology. Nov 2006; 119(3): 291–295.

http://dx.doi.org/10.1111/j.1365-2567.2006.02484.x

Robert Alan Good was a pioneer in the field of immunodeficiency diseases. He and his colleagues defined the cellular basis and functional consequences of many of the inherited immunodeficiency diseases. His was one of the groups that discovered the pivotal role of the thymus in the immune system development and defined the separate development of the thymus-dependent and bursa-dependent lymphoid cell lineages and their responsibilities in cell-mediated and humoral immunity.

Keywords: bursa of Fabricius, history of medicine, immunology, thymus

Robert A. Good (Fig. 1) began his intellectual and experimental queries related to the thymus in 1952 at the University of Minnesota, initially with paediatric patients. However, his interest in the plasma cell, antibodies and the immune response began in 1944, while still in Medical School at the University of Minnesota in Minneapolis, with his first publication appearing in 1945.

Figure 1. Robert A. Good with two young patients. Source: http://www.robertagoodarchives.com.

Good described a new syndrome that would carry his name: ‘Good syndrome: thymoma with immunodeficiency’.7 The clinical characteristics of Good syndrome are increased susceptibility to bacterial infections by encapsulated organisms and opportunistic viral and fungal infections. Subsequently, Good saw several patients with thymic tumours, which regularly presented with immunodeficiencies, leukopenia, lymphopenia and eosinophilopenia. Plasma cells, however, were not completely absent: the patient was severely hypogammaglobulinaemic rather than agammaglobulinaemic.

The association of thymoma with profound and broadly based immunodeficiency provoked Good’s group to ask what role the thymus plays in immunity.

Good and others found that the patients lacked all of the subsequently described immunoglobulins. These patients were found not to have plasma cells or germinal centres in their haematopoietic and lymphoid tissues. They possessed circulating lymphocytes in normal numbers.

In the mouse and other rodents, immunological depression is profound after thymectomy in neonatal animals, resulting in considerable depression of antibody production, plus deficient transplantation immunity and delayed-type hypersensitivity. Speculation on the reason for immunological failure following neonatal thymectomy has centred on the thymus as a source of cells or humoral factors essential to normal lymphoid development and immunological maturation.

Three independent groups of experiments showed that neonatal thymectomy has a significant effect on immunological reactivity: (i) the studies of Fichtelius et al. in young guinea-pigs showed that the depression of antibody response is slight, but significant; (ii) the experiments of Archer, Good and co-workers in rabbits and mice; and (iii) the studies by Miller at the Chester Beatty Research Institute in London.

Stutman, in Good’s laboratory, demonstrated that non-lymphoid thymomas induced the restoration of immunological functions in neonatally thymectomized mice and that when thymomas were grafted into allogenic hosts, immunological restoration was mediated by lymphoid cells of host type. Comparable results were obtained with free thymus grafts.

Cooper et al. postulated that a lymphoid stem cell population exists that is induced to differentiate along two distinct and separate cell lines related to two central lymphoid organs. In birds this developmental influence is exercised by the thymus and the bursa of Fabricius. Removal of one or both in the early post-hatching period has strikingly different influences on immunological function in the maturing animals. The thymus in the chicken functions exactly as does the thymus of the mouse. It represents the site of differentiation of a population of lymphocytes that subserve largely the functions of cell-mediated immunity.

The athymic children described by Di George, who lacked lymphoid cells in the deep cortical areas of the nodes but not at the peripheral areas, seemed the equivalent of the neonatally thymectomized mice and chickens. These patients had severe deficiencies of small T lymphocytes and profound deficiencies of all cell-mediated immunities, including delayed allergies, deficient allograft immunities and deficiencies in resistance to viruses, fungi and opportunistic infections.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1819567/

The Nobel Prize in Physiology or Medicine 1960

Sir Frank Macfarlane Burnet and Peter Brian Medawar

“for discovery of acquired immunological tolerance”

The Nobel Prize in Physiology or Medicine 1980

Baruj Benacerraf, Jean Dausset and George D. Snell

“for their discoveries concerning genetically determined structures on the cell surface that regulate immunological reactions”

Part V.

Biochemistry and Molecular Biology

The Nobel Prize in Physiology or Medicine 1922

Archibald Vivian Hill

“for his discovery relating to the production of heat in the muscle”

Otto Fritz Meyerhof

“for his discovery of the fixed relationship between the consumption of oxygen and the metabolism of lactic acid in the muscle”

The Nobel Prize in Physiology or Medicine 1931

Otto Heinrich Warburg

“for his discovery of the nature and mode of action of the respiratory enzyme”

http://pharmaceuticalintelligence.com/2012/11/02/otto-warburg-a-giant-of-modern-cellular-biology/

http://pharmaceuticalintelligence.com/2013/11/28/warburg-effect-revisited/

http://pharmaceuticalintelligence.com/2013/03/12/ampk-is-a-negative-regulator-of-the-warburg-effect-and-suppresses-tumor-growth-in-vivo/

http://pharmaceuticalintelligence.com/2012/10/17/is-the-warburg-effect-the-cause-or-the-effect-of-cancer-a-21st-century-view/

The Nobel Prize in Physiology or Medicine 1933

Thomas Hunt Morgan

“for his discoveries concerning the role played by the chromosome in heredity”

The Nobel Prize in Physiology or Medicine 1947

Carl Ferdinand Cori and Gerty Theresa Cori, née Radnitz

“for their discovery of the course of the catalytic conversion of glycogen”

The Nobel Prize in Physiology or Medicine 1953

Hans Adolf Krebs

“for his discovery of the citric acid cycle”

http://pharmaceuticalintelligence.com/2014/10/22/introduction-to-metabolic-pathways/

Fritz Albert Lipmann

“for his discovery of co-enzyme A and its importance for intermediary metabolism”

http://pharmaceuticalintelligence.com/2014/10/22/introduction-to-metabolic-pathways/

http://pharmaceuticalintelligence.com/2014/11/07/summary-of-cell-structure-anatomic-correlates-of-metabolic-function-2/

http://pharmaceuticalintelligence.com/2014/08/18/studies-of-respiration-lead-to-acetyl-coa/

http://pharmaceuticalintelligence.com/2013/01/26/portrait-of-a-great-scientist-and-mentor-nathan-oram-kaplan/

The Nobel Prize in Physiology or Medicine 1955

Axel Hugo Theodor Theorell

“for his discoveries concerning the nature and mode of action of oxidation enzymes”

http://pharmaceuticalintelligence.com/2014/08/18/studies-of-respiration-lead-to-acetyl-coa/

The Nobel Prize in Physiology or Medicine 1958

George Wells Beadle and Edward Lawrie Tatum

“for their discovery that genes act by regulating definite chemical events”

The Nobel Prize in Physiology or Medicine 1959

Severo Ochoa and Arthur Kornberg

“for their discovery of the mechanisms in the biological synthesis of ribonucleic acid and deoxyribonucleic acid”

Joshua Lederberg

“for his discoveries concerning genetic recombination and the organization of the genetic material of bacteria”

The Nobel Prize in Physiology or Medicine 1962

Francis Harry Compton Crick, James Dewey Watson and Maurice Hugh Frederick Wilkins

“for their discoveries concerning the molecular structure of nucleic acids and its significance for information transfer in living material”

The Nobel Prize in Physiology or Medicine 1963

Sir John Carew Eccles, Alan Lloyd Hodgkin and Andrew Fielding Huxley

“for their discoveries concerning the ionic mechanisms involved in excitation and inhibition in the peripheral and central portions of the nerve cell membrane”

The Nobel Prize in Physiology or Medicine 1964

Konrad Bloch and Feodor Lynen

“for their discoveries concerning the mechanism and regulation of the cholesterol and fatty acid metabolism”

http://pharmaceuticalintelligence.com/2014/10/25/oxidation-and-synthesis-of-fatty-acids/

The Nobel Prize in Physiology or Medicine 1965

François Jacob, André Lwoff and Jacques Monod

“for their discoveries concerning genetic control of enzyme and virus synthesis”

http://pharmaceuticalintelligence.com/2014/10/06/isoenzymes-in-cell-metabolic-pathways/

The Nobel Prize in Physiology or Medicine 1967

Ragnar Granit, Haldan Keffer Hartline and George Wald

“for their discoveries concerning the primary physiological and chemical visual processes in the eye”

The Nobel Prize in Physiology or Medicine 1968

Robert W. Holley, Har Gobind Khorana and Marshall W. Nirenberg

“for their interpretation of the genetic code and its function in protein synthesis”

The Nobel Prize in Physiology or Medicine 1969

Max Delbrück, Alfred D. Hershey and Salvador E. Luria

“for their discoveries concerning the replication mechanism and the genetic structure of viruses”

The Nobel Prize in Physiology or Medicine 1970

Sir Bernard Katz, Ulf von Euler and Julius Axelrod

“for their discoveries concerning the humoral transmittors in the nerve terminals and the mechanism for their storage, release and inactivation”

The Nobel Prize in Physiology or Medicine 1972

Gerald M. Edelman and Rodney R. Porter

“for their discoveries concerning the chemical structure of antibodies”

The Nobel Prize in Physiology or Medicine 1974

Albert Claude, Christian de Duve and George E. Palade

“for their discoveries concerning the structural and functional organization of the cell”

The Nobel Prize in Physiology or Medicine 1975

David Baltimore, Renato Dulbecco and Howard Martin Temin

“for their discoveries concerning the interaction between tumour viruses and the genetic material of the cell”

The Nobel Prize in Physiology or Medicine 1977

Rosalyn Yalow

“for the development of radioimmunoassays of peptide hormones”

The Nobel Prize in Physiology or Medicine 1978

Werner Arber, Daniel Nathans and Hamilton O. Smith

“for the discovery of restriction enzymes and their application to problems of molecular genetics”


Selected Contributions to Chemistry from 1880 to 1980

Curator: Larry H. Bernstein, MD, FCAP

 

FUNDAMENTALS OF CHEMISTRY – Vol. I  The Contribution of Nobel Laureates to Chemistry

– Ferruccio Trifiro

http://www.eolss.net/sample-chapters/c06/e6-11-01-04.pdf

This chapter deals with the contributions to the development of chemistry made by all the Nobel Prize winners in chemistry up to the end of the twentieth century, together with some in physics and in medicine or physiology whose work had particular relevance for advances in chemistry. The contributions of the various Nobel laureates cited are briefly summarized. The Nobel laureates in physics dealt with in this chapter are those who made important contributions toward the understanding of the properties of atoms, the development of theoretical tools to treat the chemical bond, or the development of new analytical instrumentation. The Nobel laureates in medicine or physiology cited here are those whose contributions lay in using chemistry to understand natural processes, such as the physiology of living organisms through electron and ion exchange processes, enzymatic catalysis, and DNA-based chemistry. Eight thematic areas were chosen into which the contributions of the Nobel laureates to chemistry can be subdivided.

  4. The Properties of Molecules

4.1. The Discovery of Coordination and Metallorganic Compounds

4.2. The Discovery of New Organic Molecules

4.3. The Emergence of Quantum Chemistry

  6. The Dynamics of Chemical Reactions

6.1. Kinetics of Heterogeneous and Homogeneous Processes

6.2. The Identification of the Activated State

  8. The Understanding of Natural Processes

8.1. From Ferments to Enzymes

8.2. Understanding the Mechanism of Action of Enzymes

8.3. Mechanisms of Important Natural Processes

8.4. Characterization of Biologically Important Molecules

  9. The Identification of Chemical Entities

9.1. Analytical Methods

9.2. New Separation Techniques

9.3. The Development of New Instrumentation for Structure Analysis

The Nobel Prize in Chemistry: The Development of Modern Chemistry

by Bo G. Malmström and Bertil Andersson*

http://www.nobelprize.org/nobel_prizes/themes/chemistry/malmstrom/

Introduction

1.1 Chemistry at the Borders to Physics and Biology

The turn of the century around 1900 was also a turning point in the history of chemistry. A survey of the Nobel Prizes in Chemistry during the 20th century reveals the important trends in the development of the discipline: chemistry sits at the center of the sciences, bordering on physics, which provides its theoretical foundation, on one side, and on biology on the other. The fact that chemistry flourished during the beginning of the 20th century is intimately connected with fundamental developments in physics.

In 1897 Sir Joseph John Thomson of Cambridge announced his discovery of the electron, for which he was awarded the Nobel Prize for Physics in 1906. It took a number of years before its relevance to chemistry was seen. In 1911 Ernest Rutherford, who had worked in Thomson’s laboratory in the 1890s, formulated an atomic model, which depicted a cloud of electrons circling around the nucleus. Rutherford had received the Nobel Prize for Chemistry in 1908 for his work on radioactivity.

In Rutherford’s atomic model the stability of atoms was at variance with the laws of classical physics. Niels Bohr of Copenhagen resolved this dilemma by turning to the distinct lines observed in the spectra of atoms, the regularities of which had been discovered in 1890 by the physics professor Johannes (Janne) Rydberg at Lund University. These regularities were the basis for Bohr’s formulation (1913) of an alternative atomic model, in which only certain circular orbits of the electrons are allowed; light is emitted (or absorbed) when an electron makes a transition from one orbit to another. For this, Bohr received the Nobel Prize for Physics in 1922.
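The spectral regularities behind Bohr’s model can be written in the familiar textbook form (added here for clarity; not part of the original essay):

```latex
% Rydberg formula for the lines in the hydrogen spectrum
\frac{1}{\lambda} = R_\mathrm{H}\left(\frac{1}{n_1^{2}} - \frac{1}{n_2^{2}}\right),
\qquad n_2 > n_1
% In Bohr's model each line corresponds to a transition between two
% allowed orbits, the photon carrying the energy difference:
h\nu = E_{n_2} - E_{n_1}
```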

Gilbert Newton Lewis next suggested in 1916 that strong (covalent) bonds between atoms involve a sharing of two electrons between these atoms (electron-pair bond). Lewis also contributed fundamental work in chemical thermodynamics, and his brilliant textbook, Thermodynamics (1923), written together with Merle Randall, is counted as one of the masterworks in the chemical literature. Lewis never received a Nobel Prize.

Important work had, however, already been published in the 1890s and was considered by the first Nobel Committee for Chemistry (see Section 2). Three of the Laureates during the first decade, Jacobus Henricus van’t Hoff, Svante Arrhenius and Wilhelm Ostwald, are generally regarded as the founders of a new branch of chemistry, physical chemistry. Fundamental work was also recognized in organic chemistry and in the chemistry of natural products, which is clearly reflected in the early prizes. Further, the Nobel Committee recognized the border toward biology in 1907 with the prize to Eduard Buchner “for his biochemical researches and his discovery of cell-free fermentation”.

  2. The First Decade of Nobel Prizes for Chemistry

So much fundamental work in chemistry had been carried out during the last two decades of the 19th century that the decision on the first several prizes was not easy. In 1901 the Academy had to consider 20 nominations, but no less than 11 of these named van’t Hoff, who was selected. van’t Hoff had already established the tetrahedral arrangement of the four valences of the carbon atom in his PhD thesis in Utrecht in 1874, foundational work for modern organic chemistry. But the Nobel Prize was awarded for his later work on chemical kinetics and equilibria and on osmotic pressure in solution, published in 1884 and 1886.

In his 1886 work van’t Hoff showed that most dissolved chemical compounds exert an osmotic pressure equal to the gas pressure they would have exerted in the absence of the solvent. An apparent exception was aqueous solutions of electrolytes (acids, bases and their salts), but in the following year Arrhenius showed that this anomaly could be explained if it is assumed that electrolytes in water dissociate into ions. Arrhenius had already presented the rudiments of his dissociation theory in his doctoral thesis, defended in Uppsala in 1884, which was not entirely well received by the faculty. It was, however, strongly supported by Ostwald in Riga, who, in fact, travelled to Uppsala to initiate a collaboration with Arrhenius. In 1886-1890 Arrhenius worked with Ostwald, first in Riga and then in Leipzig, and also with van’t Hoff in Berlin. Arrhenius was awarded the Nobel Prize for Chemistry in 1903, and he was also nominated for the Prize for Physics (see Section 1).
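van’t Hoff’s osmotic-pressure law and Arrhenius’ resolution of the electrolyte anomaly can be summarized in standard modern notation (a sketch added for clarity, not from the original text):

```latex
% van't Hoff: osmotic pressure of a dilute solution of molar concentration c
\Pi = cRT
% Electrolytes appear anomalous: the measured pressure is larger by a
% factor i (the van't Hoff factor),
\Pi = i\,cRT
% which Arrhenius explained by dissociation into ions:
% NaCl -> Na+ + Cl-  gives  i \to 2  at high dilution.
```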

The award of the Nobel Prize for Chemistry in 1909 to Ostwald was chiefly in recognition of his work on catalysis and the rates of chemical reactions. In his investigations, following up observations made in his thesis of 1878, Ostwald had shown that the rate of acid-catalyzed reactions is proportional to the square of the strength of the acid, as measured by titration with base. His work offered support not only to Arrhenius’ theory of dissociation but also to van’t Hoff’s theory of osmotic pressure. Ostwald was founder and editor of Zeitschrift für Physikalische Chemie, the launch of which is generally regarded as the birth of this new branch of chemistry.

Three of the Nobel Prizes for Chemistry during the first decade were awarded for pioneering work in organic chemistry. In 1902 Emil Fischer, then in Berlin, was given the prize for “his work on sugar and purine syntheses”. Fischer’s work is an example of the growing interest in biologically important substances, and it laid a foundation for the development of biochemistry. Another major influence from organic chemistry was the development of the chemical industry, and a chief contributor here was Fischer’s teacher, Adolf von Baeyer in Munich, who was awarded the prize in 1905 “in recognition of his services in the advancement of organic chemistry and the chemical industry, … ” His contributions include, in particular, the structure determination of organic dyes.

Ernest Rutherford [Lord Rutherford since 1931], professor of physics in Manchester, was awarded the Nobel Prize for Chemistry in 1908. In his studies of uranium disintegration he found two types of radiation, named α- and β-rays, and by their deviation in electric and magnetic fields he could show that α-rays consist of positively charged particles. He had received many nominations for the Nobel Prize for Physics (see Section 1).

In 1897 Eduard Buchner, at the time professor in Tübingen, published results demonstrating that the fermentation of sugar to alcohol and carbon dioxide can take place in the absence of yeast cells. Louis Pasteur had earlier maintained that alcoholic fermentation can only occur in the presence of living yeast cells. Buchner’s experiments showed unequivocally that fermentation is a catalytic process caused by the action of enzymes, as had been suggested by Berzelius for all life processes. Because of Buchner’s experiment, 1897 is generally regarded as the birth date for biochemistry proper. Buchner was awarded the Nobel Prize for Chemistry in 1907, when he was professor at the agricultural college in Berlin. This confirmed the prediction of his former teacher, Adolf von Baeyer: “This will make him famous, in spite of the fact that he lacks talent as a chemist.”

  3. The Nobel Prizes for Chemistry 1911-2000

3.1 General and Physical Chemistry

The Nobel Prize for Chemistry in 1914 was awarded to Theodore William Richards of Harvard University for “his accurate determinations of the atomic weight of a large number of chemical elements”. In 1913 Richards had discovered that the atomic weight of natural lead and of that formed in radioactive decay of uranium minerals differ. This pointed to the existence of isotopes, i.e. atoms of the same element with different atomic weights, which was accurately demonstrated by Francis William Aston at Cambridge University, with the aid of an instrument developed by him, the mass spectrograph. For his achievements Aston received the Nobel Prize for Chemistry in 1922.

One branch of physical chemistry deals with chemical events at the interface of two phases, for example solid and liquid, and phenomena at such interfaces have important applications all the way from technical to physiological processes. Detailed studies of adsorption on surfaces were carried out by Irving Langmuir at the research laboratory of the General Electric Company; he was awarded the Nobel Prize for Chemistry in 1932, the first industrial scientist to receive this distinction.

Two of the Prizes for Chemistry in more recent decades have been given for fundamental work in the application of spectroscopic methods (Prizes for Physics in 1952, 1955 and 1961) to chemical problems. Gerhard Herzberg, a physicist at the University of Saskatchewan, received the Nobel Prize for Chemistry in 1971 for his molecular spectroscopy studies “of the electronic structure and geometry of molecules, particularly free radicals”. The most used spectroscopic method in chemistry is undoubtedly NMR (nuclear magnetic resonance), and Richard R. Ernst at ETH in Zürich was given the Nobel Prize for Chemistry in 1991 for “the development of the methodology of high resolution nuclear magnetic resonance (NMR) spectroscopy”. Ernst’s methodology has now made it possible to determine the structure in solution (in contrast to crystals; cf. Section 3.5) of large molecules, such as proteins.

3.2 Chemical Thermodynamics

The Nobel Prize for Chemistry to van’t Hoff was in part for work in chemical thermodynamics, and many later contributions in this area have also been recognized with Nobel Prizes. Walther Hermann Nernst of Berlin received the award in 1920 for work in thermochemistry, despite 16 years of opposition to this recognition from Arrhenius. Nernst had shown that it is possible to determine the equilibrium constant for a chemical reaction from thermal data, and in so doing he formulated what he himself called the third law of thermodynamics. This states that the entropy, a thermodynamic quantity that measures the disorder of a system, approaches zero as the temperature goes toward absolute zero. van’t Hoff had derived the mass action equation in 1886 with the aid of the second law, which says that the entropy increases in all spontaneous processes [this had already been done in 1876 by J. Willard Gibbs at Yale, who certainly deserved a Nobel Prize]. Nernst showed in 1906 that it is possible, with the aid of the third law, to derive the necessary parameters from the temperature dependence of thermochemical quantities. Nernst carried out thermochemical measurements at very low temperatures to prove his heat theorem. G.N. Lewis (see Section 1.1) in Berkeley extended these studies in the 1920s, and his new formulation of the third law was confirmed by his student, William Francis Giauque, who extended the experimentally accessible temperature range by introducing the method of adiabatic demagnetization in 1933. He managed to reach temperatures a few thousandths of a degree above absolute zero and could thereby provide extremely accurate entropy estimates. He also showed that it is possible to determine entropies from spectroscopic data. Giauque was awarded the Nobel Prize for Chemistry in 1949 for his contributions to chemical thermodynamics.
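The relations at issue here take a compact form in modern notation (added for reference, not part of the original essay):

```latex
% Nernst's heat theorem (the third law): entropy vanishes at absolute zero
\lim_{T \to 0} S = 0
% van't Hoff (1886): the equilibrium constant K follows from thermal data
% via the temperature dependence derived with the aid of the second law,
\frac{d \ln K}{dT} = \frac{\Delta H^{\circ}}{RT^{2}}
```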

The next Nobel Prize given for work in thermodynamics went to Lars Onsager of Yale University in 1968 for contributions to the thermodynamics of irreversible processes. Classical thermodynamics deals with systems at equilibrium, in which the chemical reactions are said to be reversible, but many chemical systems, for example, the most complex of all, living organisms, are far from equilibrium and their reactions are said to be irreversible. Onsager developed his so-called reciprocal relations in 1931, describing the flow of matter and energy in such systems, but the importance of his work was not recognized until the end of the 1940s. A further step forward in the development of non-equilibrium thermodynamics was taken by Ilya Prigogine in Bruxelles, whose theory of dissipative structures was awarded the Nobel Prize for Chemistry in 1977.

3.3 Chemical Change

The chief method for obtaining information about the mechanism of chemical reactions is chemical kinetics, i.e. measurement of the rate of a reaction as a function of reactant concentrations as well as of its dependence on temperature, pressure and reaction medium. Important work in this area had been done already in the 1880s by two of the early Laureates, van’t Hoff and Arrhenius, who showed that it is not enough for molecules to collide for a reaction to take place. Only molecules with sufficient kinetic energy in the collision do, in fact, react, and in 1889 Arrhenius derived an equation allowing the calculation of this activation energy from the temperature dependence of the reaction rate. With the advent of quantum mechanics in the 1920s (see Section 3.4), Henry Eyring developed his transition-state theory in 1935, which showed that the activation entropy is also important. Strangely, Eyring never received a Nobel Prize (see Section 1.2).
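Arrhenius’ 1889 result, in its familiar modern form (added here for reference):

```latex
% Arrhenius equation: rate constant k as a function of temperature T
k = A\, e^{-E_a / RT}
% Taking logarithms shows how the activation energy is extracted:
% a plot of ln k against 1/T is a straight line of slope -E_a/R.
\ln k = \ln A - \frac{E_a}{RT}
```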

In 1956 Sir Cyril Norman Hinshelwood of Oxford and Nikolay Nikolaevich Semenov of Moscow shared the Nobel Prize for Chemistry “for their researches into the mechanism of chemical reactions”. A limit in investigating reaction rates is set by the speed with which the reaction can be initiated. If this is done by rapid mixing of the reactants, the time limit is about one thousandth of a second (a millisecond). In the 1950s Manfred Eigen of Göttingen developed chemical relaxation methods that allow measurements in times as short as a thousandth or a millionth of a millisecond (microseconds or nanoseconds). The methods involve disturbing an equilibrium by rapid changes in temperature or pressure and then following the passage to a new equilibrium. Another way to initiate some reactions rapidly is flash photolysis, i.e. by short light flashes, a method developed by Ronald G.W. Norrish at Cambridge and George Porter (Lord Porter since 1990) in London. Eigen received one-half, and Norrish and Porter shared the other half, of the Nobel Prize for Chemistry in 1967. The milli- to picosecond time scales gave important information on chemical reactions. However, it was not until it became possible to generate femtosecond laser pulses (10⁻¹⁵ s) that one could reveal when chemical bonds are broken and formed. Ahmed Zewail (born 1946 in Egypt) at the California Institute of Technology received the Nobel Prize for Chemistry in 1999 for his development of “femtochemistry” and in particular for being the first to experimentally demonstrate a transition state during a chemical reaction. His experiments relate back to 1889, when Arrhenius (Nobel Prize, 1903) made the important prediction that there must exist intermediates (transition states) in the transformation from reactants to products.

Henry Taube of Stanford University was awarded the Nobel Prize for Chemistry in 1983 “for his work on the mechanism of electron transfer reactions, especially in metal complexes”. Even if Taube’s work was on inorganic reactions, electron transfer is important in many catalytic processes used in industry and also in biological systems, for example, in respiration and photosynthesis.

3.4 Theoretical Chemistry and Chemical Bonding

Quantum mechanics, developed in the 1920s, offered a tool for a more basic understanding of chemical bonds. In 1927 Walter Heitler and Fritz London showed that quantum mechanics can account for the covalent bond in the hydrogen molecule, i.e. two hydrogen nuclei sharing a pair of electrons, and thereby allows calculation of the attractive force between the nuclei. A pioneer in developing such methods was Linus Pauling at the California Institute of Technology, who was awarded the Nobel Prize for Chemistry in 1954 “for his research into the nature of the chemical bond …” Pauling’s valence-bond (VB) method is rigorously described in his 1935 book Introduction to Quantum Mechanics (written together with E. Bright Wilson, Jr., of Harvard). A few years later (1939) he published an extensive non-mathematical treatment in The Nature of the Chemical Bond, one of the most read and influential books in the entire history of chemistry. Pauling was not only a theoretician; he also carried out extensive investigations of chemical structure by X-ray diffraction (see Section 3.5). On the basis of results with small peptides, which are building blocks of proteins, he suggested the α-helix as an important structural element. Pauling was awarded the Nobel Peace Prize for 1962, and he is the only person to date to have won two unshared Nobel Prizes.

[Figure: Pauling’s α-helix]

α-carbon atoms are black, other carbon atoms grey, nitrogen atoms blue, oxygen atoms red and hydrogen atoms white; R designates amino-acid side chains. The dotted red lines are hydrogen bonds between amide and carbonyl groups in the peptide bonds.

Pauling’s VB method cannot give an adequate description of chemical bonding in many complicated molecules, and a more comprehensive treatment, the molecular-orbital (MO) method, was introduced already in 1927 by Robert S. Mulliken from Chicago and later developed further. MO theory considers, in quantum-mechanical terms, the interaction between all atomic nuclei and electrons in a molecule. Mulliken also showed that a combination of MO calculations with experimental (spectroscopic) results provides a powerful tool for describing bonding in large molecules. Mulliken received the Nobel Prize for Chemistry in 1966.

Theoretical chemistry has also contributed significantly to our understanding of chemical reaction mechanisms. In 1981 the Nobel Prize for Chemistry was shared between Kenichi Fukui in Kyoto and Roald Hoffmann of Cornell University “for their theories, developed independently, concerning the course of chemical reactions”. Fukui introduced in 1952 the frontier-orbital theory, according to which the occupied MO with the highest energy and the unoccupied one with the lowest energy have a dominant influence on the reactivity of a molecule. Hoffmann formulated in 1965, together with Robert B. Woodward (see Section 3.8), rules based on the conservation of orbital symmetry, for the reactivity and stereochemistry in chemical reactions.

3.5 Chemical Structure

The most commonly used method to determine the structure of molecules in three dimensions is X-ray crystallography. The diffraction of X-rays was discovered by Max von Laue in 1912, and this gave him the Nobel Prize for Physics in 1914. Its use for the determination of crystal structure was developed by Sir William Bragg and his son, Sir Lawrence Bragg, and they shared the Nobel Prize for Physics in 1915. The first Nobel Prize for Chemistry for the use of X-ray diffraction went to Petrus (Peter) Debye, then of Berlin, in 1936. Debye did not study crystals, however, but gases, which give less distinct diffraction patterns.

Many Nobel Prizes have been awarded for the determination of the structure of biological macromolecules (proteins and nucleic acids). Proteins are long chains of amino-acids, as shown by Emil Fischer (see Section 2), and the first step in the determination of their structure is to determine the order (sequence) of these building blocks. An ingenious method for this tedious task was developed by Frederick Sanger of Cambridge, and he reported the amino-acid sequence for a protein, insulin, in 1955. For this achievement he was awarded the Nobel Prize for Chemistry in 1958. Sanger later received part of a second Nobel Prize for Chemistry for a method to determine the nucleotide sequence in nucleic acids (see Section 3.12), and he is the only scientist so far who has won two Nobel Prizes for Chemistry.

The first protein crystal structures were reported by Max Perutz and Sir John Kendrew in 1960, and these two investigators shared the Nobel Prize for Chemistry in 1962. Perutz had started studying the oxygen-carrying blood pigment, hemoglobin, with Sir Lawrence Bragg in Cambridge already in 1937, and ten years later he was joined by Kendrew, who looked at crystals of the related muscle pigment, myoglobin. These proteins are both rich in Pauling’s α-helix (see Section 3.4), and this made it possible to discern the main features of the structures at the relatively low resolution first used. The same year that Perutz and Kendrew won their prize, the Nobel Prize for Physiology or Medicine went to Francis Crick, James Watson and Maurice Wilkins “for their discoveries concerning the molecular structure of nucleic acids … .” Two years later (1964) Dorothy Crowfoot Hodgkin received the Nobel Prize for Chemistry for determining the crystal structures of penicillin and vitamin B12.

Crystallographic electron microscopy was developed by Sir Aaron Klug in Cambridge, who was awarded the Nobel Prize for Chemistry in 1982. Attempts to prepare crystals of membrane proteins for structural studies had long been unsuccessful, but in 1982 Hartmut Michel managed to crystallize a photosynthetic reaction center after a painstaking series of experiments. He then proceeded to determine the three-dimensional structure of this protein complex in collaboration with Johann Deisenhofer and Robert Huber, and this was published in 1985. Deisenhofer, Huber and Michel shared the Nobel Prize for Chemistry in 1988. Michel has since also crystallized and determined the structure of the terminal enzyme of respiration, and his two structures have allowed detailed studies of electron transfer (cf. Sections 3.3 and 3.4) and its coupling to proton pumping, key features of the chemiosmotic mechanism for which Peter Mitchell had already received the Nobel Prize for Chemistry in 1978 (see Section 3.12). Functional and structural studies on the enzyme ATP synthase, connected to this proton-pumping mechanism, were recognized with one-half of the Nobel Prize for Chemistry in 1997, shared between Paul D. Boyer and John Walker (see Section 3.12).

3.6 Inorganic and Nuclear Chemistry

Much of the progress in inorganic chemistry during the 20th century has been associated with investigations of coordination compounds, i.e. a central metal ion surrounded by a number of coordinating groups, called ligands. In 1893 Alfred Werner in Zürich presented his coordination theory, and in 1905 he summarized his investigations in this new field in a book (Neuere Anschauungen auf dem Gebiete der anorganischen Chemie), which appeared in no less than five editions from 1905 to 1923. Werner showed that in compounds in which a metal ion binds several other molecules (ligands), all the ligand molecules are bound directly to the metal ion. Werner was awarded the Nobel Prize for Chemistry in 1913. Taube’s investigations of electron transfer, awarded in 1983 (see Section 3.3), were mainly carried out with coordination compounds, and vitamin B12 as well as the proteins hemoglobin and myoglobin, investigated by the Laureates Hodgkin, Perutz and Kendrew (see Section 3.5), also belong to this category.

Much inorganic chemistry in the early 1900s was a consequence of the discovery of radioactivity in 1896, for which Henri Becquerel of Paris was awarded the Nobel Prize for Physics in 1903, together with Pierre and Marie Curie. In 1911 Marie Curie received the Nobel Prize for Chemistry for her discovery of the elements radium and polonium and for the isolation of radium and studies of its compounds; this made her the first investigator to be awarded two Nobel Prizes. The prize in 1921 went to Frederick Soddy of Oxford for his work on the chemistry of radioactive substances and on the origin of isotopes. In 1934 Frédéric Joliot and his wife Irène Joliot-Curie, the daughter of the Curies, discovered artificial radioactivity, i.e. new radioactive elements produced by the bombardment of non-radioactive elements with α-particles or neutrons. They were awarded the Nobel Prize for Chemistry in 1935 for “their synthesis of new radioactive elements”.

Many elements are mixtures of non-radioactive isotopes (see Section 3.1), and in 1934 Harold Urey of Columbia University was given the Nobel Prize for Chemistry for his isolation of heavy hydrogen (deuterium). Urey had also separated uranium isotopes, and his work was an important basis for the investigations of Otto Hahn in Berlin. In attempts to make transuranium elements, i.e. elements with an atomic number higher than 92 (uranium), by irradiating uranium atoms with neutrons, Hahn discovered that one of the products was barium, a lighter element. Lise Meitner, at the time a refugee from Nazism in Sweden, who had earlier worked with Hahn and taken the initiative for the uranium bombardment experiments, provided the explanation: the uranium atom had been cleaved, and barium was one of the products. Hahn was awarded the Nobel Prize for Chemistry in 1944 “for his discovery of the fission of heavy nuclei”, and one may well wonder why Meitner was not included. Hahn’s original intention with his experiments was later achieved by Edwin M. McMillan and Glenn T. Seaborg of Berkeley, who were given the Nobel Prize for Chemistry in 1951 for “discoveries in the chemistry of transuranium elements”.

The use of stable as well as radioactive isotopes has important applications, not only in chemistry but also in fields as far apart as biology, geology and archeology. In 1943 George de Hevesy of Stockholm received the Nobel Prize for Chemistry for his work on the use of isotopes as tracers, involving studies in inorganic chemistry and geochemistry as well as of metabolism in living organisms. The prize in 1960 was given to Willard F. Libby of the University of California, Los Angeles (UCLA), for his method of determining the age of objects of geological or archeological origin by measurements of the radioactive isotope carbon-14.
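Libby’s method rests on the first-order decay law for carbon-14 (a standard result, spelled out here for clarity):

```latex
% Radioactive decay: N(t) = N_0 e^{-\lambda t}, half-life t_{1/2} = \ln 2 / \lambda.
% Solving for the age t of a sample from its measured carbon-14 content:
t = \frac{t_{1/2}}{\ln 2} \, \ln \frac{N_0}{N}
% For carbon-14, t_{1/2} is about 5730 years; N_0/N is the ratio of the
% initial to the measured carbon-14 activity of the sample.
```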

3.7 General Organic Chemistry

Contributions in organic chemistry have led to more Nobel Prizes for Chemistry than work in any other of the traditional branches of chemistry. Like the first prize in this area, that to Emil Fischer in 1902 (see Section 2), most of them have, however, been awarded for advances in the chemistry of natural products and will be treated separately (Section 3.9). Another large group, preparative organic chemistry, has also been given its own section (Section 3.8), and here only the prizes for more general contributions to organic chemistry will be discussed.

In 1969 the Nobel Prize for Chemistry went to Sir Derek H. R. Barton of London and Odd Hassel of Oslo for developing the concept of conformation, i.e. the spatial arrangements of atoms in a molecule that differ only by rotation of chemical groups around single bonds. This stereochemical concept rests on van’t Hoff’s original suggestion of the tetrahedral arrangement of the four valences of the carbon atom (see Section 2), and most organic molecules exist in two or more stable conformations.

The Nobel Prize for Chemistry in 1975 to Sir John Warcup Cornforth of the University of Sussex and Vladimir Prelog of ETH in Zürich was also based on research in stereochemistry. Not only can a compound have more than one geometric form, but chemical reactions can also be stereochemically specific, forming a product with a particular three-dimensional arrangement of atoms. This is especially true of reactions in living organisms, and Cornforth mainly studied enzyme-catalyzed reactions, so his work borders on biochemistry (Section 3.12). One of Prelog’s main contributions concerns chiral molecules, i.e. molecules that have two forms differing from one another as the right hand does from the left. Stereochemically specific reactions have great practical importance, as many drugs, for example, are active only in one particular geometric form.

Organometallic compounds constitute a group of organic molecules containing one or more carbon-metal bonds, and they are thus the organic counterpart to Werner’s inorganic coordination compounds. In 1952 Ernst Otto Fischer and Sir Geoffrey Wilkinson independently described a completely new group of organometallic molecules, called sandwich compounds, in which a metal ion is bound not to a single carbon atom but is “sandwiched” between two aromatic organic molecules. Fischer and Wilkinson shared the Nobel Prize for Chemistry in 1973.

3.8 Preparative Organic Chemistry

One of the chief goals of the organic chemist is to synthesize increasingly complex compounds of carbon in combination with various other elements. The first Nobel Prize for Chemistry recognizing pioneering work in preparative organic chemistry was that to Victor Grignard from Nancy and Paul Sabatier from Toulouse in 1912. Grignard had discovered that organic halides can form compounds with magnesium. Sabatier was given the prize for developing a method to hydrogenate organic compounds in the presence of metallic catalysts. The prize in 1950 was presented to Otto Diels from Kiel and Kurt Alder from Cologne “for their discovery and development of the diene synthesis”, first reported in 1928, by which organic compounds containing two double bonds (“dienes”) can be used to synthesize many cyclic organic substances.

The German organic chemist Hans Fischer from Munich had already done significant work on the structure of hemin, the organic pigment in hemoglobin, when he synthesized it from simpler organic molecules in 1928. He also contributed much to the elucidation of the structure of chlorophyll, and for these important achievements he was awarded the Nobel Prize for Chemistry in 1930 (cf. Section 3.5). He finished his determination of the structure of chlorophyll in 1935, and by the time of his death he had almost completed its synthesis as well.

Robert Burns Woodward from Harvard is rightly considered the founder of the most advanced, modern art of organic synthesis. He designed methods for the total synthesis of a large number of complicated natural products, for example, cholesterol, chlorophyll and vitamin B12. He received the Nobel Prize for Chemistry in 1965, and he would probably have received a second chemistry prize in 1981 for his part in the formulation of the Woodward-Hoffmann rules (see Section 3.4), had it not been for his early death.

The Nobel Prize for Chemistry in 1984 was given to Robert Bruce Merrifield of Rockefeller University “for his development of methodology for chemical synthesis on a solid matrix”, specifically the synthesis of large peptides and small proteins.

3.9 Chemistry of Natural Products

The synthesis of complex organic molecules must be based on detailed knowledge of their structure. Early work on plant pigments was carried out by Richard Willstätter, a student of Adolf von Baeyer from Munich (see Section 2). Willstätter showed a structural relatedness between chlorophyll and hemin, and he demonstrated that chlorophyll contains magnesium as an integral component. He also carried out pioneering investigations on other plant pigments, such as the carotenoids, and he was awarded the Nobel Prize for Chemistry in 1915 for these achievements. Willstätter’s work laid the ground for the synthetic accomplishments of Hans Fischer (see Section 3.8). In addition, Willstätter contributed to the understanding of enzyme reactions.

The prizes for 1927 and 1928 were presented to Heinrich Otto Wieland from Munich and Adolf Windaus from Göttingen, respectively, at the Nobel ceremony in 1928. These two chemists had done closely related work on the structure of steroids. The award to Wieland was primarily for his investigations of bile acids, whereas Windaus was recognized mainly for his work on cholesterol and his demonstration of the steroid nature of vitamin D. As early as 1912, before his prize-winning work, Wieland had formulated a theory of biological oxidation, according to which removal of hydrogen (dehydrogenation), rather than reaction with oxygen, is the dominating process.

Investigations on vitamins were recognized in 1937 and 1938 with the prizes to Sir Norman Haworth from Birmingham and Paul Karrer from Zürich and to Richard Kuhn from Heidelberg. Haworth did outstanding work in carbohydrate chemistry, establishing the ring structure of glucose. He was the first chemist to synthesize vitamin C, and this is the basis for the present large-scale production of this nutrient. Haworth shared the prize with Karrer, who determined the structure of carotene and of vitamin A. Kuhn also worked on carotenoids, and he published the structure of vitamin B2 at the same time as Karrer. He also isolated vitamin B6. In 1939 the Nobel Prize for Chemistry was shared between Adolf Butenandt from Berlin and Leopold Ruzicka (1887-1976) of ETH, Zurich. Butenandt was recognized “for his work on sex hormones”, having isolated estrone, progesterone and androsterone. Ruzicka synthesized androsterone and also testosterone.

The awards for outstanding work in natural-product chemistry continued after World War II. In 1947 Sir Robert Robinson from Oxford received the prize for his studies on plant substances, particularly alkaloids, such as morphine. Robinson also synthesized steroid hormones, and he elucidated the structure of penicillin. Many hormones are of a polypeptide nature, and in 1955 Vincent du Vigneaud of Cornell University was given the prize for his synthesis of two such hormones, vasopressin and oxytocin. Finally, in this area, Alexander R. Todd (Lord Todd since 1962) was recognized in 1957 “for his work on nucleotides and nucleotide co-enzymes”. Todd had synthesized ATP (adenosine triphosphate) and ADP (adenosine diphosphate), the main energy carriers in living cells, and he determined the structure of vitamin B12 (cf. Section 3.5) and of FAD (flavin-adenine dinucleotide).

3.10 Analytical Chemistry and Separation Science

A prize in analytical chemistry was given to Jaroslav Heyrovsky from Prague in 1959 for his development of polarographic methods of analysis. In these a dropping mercury electrode is employed to determine current-voltage curves for electrolytes. A given ion reacts at a specific voltage, and the current is a measure of the concentration of this ion.

The analysis of macromolecular constituents of living organisms requires specialized methods of separation. Ultracentrifugation was developed by The Svedberg from Uppsala a few years before he was awarded the Nobel Prize for Chemistry in 1926 “for his work on disperse systems” (see Section 3.11). Svedberg’s student, Arne Tiselius, studied the migration of protein molecules in an electric field, and with this method, named electrophoresis, he demonstrated the complex nature of blood proteins. Tiselius also refined adsorption analysis, a method first used by the Russian botanist Michail Tswett for the separation of plant pigments, and named chromatography by him. In 1948 Tiselius was given the prize for these achievements. A few years later (1952) Archer J.P. Martin from London and Richard L.M. Synge from Bucksburn (Scotland) shared the prize “for their invention of partition chromatography”, a method that became a major tool in many biochemical investigations later recognized with Nobel Prizes (see Section 3.12).

3.11 Polymers and Colloids

The Svedberg, who received the Nobel Prize for Chemistry in 1926, also investigated gold sols. He used Zsigmondy’s ultramicroscope to study the Brownian movement of colloidal particles, named after the Scottish botanist Robert Brown, and confirmed a theory developed by Albert Einstein in 1905 and, independently, by M. Smoluchowski. His greatest achievement was, however, the construction of the ultracentrifuge, with which he not only studied the particle size distribution in gold sols but also determined the molecular weights of proteins, for example hemoglobin. In the same year that Svedberg received his prize, the Nobel Prize for Physics was awarded to Jean Baptiste Perrin of the Sorbonne for developing equilibrium sedimentation in colloidal solutions, a method that Svedberg later perfected in his ultracentrifuge. Svedberg’s investigations with the ultracentrifuge and Tiselius’s electrophoresis studies (see Section 3.10) were instrumental in establishing that protein molecules have a unique size and structure, and this was a prerequisite for Sanger’s determination of their amino-acid sequence and the crystallographic work of Kendrew and Perutz (see Section 3.5).

3.12 Biochemistry

The second Nobel Prize for discoveries in biochemistry came in 1929, when Sir Arthur Harden from London and Hans von Euler-Chelpin from Stockholm shared the prize for investigations of sugar fermentation, which formed a direct continuation of Buchner’s work awarded in 1907. With his young co-worker, William John Young, Harden had shown in 1906 that fermentation requires a dialysable substance, called co-zymase, which is not destroyed by heat. Harden and Young also demonstrated that the process stops before all the sugar (glucose) has been used up but starts again on addition of inorganic phosphate, and they suggested that hexose phosphates are formed in the early steps of fermentation. Von Euler had done important work on the structure of co-zymase, shown to be nicotinamide adenine dinucleotide (NAD, earlier called DPN). Since a prize can be shared by three Laureates, it may seem that Young should have been included in the award; but von Euler’s discovery had been published together with Karl Myrbäck, and the number of Laureates is limited to three.

The next biochemical Nobel Prize was given in 1946 for work in the protein field. James B. Sumner of Cornell University received half the prize “for his discovery that enzymes can be crystallized” and John H. Northrop together with Wendell M. Stanley, both of the Rockefeller Institute, shared the other half “for their preparation of enzymes and virus proteins in a pure form”. Sumner had crystallized an enzyme, urease, from jack beans in 1926 and suggested that the crystals were the pure protein. His claim was, however, greeted with great scepticism, the crystals being dismissed as inorganic salts with the enzyme adsorbed or occluded. Just a few years after Sumner’s discovery, however, Northrop managed to crystallize three digestive enzymes, pepsin, trypsin and chymotrypsin, and by painstaking experiments showed them to be pure proteins. Stanley began his attempts to purify virus proteins in the 1930s, but not until 1945 did he obtain virus crystals, which then made it possible to show that viruses are complexes of protein and nucleic acid. The pioneering studies of these three investigators form the basis for the enormous number of crystal structures of biological macromolecules published in the second half of the 20th century (cf. Section 3.5).

Several Nobel Prizes for Chemistry have been awarded for work on photosynthesis and respiration, the two main processes in the energy metabolism of living organisms (cf. Section 3.5). In 1961 Melvin Calvin of Berkeley received the prize for elucidating carbon dioxide assimilation in plants. With the aid of carbon-14 (cf. Section 3.6) Calvin had shown that carbon dioxide is fixed in a cyclic process involving several enzymes. Peter Mitchell of the Glynn Research Laboratories in England was awarded the prize in 1978 for his formulation of the chemiosmotic theory. According to this theory, electron transfer (cf. Sections 3.3 and 3.4) in the membrane-bound enzyme complexes of both respiration and photosynthesis is coupled to proton translocation across the membranes, and the electrochemical gradient thus created is used to drive the synthesis of ATP (adenosine triphosphate), the energy storage molecule in all living cells. Paul D. Boyer of UCLA and John E. Walker of the MRC Laboratory in Cambridge shared one half of the 1997 prize for their elucidation of the mechanism of ATP synthesis; the other half went to Jens C. Skou in Aarhus for the first discovery of an ion-transporting enzyme. Walker had determined the crystal structure of ATP synthase, and this structure confirmed a mechanism earlier proposed by Boyer, mainly on the basis of isotopic studies.

Luis F. Leloir from Buenos Aires was awarded the prize in 1970 “for the discovery of sugar nucleotides and their role in the biosynthesis of carbohydrates”. In particular, Leloir had elucidated the biosynthesis of glycogen, the chief sugar reserve in animals and many microorganisms. Two years later one half of the prize went to Christian B. Anfinsen of NIH, and the other half was shared by Stanford Moore and William H. Stein, both of Rockefeller University, for fundamental work in protein chemistry. Anfinsen had shown, with the enzyme ribonuclease, that the information needed for a protein to assume a specific three-dimensional structure is inherent in its amino-acid sequence, and this discovery was the starting point for studies of the mechanism of protein folding, one of the major areas of present-day biochemical research. Moore and Stein had determined the amino-acid sequence of ribonuclease, but they received the prize for discovering anomalous properties of functional groups in the enzyme’s active site, a result of the protein fold.

Naturally a number of Nobel Prizes for Chemistry have been given for work in the nucleic acid field. In 1980 Paul Berg of Stanford received one half of the prize for studies of recombinant DNA, i.e. a molecule containing parts of DNA from different species, and the other half was shared by Walter Gilbert from Harvard and Frederick Sanger (see Section 3.5) for developing methods for the determination of the base sequences of nucleic acids. Berg’s work provides the basis of genetic engineering, which has led to the large biotechnology industry. Base sequence determinations are essential steps in recombinant-DNA technology, which is the rationale for Gilbert and Sanger sharing the prize with Berg.

Sidney Altman of Yale and Thomas R. Cech of the University of Colorado shared the prize in 1989 “for their discovery of the catalytic properties of RNA”. The central dogma of molecular biology is: DNA –> RNA –> enzyme. The discovery that not only enzymes but also RNA possesses catalytic properties has led to new ideas about the origin of life. The 1993 prize was shared by Kary B. Mullis from La Jolla and Michael Smith from Vancouver, both of whom made important contributions to DNA technology. Mullis developed the PCR (“polymerase chain reaction”) technique, which makes it possible to replicate a specific DNA segment in a complicated genetic material millions of times. Smith’s work forms the basis for site-directed mutagenesis, a technique by which it is possible to change a specific amino acid in a protein and thereby illuminate its functional role.
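The power of PCR lies in its geometry: each thermal cycle can double the target segment, so n cycles yield up to 2^n copies. A back-of-the-envelope sketch (the function name and efficiency parameter are illustrative, not part of any standard library):

```python
def pcr_copies(initial_copies, cycles, efficiency=1.0):
    # Each cycle multiplies the target count by (1 + efficiency);
    # efficiency = 1.0 is the ideal perfect-doubling case,
    # while real reactions run somewhat below that.
    return initial_copies * (1.0 + efficiency) ** cycles

# A single template molecule after 30 ideal cycles: 2**30 copies.
print(int(pcr_copies(1, 30)))  # 1073741824 — over a billion copies
```

This exponential growth is what lets PCR pick a single sequence out of a complicated genetic background and amplify it to easily detectable quantities.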

4. Concluding Remarks

The first eighty years of Nobel Prizes for Chemistry outline the development of modern chemistry. The prizes cover a broad spectrum of the basic chemical sciences, from theoretical chemistry to biochemistry, as well as a number of contributions to applied chemistry. Organic chemistry dominates with no less than 25 awards. This is not surprising, since the special valence properties of carbon result in an almost infinite variation in the structure of organic compounds. Also, a large number of the prizes in organic chemistry were given for investigations of natural products of increasing complexity, which have led to pharmaceutical development.

As many as 11 prizes have been awarded for biochemical discoveries. The first biochemical prize was given as early as 1907 (Buchner), but only three awards in this area came in the first half of the century, illustrating the explosive growth of biochemistry in recent decades (8 prizes in 1970-1997). At the other end of the chemical spectrum, physical chemistry, including chemical thermodynamics and kinetics, accounts for 14 prizes, and there have also been 6 prizes in theoretical chemistry. Chemical structure is a large area with 8 prizes, including awards for methodological developments as well as for the determination of the structure of large biological molecules or molecular complexes. Industrial chemistry was first recognized in 1931 (Bergius, Bosch), but many more recent prizes for basic contributions lie close to industrial applications.

Read Full Post »

Summary and Perspectives: Impairments in Pathological States: Endocrine Disorders, Stress Hypermetabolism and Cancer

Summary and Perspectives: Impairments in Pathological States: Endocrine Disorders, Stress Hypermetabolism and Cancer

Author and Curator: Larry H. Bernstein, MD, FCAP

Article ID #160: Summary and Perspectives: Impairments in Pathological States: Endocrine Disorders, Stress Hypermetabolism and Cancer. Published on 11/9/2014

WordCloud Image Produced by Adam Tubman

This summary is the last of a series on the impact of transcriptomics, proteomics, and metabolomics on disease investigation, and on the sorting and integration of genomic and metabolic signatures to explain phenotypic relationships in the variability and individuality of responses to disease expression, and how this leads to pharmaceutical discovery and personalized medicine. We unquestionably have better tools at our disposal than have ever existed in the history of mankind, and an enormous knowledge base that has to be accessed. I shall conclude these discussions with current knowledge of biochemistry, metabolism, protein interactions, and signaling, and the application of the -OMICS to disease and drug discovery.

The Ever-Transcendent Cell

Deriving physiologic first principles By John S. Torday | The Scientist Nov 1, 2014
http://www.the-scientist.com/?articles.view/articleNo/41282/title/The-Ever-Transcendent-Cell/

Both the developmental and phylogenetic histories of an organism describe the evolution of physiology—the complex of metabolic pathways that govern the function of an organism as a whole. The necessity of establishing and maintaining homeostatic mechanisms began at the cellular level, with the very first cells, and homeostasis provides the underlying selection pressure fueling evolution.

While the events leading to the formation of the first functioning cell are debatable, a critical one was certainly the formation of simple lipid-enclosed vesicles, which provided a protected space for the evolution of metabolic pathways. Protocells evolved from a common ancestor that experienced environmental stresses early in the history of cellular development, such as acidic ocean conditions and low atmospheric oxygen levels, which shaped the evolution of metabolism.

The reduction of evolution to cell biology may answer the perennially unresolved question of why organisms return to their unicellular origins during the life cycle.

As primitive protocells evolved to form prokaryotes and, much later, eukaryotes, changes to the cell membrane occurred that were critical to the maintenance of chemiosmosis, the generation of bioenergy through the partitioning of ions. The incorporation of cholesterol into the plasma membrane surrounding primitive eukaryotic cells marked the beginning of their differentiation from prokaryotes. Cholesterol imparted more fluidity to eukaryotic cell membranes, enhancing functionality by increasing motility and endocytosis. Membrane deformability also allowed for increased gas exchange.

Acidification of the oceans by atmospheric carbon dioxide generated high intracellular calcium ion concentrations in primitive aquatic eukaryotes, which had to be lowered to prevent toxic effects, namely the aggregation of nucleotides, proteins, and lipids. The early cells achieved this by the evolution of calcium channels composed of cholesterol embedded within the cell’s plasma membrane, and of internal membranes, such as that of the endoplasmic reticulum, peroxisomes, and other cytoplasmic organelles, which hosted intracellular chemiosmosis and helped regulate calcium.

As eukaryotes thrived, they experienced increasingly competitive pressure for metabolic efficiency. Engulfed bacteria, assimilated as mitochondria, provided more bioenergy. As the evolution of eukaryotic organisms progressed, metabolic cooperation evolved, perhaps to enable competition with biofilm-forming, quorum-sensing prokaryotes. The subsequent appearance of multicellular eukaryotes expressing cellular growth factors and their respective receptors facilitated cell-cell signaling, forming the basis for an explosion of multicellular eukaryote evolution, culminating in the metazoans.

Casting a cellular perspective on evolution highlights the integration of genotype and phenotype. Starting from the protocell membrane, the functional homolog for all complex metazoan organs, it offers a way of experimentally determining the role of genes that fostered evolution based on the ontogeny and phylogeny of cellular processes that can be traced back, in some cases, to our last universal common ancestor.  ….

Given that the unicellular toolkit is complete with all the traits necessary for forming multicellular organisms (Science, 301:361-63, 2003), it is distinctly possible that metazoans are merely permutations of the unicellular body plan. That scenario would clarify a lot of puzzling biology: molecular commonalities between the skin, lung, gut, and brain that affect physiology and pathophysiology exist because the cell membranes of unicellular organisms perform the equivalents of these tissue functions, and the existence of pleiotropy—one gene affecting many phenotypes—may be a consequence of the common unicellular source for all complex biologic traits.  …

The cell-molecular homeostatic model for evolution and stability addresses how the external environment generates homeostasis developmentally at the cellular level. It also determines homeostatic set points in adaptation to the environment through specific effectors, such as growth factors and their receptors, second messengers, inflammatory mediators, crossover mutations, and gene duplications. This is a highly mechanistic, heritable, plastic process that lends itself to understanding evolution at the cellular, tissue, organ, system, and population levels, mediated by physiologically linked mechanisms throughout, without having to invoke random, chance mechanisms to bridge different scales of evolutionary change. In other words, it is an integrated mechanism that can often be traced all the way back to its unicellular origins.

The switch from swim bladder to lung as vertebrates moved from water to land is proof of principle that stress-induced evolution in metazoans can be understood from changes at the cellular level.

http://www.the-scientist.com/Nov2014/TE_21.jpg

A MECHANISTIC BASIS FOR LUNG DEVELOPMENT: Stress from periodic atmospheric hypoxia (1) during vertebrate adaptation to land enhances positive selection of the stretch-regulated parathyroid hormone-related protein (PTHrP) in the pituitary and adrenal glands. In the pituitary (2), PTHrP signaling upregulates the release of adrenocorticotropic hormone (ACTH) (3), which stimulates the release of glucocorticoids (GC) by the adrenal gland (4). In the adrenal gland, PTHrP signaling also stimulates glucocorticoid production of adrenaline (5), which in turn affects the secretion of lung surfactant, the distension of alveoli, and the perfusion of alveolar capillaries (6). PTHrP signaling integrates the inflation and deflation of the alveoli with surfactant production and capillary perfusion.  THE SCIENTIST STAFF

From a cell-cell signaling perspective, two critical duplications in genes coding for cell-surface receptors occurred during this period of water-to-land transition—in the stretch-regulated parathyroid hormone-related protein (PTHrP) receptor gene and the β adrenergic (βA) receptor gene. These gene duplications can be disassembled by following their effects on vertebrate physiology backwards over phylogeny. PTHrP signaling is necessary for traits specifically relevant to land adaptation: calcification of bone, skin barrier formation, and the inflation and distention of lung alveoli. Microvascular shear stress in PTHrP-expressing organs such as bone, skin, kidney, and lung would have favored duplication of the PTHrP receptor, since shear stress generates reactive oxygen species (ROS) known to have this effect and PTHrP is a potent vasodilator, acting as an epistatic balancing selection for this constraint.

Positive selection for PTHrP signaling also evolved in the pituitary and adrenal cortex (see figure on this page), stimulating the secretion of ACTH and corticoids, respectively, in response to the stress of land adaptation. This cascade amplified adrenaline production by the adrenal medulla, since corticoids passing through it enzymatically stimulate adrenaline synthesis. Positive selection for this functional trait may have resulted from hypoxic stress that arose during global episodes of atmospheric hypoxia over geologic time. Since hypoxia is the most potent physiologic stressor, such transient oxygen deficiencies would have been acutely alleviated by increasing adrenaline levels, which would have stimulated alveolar surfactant production, increasing gas exchange by facilitating the distension of the alveoli. Over time, increased alveolar distension would have generated more alveoli by stimulating PTHrP secretion, impelling evolution of the alveolar bed of the lung.

This scenario similarly explains βA receptor gene duplication, since increased density of βA receptors within the alveolar walls was necessary to relieve another constraint during the evolution of the lung in adaptation to land: the bottleneck created by a common mechanism controlling blood pressure in both the lung alveoli and the systemic circulation. The pulmonary vasculature was constrained by its ability to withstand the swings in pressure caused by the systemic perfusion necessary to sustain all the other vital organs. PTHrP is a potent vasodilator, subserving the blood pressure constraint, but eventually the βA receptors evolved to coordinate blood pressure in both the lung and the periphery.

Gut Microbiome Heritability

Analyzing data from a large twin study, researchers have homed in on how host genetics can shape the gut microbiome.
By Tracy Vence | The Scientist Nov 6, 2014

Previous research suggested host genetic variation can influence microbial phenotype, but an analysis of data from a large twin study published in Cell today (November 6) solidifies the connection between human genotype and the composition of the gut microbiome. Studying more than 1,000 fecal samples from 416 monozygotic and dizygotic twin pairs, Cornell University’s Ruth Ley and her colleagues have homed in on one bacterial taxon, the family Christensenellaceae, as the most highly heritable group of microbes in the human gut. The researchers also found that Christensenellaceae—which was first described just two years ago—is central to a network of co-occurring heritable microbes that is associated with lean body mass index (BMI).  …
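Twin designs gauge heritability by comparing trait correlations in monozygotic (MZ) twins, who share essentially all their DNA, with those in dizygotic (DZ) twins, who share about half. A classical rough estimate is Falconer’s formula, h² = 2(r_MZ − r_DZ). The sketch below is illustrative; the correlation values are hypothetical, not figures from the Goodrich et al. study:

```python
def falconer_h2(r_mz, r_dz):
    # Falconer's formula: additive genetic effects account for twice
    # the excess similarity of MZ twins over DZ twins.
    return 2.0 * (r_mz - r_dz)

# Hypothetical abundance correlations for a gut bacterial taxon:
print(round(falconer_h2(0.60, 0.35), 2))  # 0.5
```

When the MZ and DZ correlations are nearly equal, the estimate approaches zero, indicating that shared environment rather than host genotype drives twin similarity, which is the pattern reported for most gut taxa other than the highly heritable Christensenellaceae.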

Of particular interest was the family Christensenellaceae, which was the most heritable taxon among those identified in the team’s analysis of fecal samples obtained from the TwinsUK study population.

While microbiologists had previously detected 16S rRNA sequences belonging to Christensenellaceae in the human microbiome, the family wasn’t named until 2012. “People hadn’t looked into it, partly because it didn’t have a name . . . it sort of flew under the radar,” said Ley.

Ley and her colleagues discovered that Christensenellaceae appears to be the hub in a network of co-occurring heritable taxa, which—among TwinsUK participants—was associated with low BMI. The researchers also found that Christensenellaceae had been found at greater abundance in low-BMI twins in older studies.

To interrogate the effects of Christensenellaceae on host metabolic phenotype, Ley’s team introduced lean and obese human fecal samples into germ-free mice. Animals that received lean fecal samples containing more Christensenellaceae showed reduced weight gain compared with their counterparts. And treatment of mice that had obesity-associated microbiomes with one member of the Christensenellaceae family, Christensenella minuta, led to reduced weight gain.   …

Ley and her colleagues are now focusing on the host alleles underlying the heritability of the gut microbiome. “We’re running a genome-wide association analysis to try to find genes—particular variants of genes—that might associate with higher levels of these highly heritable microbiota.  . . . Hopefully that will point us to possible reasons they’re heritable,” she said. “The genes will guide us toward understanding how these relationships are maintained between host genotype and microbiome composition.”

J.K. Goodrich et al., “Human genetics shape the gut microbiome,” Cell,  http://dx.doi.org:/10.1016/j.cell.2014.09.053, 2014.

Light-Operated Drugs

Scientists create a photosensitive pharmaceutical to target a glutamate receptor.
By Ruth Williams | The Scientist Nov 1, 2014
http://www.the-scientist.com/?articles.view/articleNo/41279/title/Light-Operated-Drugs/

http://www.the-scientist.com/Nov2014/MO1.jpg

The desire for temporal and spatial control of medications to minimize side effects and maximize benefits has inspired the development of light-controllable drugs, or optopharmacology. Early versions of such drugs have manipulated ion channels or protein-protein interactions, “but never, to my knowledge, G protein–coupled receptors [GPCRs], which are one of the most important pharmacological targets,” says Pau Gorostiza of the Institute for Bioengineering of Catalonia, in Barcelona.

Gorostiza has taken the first step toward filling that gap, creating a photosensitive inhibitor of the metabotropic glutamate 5 (mGlu5) receptor—a GPCR expressed in neurons and implicated in a number of neurological and psychiatric disorders. The new mGlu5 inhibitor—called alloswitch-1—is based on a known mGlu receptor inhibitor, but the simple addition of a light-responsive appendage, as had been done for other photosensitive drugs, wasn’t an option. The binding site on mGlu5 is “extremely tight,” explains Gorostiza, and would not accommodate a differently shaped molecule. Instead, alloswitch-1 has an intrinsic light-responsive element.

In a human cell line, the drug was active under dim light conditions, switched off by exposure to violet light, and switched back on by green light. When Gorostiza’s team administered alloswitch-1 to tadpoles, switching between violet and green light made the animals stop and start swimming, respectively.

The fact that alloswitch-1 is constitutively active and switched off by light is not ideal, says Gorostiza. “If you are thinking of therapy, then in principle you would prefer the opposite,” an “on” switch. Indeed, tweaks are required before alloswitch-1 could be a useful drug or research tool, says Stefan Herlitze, who studies ion channels at Ruhr-Universität Bochum in Germany. But, he adds, “as a proof of principle it is great.” (Nat Chem Biol, http://dx.doi.org/10.1038/nchembio.1612, 2014)
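
The switching behavior described above can be sketched as a tiny state machine: constitutively active in dim light, inactivated by violet light, reactivated by green. This is purely illustrative; the wavelength ranges are generic violet/green bands, not values from the paper.

```python
# Illustrative sketch of alloswitch-1's reported on/off behaviour: active by
# default (dim light), switched off by violet light, switched back on by green.
# Wavelength windows below are generic assumptions, not measured values.

class Alloswitch:
    def __init__(self):
        self.active = True                 # constitutively active in dim light

    def illuminate(self, wavelength_nm):
        if 380 <= wavelength_nm <= 450:    # violet band: switch drug off
            self.active = False
        elif 495 <= wavelength_nm <= 570:  # green band: switch drug back on
            self.active = True
        return self.active

drug = Alloswitch()
print(drug.illuminate(405))  # violet light -> False (inactive)
print(drug.illuminate(530))  # green light  -> True  (active again)
```

In the tadpole experiment, this corresponds to the animals stopping under violet light and resuming swimming under green light.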

Enhanced Enhancers

The recent discovery of super-enhancers may offer new drug targets for a range of diseases.
By Eric Olson | The Scientist Nov 1, 2014
http://www.the-scientist.com/?articles.view/articleNo/41281/title/Enhanced-Enhancers/

To understand disease processes, scientists often focus on unraveling how gene expression in disease-associated cells is altered. Increases or decreases in transcription—as dictated by a regulatory stretch of DNA called an enhancer, which serves as a binding site for transcription factors and associated proteins—can produce an aberrant composition of proteins, metabolites, and signaling molecules that drives pathologic states. Identifying the root causes of these changes may lead to new therapeutic approaches for many different diseases.

Although few therapies for human diseases aim to alter gene expression, the outstanding examples—including antiestrogens for hormone-positive breast cancer, antiandrogens for prostate cancer, and PPAR-γ agonists for type 2 diabetes—demonstrate the benefits that can be achieved through targeting gene-control mechanisms.  Now, thanks to recent papers from laboratories at MIT, Harvard, and the National Institutes of Health, researchers have a new, much bigger transcriptional target: large DNA regions known as super-enhancers or stretch-enhancers. Already, work on super-enhancers is providing insights into how gene-expression programs are established and maintained, and how they may go awry in disease.  Such research promises to open new avenues for discovering medicines for diseases where novel approaches are sorely needed.

Super-enhancers cover stretches of DNA that are 10- to 100-fold longer and about 10-fold less abundant in the genome than typical enhancer regions (Cell, 153:307-19, 2013). They also appear to bind a large percentage of the transcriptional machinery compared to typical enhancers, allowing them to better establish and enforce cell-type specific transcriptional programs (Cell, 153:320-34, 2013).
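
Super-enhancers are typically called by ranking all enhancers by signal (e.g., H3K27ac ChIP-seq reads) and finding the elbow of the resulting "hockey stick" curve, as in the ROSE approach from the papers cited above. Below is a minimal sketch of that idea with invented signal values; the cutoff heuristic (where the scaled curve's slope first exceeds 1) is a simplified stand-in for the published method.

```python
# Sketch of "hockey stick" super-enhancer calling: rank enhancers by signal,
# scale both axes to [0, 1], and call everything past the point where the
# curve's slope exceeds the diagonal. Signal values are invented.

def call_super_enhancers(signals):
    """Return the (few) signals above the elbow of the ranked-signal curve."""
    ranked = sorted(signals)
    n = len(ranked)
    max_sig = ranked[-1]
    dx = 1.0 / (n - 1)                 # rank step, scaled to [0, 1]
    cutoff_index = n - 1
    for i in range(1, n):
        dy = (ranked[i] - ranked[i - 1]) / max_sig
        if dy / dx > 1.0:              # slope exceeds 1: start of the elbow
            cutoff_index = i
            break
    return ranked[cutoff_index:]

# Typical profile: many weak enhancers, a handful of very strong ones.
signals = [10, 12, 15, 18, 20, 22, 25, 30, 35, 40, 400, 900, 1500]
print(call_super_enhancers(signals))  # the three strongest stand apart
```

The steep tail of the curve is what makes super-enhancers rare (about 10-fold less abundant) yet signal-rich relative to typical enhancers.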

Super-enhancers are closely associated with genes that dictate cell identity, including those for cell-type–specific master regulatory transcription factors. This observation led to the intriguing hypothesis that cells with a pathologic identity, such as cancer cells, have an altered gene expression program driven by the loss, gain, or altered function of super-enhancers.

Sure enough, by mapping the genome-wide location of super-enhancers in several cancer cell lines and from patients’ tumor cells, we and others have demonstrated that genes located near super-enhancers are involved in processes that underlie tumorigenesis, such as cell proliferation, signaling, and apoptosis.

Genome-wide association studies (GWAS) have found that disease- and trait-associated genetic variants often occur in greater numbers in super-enhancers (compared to typical enhancers) in cell types involved in the disease or trait of interest (Cell, 155:934-47, 2013). For example, an enrichment of fasting glucose–associated single nucleotide polymorphisms (SNPs) was found in the stretch-enhancers of pancreatic islet cells (PNAS, 110:17921-26, 2013). Given that some 90 percent of reported disease-associated SNPs are located in noncoding regions, super-enhancer maps may be extremely valuable in assigning functional significance to GWAS variants and identifying target pathways.
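
The enrichment claim above reduces to a simple calculation: compare the density of trait-associated SNPs inside super-enhancers against the density in typical enhancers, normalized by the genomic footprint of each class. A minimal sketch with invented counts:

```python
# Sketch of SNP enrichment in super-enhancers vs. typical enhancers: fold
# enrichment is the ratio of SNP densities (SNPs per base) between the two
# element classes. All counts below are invented for illustration.

def fold_enrichment(snps_in, bases_in, snps_out, bases_out):
    return (snps_in / bases_in) / (snps_out / bases_out)

# Super-enhancers: fewer SNPs in absolute terms, but far less total sequence.
fe = fold_enrichment(snps_in=30, bases_in=2e6, snps_out=60, bases_out=20e6)
print(fe)  # ~5-fold enrichment
```

A real analysis would add a permutation or binomial test; the density ratio alone conveys why noncoding GWAS hits concentrate in these regions.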

Because only 1 to 2 percent of active genes are physically linked to a super-enhancer, mapping the locations of super-enhancers can be used to pinpoint the small number of genes that may drive the biology of that cell. Differential super-enhancer maps that compare normal cells to diseased cells can be used to unravel the gene-control circuitry and identify new molecular targets, in much the same way that somatic mutations in tumor cells can point to oncogenic drivers in cancer. This approach is especially attractive in diseases for which an incomplete understanding of the pathogenic mechanisms has been a barrier to discovering effective new therapies.

Another therapeutic approach could be to disrupt the formation or function of super-enhancers by interfering with their associated protein components. This strategy could make it possible to downregulate multiple disease-associated genes through a single molecular intervention. A group of Boston-area researchers recently published support for this concept when they described inhibited expression of cancer-specific genes, leading to a decrease in cancer cell growth, by using a small molecule inhibitor to knock down a super-enhancer component called BRD4 (Cancer Cell, 24:777-90, 2013).  More recently, another group showed that expression of the RUNX1 transcription factor, involved in a form of T-cell leukemia, can be diminished by treating cells with an inhibitor of a transcriptional kinase that is present at the RUNX1 super-enhancer (Nature, 511:616-20, 2014).

Fungal effector Ecp6 outcompetes host immune receptor for chitin binding through intrachain LysM dimerization 
Andrea Sánchez-Vallet, et al. eLife 2013;2:e00790 http://elifesciences.org/content/2/e00790


While host immune receptors

  • detect pathogen-associated molecular patterns to activate immunity,
  • pathogens attempt to deregulate host immunity through secreted effectors.

Fungi employ LysM effectors to prevent

  • recognition of cell wall-derived chitin by host immune receptors

Structural analysis of the LysM effector Ecp6 of

  • the fungal tomato pathogen Cladosporium fulvum reveals
  • a novel mechanism for chitin binding,
  • mediated by intrachain LysM dimerization,

leading to a chitin-binding groove that is deeply buried in the effector protein.

This composite binding site involves

  • two of the three LysMs of Ecp6 and
  • mediates chitin binding with ultra-high (pM) affinity.

The remaining singular LysM domain of Ecp6 binds chitin with

  • low micromolar affinity but can nevertheless still perturb chitin-triggered immunity.

Conceivably, the perturbation by this LysM domain is not established through chitin sequestration but possibly through interference with the host immune receptor complex.
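
The competitive advantage of a pM-affinity site over a µM-affinity receptor follows directly from the equilibrium occupancy relation f = [L] / (Kd + [L]). A short sketch with assumed concentrations (the nM chitin level and the exact Kd values here are illustrative, not from the paper):

```python
# Fractional occupancy at equilibrium, f = [L] / (Kd + [L]), showing why an
# effector with pM affinity sequesters nearly all free chitin while a uM-
# affinity host receptor stays almost empty. Concentrations are assumed.

def fraction_bound(ligand_conc_molar, kd_molar):
    return ligand_conc_molar / (kd_molar + ligand_conc_molar)

chitin = 1e-9                                # 1 nM free chitin (assumed)
effector = fraction_bound(chitin, 1e-12)     # Ecp6 composite site, pM Kd
receptor = fraction_bound(chitin, 1e-6)      # host LysM receptor, uM Kd
print(f"Ecp6 occupancy: {effector:.3f}, receptor occupancy: {receptor:.6f}")
```

At these values the effector is essentially saturated while the receptor binds less than 0.1% of available ligand, which is the sequestration argument in quantitative form.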

Mutated Genes in Schizophrenia Map to Brain Networks
From www.nih.gov –  Sep 3, 2013

Previous studies have shown that many people with schizophrenia have de novo, or new, genetic mutations. These misspellings in a gene’s DNA sequence

  • occur spontaneously and so aren’t shared by their close relatives.

Dr. Mary-Claire King of the University of Washington in Seattle and colleagues set out to

  • identify spontaneous genetic mutations in people with schizophrenia and
  • to assess where and when in the brain these misspelled genes are turned on, or expressed.

The study was funded in part by NIH’s National Institute of Mental Health (NIMH). The results were published in the August 1, 2013, issue of Cell.

The researchers sequenced the exomes (protein-coding DNA regions) of 399 people—105 with schizophrenia plus their unaffected parents and siblings. Gene variations
that were found in a person with schizophrenia but not in either parent were considered spontaneous.
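
The trio-based definition of a spontaneous (de novo) variant used above is simple set logic: present in the proband, absent from both parents. A toy sketch (variant IDs are invented; real pipelines work on VCF genotypes with quality filters):

```python
# Minimal trio-based de novo filter: a variant carried by the affected person
# (proband) but found in neither parent is flagged as spontaneous. Genotypes
# are toy sets of variant identifiers, not real sequencing calls.

def de_novo_variants(proband, mother, father):
    return proband - (mother | father)

proband = {"chr1:1000A>G", "chr5:2300C>T", "chr7:900G>A"}
mother  = {"chr1:1000A>G"}
father  = {"chr7:900G>A"}
print(de_novo_variants(proband, mother, father))  # {'chr5:2300C>T'}
```

The downstream damaging-mutation analysis in the study then asks which of these de novo calls are predicted to disrupt the encoded protein.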

The likelihood of having a spontaneous mutation was associated with

  • the age of the father in both affected and unaffected siblings.

Significantly more mutations were found in people

  • whose fathers were 33-45 years at the time of conception compared to 19-28 years.

Among people with schizophrenia, the scientists identified

  • 54 genes with spontaneous mutations
  • predicted to cause damage to the function of the protein they encode.

The researchers used newly available database resources that show

  • where in the brain and when during development genes are expressed.

The genes form an interconnected expression network with many more connections than

  • the network formed by genes with spontaneous damaging mutations in unaffected siblings.

The spontaneously mutated genes in people with schizophrenia

  • were expressed in the prefrontal cortex, a region in the front of the brain.

The genes are known to be involved in important pathways in brain development. Fifty of these genes were active

  • mainly during the period of fetal development.

“Processes critical for the brain’s development can be revealed by the mutations that disrupt them,” King says. “Mutations can lead to loss of integrity of a whole pathway,
not just of a single gene.”

These findings support the concept that schizophrenia may result, in part, from

  • disruptions in development in the prefrontal cortex during fetal development.

James E. Darnell’s “Reflections”

A brief history of the discovery of RNA and its role in transcription — peppered with career advice
By Joseph P. Tiano

James Darnell begins his Journal of Biological Chemistry “Reflections” article by saying, “graduate students these days

  • have to swim in a sea virtually turgid with the daily avalanche of new information and
  • may be momentarily too overwhelmed to listen to the aging.

I firmly believe how we learned what we know can provide useful guidance for how and what a newcomer will learn.” Considering his remarkable discoveries in

  • RNA processing and eukaryotic transcriptional regulation

spanning 60 years of research, Darnell’s advice should be cherished. In his second year at medical school at Washington University School of Medicine in St. Louis, while
studying streptococcal disease in Robert J. Glaser’s laboratory, Darnell realized he “loved doing the experiments” and had his first “career advancement event.”
He and technician Barbara Pesch discovered that in vivo penicillin treatment killed streptococci only in the exponential growth phase and not in the stationary phase. These
results were published in the Journal of Clinical Investigation and earned Darnell an interview with Harry Eagle at the National Institutes of Health.

Darnell arrived at the NIH in 1956, shortly after Eagle shifted his research interest to developing his minimal essential cell culture medium, which is still used today. Eagle, then studying cell metabolism, suggested that Darnell take up a side project on poliovirus replication in mammalian cells in collaboration with Robert I. DeMars. DeMars’ Ph.D.
adviser was also James Watson’s mentor, so Darnell met Watson, who invited him to give a talk at Harvard University, which led to an assistant professor position
at MIT under Salvador Luria. A take-home message is to embrace side projects, because you never know where they may lead: this project helped to shape
his career.

Darnell arrived in Boston in 1961. Following the discovery of DNA’s structure in 1953, the world of molecular biology was turning to RNA in an effort to understand how
proteins are made. Darnell’s background in virology (it was discovered in 1960 that viruses used RNA to replicate) was ideal for the aim of his first independent lab:
exploring mRNA in animal cells grown in culture. While at MIT, he developed a new technique for purifying RNA along with making other observations

  • suggesting that nonribosomal cytoplasmic RNA may be involved in protein synthesis.

When Darnell moved to Albert Einstein College of Medicine for a full professorship in 1964, it was hypothesized that heterogeneous nuclear RNA was a precursor to mRNA.
At Einstein, Darnell discovered RNA processing of pre-tRNAs and demonstrated for the first time

  • that a specific nuclear RNA could represent a possible specific mRNA precursor.

In 1967 Darnell took a position at Columbia University, and it was there that he discovered (simultaneously with two other labs) that

  • mRNA contained a polyadenosine tail.

The three groups all published their results together in the Proceedings of the National Academy of Sciences in 1971. Shortly afterward, Darnell made his final career move
four short miles down the street to Rockefeller University in 1974.

Over the next 35-plus years at Rockefeller, Darnell never strayed from his original research question: How do mammalian cells make and control the making of different
mRNAs? His work was instrumental in the collaborative discovery of

  • splicing in the late 1970s and
  • in identifying and cloning many transcriptional activators.

Perhaps his greatest contribution during this time, with the help of Ernest Knight, was

  • the discovery and cloning of the signal transducers and activators of transcription (STAT) proteins.

And with George Stark, Andy Wilks and John Krolewski, he described

  • cytokine signaling via the JAK-STAT pathway.

Darnell closes his “Reflections” with perhaps his best advice: Do not get too wrapped up in your own work, because “we are all needed and we are all in this together.”


Recent findings on presenilins and signal peptide peptidase

By Dinu-Valantin Bălănescu

Fig. 1 from the minireview shows a schematic depiction of γ-secretase and SPP


GxGD proteases are a family of intramembranous enzymes capable of hydrolyzing

  • the transmembrane domain of some integral membrane proteins.

The GxGD family is one of the three families of

  • intramembrane-cleaving proteases discovered so far (along with the rhomboid and site-2 protease) and
  • includes the γ-secretase and the signal peptide peptidase.

Although these enzymes were only recently discovered, a number of functions in human pathology and in numerous other biological processes

  • have been attributed to γ-secretase and SPP.

Taisuke Tomita and Takeshi Iwatsubo of the University of Tokyo highlighted the latest findings on the structure and function of γ-secretase and SPP
in a recent minireview in The Journal of Biological Chemistry.

  • γ-secretase is involved in cleaving the amyloid-β precursor protein, thus producing amyloid-β peptide,

the main component of senile plaques in Alzheimer’s disease patients’ brains. The complete structure of mammalian γ-secretase is not yet known; however,
Tomita and Iwatsubo note that biochemical analyses have revealed it to be a multisubunit protein complex.

  • Its catalytic subunit is presenilin, an aspartyl protease.

In vitro and in vivo functional and chemical biology analyses have revealed that

  • presenilin is a modulator and mandatory component of the γ-secretase–mediated cleavage of APP.

Genetic studies have identified three other components required for γ-secretase activity:

  1. nicastrin,
  2. anterior pharynx defective 1 and
  3. presenilin enhancer 2.

By coexpression of presenilin with the other three components, the authors managed to

  • reconstitute γ-secretase activity.

Using the substituted cysteine accessibility method and topological analyses, Tomita and Iwatsubo determined that

  • the catalytic aspartates are located at the center of the nine transmembrane domains of presenilin,
  • thereby revealing the exact location of the enzyme’s catalytic site.

The minireview also describes in detail the formerly enigmatic mechanism of γ-secretase mediated cleavage.

SPP, an enzyme that cleaves remnant signal peptides in the membrane

  • during the biogenesis of membrane proteins and
  • signal peptides from major histocompatibility complex type I,
  • also is involved in the maturation of proteins of the hepatitis C virus and GB virus B.

Bioinformatics methods have revealed in fruit flies and mammals four SPP-like proteins,

  • two of which are involved in immunological processes.

By using γ-secretase inhibitors and modulators, it has been confirmed

  • that SPP shares a similar GxGD active site and proteolytic activity with γ-secretase.

Upon purification of the human SPP protein with the baculovirus/Sf9 cell system,

  • single-particle analysis revealed further structural and functional details.

HLA targeting efficiency correlates with human T-cell response magnitude and with mortality from influenza A infection

From www.pnas.org –  Sep 3, 2013

Experimental and computational evidence suggests that

  • HLAs preferentially bind conserved regions of viral proteins, a concept we term “targeting efficiency,” and that
  • this preference may provide improved clearance of infection in several viral systems.

To test this hypothesis, T-cell responses to A/H1N1 (2009) were measured from peripheral blood mononuclear cells obtained from a household cohort study
performed during the 2009–2010 influenza season. We found that HLA targeting efficiency scores significantly correlated with

  • IFN-γ enzyme-linked immunosorbent spot responses (P = 0.042, multiple regression).

A further population-based analysis found that the carriage frequencies of the alleles with the lowest targeting efficiencies, A*24,

  • were associated with pH1N1 mortality (r = 0.37, P = 0.031) and
  • are common in certain indigenous populations in which increased pH1N1 morbidity has been reported.

HLA efficiency scores and HLA use are associated with CD8 T-cell magnitude in humans after influenza infection.
The computational tools used in this study may be useful predictors of potential morbidity and

  • identify immunologic differences of new variant influenza strains
  • more accurately than evolutionary sequence comparisons.

Population-based studies of the relative frequency of these alleles in severe vs. mild influenza cases

  • might advance clinical practices for severe H1N1 infections among genetically susceptible populations.
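
The population-level association above rests on a Pearson correlation between allele carriage frequency and pH1N1 mortality (r = 0.37 reported). A from-scratch Pearson r on invented numbers shows the computation:

```python
# Plain Pearson correlation coefficient, as used for the carriage-frequency
# vs. mortality association described above. The carriage and mortality
# values below are invented for illustration, not data from the study.

import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

carriage  = [0.05, 0.10, 0.20, 0.30, 0.40]  # hypothetical A*24 frequencies
mortality = [1.0, 1.5, 2.5, 2.8, 4.0]       # hypothetical deaths per 100k
print(round(pearson_r(carriage, mortality), 3))
```

A real analysis would also report a p-value (the study used P = 0.031); with only five populations, significance testing matters as much as the point estimate.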

Metabolomics in drug target discovery

J D Rabinowitz et al.

Lewis-Sigler Institute for Integrative Genomics, Princeton University, Princeton, NJ.
Cold Spring Harbor Symposia on Quantitative Biology 11/2011; 76:235-46.
http://dx.doi.org/10.1101/sqb.2011.76.010694

Most diseases result in metabolic changes. In many cases, these changes play a causative role in disease progression. By identifying pathological metabolic changes,

  • metabolomics can point to potential new sites for therapeutic intervention.

Particularly promising enzymatic targets are those that

  • carry increased flux in the disease state.

Definitive assessment of flux requires the use of isotope tracers. Here we present techniques for

  • finding new drug targets using metabolomics and isotope tracers.

The utility of these methods is exemplified in the study of three different viral pathogens. For influenza A and herpes simplex virus,

  • metabolomic analysis of infected versus mock-infected cells revealed
  • dramatic concentration changes around the current antiviral target enzymes.

Similar analysis of human-cytomegalovirus-infected cells, however, found the greatest changes

  • in a region of metabolism unrelated to the current antiviral target.

Instead, it pointed to the tricarboxylic acid (TCA) cycle and

  • its efflux to feed fatty acid biosynthesis as a potential preferred target.

Isotope tracer studies revealed that cytomegalovirus greatly increases flux through

  • the key fatty acid metabolic enzyme acetyl-coenzyme A carboxylase.
  • Inhibition of this enzyme blocks human cytomegalovirus replication.

Examples where metabolomics has contributed to identification of anticancer drug targets are also discussed. Eventual proof of the value of

  • metabolomics as a drug target discovery strategy will be
  • successful clinical development of therapeutics hitting these new targets.
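
The screening logic described above, comparing metabolite levels in infected versus mock-infected cells and looking for the most perturbed region of metabolism, can be sketched as a fold-change ranking. Metabolite names and values below are invented for illustration:

```python
# Sketch of metabolomic target screening: rank metabolites by absolute log2
# fold change between infected and mock-infected cells to locate the pathway
# region most perturbed by infection. All concentrations are invented.

import math

mock     = {"malonyl-CoA": 1.0, "citrate": 2.0, "ATP": 5.0, "lactate": 3.0}
infected = {"malonyl-CoA": 8.0, "citrate": 6.0, "ATP": 4.5, "lactate": 3.3}

changes = sorted(
    ((m, math.log2(infected[m] / mock[m])) for m in mock),
    key=lambda kv: abs(kv[1]),
    reverse=True,
)
for metabolite, lfc in changes:
    print(f"{metabolite}: log2FC = {lfc:+.2f}")
```

In the cytomegalovirus example, this kind of ranking is what pointed away from the existing antiviral target and toward TCA-cycle efflux into fatty acid biosynthesis; isotope tracers were then needed to confirm increased flux, since concentration changes alone do not establish it.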

 Related References

Use of metabolic pathway flux information in targeted cancer drug design. Drug Discovery Today: Therapeutic Strategies 1:435-443, 2004.

Detection of resistance to imatinib by metabolic profiling: clinical and drug development implications. Am J Pharmacogenomics. 2005;5(5):293-302. Review. PMID: 16196499

Medicinal chemistry, metabolic profiling and drug target discovery: a role for metabolic profiling in reverse pharmacology and chemical genetics.
Mini Rev Med Chem. 2005 Jan;5(1):13-20. Review. PMID: 15638788

Development of Tracer-Based Metabolomics and its Implications for the Pharmaceutical Industry. Int J Pharm Med 2007; 21 (3): 217-224.

Use of metabolic pathway flux information in anticancer drug design. Ernst Schering Found Symp Proc. 2007;(4):189-203. Review. PMID: 18811058

Pharmacological targeting of glucagon and glucagon-like peptide 1 receptors has different effects on energy state and glucose homeostasis in diet-induced obese mice. J Pharmacol Exp Ther. 2011 Jul;338(1):70-81. http://dx.doi.org/10.1124/jpet.111.179986. PMID: 21471191

Single valproic acid treatment inhibits glycogen and RNA ribose turnover while disrupting glucose-derived cholesterol synthesis in liver as revealed by the
[U-13C6]-D-glucose tracer in mice. Metabolomics. 2009 Sep;5(3):336-345. PMID: 19718458

Metabolic Pathways as Targets for Drug Screening, Metabolomics, Dr Ute Roessner (Ed.), ISBN: 978-953-51-0046-1, InTech, Available from: http://www.intechopen.com/books/metabolomics/metabolic-pathways-as-targets-for-drug-screening

Iron regulates glucose homeostasis in liver and muscle via AMP-activated protein kinase in mice. FASEB J. 2013 Jul;27(7):2845-54.
http://dx.doi.org/10.1096/fj.12-216929. PMID: 23515442

Metabolomics and systems pharmacology: why and how to model the human metabolic network for drug discovery

Drug Discov. Today 19 (2014), 171–182. http://dx.doi.org/10.1016/j.drudis.2013.07.014

Highlights

  • We now have metabolic network models; the metabolome is represented by their nodes.
  • Metabolite levels are sensitive to changes in enzyme activities.
  • Drugs hitchhike on metabolite transporters to get into and out of cells.
  • The consensus network Recon2 represents the present state of the art, and has predictive power.
  • Constraint-based modelling relates network structure to metabolic fluxes.

Metabolism represents the ‘sharp end’ of systems biology, because changes in metabolite concentrations are

  • necessarily amplified relative to changes in the transcriptome, proteome and enzyme activities, which can be modulated by drugs.

To understand such behaviour, we therefore need (and increasingly have) reliable consensus (community) models of

  • the human metabolic network that include the important transporters.

Small molecule ‘drug’ transporters are in fact metabolite transporters, because

  • drugs bear structural similarities to metabolites known from the network reconstructions and
  • from measurements of the metabolome.

Recon2 represents the present state-of-the-art human metabolic network reconstruction; it can predict inter alia:

(i) the effects of inborn errors of metabolism;

(ii) which metabolites are exometabolites, and

(iii) how metabolism varies between tissues and cellular compartments.

However, even these qualitative network models are not yet complete. As our understanding improves

  • so do we recognise more clearly the need for a systems (poly)pharmacology.
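
Constraint-based modelling, mentioned in the Highlights, relates network structure to achievable fluxes: at steady state S·v = 0, so in a linear pathway every reaction must carry the same flux and the tightest capacity bound anywhere limits the objective. A toy network (invented, far simpler than Recon2) makes the principle concrete:

```python
# Toy constraint-based reasoning on a linear pathway: uptake -> A -> B ->
# biomass. At steady state S·v = 0 forces v1 = v2 = v3, so the maximum
# biomass flux equals the smallest upper bound along the chain. The network
# and bounds are invented; real models (e.g. Recon2) are solved by linear
# programming over thousands of reactions.

bounds = {
    "v1_uptake":  (0, 10),   # nutrient import capacity
    "v2_enzyme":  (0, 4),    # limiting enzyme capacity
    "v3_biomass": (0, 8),    # biomass export capacity
}

# Steady state in a linear chain: all fluxes equal, so the objective is
# capped by the tightest upper bound (here, the enzyme step).
max_biomass = min(ub for _, ub in bounds.values())
print(max_biomass)  # 4
```

This is also why the text emphasizes enzyme activities: changing one bound (inhibiting one enzyme with a drug) can reset the whole network's achievable flux.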

Introduction – a systems biology approach to drug discovery

It is clearly not news that the productivity of the pharmaceutical industry has declined significantly during recent years,

  • following an ‘inverse Moore’s Law’ (Eroom’s Law), and
  • many commentators consider that the main cause of this is
  • an excessive focus on individual molecular target discovery rather than a more sensible strategy
  • based on a systems-level approach (Fig. 1).

Figure 1.

The change in drug discovery strategy from ‘classical’ function-first approaches (in which the assay of drug function was at the tissue or organism level),
with mechanistic studies potentially coming later, to more-recent target-based approaches where initial assays usually involve assessing the interactions
of drugs with specified (and often cloned, recombinant) proteins in vitro. In the latter cases, effects in vivo are assessed later, with concomitantly high levels of attrition.

Arguably the two chief hallmarks of the systems biology approach are:

(i) that we seek to make mathematical models of our systems iteratively or in parallel with well-designed ‘wet’ experiments, and
(ii) that we do not necessarily start with a hypothesis but measure as many things as possible (the ’omes) and

  • let the data tell us the hypothesis that best fits and describes them.

Although metabolism was once seen as something of a Cinderella subject,

  • there are fundamental reasons, to do with the organisation of biochemical networks,
  • why the metabol(om)ic level – now in fact seen as the ‘apogee’ of the ’omics trilogy –
  • is indeed likely to be far more discriminating than
  • changes in the transcriptome or proteome.

The next two subsections deal with these points and Fig. 2 summarises the paper in the form of a Mind Map.


Metabolic Disease Drug Discovery— “Hitting the Target” Is Easier Said Than Done

David E. Moller, et al. http://dx.doi.org/10.1016/j.cmet.2011.10.012

Despite the advent of new drug classes, the global epidemic of cardiometabolic disease has not abated. Continuing

  • unmet medical needs remain a major driver for new research.

Drug discovery approaches in this field have mirrored industry trends, leading to a recent

  • increase in the number of molecules entering development.

However, worrisome trends and newer hurdles are also apparent. The history of two newer drug classes—

  1. glucagon-like peptide-1 receptor agonists and
  2. dipeptidyl peptidase-4 inhibitors—

illustrates both progress and challenges. Future success requires that researchers learn from these experiences and

  • continue to explore and apply new technology platforms and research paradigms.

The global epidemic of obesity and diabetes continues to progress relentlessly. The International Diabetes Federation predicts an even greater diabetes burden (>430 million people afflicted) by 2030, which will disproportionately affect developing nations (International Diabetes Federation, 2011). Yet

  • existing drug classes for diabetes, obesity, and comorbid cardiovascular (CV) conditions have substantial limitations.

Currently available prescription drugs for treatment of hyperglycemia in patients with type 2 diabetes (Table 1) have notable shortcomings.

Therefore, clinicians must often use combination therapy, adding additional agents over time. Ultimately many patients will need to use insulin—a therapeutic class first introduced in 1922. Most existing agents also have

  • issues around safety and tolerability as well as dosing convenience (which can impact patient compliance).

Pharmacometabolomics, also known as pharmacometabonomics, is a field which stems from metabolomics,

  • the quantification and analysis of metabolites produced by the body.

It refers to the direct measurement of metabolites in an individual’s bodily fluids, in order to

  • predict or evaluate the metabolism of pharmaceutical compounds, and
  • to better understand the pharmacokinetic profile of a drug.

Alternatively, pharmacometabolomics can be applied to measure metabolite levels

  • following the administration of a pharmaceutical compound, in order to
  • monitor the effects of the compound on certain metabolic pathways (pharmacodynamics).

This provides detailed mapping of drug effects on metabolism and

  • the pathways that are implicated in mechanism of variation of response to treatment.

In addition, the metabolic profile of an individual at baseline (metabotype) provides information about

  • how individuals respond to treatment and highlights heterogeneity within a disease state.

All three approaches require the quantification of metabolites found in an individual’s bodily fluids.


Pharmacometabolomics is thought to provide information that complements that obtained from the genomic, transcriptomic, and proteomic levels.

Looking at the characteristics of an individual down through these different levels of detail, there is an

  • increasingly more accurate prediction of a person’s ability to respond to a pharmaceutical compound.
  1. the genome, made up of 25,000 genes, can indicate possible errors in drug metabolism;
  2. the transcriptome, made up of 85,000 transcripts, can provide information about which genes important in metabolism are being actively transcribed;
  3. and the proteome, >10,000,000 members, depicts which proteins are active in the body to carry out these functions.

Pharmacometabolomics complements the omics with

  • direct measurement of the products of all of these reactions, but with perhaps a relatively
  • smaller number of members: that was initially projected to be approximately 2200 metabolites,

but could be a larger number when gut derived metabolites and xenobiotics are added to the list. Overall, the goal of pharmacometabolomics is

  • to more closely predict or assess the response of an individual to a pharmaceutical compound,
  • permitting continued treatment with the right drug or dosage
  • depending on the variations in their metabolism and ability to respond to treatment.

Pharmacometabolomic analyses, through the use of a metabolomics approach,

  • can provide a comprehensive and detailed metabolic profile or “metabolic fingerprint” for an individual patient.

Such metabolic profiles can provide a complete overview of individual metabolite or pathway alterations.

This approach can then be applied to the prediction of response to a pharmaceutical compound

  • by patients with a particular metabolic profile.

Pharmacometabolomic analyses of drug response are closely related to pharmacogenetic analyses.

Pharmacogenetics focuses on the identification of genetic variations (e.g. single-nucleotide polymorphisms)

  • within patients that may contribute to altered drug responses and overall outcome of a certain treatment.

The results of pharmacometabolomics analyses can act to “inform” or “direct”

  • pharmacogenetic analyses by correlating aberrant metabolite concentrations or metabolic pathways to potential alterations at the genetic level.

This concept has been established by two seminal publications from studies of the antidepressant selective serotonin reuptake inhibitors (SSRIs),

  • where metabolic signatures were able to define a pathway implicated in response to the antidepressant and
  • that led to identification of genetic variants within a key gene
  • within the highlighted pathway as being implicated in variation in response.

These genetic variants were not identified through genetic analysis alone and hence

  • illustrated how metabolomics can guide and inform genetic data.

en.wikipedia.org/wiki/Pharmacometabolomics

Benznidazole Biotransformation and Multiple Targets in Trypanosoma cruzi Revealed by Metabolomics

Andrea Trochine, Darren J. Creek, Paula Faral-Tello, Michael P. Barrett, Carlos Robello
Published: May 22, 2014   http://dx.doi.org/10.1371/journal.pntd.0002844

The first line treatment for Chagas disease, a neglected tropical disease caused by the protozoan parasite Trypanosoma cruzi,

  • involves administration of benznidazole (Bzn).

Bzn is a 2-nitroimidazole pro-drug which requires nitroreduction to become active. We used a

  • non-targeted MS-based metabolomics approach to study the metabolic response of T. cruzi to Bzn.

Parasites treated with Bzn were minimally altered compared to untreated trypanosomes, although the redox active thiols

  1. trypanothione,
  2. homotrypanothione and
  3. cysteine

were significantly diminished in abundance post-treatment. In addition, multiple Bzn-derived metabolites were detected after treatment.
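The kind of abundance comparison reported here (thiols diminished post-treatment) is commonly expressed as a log2 fold-change; the sketch below uses invented intensity values, not data from the study.

```python
import math

# Illustrative calculation: log2 fold-change of metabolite abundance in
# treated vs untreated parasites. Intensity values are invented; a real
# non-targeted LC-MS workflow would use replicates and a statistical
# test, not single values.

def log2_fold_change(treated, untreated):
    return math.log2(treated / untreated)

intensities = {                     # (treated, untreated), arbitrary units
    "trypanothione": (2.0e5, 1.6e6),
    "homotrypanothione": (5.0e4, 3.0e5),
    "cysteine": (8.0e4, 4.0e5),
}

for metabolite, (t, u) in intensities.items():
    fc = log2_fold_change(t, u)
    status = "diminished" if fc < -1 else "unchanged"
    print(f"{metabolite}: log2FC = {fc:.2f} ({status})")
```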

These metabolites included reduction products, fragments and covalent adducts of reduced Bzn

  • linked to each of the major low molecular weight thiols:
  1. trypanothione,
  2. glutathione,
  3. g-glutamylcysteine,
  4. glutathionylspermidine,
  5. cysteine and
  6. ovothiol A.

Bzn products known to be generated in vitro by the unusual trypanosomal nitroreductase, TcNTR-1,

  • were found within the parasites,
  • but low molecular weight adducts of glyoxal, a proposed toxic end-product of NTR-1 Bzn metabolism, were not detected.

Our data are indicative of a major role of the

  • thiol binding capacity of Bzn reduction products
  • in the mechanism of Bzn toxicity against T. cruzi.

Read Full Post »

Twitter is Becoming a Powerful Tool in Science and Medicine

 Curator: Stephen J. Williams, Ph.D.

Article ID #159: Twitter is Becoming a Powerful Tool in Science and Medicine. Published on 11/6/2014

WordCloud Image Produced by Adam Tubman

Updated 4/2016

Life-cycle of Science 2

A recent Science article (Who are the science stars of Twitter?; Sept. 19, 2014) reported the top 50 scientists followed on Twitter. However, the article tended to focus on the use of Twitter as a means to develop popularity, a sort of “Science Kardashian,” as they coined it. The writers at Science developed a “Kardashian Index” (K-Index) to measure a scientist’s following and popularity on Twitter.

Now, as much buzz as Kim Kardashian or Perez Hilton get on social media, their purpose is solely entertainment and publicity. The Science article fell somewhat flat in that it focused mainly on the use of Twitter as a metric for promotional or public-outreach purposes. One notable scientist mentioned in the article used his Twitter feed to gauge the receptiveness of his presentation. In addition, relying on Twitter for effective public discourse of science is problematic because:

  • Twitter feeds are rapidly updated and older feeds quickly get buried within the “Twittersphere” = LIMITED EXPOSURE TIMEFRAME
  • Short feeds may not provide access to appropriate and understandable scientific information (The Science Communication Trap), which is explained in The Art of Communicating Science: traps, tips and tasks for the modern-day scientist. “The challenge of clearly communicating the intended scientific message to the public is not insurmountable but requires an understanding of what works and what does not work.” – from Heidi Roop, G.-Martinez-Mendez and K. Mills

However, as highlighted below, Twitter and other social media platforms are being used in creative ways to enhance research, medical, and bio-investment collaboration, beyond a simple news feed. The power of Twitter can be attributed to two simple features:

  1. Ability to organize – through use of the hashtag (#) and handle (@), Twitter assists in the very important task of organizing, indexing, and ANNOTATING content and conversations. An excellent article, Why the Hashtag is Probably the Most Powerful Tool on Twitter by Vanessa Doctor, explains how hashtags and # search may be as popular as standard web-based browser search. Thorough annotation is crucial for any curation process, and usually takes the form of database tags or keywords. The use of # and @ allows curators to quickly find, index, and relate disparate databases to link annotated information together. The discipline of scientific curation requires annotation to assist in the digital preservation, organization, indexing, and access of data and scientific & medical literature. For a description of scientific curation methodologies please see the following links:

Please read the following articles on CURATION

The Methodology of Curation for Scientific Research Findings

Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison

Science and Curation: The New Practice of Web 2.0
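The indexing role of the hashtag and handle described above can be sketched in a few lines of code; the tweet texts and handles here are invented examples.

```python
import re
from collections import defaultdict

# Minimal sketch of how hashtags (#) and handles (@) let curators index
# and relate tweets. Tweets below are invented for illustration.

HASHTAG = re.compile(r"#(\w+)")
HANDLE = re.compile(r"@(\w+)")

def build_index(tweets):
    """Map each hashtag (lowercased) to the tweets that carry it."""
    index = defaultdict(list)
    for tweet in tweets:
        for tag in HASHTAG.findall(tweet):
            index[tag.lower()].append(tweet)
    return index

def mentions(tweet):
    """Handles (@user) referenced in a tweet."""
    return HANDLE.findall(tweet)

tweets = [
    "Great keynote on BRCA1 screening #STFM13 #genomics",
    "Poster session starting now #STFM13",
    "New pharmacometabolomics review out #genomics @PharmaNews",
]

index = build_index(tweets)
print(len(index["stfm13"]))      # tweets annotated with #STFM13
print(mentions(tweets[2]))       # handles cited in the third tweet
```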

  2. Information Analytics

Multiple analytic software packages have been made available to analyze information surrounding Twitter feeds, including feeds from #chat channels one can set up to cover a meeting, product launch, etc. Some of these tools include:

Twitter Analytics – measures metrics surrounding Tweets including retweets, impressions, engagement, follow rate, …

Twitter Analytics – Hashtags.org – determine the most impactful # for your Tweets. For example, meeting coverage of bioinvestment conferences or startup presentations using #startup generates automatic retweeting by the Startup tweetbot @StartupTweetSF.
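The metrics these tools report reduce to simple arithmetic; one common definition of engagement rate is total interactions divided by impressions (definitions vary by tool, and the numbers below are invented).

```python
# Hedged sketch of tweet-level metric arithmetic. "Engagement rate" here
# is interactions per impression; real analytics tools differ in which
# interactions they count. All values are invented.

def engagement_rate(interactions, impressions):
    """Interactions (retweets + replies + likes + clicks) per impression."""
    if impressions == 0:
        return 0.0
    return interactions / impressions

tweet = {"retweets": 12, "replies": 3, "likes": 40, "clicks": 25,
         "impressions": 4000}

interactions = (tweet["retweets"] + tweet["replies"]
                + tweet["likes"] + tweet["clicks"])
print(f"engagement rate: {engagement_rate(interactions, tweet['impressions']):.2%}")
```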

  3. Tweet Sentiment Analytics

Examples of Twitter Use

A. Scientific Meeting Coverage

In a paper entitled Twitter Use at a Family Medicine Conference: Analyzing #STFM13, authors Ranit Mishori, MD, Brendan Levy, MD, and Benjamin Donvan analyzed the public tweets from the 2013 Society of Teachers of Family Medicine (STFM) conference bearing the meeting-specific hashtag #STFM13. Thirteen percent of conference attendees (181 users) used #STFM13 to share their thoughts on the meeting (1,818 total tweets), showing a desire for social media interaction at conferences but suggesting growth potential in this area. As we have also seen, the heaviest volume of conference tweets originated from a small number of Twitter users; however, most tweets were related to session content.

However, as the authors note, although it is easy to measure common metrics such as number of tweets and retweets, determining quality of engagement from tweets would be important for gauging the value of Twitter-based social-media coverage of medical conferences.

The authors compared their results with similar analytics generated by the HealthCare Hashtag Project, a project and database of medically-related hashtag use, coordinated and maintained by the company Symplur.  Symplur’s database includes medical and scientific conference Twitter coverage but also Twitter usage related to patient care. In this case the database was used to compare meeting tweets and hashtag use with the 2012 STFM conference.

These are some of the published journal articles that have employed Symplur (www.symplur.com) data in their research of Twitter usage in medical conferences.

B. Twitter Usage for Patient Care and Engagement

Although the desire of patients to use and interact with their physicians over social media is increasing, along with increasing health-related social media platforms and applications, there are certain obstacles to patient-health provider social media interaction, including lack of regulatory framework as well as database and security issues. Some of the successes and issues of social media and healthcare are discussed in the post Can Mobile Health Apps Improve Oral-Chemotherapy Adherence? The Benefit of Gamification.

However, there is also a concern whether social media truly engages the patient and improves patient education. In a study of Twitter communications by breast cancer patients tweeting about breast cancer, the authors noticed that tweeting was a singular event. The majority of tweets did not promote any specific preventive behavior. The authors concluded “Twitter is being used mostly as a one-way communication tool.” (Using Twitter for breast cancer prevention: an analysis of breast cancer awareness month. Thackeray R, Burton SH, Giraud-Carrier C, Rollins S, Draper CR. BMC Cancer. 2013;13:508).

In addition a new poll by Harris Interactive and HealthDay shows one third of patients want some mobile interaction with their physicians.

Some papers cited in Symplur’s HealthCare Hashtag Project database on patient use of Twitter include:

C. Twitter Use in Pharmacovigilance to Monitor Adverse Events

Pharmacovigilance is the systematic detection, reporting, collecting, and monitoring of adverse events pre- and post-market of a therapeutic intervention (e.g., a drug, device, or modality). In a Cutting Edge Information study, 56% of pharma companies’ databases serve as an adverse event channel, and more companies are turning to social media to track adverse events (in Pharmacovigilance Teams Turn to Technology for Adverse Event Reporting Needs). In addition there have been many reports (see Digital Drug Safety Surveillance: Monitoring Pharmaceutical Products in Twitter) that show patients are frequently tweeting about their adverse events.

There have been concerns with using Twitter and social media to monitor for adverse events. For example, the FDA funded a study in which a team of researchers from Harvard Medical School and other academic centers examined more than 60,000 tweets, of which 4,401 were manually categorized as resembling adverse events and compared with the FDA pharmacovigilance databases. Problems associated with such a social media strategy were the inability to obtain extra, needed information from patients and the difficulty of separating relevant Tweets from irrelevant chatter.  The UK has launched a similar program called WEB-RADR to determine whether monitoring #drug_reaction could be useful for tracking adverse events. Many researchers have found the adverse-event-related tweets “noisy” due to varied language, but noticed that many people do understand some principles of causation, including when an adverse event subsides after discontinuing the drug.
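A first-pass filter for separating possible adverse-event tweets from chatter can be as simple as a keyword screen; production systems (such as the algorithms mentioned below) use trained classifiers and curated medical vocabularies, and the terms and tweets here are invented for illustration.

```python
# Illustrative-only keyword screen for flagging possible adverse-event
# tweets for manual review. Terms and tweets are invented; a real
# pharmacovigilance pipeline would use trained classifiers and
# standardized vocabularies (e.g., MedDRA-style term lists).

ADVERSE_EVENT_TERMS = {"rash", "nausea", "dizzy", "headache",
                       "side effect", "stopped taking"}

def looks_like_adverse_event(tweet):
    text = tweet.lower()
    return any(term in text for term in ADVERSE_EVENT_TERMS)

tweets = [
    "Got a terrible rash after starting the new med",
    "Loving this weather today!",
    "Stopped taking it because of the nausea, feeling better now",
]

flagged = [t for t in tweets if looks_like_adverse_event(t)]
print(len(flagged))  # tweets flagged for manual review
```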

However, Dr. Clark Freifeld, Ph.D., from Boston University and founder of the startup Epidemico, feels his company has the algorithms that can separate the true adverse events from the junk. According to their web site, their algorithm has high accuracy when compared to the FDA database. Dr. Freifeld admits that Twitter use for pharmacovigilance purposes is probably a starting point for further follow-up, as each patient needs to fill out the four-page forms required for data entry into the FDA database.

D. Use of Twitter in Big Data Analytics

Published on Aug 28, 2012

http://blogs.ischool.berkeley.edu/i29…

Course: Information 290. Analyzing Big Data with Twitter
School of Information
UC Berkeley

Lecture 1: August 23, 2012

Course description:
How to store, process, analyze and make sense of Big Data is of increasing interest and importance to technology companies, a wide range of industries, and academic institutions. In this course, UC Berkeley professors and Twitter engineers will lecture on the most cutting-edge algorithms and software tools for data analytics as applied to Twitter microblog data. Topics will include applied natural language processing algorithms such as sentiment analysis, large scale anomaly detection, real-time search, information diffusion and outbreak detection, trend detection in social streams, recommendation algorithms, and advanced frameworks for distributed computing. Social science perspectives on analyzing social media will also be covered.

This is a hands-on project course in which students are expected to form teams to complete intensive programming and analytics projects using the real-world example of Twitter data and code bases. Engineers from Twitter will help advise student projects, and students will have the option of presenting their final project presentations to an audience of engineers at the headquarters of Twitter in San Francisco (in addition to on campus). Project topics include building on existing infrastructure tools, building Twitter apps, and analyzing Twitter data. Access to data will be provided.

Other posts on this site on USE OF SOCIAL MEDIA AND TWITTER IN HEALTHCARE and Conference Coverage include:

Methodology for Conference Coverage using Social Media: 2014 MassBio Annual Meeting 4/3 – 4/4 2014, Royal Sonesta Hotel, Cambridge, MA

Strategy for Event Joint Promotion: 14th ANNUAL BIOTECH IN EUROPE FORUM For Global Partnering & Investment 9/30 – 10/1/2014 • Congress Center Basel – SACHS Associates, London

REAL TIME Cancer Conference Coverage: A Novel Methodology for Authentic Reporting on Presentations and Discussions launched via Twitter.com @ The 2nd ANNUAL Sachs Cancer Bio Partnering & Investment Forum in Drug Development, 19th March 2014 • New York Academy of Sciences • USA

PCCI’s 7th Annual Roundtable “Crowdfunding for Life Sciences: A Bridge Over Troubled Waters?” May 12 2014 Embassy Suites Hotel, Chesterbrook PA 6:00-9:30 PM

CRISPR-Cas9 Discovery and Development of Programmable Genome Engineering – Gabbay Award Lectures in Biotechnology and Medicine – Hosted by Rosenstiel Basic Medical Sciences Research Center, 10/27/14 3:30PM Brandeis University, Gerstenzang 121

Tweeting on 14th ANNUAL BIOTECH IN EUROPE FORUM For Global Partnering & Investment 9/30 – 10/1/2014 • Congress Center Basel – SACHS Associates, London

http://pharmaceuticalintelligence.com/press-coverage/

Statistical Analysis of Tweet Feeds from the 14th ANNUAL BIOTECH IN EUROPE FORUM For Global Partnering & Investment 9/30 – 10/1/2014 • Congress Center Basel – SACHS Associates, London

1st Pitch Life Science- Philadelphia- What VCs Really Think of your Pitch

What VCs Think about Your Pitch? Panel Summary of 1st Pitch Life Science Philly

How Social Media, Mobile Are Playing a Bigger Part in Healthcare

Can Mobile Health Apps Improve Oral-Chemotherapy Adherence? The Benefit of Gamification.

Medical Applications and FDA regulation of Sensor-enabled Mobile Devices: Apple and the Digital Health Devices Market

E-Medical Records Get A Mobile, Open-Sourced Overhaul By White House Health Design Challenge Winners

Read Full Post »

Introduction to Signaling

Curator: Larry H. Bernstein, MD, FCAP

We have laid down a basic structure and foundation for the remaining presentations.  It was essential to begin with the genome, which changed the course of teaching of biology and medicine in the 20th century, and introduced the central dogma of transcription and translation.  Nevertheless, there were significant inconsistencies and unanswered questions entering the twenty-first century, accompanied by vast improvements in technical methods to clarify these issues. We have covered carbohydrate, protein, and lipid metabolism, which function in concert with the development of cellular structure, organ system development, and physiology.  To be sure, the progress in the study of the microscopic and particulate can’t be divorced from observation of the whole.  We were left in the not so distant past with the impression of the Sufi story of the blind men and the elephant: each held the tail, the trunk, or the ear, and each proclaimed that the part he held was the elephant.

I introduce here a story from the Brazilian biochemist, Jose Eduardo des Salles Rosalino, on a formative experience he had with the Nobelist, Luis Leloir.

Just at the beginning, when phosphorylation of proteins is presented, I assume you must mention that some proteins are activated by phosphorylation. This is fundamental in order to present self-organization reflected in fast regulatory mechanisms, even from an historical point of view. The first observation came from a sample of glycogen synthetase due to be studied the following day. It was unintentionally left overnight out of the refrigerator. The result was that it had changed from the active form of the previous day to a non-active form. The story could have ended there, had the researcher not decided to spend the day increasing substrate levels (it could have been a simple case of protein denaturation, which changes conformation despite the same order of amino acids). He kept on trying and found restoration of maximal activity. The assay was repeated with glycogen phosphorylase, and the result was the opposite – it increased its activity. This led to the discovery

  • of cAMP activated protein kinase and
  • the assembly of a very complex system in the glycogen granule
  • that is not a simple carbohydrate polymer.

Instead, it has several proteins assembled and

  • preserves the capacity to receive from a single event (a rise in cAMP)
  • two opposing signals with maximal efficiency:
  • it stops glycogen synthesis
  • as long as levels of glucose 6-phosphate are low,
  • and increases glycogen phosphorolysis as long as AMP levels are high.
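The reciprocal switch described in this story can be sketched as a toy model: one signal, a rise in cAMP, turning synthesis off and breakdown on. The boolean simplification is mine; real regulation is graded and also involves phosphatases and allosteric effectors.

```python
# Toy model of the glycogen granule's reciprocal switch: a single rise in
# cAMP activates cAMP-dependent protein kinase (PKA), which phosphorylates
# both enzymes, turning glycogen synthase OFF and phosphorylase ON.
# Boolean states are a deliberate simplification for clarity; real control
# is graded and also involves phosphatases, glucose-6-P, and AMP.

def camp_signal(camp_high):
    """Enzyme activities after PKA responds to the cAMP level."""
    phosphorylated = camp_high            # PKA active only when cAMP is high
    return {
        "glycogen_synthase_active": not phosphorylated,
        "glycogen_phosphorylase_active": phosphorylated,
    }

print(camp_signal(camp_high=False))   # resting: synthesis on, breakdown off
print(camp_signal(camp_high=True))    # cAMP rise: synthesis off, breakdown on
```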

I did everything I was able to do by the end of 1970 in order to repeat the assays with PK I, PK II and PK III of M. rouxii, but using the Sutherland route to cAMP failed in this case. I then asked Leloir to suggest to my chief (SP) the idea of AA, AB, BB subunits, as was observed in lactic dehydrogenase (a tetramer), presenting this as his idea. The reason was that my “chief” (SP) had said to me more than once: “Leave these great ideas to the Houssays and Leloirs. We must make our career with small things.” However, as she also had a faulty ability for recollection, she used to arrive some time later with the very same idea, but in that case, as her own.
Leloir said to me: “I will not offer your interpretation to her as mine. I think it is not phosphorylation; rather, I think it is glycosylation that explains the changes in the isoenzymes with the same molecular weight preserved.” This dialogue explains why, during our reading and discussion of “What is Life?”, he asked me whether, as a biochemist in exile talking to another biochemist, I expressed myself fully. I had considered that Schrödinger would not have confronted Darlington & Haldane because he was in exile in the U.K. This might explain why Leloir could have answered a bad telephone call from P. Boyer, editor of The Enzymes, in a way that suggested that the pattern could be one of covalent changes to a protein. Our FEBS and Eur. J. Biochemistry papers on the pyruvate kinase of M. rouxii are wrongly quoted in this way in his review on pyruvate kinase of that year (1971).

Another aspect you must call attention to is the following: show in detail, with different colors, which carbons belong to CoA, a huge molecule in comparison with the two carbons of acetate, which produce the enormous jump in energy yield

  • in comparison with anaerobic glycolysis.

The idea is

  • how much must have been spent in DNA sequences to build that molecule in order to use only two atoms of carbon.

Very limited aspects of biology could be explained in this way. If we follow an alternative way of thinking, it becomes clearer that proteins were made more stable by interaction with other molecules (great and small). Afterwards, it is rather easy to understand how the stability of protein-RNA complexes was transmitted to RNA (the vibrational + solvational reactivity-stability pair of conformational energy).

Millions of years later, or as soon as the information of interaction leading to activity and regulation could be found in RNA, proteins like reverse transcriptase moved this information to a more stable form (DNA). In this way it is easier to understand the use of CoA to make two-carbon molecules more reactive.

The discussions that follow are concerned with protein interactions and signaling.

Read Full Post »

Introduction to Metabolic Pathways

Author: Larry H. Bernstein, MD, FCAP

Humans and other mammals, plants and animals, eukaryotes and prokaryotes all share a common denominator in their manner of existence.  It makes no difference whether they inhabit the land, the sea, or another living host. They exist by virtue of their metabolic adaptation: taking in nutrients as fuel and converting the nutrients to waste while expending energy to carry out the functions of motility, breakdown and utilization of fuel, and replication of their functional mass.

There are essentially two major sources of fuel, namely carbohydrate and fat.  A third source, amino acids, which requires protein breakdown, is utilized to a limited extent as needed, via conversion of gluconeogenic amino acids for entry into the carbohydrate pathway. Amino acids follow specific metabolic pathways related to protein synthesis and cell renewal tied to genomic expression.

Carbohydrates are a major fuel utilized by way of either of two pathways.  They are a source of readily available fuel that is accessible either from breakdown of disaccharides or from hepatic glycogenolysis by way of the Cori cycle.  Fat-derived energy is a high-energy source that is metabolized by two-carbon transfers in the oxidation of fatty acids in mitochondria. In the case of fats, the advantage of high energy yield is conferred by chain length.
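A back-of-envelope comparison of these fuel yields can be made with commonly cited textbook figures; the exact numbers depend on the shuttle systems and P/O ratios assumed, and are stated here as assumptions rather than taken from the text.

```python
# Commonly cited textbook ATP yields (assumed figures, not from the text):
# anaerobic glycolysis nets 2 ATP per glucose, complete glucose oxidation
# about 30-32 ATP, and full oxidation of palmitate (C16) about 106 ATP.

yields = {
    "glucose, anaerobic glycolysis": 2,
    "glucose, complete oxidation": 30,        # often quoted as 30-32
    "palmitate (C16), complete oxidation": 106,
}

aerobic_advantage = (yields["glucose, complete oxidation"]
                     / yields["glucose, anaerobic glycolysis"])
# Energy per carbon: fat's longer, more reduced chains carry more energy.
per_carbon_fat = yields["palmitate (C16), complete oxidation"] / 16
per_carbon_glc = yields["glucose, complete oxidation"] / 6

print(f"aerobic oxidation yields ~{aerobic_advantage:.0f}x more ATP per glucose")
print(f"ATP per carbon: fat ~{per_carbon_fat:.1f} vs glucose ~{per_carbon_glc:.1f}")
```

This simple arithmetic is why chain length confers the energetic advantage of fat noted above.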

Carbohydrate metabolism has either of two routes of utilization.  This introduces an innovation by way of the mitochondrion or its equivalent, for the process of respiration, or aerobic metabolism through the tricarboxylic acid, or Krebs, cycle.  In the presence of low oxygen supply, carbohydrate is metabolized anaerobically, the six-carbon glucose being split into two three-carbon intermediates, which are finally converted from pyruvate to lactate.  In the presence of oxygen, the lactate is channeled back into respiration, or mitochondrial oxidation, referred to as oxidative phosphorylation. The actual mechanism of this process was of considerable debate for some years until it was resolved that the mechanism involved hydrogen transfers along the “electron transport chain” on the inner membrane of the mitochondrion, and it was tied to the formation of ATP from ADP linked to the so-called “active acetate” in Acetyl-Coenzyme A, discovered by Fritz Lipmann (and Nathan O. Kaplan) at Massachusetts General Hospital.  Kaplan then joined Sidney Colowick at the McCollum-Pratt Institute at Johns Hopkins, where they shared in the seminal discovery of the “pyridine nucleotide transhydrogenases” with Elizabeth Neufeld, who later established her reputation in the mucopolysaccharidoses (MPS) with L-iduronidase and lysosomal storage disease.

This chapter covers primarily the metabolic pathways for glucose, both anaerobic and by mitochondrial oxidation, the electron transport chain, fatty acid oxidation, galactose assimilation, and the hexose monophosphate shunt, essential for the generation of NADPH. There will be more elaboration on lipids, and coverage of transcription involving amino acids and RNA, in other chapters.

The subchapters are as follows:

1.1      Carbohydrate Metabolism

1.2      Studies of Respiration Lead to Acetyl CoA

1.3      Pentose Shunt, Electron Transfer, Galactose, more Lipids in brief

1.4      The Multi-step Transfer of Phosphate Bond and Hydrogen Exchange Energy

Complex I or NADH-Q oxidoreductase


Fatty acid oxidation and ETC


Read Full Post »

« Newer Posts - Older Posts »