
Evolution and Medicine

Reporter and Curator: Larry H. Bernstein, MD, FCAP 

 

http://paleoaerie.org/2015/01/21/what-has-evolution-done-for-me-lately/

Excerpt of article

Cancer is an inescapable fact of life. All of us will either die from it or know someone who will. Cancer is so prevalent because it isn’t a disease in the way a flu or a cold is. No outside force or germ is needed to cause cancer (although such agents can contribute). It arises from the very way we are put together. Most of the genes that are needed for multicellular life have been found to be associated with cancer. Cancer is a result of our natural genetic machinery, built up over billions of years, breaking down over time.

Clonal evolution of cancer. Mel Greaves. http://www.science-connections.com/trends/science_content/evolution_6.htm

Cancer is not only a result of evolutionary processes; cancer itself follows evolutionary theory as it grows. The immune system places a selective pressure on cancer cells, keeping them in check until the cancer evolves a way to avoid and surpass it, in a process known as immunoediting. Cancers also face selective pressures in the microenvironments in which they grow. Because cancer cells grow so quickly, they suck up the oxygen in their tissue, causing wildly fluctuating oxygen levels as the body tries to resupply it. This situation is bad for normal tissues, and it is bad for cancer as well, at least until the cancer cells evolve and adapt.

At some point, some cancer cells will develop the ability to use what is called aerobic glycolysis to make the ATP we use for energy. Ordinarily, our cells only use glycolysis when they run out of oxygen, because aerobic respiration (aka oxidative phosphorylation) is far more efficient. Cancer cells, on the other hand, learn to use glycolysis all the time, even in the presence of abundant oxygen. They may not grow as quickly when there is plenty of oxygen, but they are far better than normal cells at surviving the hypoxic, or low-oxygen, conditions that they create by virtue of their metabolism. Moreover, they are better at taking up nutrients, because many of the metabolic pathways for aerobic respiration also influence nutrient uptake; shifting those pathways toward uptake rather than metabolism ensures cancer cells get first pick of any nutrients in the area. The Warburg effect, as this is called, works by selective pressures hindering those cells that cannot do this and favoring those that can. Because cancer cells have loose genetic controls and are constantly dividing, the cancer population can evolve, whereas the normal cells cannot.

Evolutionary theory can also be used to track cancer as it metastasizes. If a person has several tumors, it is possible to take biopsies of each one and use standard cladistic programs that are normally used to determine evolutionary relationships between organisms to find which tumor is the original tumor. If the original tumor is not one of those biopsied, it will tell you where the cancer originated within the body. You can thus track the progression of cancer throughout a person’s body. Expanding on this, one can even track the effect of cancer through its effects on how organisms interact within ecosystems, creating its own evolutionary stamp on the environment as its effects radiate throughout the ecosystem.
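The cladistic idea described above can be sketched in a few lines: treat each biopsy as a set of mutations and use shared mutations to infer which sample is closest to the ancestral clone. The sample names and mutation calls below are invented for illustration only, and real analyses use full cladistic software rather than this toy subset test.

```python
# Toy sketch: each tumor biopsy is a binary mutation profile; shared
# mutations hint at which sample is nearest the ancestral clone.
# All names and mutation calls here are hypothetical.
profiles = {
    "primary?":  {"TP53", "KRAS"},
    "met_liver": {"TP53", "KRAS", "PIK3CA"},
    "met_lung":  {"TP53", "KRAS", "SMAD4", "CDKN2A"},
}

def hamming(a, b):
    """Number of mutations present in one sample but not the other."""
    return len(a ^ b)

# Pairwise distances, of the kind a cladistics program would build a tree from.
dist = {(x, y): hamming(profiles[x], profiles[y])
        for x in profiles for y in profiles if x < y}

# Under a simple model where mutations only accumulate, the sample whose
# mutations are a subset of every other sample's mutations is the best
# candidate for the original (ancestral) tumor.
root = [name for name, muts in profiles.items()
        if all(muts <= other for other in profiles.values())]
print(root)  # ['primary?']
```

Here the putative primary carries a strict subset of every metastasis's mutations, so it falls at the root; if no biopsied sample had that property, the inferred ancestor would lie outside the sampled tumors, just as the text describes.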

I’ve talked about cancer at decent length (although I could easily go on for many more pages) because it is less well known publicly than some of the other ways that evolutionary theory helps us out in medicine. The increasing resistance of bacteria to antibiotics is well known. Antibiotic resistance follows standard evolutionary processes, with the result that antibiotic-resistant bacteria are expected to kill 10 million people a year by 2050. People have to get a new flu shot every year because the flu viruses are legion and they evolve rapidly to bypass old vaccinations. If we are to accurately predict how the viruses may adapt and properly prepare vaccines for the coming year, evolutionary theory must be taken into account. Without it, the vaccines are much less likely to be effective. Evolutionary studies have pointed out important changes in the Ebola virus and how those changes are affecting its lethality, which will need to be taken into account for effective treatments. Tracking the origins of viruses, like the avian flu or swine flu, gives us information that will be useful in combating them or even stopping them at their source before they become a problem.
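The evolutionary logic behind antibiotic resistance can be illustrated with a toy Wright-Fisher simulation: a rare resistant variant under drug pressure is favored by selection and, if it escapes early loss by drift, sweeps through the population. Every parameter below (population size, fitness advantage, starting frequency) is an illustrative assumption, not data.

```python
import random

random.seed(1)  # reproducible toy run

N = 1000     # bacteria per generation (assumed)
s = 0.3      # fitness advantage of resistant cells under treatment (assumed)
freq = 0.05  # resistant variant starts rare

for gen in range(200):
    # Selection: resistant cells are over-represented among the parents.
    p = freq * (1 + s) / (freq * (1 + s) + (1 - freq))
    # Drift: the next generation is a random sample of size N.
    freq = sum(random.random() < p for _ in range(N)) / N
    if freq in (0.0, 1.0):
        break

print(freq)  # resistance sweeps toward fixation in most runs
```

Stopping the antibiotic removes the selective pressure (set s to zero or below) and the resistant variant no longer sweeps, which is exactly the population-level reasoning evolutionary medicine brings to dosing and stewardship.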

http://www.medscape.com/viewarticle/756378

 

 


Highlights of a Green Evolution

Reporter and Curator: Larry H Bernstein, MD, FCAP 

 

 

Chlorophyll

chlorophyll coloration to leaves

Paul May
School of Chemistry, University of Bristol

Chlorophyll is the molecule that absorbs sunlight and uses its energy to
synthesize carbohydrates from CO2 and water. This process is known as
photosynthesis. Animals and humans obtain their food supply by eating plants.

In 1780, the famous English chemist Joseph Priestley found that plants could “restore air which has been injured by the burning of candles.” He placed a mint plant into a vessel of water for several days, then found that “the air would neither extinguish a candle, nor was it at all inconvenient to a mouse which I put into it.” He had discovered that plants produce oxygen. Then, in 1794, Antoine Lavoisier discovered oxidation. It fell to a Dutchman, Jan Ingenhousz, to make the next major contribution to the mechanism of photosynthesis. Having heard of Priestley’s experiments, he spent a summer near London doing over 500 experiments, discovering that light plays a major role in photosynthesis. He noted that plants not only have the faculty to correct bad air in six to ten days, but can perform this in a few hours, owing to the influence of the light of the sun upon the plant.

Very soon after, more pieces of the puzzle were found by two chemists working in Geneva. Jean Senebier found that “fixed air” (CO2) was taken up during photosynthesis, and Theodore de Saussure discovered that the other necessary reactant was water. The final contribution came from a German surgeon, Julius Robert Mayer,

Julius Robert Mayer

who recognised that plants convert solar energy into chemical energy. He said:
“Nature has put itself the problem of how to catch in flight light streaming to
the Earth and to store the most elusive of all powers in rigid form. The plants
take in one form of power, light; and produce another power, chemical
difference.” The actual chemical equation which takes place is the reaction
between carbon dioxide and water, catalyzed by sunlight, to produce glucose
and a waste product, oxygen. The glucose sugar is either directly used as an
energy source by the plant for metabolism or growth, or is polymerized to form
starch, so it can be stored until needed. The waste oxygen is excreted into the
atmosphere, where it is made use of by plants and animals for respiration.

http://www.chm.bris.ac.uk/motm/chlorophyll/photosth.gif

Chlorophyll as a Photoreceptor

Chlorophyll is the molecule that traps this ‘most elusive of all powers’ – and is
called a photoreceptor. It is found in the chloroplasts of green plants,
and is what makes green plants, green. The basic structure of a chlorophyll
molecule is a porphyrin ring, co-ordinated to a central atom. This is very
similar in structure to the heme group found in hemoglobin, except that in
heme the central atom is iron, whereas in chlorophyll it is magnesium.


http://www.chm.bris.ac.uk/motm/chlorophyll/chphyll.gif


There are actually two main types of chlorophyll, named a and b. They differ only slightly, in the composition of a sidechain (in a it is –CH3, in b it is –CHO). Both of these chlorophylls are very effective photoreceptors because they contain a network of alternating single and double bonds, over which the orbitals can delocalize, stabilizing the structure. Such delocalized polyenes have very strong absorption bands in the visible regions of the spectrum, allowing the plant to absorb the energy from sunlight.


http://www.chm.bris.ac.uk/motm/chlorophyll/chloroabs.gif

The different side groups in the two chlorophylls ‘tune’ the absorption spectrum to slightly different wavelengths, so that light that is not significantly absorbed by chlorophyll a, at, say, 460 nm, will instead be captured by chlorophyll b, which absorbs strongly at that wavelength. Thus these two kinds of chlorophyll complement each other in absorbing sunlight. Plants can obtain all their energy requirements from the blue and red parts of the spectrum; however, there is still a large spectral region, between 500 and 600 nm, where very little light is absorbed.
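This complementarity can be made concrete with a crude numerical sketch. The real spectra are measured curves, but modelling each chlorophyll as two Gaussian absorption bands, with peak positions roughly at the textbook values, reproduces the qualitative picture: chlorophyll b covers the 460 nm gap left by chlorophyll a, and neither absorbs in the green.

```python
import math

# Crude model: each chlorophyll gets two Gaussian bands. Peak positions
# are approximate textbook values; widths are an assumed 20 nm.
def band(wl, center, width=20.0):
    return math.exp(-((wl - center) / width) ** 2)

def chl_a(wl):  # blue peak ~430 nm, red peak ~662 nm
    return band(wl, 430) + band(wl, 662)

def chl_b(wl):  # blue peak ~453 nm, red peak ~642 nm
    return band(wl, 453) + band(wl, 642)

# At 460 nm, chlorophyll b fills in for chlorophyll a...
print(chl_a(460) < chl_b(460))         # True
# ...but in the green (500-600 nm) both absorb almost nothing,
# so that light is reflected and the leaf looks green.
print(chl_a(550) + chl_b(550) < 0.01)  # True
```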

This light is in the green region of the spectrum, and since it is reflected, this is the reason plants appear green. Chlorophyll absorbs so strongly that it can mask other, less intense colours. Some of these more delicate colours (from molecules such as carotene and quercetin) are revealed when the chlorophyll molecule decays in the autumn, and the woodlands turn red, orange, and golden brown. Chlorophyll can also be damaged when vegetation is cooked, since the central Mg atom is replaced by hydrogen ions. This affects the energy levels within the molecule, causing its absorbance spectrum to alter. Thus cooked leaves change colour, often becoming a paler, insipid yellowy green.

As the chlorophyll in leaves decays in the autumn, the green colour fades and is
replaced by the oranges and reds of carotenoids.

Chlorophyll in Plants

The chlorophyll molecule is the active part that absorbs the sunlight, but just as with hemoglobin, in order to do its job (synthesising carbohydrates) it needs to be attached to the backbone of a very complicated protein. This protein may look haphazard in design, but it has exactly the correct structure to orient the chlorophyll molecules in the optimal position, enabling them to react with nearby CO2 and H2O molecules in a very efficient manner. Several chlorophyll molecules lurk inside such a bacterial photoreceptor protein.


Wikipedia – Chlorophyll

Chlorophyll (also chlorophyl) is a green pigment found in cyanobacteria and the chloroplasts of algae and plants. Its name is derived from the Greek words χλωρός, chloros (“green”), and φύλλον, phyllon (“leaf”). Chlorophyll is an extremely important biomolecule, critical in photosynthesis, which allows plants to absorb energy from light. Chlorophyll absorbs light most strongly in the blue portion of the electromagnetic spectrum, followed by the red portion. Conversely, it is a poor absorber of green and near-green portions of the spectrum, hence the green color of chlorophyll-containing tissues. Chlorophyll was first isolated by Joseph Bienaimé Caventou and Pierre Joseph Pelletier in 1817.

Absorption maxima of chlorophylls against the spectrum of white light


Chlorophyll is found in high concentrations in chloroplasts of plant cells.

http://upload.wikimedia.org/wikipedia/commons/thumb/0/05/Clorofila_3.jpg/
120px-Clorofila_3.jpg

These chlorophyll maps show milligrams of chlorophyll per cubic meter of seawater
each month. Places where chlorophyll amounts were very low, indicating very low
numbers of phytoplankton, are blue. Places where chlorophyll concentrations were
high, meaning many phytoplankton were growing, are yellow.

chlorophyll world map

http://upload.wikimedia.org/wikipedia/commons/thumb/e/e3/
MY1DMM_CHLORA.ogv/220px–MY1DMM_CHLORA.ogv.jpg

Chlorophyll and photosynthesis

Chlorophyll is vital for photosynthesis, which allows plants to absorb energy from light.

Chlorophyll molecules are specifically arranged in and around photosystems that are
embedded in the thylakoid membranes of chloroplasts. In these complexes,
chlorophyll serves two primary functions. The function of the vast majority of
chlorophyll (up to several hundred molecules per photosystem) is to absorb light and
transfer that light energy by resonance energy transfer to a specific chlorophyll pair
in the reaction center of the photosystems.

The two currently accepted photosystem units are Photosystem II and Photosystem I,
which have their own distinct reaction center chlorophylls, named P680 and P700,
respectively. These pigments are named after the wavelength (in nanometers) of their
red-peak absorption maximum. The identity, function, and spectral properties of the types of chlorophyll in each photosystem are distinct and determined by each other and the protein structure surrounding them. Once extracted from the protein into a solvent (such as acetone or methanol), these chlorophyll pigments can be separated in a simple paper chromatography experiment: because chlorophyll a and chlorophyll b differ in their number of polar groups, they migrate differently and separate on the paper.

The function of the reaction center chlorophyll is to use the energy absorbed by and
transferred to it from the other chlorophyll pigments in the photosystems to undergo
a charge separation, a specific redox reaction in which the chlorophyll donates an
electron into a series of molecular intermediates called an electron transport chain.
The charged reaction center chlorophyll (P680+) is then reduced back to its ground
state by accepting an electron. In Photosystem II, the electron that reduces P680+
ultimately comes from the oxidation of water into O2 and H+ through several
intermediates.

This reaction is how photosynthetic organisms such as plants produce O2 gas, and
is the source for practically all the O2 in Earth’s atmosphere. Photosystem I typically
works in series with Photosystem II; thus the P700+ of Photosystem I is usually
reduced, via many intermediates in the thylakoid membrane, by electrons ultimately
from Photosystem II. Electron transfer reactions in the thylakoid membranes are
complex, however, and the source of electrons used to reduce P700+ can vary.

The electron flow produced by the reaction center chlorophyll pigments is used to
shuttle H+ ions across the thylakoid membrane, setting up a chemiosmotic potential
used mainly to produce ATP chemical energy; and those electrons ultimately reduce
NADP+ to NADPH, a universal reductant used to reduce CO2 into sugars as well as
for other biosynthetic reductions.
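The size of this chemiosmotic potential (the proton-motive force) can be estimated from its two components, the membrane voltage and the pH difference. The numbers below are typical textbook values for a respiring mitochondrion, used here purely for illustration; they do not come from this excerpt.

```python
# Back-of-envelope estimate of the proton-motive force:
#   pmf = delta_psi + (2.303*RT/F) * delta_pH   (in mV)
R = 8.314      # J/(mol*K), gas constant
T = 310.0      # K, body temperature
F = 96485.0    # C/mol, Faraday constant

# 2.303*RT/F converts one pH unit into millivolts (about 62 mV at 37 C).
mV_per_pH = 2.303 * R * T / F * 1000

delta_psi = 140.0  # mV, electrical part of the gradient (assumed typical value)
delta_pH = 1.0     # matrix ~1 pH unit more alkaline than the intermembrane space

proton_motive_force = delta_psi + mV_per_pH * delta_pH
print(round(mV_per_pH))            # 62
print(round(proton_motive_force))  # 202
```

Both contributions push protons the same way, back into the matrix, which is why the ATP synthase can harvest roughly 200 mV of driving force per proton.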

Reaction center chlorophyll–protein complexes are capable of directly absorbing light
and performing charge separation events without other chlorophyll pigments, but the
absorption cross section (the likelihood of absorbing a photon under a given light
intensity) is small. Thus, the remaining chlorophylls in the photosystem and antenna
pigment protein complexes associated with the photosystems all cooperatively absorb
and funnel light energy to the reaction center. Besides chlorophyll a, there are other
pigments, called accessory pigments, which occur in these pigment–protein
antenna complexes.

Chemical structure

Chlorophyll is a chlorin pigment, which is structurally similar to and produced through the same metabolic pathway as other porphyrin pigments such as heme. At the center
of the chlorin ring is a magnesium ion. This was discovered in 1906, and was the first time that magnesium had been detected in living tissue. For the structures depicted in this article, some of the ligands attached to the Mg2+ center are omitted for clarity.
The chlorin ring can have several different side chains, usually including a long
phytol chain. There are a few different forms that occur naturally, but the most
widely distributed form in terrestrial plants is chlorophyll a.

Chlorophyll-a-3D

http://upload.wikimedia.org/wikipedia/commons/thumb/9/92/
Chlorophyll-a-3D-vdW.png/220px-Chlorophyll-a-3D-vdW.png

Space-filling model of the chlorophyll a molecule

After initial work done by German chemist Richard Willstätter spanning from 1905 to
1915, the general structure of chlorophyll a was elucidated by Hans Fischer in 1940.
By 1960, when most of the stereochemistry of chlorophyll a was known, Robert Burns
Woodward published a total synthesis of the molecule. In 1967, the last remaining stereochemical elucidation was completed by Ian Fleming, and in 1990 Woodward and co-authors published an updated synthesis. Chlorophyll f was announced to be present in cyanobacteria and other oxygenic microorganisms that form stromatolites in 2010; a molecular formula of C55H70O6N4Mg and a structure of (2-formyl)-chlorophyll a were deduced based on NMR, optical, and mass spectra.

When leaves degreen in the process of plant senescence, chlorophyll is converted to a group of colourless tetrapyrroles known as nonfluorescent chlorophyll catabolites (NCCs), with the general structure shown below. These compounds have also been identified in several ripening fruits.

http://upload.wikimedia.org/wikipedia/commons/thumb/c/c7/Nonfluorescent
chlorophilcatabolites.svg/241px-Nonfluorescentchlorophilcatabolites.svg.png

Absorbance spectra of free chlorophyll a (blue) and b (red) in a solvent. The spectra
of chlorophyll molecules are slightly modified in vivo depending on specific pigment-
protein interactions.


http://upload.wikimedia.org/wikipedia/commons/thumb/2/23/Chlorophyll_ab_
spectra-en.svg/220px-Chlorophyll_ab_spectra-en.svg.png

Complementary light absorbance of anthocyanins with chlorophylls

Anthocyanins are other plant pigments. The absorbance pattern responsible for the red color of anthocyanins may be complementary to that of green chlorophyll in photosynthetically active tissues such as young Quercus coccifera leaves. It may protect the leaves from attack by herbivores that are attracted by green color.

Superposition of spectra of chlorophyll a and b with oenin (malvidin 3O glucoside),
a typical anthocyanidin, showing that, while chlorophylls absorb in the blue and
yellow/red parts of the visible spectrum, oenin absorbs mainly in the green part
of the spectrum, where chlorophylls don’t absorb at all.


http://upload.wikimedia.org/wikipedia/commons/thumb/f/f0/Spectra_Chlorophyll_
ab_oenin_%281%29.PNG/220px-Spectra_Chlorophyll_ab_oenin_%281%29.PNG

Many important natural substances are chelates. In chelates a central metal ion is
bonded to a large organic molecule, a molecule composed of carbon, hydrogen, and
other elements such as oxygen and nitrogen. One such chelate is chlorophyll, the
green pigment of plants. In chlorophyll the central ion is magnesium, and the large
organic molecule is a porphyrin. The porphyrin contains four nitrogen atoms that form
bonds to magnesium in a square planar arrangement. There are several forms of
chlorophyll. The structure of one form, chlorophyll a, is shown.


http://scifun.chem.wisc.edu/chemweek/chlrphyl/chlrphyl.gif

(As you can see from the molecular structure, the “chloro” in chlorophyll does not
mean that it contains the element chlorine. The chloro portion of the word is from
the Greek chloros, which means yellowish green. The name of the element chlorine
comes from the same source. Chlorine is a yellowish green gas.)

Chlorophyll is one of the most important chelates in nature. It is capable of
channeling the energy of sunlight into chemical energy through the process of
photosynthesis. In photosynthesis, the energy absorbed by chlorophyll transforms
carbon dioxide and
water into carbohydrates and oxygen.

CO2 + H2O → (CH2O) + O2

(In this equation, (CH2O) is the empirical formula of carbohydrates.) The chemical
energy stored by photosynthesis in carbohydrates drives biochemical reactions in
nearly all living organisms.
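One way to see that the photosynthetic equation balances is simply to count atoms on each side. With glucose written out in full, the reaction is 6CO2 + 6H2O → C6H12O6 + 6O2, and a short check confirms the bookkeeping:

```python
from collections import Counter

def atoms(formula_counts):
    """Sum element counts over (coefficient, composition) pairs."""
    total = Counter()
    for coeff, comp in formula_counts:
        for element, n in comp.items():
            total[element] += coeff * n
    return total

reactants = [(6, {"C": 1, "O": 2}),          # 6 CO2
             (6, {"H": 2, "O": 1})]          # 6 H2O
products  = [(1, {"C": 6, "H": 12, "O": 6}), # C6H12O6 (glucose)
             (6, {"O": 2})]                  # 6 O2

print(atoms(reactants) == atoms(products))  # True: the equation balances
```

Six carbons, twelve hydrogens, and eighteen oxygens appear on each side, which is why the shorthand (CH2O) in the text works as an empirical formula for carbohydrate.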

In the photosynthetic reaction, carbon dioxide is reduced by water; in other words,
electrons are transferred from water to carbon dioxide. Chlorophyll assists this
transfer. When chlorophyll absorbs light energy, an electron in chlorophyll is excited
from a lower energy state to a higher energy state. In this higher energy state, this
electron is more readily transferred to another molecule. This starts a chain of
electron-transfer steps, which ends with an electron transferred to carbon dioxide.

Meanwhile, the chlorophyll which gave up an electron can accept an electron from
another molecule. This is the end of a process which starts with the removal of an
electron from water. Thus, chlorophyll is at the center of the photosynthetic
oxidation-reduction reaction between carbon dioxide and water.

Other molecules with structures similar to that of chlorophyll play important roles in
other biochemical electron-transfer (oxidation-reduction) reactions. Heme consists
of a porphyrin similar to that in chlorophyll and an iron(II) ion in the center of the
porphyrin. Heme is bright red. In the red blood cells of vertebrates, heme is bound
to proteins forming hemoglobin. Hemoglobin combines with oxygen in the lungs, gills,
or other respiratory surfaces and releases it in the tissues. In muscle cells, myoglobin, a related oxygen-binding protein, stores oxygen for energy-releasing oxidation-reduction reactions.

Another relative of chlorophyll is vitamin B12. Vitamin B12 contains a cobalt ion at the center of a porphyrin-like ring called a corrin. Like heme, vitamin B12 is bright red. It is essential to
digestion and nutritional absorption in animals. The exact way it functions is not
known. Because vitamin B12 is not produced by higher plants, a strictly vegetarian
diet can lead to vitamin B12 deficiency. However, it is produced by molds and
bacteria which grow on most foods.

The intense color of chlorophyll suggests that it may be useful as a commercial
pigment. In fact, chlorophyll a is a green dye (Natural Green 3) used in soaps and
cosmetics. The absorption spectrum of chlorophyll (below) shows that it absorbs
strongly in the red and blue-violet regions of the visible spectrum. Because it absorbs
red and blue-violet light, the light it reflects and transmits appears green. Commercial
pigments with structures similar to chlorophyll have been produced in a range of colors.
Some of these have slightly modified porphyrins, such as having hydrogen atoms
replaced with chlorine atoms. Others have different metal ions. For example, one bright blue pigment has a copper(II) ion at the center of the porphyrin and is used primarily in coloring fabrics.

http://scifun.chem.wisc.edu/chemweek/chlrphyl/clrphlsp.gif


The Colors of Respiration and Electron Transport

Reporter & Curator: Larry H. Bernstein, MD, FCAP 

 

 

Molecular Biology of the Cell. 4th edition

Electron-Transport Chains and Their Proton Pumps
http://www.ncbi.nlm.nih.gov/books/NBK26904/

Having considered in general terms how a mitochondrion uses electron transport to create an electrochemical proton gradient, we need to examine the mechanisms that underlie this membrane-based energy-conversion process. In doing so, we also accomplish a larger purpose. As emphasized at the beginning of this chapter, very similar chemiosmotic mechanisms are used by mitochondria, chloroplasts, archaea, and bacteria. In fact, these mechanisms underlie the function of nearly all living organisms, including anaerobes that derive all their energy from electron transfers between two inorganic molecules. It is therefore rather humbling for scientists to remind themselves that the existence of chemiosmosis has been recognized for only about 40 years.

mitochondria

 

Overview of The Electron Transport Chain

We begin with a look at some of the principles that underlie the electron-transport process, with the aim of explaining how it can pump protons
across a membrane.

Although protons resemble other positive ions such as Na+ and K+
in their movement across membranes, in some respects they are unique.
Hydrogen atoms are by far the most abundant type of atom in living
organisms; they are plentiful not only in all carbon-containing
biological molecules, but also in the water molecules that surround
them. The protons in water are highly mobile, flickering through the
hydrogen-bonded network of water molecules by rapidly
dissociating from one water molecule to associate with its neighbor,
as illustrated in Figure 14-20A. Protons are thought to move across a
protein pump embedded in a lipid bilayer in a similar way: they
transfer from one amino acid side chain to another, following a
special channel through the protein.

Protons are also special with respect to electron transport. Whenever a molecule is reduced by acquiring an electron, the electron (e−) brings with it a negative charge. In many cases, this charge is rapidly neutralized by the addition of a proton (H+) from water, so that the net effect of the reduction is to transfer an entire hydrogen atom, H+ + e− (Figure 14-20B). Similarly, when a molecule is oxidized,
a hydrogen atom removed from it can be readily dissociated into
its constituent electron and proton—allowing the electron to
be transferred separately to a molecule that accepts electrons,
while the proton is passed to the water. Therefore, in a membrane
in which electrons are being passed along an electron-transport
chain, pumping protons from one side of the membrane to
another can be relatively simple. The electron carrier merely
needs to be arranged in the membrane in a way that causes it to
pick up a proton from one side of the membrane when it accepts
an electron, and to release the proton on the other side of the
membrane as the electron is passed to the next carrier molecule
in the chain (Figure 14-21).

protons pumped across membranes ch14f21

http://www.ncbi.nlm.nih.gov/books/NBK26904/bin/ch14f21.gif

Figure 14-21

How protons can be pumped across membranes. As an electron
passes along an electron-transport chain embedded in a lipid-bilayer
membrane, it can bind and release a proton at each step.
In this diagram, electron carrier B picks up a proton (H+) from one side of the membrane.


The Redox Potential Is a Measure of Electron Affinities

In biochemical reactions, any electrons removed from one molecule are always passed to another, so that whenever one molecule is oxidized, another is reduced. Like any other chemical reaction, the tendency of such oxidation-reduction reactions, or redox reactions, to proceed spontaneously depends on the free-energy change (ΔG) for the electron transfer, which in turn depends on the relative affinities of the two molecules for electrons.

Because electron transfers provide most of the energy for living
things, it is worth spending the time to understand them. Many
readers are already familiar with acids and bases, which donate
and accept protons (see Panel 2-2, pp. 112–113). Acids and bases
exist in conjugate acid-base pairs, in which the acid is readily
converted into the base by the loss of a proton. For example,
acetic acid (CH3COOH) is converted into its conjugate base
(CH3COO-) in the reaction:

CH3COOH ⇌ CH3COO− + H+

In exactly the same way, pairs of compounds such as NADH and
NAD+ are called redox pairs, since NADH is converted to NAD+
by the loss of electrons in the reaction:

NADH ⇌ NAD+ + H+ + 2e−


NADH is a strong electron donor: because its electrons are held
in a high-energy linkage, the free-energy change for passing its
electrons to many other molecules is favorable (see Figure 14-9).
Because it is difficult to form a high-energy linkage, its redox partner, NAD+, is of necessity a weak electron acceptor.

The tendency to transfer electrons from any redox pair can be
measured experimentally. All that is required is the formation
of an electrical circuit linking a 1:1 (equimolar) mixture of the
redox pair to a second redox pair that has been arbitrarily selected
as a reference standard, so the voltage difference can be measured
between them (Panel 14-1, p. 784). This voltage difference is
defined as the redox potential; as defined, electrons move
spontaneously from a redox pair like NADH/NAD+ with a low
redox potential (a low affinity for electrons) to a redox pair like
O2/H2O with a high redox potential (a high affinity for electrons).
Thus, NADH is a good molecule for donating electrons to the
respiratory chain, while O2 is well suited to act as the “sink” for
electrons at the end of the pathway. As explained in Panel 14-1,
the difference in redox potential, ΔE0′, is a direct measure of
the standard free-energy change (ΔG°) for the transfer of an
electron from one molecule to another.

Proteins of inner space

energetics-of-cellular-respiration

Panel 14-1

Redox Potentials.

Electron Transfers Release Large Amounts of Energy

As just discussed, those pairs of compounds that have the most negative
redox potentials have the weakest affinity for electrons and therefore
contain carriers with the strongest tendency to donate electrons.
Conversely, those pairs that have the most positive redox potentials
have the strongest affinity for electrons and therefore contain carriers
with the strongest tendency to accept electrons. A 1:1 mixture of NADH
and NAD+ has a redox potential of -320 mV, indicating that NADH has
a strong tendency to donate electrons; a 1:1 mixture of H2O and ½O2
has a redox potential of +820 mV, indicating that O2 has a strong
tendency to accept electrons. The difference in redox potential is
1.14 volts (1140 mV), which means that the transfer of each electron
from NADH to O2 under these standard conditions is enormously
favorable, where ΔG° = -26.2 kcal/mole (-52.4 kcal/mole for the two
electrons transferred per NADH molecule; see Panel 14-1). If we
compare this free-energy change with that for the formation of the
phosphoanhydride bonds in ATP (ΔG° = -7.3 kcal/mole, see Figure 2-75), we see that more than enough energy is released by the oxidation of one NADH molecule to synthesize several molecules of ATP from ADP and Pi.
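The free-energy figures quoted above follow from the standard relation ΔG° = −nFΔE0′. A quick check, with the Faraday constant expressed in kcal/(V·mol) so the answer comes out in the text's units (the small differences from the quoted −26.2 and −52.4 kcal/mole are rounding):

```python
# dG0 = -n * F * dE0, with F in kcal/(V*mol) to match the text's units.
F_KCAL = 23.06          # Faraday constant, kcal/(V*mol)

dE0 = 0.820 - (-0.320)  # V: redox potential of O2/H2O minus that of NADH/NAD+

dG_per_electron = -1 * F_KCAL * dE0   # n = 1 electron
dG_per_NADH     = -2 * F_KCAL * dE0   # n = 2 electrons per NADH

print(round(dG_per_electron, 1))  # -26.3, vs. -26.2 quoted in the text
print(round(dG_per_NADH, 1))      # -52.6, vs. -52.4 quoted in the text
```

Against the −7.3 kcal/mole needed per ATP phosphoanhydride bond, the roughly −52 kcal/mole released per NADH is indeed enough for several ATP, as the passage states.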

Phosphate dependence of pyruvate oxidation

Living systems could certainly have evolved enzymes that would
allow NADH to donate electrons directly to O2 to make water in the reaction:

NADH + ½O2 + H+ → NAD+ + H2O

But because of the huge free-energy drop, this reaction would proceed
with almost explosive force and nearly all of the energy would be released
as heat. Cells do perform this reaction, but they make it proceed much
more gradually by passing the high-energy electrons from NADH to
O2 via the many electron carriers in the electron-transport chain.
Since each successive carrier in the chain holds its electrons more
tightly, the highly energetically favorable reaction 2H+ + 2e− + ½O2 → H2O is made to occur in many small steps. This enables nearly half
of the released energy to be stored, instead of being lost to the
environment as heat.

Spectroscopic Methods Have Been Used to Identify Many Electron
Carriers in the Respiratory Chain

Many of the electron carriers in the respiratory chain absorb visible
light and change color when they are oxidized or reduced. In general,
each has an absorption spectrum and reactivity that are distinct enough
to allow its behavior to be traced spectroscopically, even in crude mixtures.
It was therefore possible to purify these components long before their
exact functions were known. Thus, the cytochromes were discovered
in 1925 as compounds that undergo rapid oxidation and reduction in
living organisms as disparate as bacteria, yeasts, and insects. By observing
cells and tissues with a spectroscope, three types of cytochromes were
identified by their distinctive absorption spectra and designated
cytochromes a, b, and c. This nomenclature has survived, even though
cells are now known to contain several cytochromes of each type and
the classification into types is not functionally important.

The cytochromes constitute a family of colored proteins that are
related by the presence of a bound heme group, whose iron atom
changes from the ferric oxidation state (Fe3+) to the ferrous oxidation
state (Fe2+) whenever it accepts an electron. The heme group consists
of a porphyrin ring with a tightly bound iron atom held by four nitrogen
atoms at the corners of a square (Figure 14-22). A similar porphyrin ring
is responsible for the red color of blood and for the green color of
leaves, being bound to iron in hemoglobin and to magnesium in
chlorophyll, respectively.

http://www.ncbi.nlm.nih.gov/books/NBK26904/bin/ch14f22.jpg

Figure 14-22. The structure of the heme group attached covalently to
cytochrome c. The porphyrin ring is shown in blue. There are five
different cytochromes in the respiratory chain. Because the hemes in
different cytochromes have slightly different structures …

Iron-sulfur proteins are a second major family of electron carriers. In these
proteins, either two or four iron atoms are bound to an equal number of
sulfur atoms and to cysteine side chains, forming an iron-sulfur center
on the protein (Figure 14-23). There are more iron-sulfur centers than
cytochromes in the respiratory chain. But their spectroscopic detection
requires electron spin resonance (ESR) spectroscopy, and they are less
completely characterized. Like the cytochromes, these centers carry one
electron at a time.

http://www.ncbi.nlm.nih.gov/books/NBK26904/bin/ch14f23.jpg

Figure 14-23. The structures of two types of iron-sulfur centers.
(A) A center of the 2Fe2S type. (B) A center of the 4Fe4S type.
Although they contain multiple iron atoms, each iron-sulfur center
can carry only one electron at a time. There are more than seven
different …

The simplest of the electron carriers in the respiratory chain—and
the only one that is not part of a protein—is a small hydrophobic
molecule known as ubiquinone, or coenzyme Q, which is freely mobile
in the lipid bilayer. A quinone (Q) can pick up or donate either one or
two electrons; upon reduction, it picks up a proton from the medium
along with each electron it carries (Figure 14-24).

http://www.ncbi.nlm.nih.gov/books/NBK26904/bin/ch14f24.jpg

Figure 14-24. Quinone electron carriers. Ubiquinone in the respiratory
chain picks up one H+ from the aqueous environment for every electron
it accepts, and it can carry either one or two electrons as part of a
hydrogen atom (yellow). When reduced ubiquinone donates …

In addition to six different hemes linked to cytochromes, more than
seven iron-sulfur centers, and ubiquinone, there are also two copper
atoms and a flavin serving as electron carriers tightly bound to respiratory-chain proteins in the pathway from NADH to oxygen. This pathway
involves more than 60 different proteins in all.

As one would expect, the electron carriers have higher and higher
affinities for electrons (greater redox potentials) as one moves along
the respiratory chain. The redox potentials have been fine-tuned
during evolution by the binding of each electron carrier in a particular
protein context, which can alter its normal affinity for electrons. However,
because iron-sulfur centers have a relatively low affinity for electrons,
they predominate in the early part of the respiratory chain; in contrast,
the cytochromes predominate further down the chain, where a higher
affinity for electrons is required.
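The ordering principle in this paragraph can be made concrete: sorting carriers by redox potential recovers the direction of electron flow. The potentials below (in mV) are approximate textbook values I have supplied as assumptions; they do not appear in the passage.

```python
# A sketch of the ordering principle: carriers line up along the chain by
# increasing redox potential (affinity for electrons).  The standard
# potentials here are rounded, assumed values, not taken from the text.

approx_redox_mV = {
    "NADH/NAD+": -320,
    "ubiquinone": 30,
    "cytochrome c": 230,
    "O2/H2O": 820,
}

# Sorting by potential recovers the direction of electron flow:
flow = sorted(approx_redox_mV, key=approx_redox_mV.get)
print(" -> ".join(flow))
# NADH/NAD+ -> ubiquinone -> cytochrome c -> O2/H2O
```

Iron-sulfur centers (lower potentials) would sort early in such a list and cytochromes later, as the paragraph states.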

The order of the individual electron carriers in the chain was
determined by sophisticated spectroscopic measurements (Figure 14-25),
and many of the proteins were initially isolated and characterized as
individual polypeptides. A major advance in understanding the
respiratory chain, however, was the later realization that most of
the proteins are organized into three large enzyme complexes.

http://www.ncbi.nlm.nih.gov/books/NBK26904/bin/ch14f25.gif

Figure 14-25. The general methods used to determine the path of
electrons along an electron-transport chain. The extent of oxidation
of electron carriers a, b, c, and d is continuously monitored by
following their distinct spectra, which differ in their oxidized
and …

The Respiratory Chain Includes Three Large Enzyme Complexes
Embedded in the Inner Membrane

Membrane proteins are difficult to purify as intact complexes
because they are insoluble in aqueous solutions, and some of
the detergents required to solubilize them can destroy normal
protein-protein interactions. In the early 1960s, however, it
was found that relatively mild ionic detergents, such as deoxycholate,
can solubilize selected components of the inner mitochondrial
membrane in their native form. This permitted the identification
and purification of the three major membrane-bound respiratory
enzyme complexes in the pathway from NADH to oxygen (Figure 14-26).
As we shall see in this section, each of these complexes acts as an
electron-transport-driven H+ pump; however, they were
initially characterized in terms of the electron carriers that
they interact with and contain:

http://www.ncbi.nlm.nih.gov/books/NBK26904/bin/ch14f26.gif

Figure 14-26. The path of electrons through the three respiratory
enzyme complexes. The relative size and shape of each complex are
shown. During the transfer of electrons from NADH to oxygen (red
lines), ubiquinone and cytochrome c serve as mobile carriers that
ferry …

The NADH dehydrogenase complex (generally known as complex I)
is the largest of the respiratory enzyme complexes, containing more
than 40 polypeptide chains. It accepts electrons from NADH and
passes them through a flavin and at least seven iron-sulfur centers
to ubiquinone. Ubiquinone then transfers its electrons to a second
respiratory enzyme complex, the cytochrome b-c1 complex.

The cytochrome b-c1 complex contains at least 11 different
polypeptide chains and functions as a dimer. Each monomer
contains three hemes bound to cytochromes and an iron-sulfur
protein. The complex accepts electrons from ubiquinone
and passes them on to cytochrome c, which carries its electron
to the cytochrome oxidase complex.

The cytochrome oxidase complex also functions as a dimer; each
monomer contains 13 different polypeptide chains, including two
cytochromes and two copper atoms. The complex accepts electrons one
at a time from cytochrome c and passes them four at a time to oxygen.

The cytochromes, iron-sulfur centers, and copper atoms can carry
only one electron at a time. Yet each NADH donates two electrons,
and each O2 molecule must receive four electrons to produce water.
There are several electron-collecting and electron-dispersing points
along the electron-transport chain where these changes in electron
number are accommodated. The most obvious of these is cytochrome
oxidase.
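The electron bookkeeping described here (2 e- per NADH in, 4 e- per O2 out, one at a time through the carriers) reduces to simple integer arithmetic. A minimal sketch, with function names of my own invention:

```python
# Bookkeeping sketch of the stoichiometry in the text: each NADH donates
# 2 electrons, single-electron carriers pass them one at a time, and
# cytochrome oxidase must collect 4 before releasing two water molecules.

ELECTRONS_PER_NADH = 2
ELECTRONS_PER_O2 = 4

def nadh_needed(n_o2):
    """NADH molecules required to fully reduce n_o2 oxygen molecules."""
    return n_o2 * ELECTRONS_PER_O2 // ELECTRONS_PER_NADH

print(nadh_needed(1))   # 2 NADH per O2
print(nadh_needed(5))   # 10
```

So cytochrome oxidase acts as the electron-collecting point: it must buffer two successive NADH's worth of electrons per O2 reduced.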

An Iron-Copper Center in Cytochrome Oxidase Catalyzes Efficient
O2 Reduction

Because oxygen has a high affinity for electrons, it releases a
large amount of free energy when it is reduced to form water.
Thus, the evolution of cellular respiration, in which O2 is
converted to water, enabled organisms to harness much more
energy than can be derived from anaerobic metabolism. This
is presumably why all higher organisms respire. The ability of
biological systems to use O2 in this way, however, requires a
very sophisticated chemistry. We can tolerate O2 in the air we
breathe because it has trouble picking up its first electron; this
fact allows its initial reaction in cells to be controlled closely by
enzymatic catalysis. But once a molecule of O2 has picked up one
electron to form a superoxide radical (O2-), it becomes dangerously
reactive and rapidly takes up an additional three electrons wherever
it can find them. The cell can use O2 for respiration only because
cytochrome oxidase holds onto oxygen at a special bimetallic
center, where it remains clamped between a heme-linked iron
atom and a copper atom until it has picked up a total of four electrons.
Only then can the two oxygen atoms of the oxygen molecule be
safely released as two molecules of water (Figure 14-27).

Figure 14-27. The reaction of O2 with electrons in cytochrome oxidase.
As indicated, the iron atom in heme a serves as an electron queuing
point; this heme feeds four electrons into an O2 molecule held at the
bimetallic center active site, which is formed by the other …

The cytochrome oxidase reaction is estimated to account for 90%
of the total oxygen uptake in most cells. This protein complex is
therefore crucial for all aerobic life. Cyanide and azide are extremely
toxic because they bind tightly to the cell’s cytochrome oxidase
complexes to stop electron transport, thereby greatly reducing
ATP production.

Although the cytochrome oxidase in mammals contains 13
different protein subunits, most of these seem to have a subsidiary
role, helping to regulate either the activity or the assembly of the
three subunits that form the core of the enzyme. The complete
structure of this large enzyme complex has recently been determined
by x-ray crystallography, as illustrated in Figure 14-28. The atomic
resolution structures, combined with mechanistic studies of the effect
of precisely tailored mutations introduced into the enzyme by genetic
engineering of the yeast and bacterial proteins, are revealing the
detailed mechanisms of this finely tuned protein machine.

Figure 14-28. The molecular structure of cytochrome oxidase. This
protein is a dimer formed from a monomer with 13 different protein
subunits (monomer mass of 204,000 daltons). The three colored subunits
are encoded by the mitochondrial genome, and they form the
functional …

Electron Transfers Are Mediated by Random Collisions in
the Inner Mitochondrial Membrane

The two components that carry electrons between the three
major enzyme complexes of the respiratory chain—ubiquinone
and cytochrome c—diffuse rapidly in the plane of the inner
mitochondrial membrane. The expected rate of random collisions
between these mobile carriers and the more slowly diffusing
enzyme complexes can account for the observed rates of electron
transfer (each complex donates and receives an electron about
once every 5–20 milliseconds). Thus, there is no need to postulate
a structurally ordered chain of electron-transfer proteins in the
lipid bilayer; indeed, the three enzyme complexes seem to exist as
independent entities in the plane of the inner membrane, being
present in different ratios in different mitochondria.
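The quoted collision interval translates directly into a turnover number, which is worth seeing explicitly. A back-of-the-envelope sketch using only the 5-20 ms figure from the passage:

```python
# If each complex donates and receives an electron once every 5-20 ms
# (as stated in the text), its turnover is the reciprocal of that interval.

def turnover_per_second(interval_ms):
    """Electron-transfer events per second for a given interval in ms."""
    return 1000.0 / interval_ms

fast = turnover_per_second(5)    # 200 electrons/s
slow = turnover_per_second(20)   # 50 electrons/s
print(f"{slow:.0f}-{fast:.0f} electron transfers per second per complex")
```

Random collisions at these rates are fast enough that no ordered supramolecular wiring needs to be postulated, which is the point of the paragraph.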

The ordered transfer of electrons along the respiratory chain
is due entirely to the specificity of the functional interactions
between the components of the chain: each electron carrier is
able to interact only with the carrier adjacent to it in the sequence
shown in Figure 14-26, with no short circuits.

Electrons move between the molecules that carry them in
biological systems not only by moving along covalent bonds
within a molecule, but also by jumping across a gap as large
as 2 nm. The jumps occur by electron “tunneling,” a quantum-
mechanical property that is critical for the processes we are
discussing. Insulation is needed to prevent short circuits that
would otherwise occur when an electron carrier with a low redox
potential collides with a carrier with a high redox potential. This
insulation seems to be provided by carrying an electron deep
enough inside a protein to prevent its tunneling interactions
with an inappropriate partner.

How the changes in redox potential from one electron carrier
to the next are harnessed to pump protons out of the mitochondrial
matrix is the topic we discuss next.

A Large Drop in Redox Potential Across Each of the Three Respiratory
Enzyme Complexes Provides the Energy for H+ Pumping

We have previously discussed how the redox potential reflects
electron affinities (see p. 783). An outline of the redox potentials
measured along the respiratory chain is shown in Figure 14-29.
These potentials drop in three large steps, one across each major
respiratory complex. The change in redox potential between any
two electron carriers is directly proportional to the free energy
released when an electron transfers between them. Each enzyme
complex acts as an energy-conversion device by harnessing some
of this free-energy change to pump H+ across the inner membrane,
thereby creating an electrochemical proton gradient as electrons
pass through that complex. This conversion can be demonstrated
by purifying each respiratory enzyme complex and incorporating
it separately into liposomes: when an appropriate electron donor
and acceptor are added so that electrons can pass through the complex,
H+ is translocated across the liposome membrane.
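An energy budget makes the proportionality claim tangible: how many H+ can one electron's drop across a complex pay for? The ~0.35 V per-complex drop and the ~200 mV proton-motive force below are assumed round numbers of my own, not values from the passage.

```python
# Rough energy budget (a sketch; 0.35 V per complex and a 200 mV
# proton-motive force are assumed, illustrative values).

FARADAY = 96485.0   # C/mol
J_PER_KCAL = 4184.0

def kcal_per_mol(volts, n=1):
    """Free energy (kcal/mol) for n charges crossing a potential drop."""
    return n * FARADAY * volts / J_PER_KCAL

energy_per_electron = kcal_per_mol(0.35)   # drop across one complex
cost_per_proton = kcal_per_mol(0.20)       # pumping against ~200 mV

print(f"available: {energy_per_electron:.1f} kcal/mol")   # ~8.1
print(f"cost/H+ :  {cost_per_proton:.1f} kcal/mol")       # ~4.6
print(f"max H+ per electron: {int(energy_per_electron // cost_per_proton)}")
```

Under these assumed numbers each electron can pay for roughly one to two pumped protons per complex, consistent with the stoichiometries discussed in the next section.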

Figure 14-29. Redox potential changes along the mitochondrial
electron-transport chain. The redox potential (designated E′0)
increases as electrons flow down the respiratory chain to oxygen.
The standard free-energy change, ΔG°, for the transfer …

The Mechanism of H+ Pumping Will Soon Be Understood in
Atomic Detail

Some respiratory enzyme complexes pump one H+ per electron
across the inner mitochondrial membrane, whereas others pump
two. The detailed mechanism by which electron transport is coupled
to H+ pumping is different for the three different enzyme complexes.
In the cytochrome b-c1 complex, the quinones clearly have a role.
As mentioned previously, a quinone picks up a H+ from the aqueous
medium along with each electron it carries and liberates it when it
releases the electron (see Figure 14-24). Since ubiquinone is freely
mobile in the lipid bilayer, it could accept electrons near the inside
surface of the membrane and donate them to the cytochrome b-c1
complex near the outside surface, thereby transferring one H+
across the bilayer for every electron transported. Two protons are
pumped per electron in the cytochrome b-c1 complex, however, and
there is good evidence for a so-called Q-cycle, in which ubiquinone
is recycled through the complex in an ordered way that makes this
two-for-one transfer possible. Exactly how this occurs can now be
worked out at the atomic level, because the complete structure of
the cytochrome b-c1 complex has been determined by x-ray
crystallography (Figure 14-30).
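The "two-for-one" arithmetic of the Q-cycle can be written as a small ledger. The sketch below follows the standard Q-cycle accounting (two QH2 oxidized at the outer face, one Q re-reduced at the inner face per pair of electrons delivered to cytochrome c); the function is my own illustration, not the text's notation.

```python
# Ledger sketch of one Q-cycle turnover.  Per two electrons passed on to
# cytochrome c: two QH2 are oxidized at the outer membrane face
# (releasing 4 H+ outside) while one Q is re-reduced at the inner face
# (consuming 2 H+ from the matrix).

def q_cycle_turnover():
    h_released_outside = 2 * 2   # 2 QH2 oxidized, 2 H+ released each
    h_taken_from_matrix = 2      # 1 Q re-reduced to QH2 in the matrix
    electrons_to_cyt_c = 2
    return h_released_outside, h_taken_from_matrix, electrons_to_cyt_c

out, taken, e = q_cycle_turnover()
print(f"{out} H+ out, {taken} H+ consumed from matrix, per {e} e-")
print(f"net transfer: {out / e:.0f} H+ per electron")   # 2, as in the text
```

The ledger reproduces the two protons pumped per electron quoted for the cytochrome b-c1 complex, versus the one per electron a simple ubiquinone shuttle would give.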

Figure 14-30. The atomic structure of cytochrome b-c1. This protein is
a dimer. The 240,000-dalton monomer is composed of 11 different protein
molecules in mammals. The three colored proteins form the functional
core of the enzyme: cytochrome b (green), cytochrome …

Allosteric changes in protein conformations driven by electron
transport can also pump H+, just as H+ is pumped when ATP
is hydrolyzed by the ATP synthase running in reverse. For both the
NADH dehydrogenase complex and the cytochrome oxidase complex,
it seems likely that electron transport drives sequential allosteric
changes in protein conformation that cause a portion of the protein
to pump H+ across the mitochondrial inner membrane. A general
mechanism for this type of H+ pumping is presented in Figure 14-31.

Figure 14-31. A general model for H+ pumping. This model for H+
pumping by a transmembrane protein is based on mechanisms that are
thought to be used by both cytochrome oxidase and the light-driven
procaryotic proton pump, bacteriorhodopsin. The protein is driven
through …

H+ Ionophores Uncouple Electron Transport from ATP Synthesis

Since the 1940s, several substances—such as 2,4-dinitrophenol—
have been known to act as uncoupling agents, uncoupling electron
transport from ATP synthesis. The addition of these low-molecular-weight organic compounds to cells stops ATP synthesis by mitochondria
without blocking their uptake of oxygen. In the presence of an
uncoupling agent, electron transport and H+ pumping continue at
a rapid rate, but no H+ gradient is generated. The explanation for
this effect is both simple and elegant: uncoupling agents are lipid-
soluble weak acids that act as H+ carriers (H+ ionophores), and
they provide a pathway for the flow of H+ across the inner mitochondrial
membrane that bypasses the ATP synthase. As a result of this short-
circuiting, the proton-motive force is dissipated completely, and
ATP can no longer be made.
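The short-circuit logic can be caricatured as a flux split: pumped protons return either through ATP synthase (making ATP) or through the ionophore leak (making heat). This is a toy model with invented rate constants, not the chapter's kinetics.

```python
# Toy steady-state model of uncoupling (illustrative; rate constants are
# invented).  Proton backflow divides between ATP synthase and an
# ionophore leak in proportion to their conductances; electron transport
# and pumping continue regardless.

def atp_flux(pump_rate, synthase_k, leak_k):
    """Proton flux through ATP synthase given a leak pathway."""
    return pump_rate * synthase_k / (synthase_k + leak_k)

coupled = atp_flux(pump_rate=100.0, synthase_k=1.0, leak_k=0.0)
uncoupled = atp_flux(pump_rate=100.0, synthase_k=1.0, leak_k=99.0)

print(coupled)     # all proton backflow drives ATP synthesis
print(uncoupled)   # almost all backflow short-circuits as heat
```

As the leak conductance grows, ATP output collapses even though oxygen uptake (the pump term) is unchanged, mirroring the dinitrophenol observation.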

Respiratory Control Normally Restrains Electron Flow
Through the Chain

When an uncoupler such as dinitrophenol is added to cells,
mitochondria increase their oxygen uptake substantially because
of an increased rate of electron transport. This increase reflects
the existence of respiratory control. The control is thought to
act via a direct inhibitory influence of the electrochemical proton
gradient on the rate of electron transport. When the gradient is
collapsed by an uncoupler, electron transport is free to run unchecked
at the maximal rate. As the gradient increases, electron transport
becomes more difficult, and the process slows. Moreover, if an
artificially large electrochemical proton gradient is experimentally
created across the inner membrane, normal electron transport
stops completely, and a reverse electron flow can be detected in
some sections of the respiratory chain. This observation suggests
that respiratory control reflects a simple balance between the
free-energy change for electron-transport-linked proton pumping
and the free-energy change for electron transport—that is, the
magnitude of the electrochemical proton gradient affects both
the rate and the direction of electron transport, just as it affects
the directionality of the ATP synthase (see Figure 14-19).

Respiratory control is just one part of an elaborate interlocking
system of feedback controls that coordinate the rates of glycolysis,
fatty acid breakdown, the citric acid cycle, and electron transport.
The rates of all of these processes are adjusted to the ATP:ADP ratio,
increasing whenever an increased utilization of ATP causes the ratio
to fall. The ATP synthase in the inner mitochondrial membrane,
for example, works faster as the concentrations of its substrates
ADP and Pi increase. As it speeds up, the enzyme lets more H+ flow
into the matrix and thereby dissipates the electrochemical proton
gradient more rapidly. The falling gradient, in turn, enhances the
rate of electron transport.

Similar controls, including feedback inhibition of several key enzymes
by ATP, act to adjust the rates of NADH production to the rate of
NADH utilization by the respiratory chain, and so on. As a result of
these many control mechanisms, the body oxidizes fats and sugars
5–10 times more rapidly during a period of strenuous exercise than
during a period of rest.
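The feedback described here (oxidation accelerating as the ATP:ADP ratio falls) can be sketched as a simple product-inhibition curve. The functional form and constants are invented for illustration; only the 5-10x speed-up range comes from the text.

```python
# Toy feedback curve (illustrative only): oxidation rate rises as the
# ATP:ADP ratio falls, mimicking the 5-10x speed-up during exercise.

def oxidation_rate(atp_adp_ratio, v_max=10.0, k=1.0):
    """Simple product-inhibition curve: high ATP:ADP slows oxidation."""
    return v_max * k / (k + atp_adp_ratio)

rest = oxidation_rate(atp_adp_ratio=9.0)       # ATP plentiful -> slow
exercise = oxidation_rate(atp_adp_ratio=0.25)  # ATP being spent -> fast

print(f"rest: {rest:.1f}, exercise: {exercise:.1f}, "
      f"speed-up: {exercise / rest:.0f}x")
```

With these assumed constants the model lands at an 8-fold speed-up, inside the 5-10x range the passage quotes for strenuous exercise.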

Natural Uncouplers Convert the Mitochondria in Brown Fat into
Heat-generating Machines

In some specialized fat cells, mitochondrial respiration is normally
uncoupled from ATP synthesis. In these cells, known as brown fat
cells, most of the energy of oxidation is dissipated as heat rather
than being converted into ATP. The inner membranes of the large
mitochondria in these cells contain a special transport protein that
allows protons to move down their electrochemical gradient, bypassing
ATP synthase. As a result, the cells oxidize their fat stores
at a rapid rate and produce more heat than ATP. Tissues containing
brown fat serve as “heating pads,” helping to revive hibernating animals
and to protect sensitive areas of newborn human babies from the cold.

Bacteria Also Exploit Chemiosmotic Mechanisms to Harness Energy

Bacteria use enormously diverse energy sources. Some, like animal
cells, are aerobic; they synthesize ATP from sugars they oxidize to
CO2 and H2O by glycolysis, the citric acid cycle, and a respiratory
chain in their plasma membrane that is similar to the one in the
inner mitochondrial membrane. Others are strict anaerobes, deriving
their energy either from glycolysis alone (by fermentation) or from an
electron-transport chain that employs a molecule other than oxygen
as the final electron acceptor. The alternative electron acceptor can
be a nitrogen compound (nitrate or nitrite), a sulfur compound
(sulfate or sulfite), or a carbon compound (fumarate or carbonate),
for example. The electrons are transferred to these acceptors by a
series of electron carriers in the plasma membrane that are comparable
to those in mitochondrial respiratory chains.

Despite this diversity, the plasma membrane of the vast majority of
bacteria contains an ATP synthase that is very similar to the one in
mitochondria. In bacteria that use an electron-transport chain to
harvest energy, the chain pumps H+ out of the cell and
thereby establishes a proton-motive force across the plasma membrane
that drives the ATP synthase to make ATP. In other bacteria, the
ATP synthase works in reverse, using the ATP produced by glycolysis
to pump H+ and establish a proton gradient across the plasma
membrane. The ATP used for this process is generated by
fermentation processes (discussed in Chapter 2).

Thus, most bacteria, including the strict anaerobes, maintain a proton
gradient across their plasma membrane. It can be harnessed to drive
a flagellar motor, and it is used to pump Na+ out of the bacterium via
a Na+-H+ antiporter that takes the place of the Na+-K+ pump of
eucaryotic cells. This gradient is also used for the active inward transport
of nutrients, such as most amino acids and many sugars: each nutrient is
dragged into the cell along with one or more H+ through a specific symporter
(Figure 14-32). In animal cells, by contrast, most inward transport across
the plasma membrane is driven by the Na+ gradient that is established by the
Na+-K+ pump.
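The accumulating power of H+-driven symport follows from equilibrium thermodynamics: a 1-proton symporter can sustain a nutrient gradient of exp(F·Δp/RT). The ~200 mV proton-motive force and 37 °C temperature below are assumed round numbers, not values from the passage.

```python
# How strongly can one co-transported H+ concentrate a nutrient?
# A sketch assuming a ~200 mV proton-motive force at 37 C (310 K).
import math

F = 96485.0    # C/mol, Faraday constant
R = 8.314      # J/(mol K), gas constant

def max_accumulation(pmf_volts, n_protons=1, temp_k=310.0):
    """Equilibrium inside:outside concentration ratio for a symporter
    coupling n_protons to each nutrient molecule."""
    return math.exp(n_protons * F * pmf_volts / (R * temp_k))

ratio = max_accumulation(0.20)
print(f"~{ratio:.0f}-fold accumulation per co-transported H+")
```

Under these assumptions a single co-transported proton can, in principle, support well over a thousand-fold accumulation, which is why symport suffices for most amino acids and many sugars.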

Figure 14-32. The importance of H+-driven transport in bacteria.
A proton-motive force generated across the plasma membrane pumps
nutrients into the cell and expels Na+. (A) In an aerobic bacterium,
an electrochemical proton gradient across the plasma membrane is
produced …

Some unusual bacteria have adapted to live in a very alkaline
environment and yet must maintain their cytoplasm at a physiological
pH. For these cells, any attempt to generate an electrochemical H+
gradient would be opposed by a large H+ concentration gradient in
the wrong direction (H+ higher inside than outside). Presumably for
this reason, some of these bacteria substitute Na+ for H+ in all of their
chemiosmotic mechanisms. The respiratory chain pumps Na+ out of
the cell, the transport systems and flagellar motor are driven by an
inward flux of Na+, and a Na+-driven ATP synthase synthesizes
ATP. The existence of such bacteria demonstrates that the principle
of chemiosmosis is more fundamental than the proton-motive force
on which it is normally based.

Summary

The respiratory chain in the inner mitochondrial membrane contains
three respiratory enzyme complexes through which electrons pass on
their way from NADH to O2.

Each of these can be purified, inserted into synthetic lipid vesicles,
and then shown to pump H+ when electrons are transported through it.
In the intact membrane, the mobile electron carriers ubiquinone and
cytochrome c complete the electron-transport chain by shuttling between
the enzyme complexes. The path of electron flow is NADH → NADH
dehydrogenase complex → ubiquinone → cytochrome b-c1 complex →
cytochrome c → cytochrome oxidase complex → molecular oxygen (O2).

The respiratory enzyme complexes couple the energetically favorable
transport of electrons to the pumping of H+ out of the matrix. The
resulting electrochemical proton gradient is harnessed to make ATP
by another transmembrane protein complex, ATP synthase, through
which H+ flows back into the matrix. The ATP synthase is a reversible
coupling device that normally converts a backflow of H+ into ATP
phosphate bond energy by catalyzing the reaction ADP + Pi → ATP,
but it can also work in the opposite direction and hydrolyze ATP to
pump H+ if the electrochemical proton gradient is sufficiently reduced.
Its universal presence in mitochondria, chloroplasts, and procaryotes
testifies to the central importance of chemiosmotic mechanisms in cells.

By agreement with the publisher, this book is accessible by the search
feature, but cannot be browsed.

Copyright © 2002, Bruce Alberts, Alexander Johnson, Julian Lewis,
Martin Raff, Keith Roberts, and Peter Walter; Copyright © 1983, 1989,
1994, Bruce Alberts, Dennis Bray, Julian Lewis, Martin Raff, Keith
Roberts, and James D. Watson.


The Colors of Life Function

Writer and Curator: Larry H. Bernstein, MD, FCAP 

2.5.1 Type 1 Copper Proteins

The Cu(II) state of this category has an intense blue color due to a thiolate ligand
to Cu(II) charge transfer, and unusual EPR properties arising from the asymmetrical
Cu site (distorted trigonal-pyramidal). The proteins all have a low molecular
mass and have, so far, rather arbitrarily been divided into sub-groups, such as
azurins, plastocyanins, pseudoazurins, amicyanins and various other blue
proteins. Of these the azurins, amicyanins, pseudo-azurins and plastocyanins
apparently have similar copper coordination by two histidine, one cysteine and
one methionine residue. Where the function of Type I copper proteins is known,
it is invariably electron transfer. As yet the names for these proteins are all trivial
and are often derived from source, function or color. The different classes are
usually discerned on the basis of their primary and tertiary structure.

The first bacterial blue proteins to be described were called azurins. Rusticyanin is
another example of a bacterial protein. It has unusual properties with a reduction
potential of 680 mV, and is functional at pH 2. The azurins have well-defined electron
-transfer functions.

The so-called pseudo-azurins differ from the azurins in the N-terminal amino acid
sequence and the optical spectra, which resemble those of plastocyanins.

The blue proteins known as plastocyanins occur in plants, blue-green and green
algae. Their electron transfer role is well defined, i.e. from the bc1 complex
(EC 1.10.2.2) to the photooxidized P-700.

Amicyanins are electron carriers between methylamine dehydrogenase and
cytochrome c, with a characteristic amino acid sequence.

Of the remaining blue proteins stellacyanin is a well- known example. Umecyanin,
plantacyanin and mavicyanin are also considered to belong to this group.
Although these proteins undergo redox reactions in vitro, their true biological
function remains unknown. Most of these proteins exhibit an unusual EPR signal
in which the copper hyperfine splitting pattern is poorly resolved. There is good
evidence that at least for stellacyanin, methionine does not function as a ligand
for copper.

2.5.2 Type 2 Copper Proteins

The copper centres in these proteins are spectroscopically consistent with square
planar or pyramidal coordination, containing oxygen and/or nitrogen ligation.
The Cu(II) is EPR active, with a ‘normal’ signal. There is no intense blue color.
This group includes the copper/zinc superoxide dismutase (EC 1.15.1.1),
dopamine b-monooxygenase (EC 1.14.17.1), galactose oxidase (EC 1.1.3.9)
and the various copper-containing amine oxidases. Some members of this last
group may also contain an organic prosthetic group, such as PQQ
(see section 10), or a modified amino-acid residue.

2.5.3 Type 3 Copper Proteins

In this group a pair of copper atoms comprises a dinuclear centre that
lacks the EPR activity seen for single Cu ions. The best-known example
of an enzyme containing a single Type 3 centre is tyrosinase (catechol
oxidase, EC 1.10.3.1). This protein
contains a metal center which is a structural analogue of the dinuclear copper
center in hemocyanin (ref 31).
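The classification in sections 2.5.1-2.5.3 is essentially a lookup table, and can be expressed as one. The entries below paraphrase the text above; the function and the "not listed" behavior are my own illustration.

```python
# The Type 1/2/3 copper classification from the text, as a small lookup
# table (a sketch; entries paraphrase sections 2.5.1-2.5.3).

COPPER_TYPES = {
    "type 1": {
        "color": "intense blue",
        "epr": "unusual signal (asymmetric Cu site)",
        "examples": ["azurin", "plastocyanin", "pseudoazurin", "amicyanin"],
    },
    "type 2": {
        "color": "not blue",
        "epr": "normal signal",
        "examples": ["Cu/Zn superoxide dismutase", "galactose oxidase"],
    },
    "type 3": {
        "color": "not blue",
        "epr": "EPR-silent dinuclear pair",
        "examples": ["tyrosinase", "hemocyanin"],
    },
}

def type_of(protein):
    """Return the copper type a protein is listed under, or None."""
    for cu_type, info in COPPER_TYPES.items():
        if protein in info["examples"]:
            return cu_type
    return None

print(type_of("plastocyanin"))   # type 1
print(type_of("tyrosinase"))     # type 3
```

The multi-copper oxidases of section 2.5.4 would then be described as proteins combining centres from more than one row of this table.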

2.5.4 Multi-Copper Oxidases

In addition to the above, there are several proteins with catalytic activity that
contain Types 1, 2 and 3 centres in various stoichiometric ratios. These
include L-ascorbate oxidase (EC 1.10.3.3), laccase (EC 1.10.3.2) and
ceruloplasmin (ferro-oxidase, EC 1.16.3.1), the latter two having aromatic diamine
and diphenol oxidase activity. There is growing evidence that in these proteins
the Type 2 and Type 3 copper centres are juxtaposed. Recently it has been
shown that in L-ascorbate oxidase, a trinuclear copper site is present, consisting
of a type 3 copper site, very close (3.9 Å) and possibly bridged to a type 2 copper
site (ref 32). There is a view that ceruloplasmin functions as a ferro-oxidase
and the Fe(III) produced in this reaction can then oxidize the same substrates
as laccase.

2.5.5 Copper Centres in Cytochrome Oxidase

There are two copper centres that appear to be unique. Both are present in
cytochrome-c oxidase (EC 1.9.3.1). The first appears to be an isolated metal ion
and has been referred to as Cud and CuA. The second appears to be part
of a dinuclear centre with cytochrome a3. It has been referred to as Cuu,
Cua3 and CuB. At the moment the ascriptions CuA and CuB are most frequently
used; however, the recent discovery (ref 33) of a cytochrome oxidase in which
cytochrome a has been replaced by cytochrome b, leads to the recommendation
that CuB shall be referred to as Cua3.

There is a striking similarity between two of the Cu centres of N2O reductase
and CuA (ref 34, 35).

2.5.6 Molybdenum enzymes (general)

Molybdenum enzymes contain molybdenum at the catalytic center responsible
for reaction with substrate. They may be divided into those that contain
the iron-molybdenum cofactor and those that contain the pterin-molybdenum
cofactor.

2.5.7 Additional centers

If a molybdenum enzyme contains flavin, it may be called either a molybdenum
flavoprotein or a flavomolybdenum protein, as indicated above. Other centers
should be treated similarly, e.g. an iron-sulfur molybdenum protein.

2.5.8 Molybdenum enzymes containing the iron-molybdenum cofactor

The only enzymes at present known to belong to this group are the nitrogenases
(EC 1.18.6.1; and EC 1.19.6.1): see pp 89-116 in (ref 36) and pp 91-100 in (ref 37).

2.5.9 Molybdenum enzymes containing the pterin-molybdenum cofactor

These enzymes [see pp 411-415 in (ref 36) and (ref 38)] may be divided
into those in which the molybdenum bears a cyanide-labile sulfido (or thio
– see Note 1) ligand (i.e. containing the S2- ligand as Mo=S) and those
lacking this ligand. The former group includes xanthine oxidase (EC 1.1.3.22),
xanthine dehydrogenase (EC 1.1.1.204), aldehyde oxidase (EC 1.2.3.1) and
purine hydroxylase (EC: see Note 2 and 3). These may be called ‘molybdenum-
containing hydroxylase’ as is widely done. Molybdenum enzymes lacking the
sulfide (thio) ligand include sulfite oxidase (EC 1.8.3.1), NAD(P)+-independent
aldehyde dehydrogenase and nitrate reductases (assimilatory and dissimilatory)
(EC 1.6.6.1-3).

2.5.11 Metal-Substituted Metalloproteins

Scientists from several areas, dealing with spectroscopy and electron-transfer
mechanisms, often use metalloproteins in which a metal at the active site has
been substituted by another metal ion, like Co, Zn, Hg, Cd. Examples are zinc-
substituted cytochromes and cobalt-substituted ferredoxins.

The names for such modified proteins are easily given by using indications
like ‘zinc-substituted ….’. In the case of multi-metal proteins, where ambiguity
might arise about which metal has been substituted, one can add in parentheses
the name of the metal that has been replaced, as in: cobalt-substituted [Fe]
nitrogenase.

In formulae fragments or short names one could use the following notation:
[3Fe1Co-4S]2+, cytochrome c′[Fe→Co], plastocyanin[Cu→Hg].

Ambler, R.P. (1980) in From Cyclotrons to Cytochromes (Kaplan, N.O. &
Robinson, A., eds), Academic Press, New York

Moore, G. & Pettigrew, F. (1987) Cytochromes c, Springer-Verlag, Berlin

Bartsch, R.G. (1963) in Bacterial Photosynthesis (Gest, H., San Pietro, A. &
Vernon, L.P., eds), p. 315, Antioch Press, Yellow Springs, Ohio

Stiefel, E.I. & Cramer, S.P. (1985) in Molybdenum Enzymes (Spiro, T.G., ed.),
Wiley-Interscience, New York, pp 89-116

Smith, B.E. et al. (1988) in Nitrogen Fixation Hundred Years After (Bothe,
H., de Bruijn, F.J. & Newton, W.E., eds), Gustav Fischer, Stuttgart, New York,
pp 91-100

Type-2 copper-containing enzymes.
MacPherson IS, Murphy ME.
Cell Mol Life Sci. 2007 Nov;64(22):2887-99.

Type-2 Cu sites are found in all the major branches of life and are often
involved in the catalysis of oxygen species. Four type-2 Cu protein
families are selected as model systems for review: amine oxidases,
Cu monooxygenases, nitrite reductase/multicopper oxidase, and
CuZn superoxide dismutase. For each model protein, the availability
of multiple crystal structures and detailed enzymological studies provides
a detailed molecular view of the type-2 Cu site and delineation of the
mechanistic role of the Cu in biological function. Comparison of these
model proteins leads to the identification of common properties of the
Cu sites and insight into the evolution of the trinuclear active site found
in multicopper oxidases.

Copper proteins and copper enzymes.
Cass AE, Hill HA.
Ciba Found Symp. 1980;79:71-91.
http://www.chm.bris.ac.uk/motm/caeruloplasmin/copper_proteins/t1.htm

The copper proteins that function in homeostasis, electron transport, dioxygen
transport and oxidation are discussed. Particular emphasis is placed on the
role of the ligands, their type and disposition which, in conjunction with other
residues in the active site, determine the role of the copper ion. It is proposed that
copper proteins can be considered in four groups. Those in Group I contain a
single copper ion in an approximately tetrahedral environment with nitrogen and
sulphur-containing ligands. Group II proteins have a single copper ion in a square-
planar-like arrangement. Group III proteins have two copper ions in close
proximity. Group IV consists of multi-copper proteins, composed of sites
representative of the other three groups.

Such centers owe their name to the intense blue coloration of the corresponding
Cu(II) proteins. The color is particularly distinctive since the metal centers are
so optically diluted in these metalloenzymes that only intense absorption in the
visible region, resulting from symmetry allowed electronic transitions, can give
rise to conspicuous colors. In contrast, the comparatively pale blue color of
normal Cu(II) is the result of forbidden electronic transitions between d-orbitals
of different symmetry; in Cu2+(aq) this gives a molar extinction coefficient of
10 M-1cm-1 from a broad absorption between 10,000 cm-1 and 15,000 cm-1
compared to about 3000 M-1cm-1 observed for blue Cu(II) centers.  For the
T1 centers the intense absorption is attributed to a ligand-to-metal charge
transfer between the Cu2+ and a bonded cysteinate ligand. Typically, as in
azurin or plastocyanin this occurs around 16,000 cm-1. Ceruloplasmin has
three T1 centers, and the blue absorption is at 16,400 cm-1 (610nm).
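
The band positions above are quoted in wavenumbers; a quick numerical check (our helper function, using λ(nm) = 10^7/ν̃(cm^-1)) confirms the quoted 610 nm position of the ceruloplasmin charge-transfer band:

```python
# Convert spectroscopic wavenumbers (cm^-1) to wavelengths (nm), as used
# above for the Cu(II) absorption bands. The values are from the text;
# the helper name is ours.

def wavenumber_to_nm(wavenumber_cm):
    """Wavelength in nm of a transition at `wavenumber_cm` (cm^-1)."""
    return 1e7 / wavenumber_cm

# The broad, weak d-d band of Cu2+(aq) spans 10,000-15,000 cm^-1:
print(wavenumber_to_nm(10000))          # 1000.0 (nm, near infrared)
print(wavenumber_to_nm(15000))          # ~666.7 nm (red edge)

# The intense T1 charge-transfer band of ceruloplasmin at 16,400 cm^-1:
print(round(wavenumber_to_nm(16400)))   # 610 nm, matching the quoted value
```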

Plastocyanin geometry around the copper

Crystal structures show a very irregular ‘tetrahedral’ coordination
with two sulphurs from methionine and cysteinate, and two histidine nitrogens.
However, a comparison of azurin with plastocyanin shows that the geometry
is in some ways closer to a trigonal bipyramid, with or without one extra apical
ligand: azurin has a weakly bound glutamine oxygen, whereas plastocyanin
does not. The T1 coppers in ceruloplasmin are in plastocyanin-type domains.
Each of these is coordinated to two histidines and a cysteine; in two of the T1
domains there is also a methionine residue, while the third T1 domain has a
leucine residue that may make only a van der Waals-type contact with the copper.

T1 copper centers are functional in the reversible electron transfer:

Cu2+ + e-   =   Cu+

The strongly distorted geometry represents a compromise (entactic-state
situation) between d10 Cu(I), with its preferred tetrahedral or trigonal
coordination through soft sulfur ligands, and d9 Cu(II) with preferential
square planar or square pyramidal geometry and nitrogen ligand
coordination.   This irregular, high energy arrangement at the metal
center resembles the transition-state geometry between the tetrahedral
and square planar equilibrium configurations of the two oxidation states
involved and permits enhanced rates of electron transfer. The potential
range for proteins with T1 copper centers runs from 180 mV in
stellacyanin to 680 mV in rusticyanin.
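
The quoted potential range can be put in energetic terms with ΔG = −nFE for the one-electron reduction Cu2+ + e− → Cu+; this is a standard electrochemical relation, not from the excerpt, and the helper name is ours:

```python
# Free energy of the one-electron T1 copper reduction at the quoted
# potentials: 180 mV (stellacyanin) and 680 mV (rusticyanin).
# Delta G = -n * F * E; a more positive potential means a more
# thermodynamically favorable reduction.

F = 96485.0  # Faraday constant, C/mol

def delta_g_kj_per_mol(potential_mV, n=1):
    """Free energy (kJ/mol) of an n-electron reduction at `potential_mV`."""
    return -n * F * (potential_mV / 1000.0) / 1000.0

print(round(delta_g_kj_per_mol(180), 1))  # -17.4 kJ/mol (stellacyanin)
print(round(delta_g_kj_per_mol(680), 1))  # -65.6 kJ/mol (rusticyanin)
```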

Zinc proteins: enzymes, storage proteins, transcription factors, and replication
proteins.
Coleman JE.
Annu Rev Biochem. 1992;61:897-946.

In the past five years there has been a great expansion in our knowledge of
the role of zinc in the structure and function of proteins. Not only is zinc
required for essential catalytic functions in enzymes (more than 300 are known
at present), but also it stabilizes and even induces the folding of protein
subdomains. The latter functions have been most dramatically illustrated
by the discovery of the essential role of zinc in the folding of the DNA-binding
domains of eukaryotic transcription factors, including the zinc
finger transcription factors, the large family of hormone receptor proteins,
and the zinc cluster transcription factors from yeasts. Similar functions are
highly probable for the zinc found in the RNA polymerases and the zinc-
containing accessory proteins involved in nucleic acid replication. The rapid
increase in the number and nature of the proteins in which zinc functions
is not unexpected since zinc is the second most abundant trace metal found in
eukaryotic organisms, second only to iron. If one subtracts the amount of iron
found in hemoglobin, zinc becomes the most abundant trace metal found
in the human body.

Zinc Coordination Spheres in Protein Structures
Mikko Laitaoja, Jarkko Valjakka, and Janne Jänis
Inorg. Chem., 2013, 52 (19), pp 10983–10991
http://dx.doi.org/10.1021/ic401072d
Sept 23, 2013

Synopsis
A statistical analysis in terms of zinc coordinating amino acids, metal-to-ligand
bond lengths, coordination number, and structural classification was performed,
revealing coordination spheres from classical tetrahedral cysteine/histidine binding
sites to more complex binuclear sites with carboxylated lysine residues. According
to the results, coordination spheres of hundreds of crystal structures in the PDB
could be misinterpreted due to symmetry-related molecules or missing electron
densities for ligands.
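
Analyses like the one summarized above assign zinc ligands from metal-to-ligand distances in crystal structures. A minimal distance-based sketch of the idea, on made-up coordinates (real analyses parse PDB files; the 2.5 Å cutoff is a typical Zn-ligand bond length plus tolerance, not a value from the paper):

```python
# Find atoms within bonding range of a Zn ion and report the
# coordination sphere. Coordinates are illustrative only.
import math

# (atom name, residue, x, y, z) - hypothetical Cys2His2-like site
atoms = [
    ("SG",  "CYS101", 1.0, 0.0, 0.0),
    ("SG",  "CYS104", 0.0, 2.2, 0.0),
    ("ND1", "HIS120", 0.0, 0.0, 2.0),
    ("NE2", "HIS124", 1.5, 1.5, 0.0),
    ("O",   "HOH201", 5.0, 5.0, 5.0),   # too far: not a ligand
]
zn = (0.5, 0.7, 0.4)

def dist(a, b):
    """Euclidean distance between two (x, y, z) points."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

ligands = [(name, res) for name, res, x, y, z in atoms
           if dist((x, y, z), zn) <= 2.5]
print(ligands)        # the four Cys/His ligand atoms
print(len(ligands))   # coordination number 4 (tetrahedral-like site)
```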

Protein-folding location can regulate manganese-binding versus copper- or
zinc-binding.
Tottey S, Waldron KJ, Firbank SJ, Reale B, Bessant C, Sato K, Cheek TR, et al.
Nature. 2008 Oct 23;455(7216):1138-42. http://dx.doi.org/10.1038/nature07340

Metals are needed by at least one-quarter of all proteins. Although metallo-
chaperones insert the correct metal into some proteins, they have not been
found for the vast majority, and the view is that most metalloproteins acquire
their metals directly from cellular pools. However, some metals form more
stable complexes with proteins than do others. For instance, as described
in the Irving-Williams series, Cu(2+) and Zn(2+) typically form more stable
complexes than Mn(2+). Thus it is unclear what cellular mechanisms manage
metal acquisition by most nascent proteins. To investigate this question, we
identified the most abundant Cu(2+)-protein, CucA (Cu(2+)-cupin A), and the
most abundant Mn(2+)-protein, MncA (Mn(2+)-cupin A), in the periplasm of
the cyanobacterium Synechocystis PCC 6803. Each of these newly identified
proteins binds its respective metal via identical ligands within a cupin fold.
Consistent with the Irving-Williams series, MncA only binds Mn(2+) after
folding in solutions containing at least a 10(4) times molar excess of Mn(2+)
over Cu(2+) or Zn(2+). However once MncA has bound Mn(2+), the metal
does not exchange with Cu(2+). MncA and CucA have signal peptides for
different export pathways into the periplasm, Tat and Sec respectively. Export
by the Tat pathway allows MncA to fold in the cytoplasm, which contains only
tightly bound copper or Zn(2+) (refs 10-12) but micromolar Mn(2+) (ref. 13). In
contrast, CucA folds in the periplasm to acquire Cu(2+). These results reveal
a mechanism whereby the compartment in which a protein folds overrides its
binding preference to control its metal content. They explain why the cytoplasm
must contain only tightly bound and buffered copper and Zn(2+).
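
The 10^4-fold excess figure above can be rationalized with a toy competitive-binding model: if Cu(2+) binds the cupin site about 10^4-fold more tightly than Mn(2+) (per the Irving-Williams series), Mn(2+) only competes when present in a comparable molar excess. This is our illustrative model, not the paper's analysis, and the stability-constant ratio is assumed:

```python
# Simple thermodynamic partitioning between two competing metals
# during protein folding. k_ratio is the assumed ratio of Cu2+ to
# Mn2+ stability constants (~1e4, illustrative).

def fraction_bound_mn(mn_conc, cu_conc, k_ratio=1e4):
    """Fraction of folding protein that captures Mn2+ rather than Cu2+."""
    return mn_conc / (mn_conc + k_ratio * cu_conc)

# Equal concentrations: Cu2+ wins almost every time.
print(fraction_bound_mn(1.0, 1.0))   # ~1e-4
# A 10^4-fold Mn2+ excess brings the competition to even odds.
print(fraction_bound_mn(1e4, 1.0))   # 0.5
```

This is why MncA, which folds in the Mn(2+)-rich, copper-buffered cytoplasm, ends up with manganese despite copper's intrinsically tighter binding.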

Predicting copper-, iron-, and zinc-binding proteins in pathogenic species of the
Paracoccidioides genus
GB Tristão, L do Prado Assunção, LPA dos Santos, CL Borges, MG Silva-Bailão,
CM de Almeida Soares, G Cavallaro and AM Bailão*
Front. Microbiol., 9 Jan 2015. http://dx.doi.org/10.3389/fmicb.2014.00761

Approximately one-third of all proteins have been estimated to contain at least
one metal cofactor, and these proteins are referred to as metalloproteins. These
represent one of the most diverse classes of proteins, containing metal ions that
bind to specific sites to perform catalytic, regulatory and structural functions.
Bioinformatic tools have been developed to predict metalloproteins encoded by
an organism based only on its genome sequence. Their functions and the types of
metal bound can also be predicted via a bioinformatics approach. The
Paracoccidioides complex includes thermodimorphic pathogenic fungi that are
found as saprobic mycelia in the environment and as yeast, the parasitic form,
in host tissues. They
are the etiologic agents of Paracoccidioidomycosis, a prevalent systemic mycosis
in Latin America. Many metalloproteins are important for the virulence of several
pathogenic microorganisms. Accordingly, the present work aimed to predict the
copper, iron and zinc proteins encoded by the genomes of three phylogenetic species
of Paracoccidioides (Pb01, Pb03, and Pb18). The metalloproteins were identified
using bioinformatics approaches based on structure, annotation and domains. Cu-,
Fe-, and Zn-binding proteins represent 7% of the total proteins encoded by
Paracoccidioides spp. genomes. Zinc proteins were the most abundant
metalloproteins, representing 5.7% of the fungus proteome, whereas copper and iron
proteins represent 0.3 and 1.2%, respectively. Functional classification revealed that
metalloproteins are related to many cellular processes. Furthermore, it was observed
that many of these metalloproteins serve as virulence factors in the biology of the
fungus. Thus, it is concluded that the Cu, Fe, and Zn metalloproteomes of the
Paracoccidioides spp. are of the utmost importance for the biology and virulence
of these particular human pathogens.

Zinc finger proteins: new insights into structural and functional diversity
John H Laity, Brian M Lee, Peter E Wright
Current Opinion in Structural Biology Feb 2001; 11(1): 39–46
http://epigenie.com/key-epigenetic-players/chromatin-modifying-and-dna-
binding-proteins/zinc-finger-proteins/

Zinc finger proteins are among the most abundant proteins in eukaryotic genomes.
Their functions are extraordinarily diverse and include DNA recognition, RNA
packaging, transcriptional activation, regulation of apoptosis, protein folding
and assembly, and lipid binding. Zinc finger structures are as diverse as their
functions. Structures have recently been reported for many new zinc finger
domains with novel topologies, providing important insights into structure/function
relationships. In addition, new structural studies of proteins containing the
classical Cys2His2 zinc finger motif have led to novel insights into mechanisms
of DNA binding and to a better understanding of their broader functions in
transcriptional regulation.

Zinc Finger Proteins

Zinc finger (ZnF) proteins are a massive, diverse family of proteins that serve a
wide variety of biological functions. Due to their diversity, it is difficult to come up
with a simple definition of what unites all ZnF proteins; however, the most common
approach is to define them as all small, functional domains that require coordination
by at least one zinc ion (Laity et al., 2001). The zinc ion serves to stabilize the
fold of the domain itself, and is generally not involved in binding targets.
The “finger” refers to the secondary structures (α-helix and β-sheet) that are
held together by the Zn ion. Zinc finger containing domains typically serve
as interactors, binding DNA, RNA, proteins or small molecules (Laity et al., 2001).

ZnF Protein Families

Cys2His2 was the first ZnF domain discovered (also known as Krüppel-type). It was
initially identified as a repeating domain in transcription factor IIIA (TFIIIA) of
Xenopus laevis (Brown et al., 1985; Miller et al., 1985). TFIIIA has nine repeats
of the 30 amino acids that make up the Cys2His2 domain. Each domain forms
a left-handed ββα secondary structure, and coordinates a Zn ion between
two cysteines on the β-sheet hairpin and two histidines in the α-helix, hence
the name Cys2His2 (Lee et al., 1989). These residues are highly conserved,
as is a general hydrophobic core that allows the helix to form. The other
residues can show great sequence diversity (Michael et al., 1992). Cys2His2
zinc fingers that bind DNA tend to have 2-4 tandem domains as part of a
larger protein. The residues of the alpha helices form specific contacts with a
specific DNA sequence motif by “reading” the nucleotides in the major groove
of DNA (Elrod-Erickson et al., 1996; Pavletich and Pabo, 1991). Cys2His2
proteins are the biggest group of transcription factors in most species.
Non-DNA-binding Cys2His2 proteins can have much more flexible tertiary
structures. Examples of Cys2His2 proteins include the Inhibitor of Apoptosis
(IAP) family of proteins and the CTCF transcription factor.
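
The conserved Cys/His spacing described above means classical zinc fingers can be located in a sequence with a simple pattern scan. A sketch using a commonly used PROSITE-style consensus, C-x(2,4)-C-x(3)-[LIVMFYWC]-x(8)-H-x(3,5)-H; the example sequence is written for illustration and is not a real protein:

```python
# Scan a protein sequence for the classical Cys2His2 zinc finger motif.
import re

# PROSITE-style consensus for the C2H2 motif, as a regular expression.
C2H2 = re.compile(r"C.{2,4}C.{3}[LIVMFYWC].{8}H.{3,5}H")

seq = "MAQEALTCKICGKAFSRSSHLIRHQRTHLE"  # illustrative sequence only

for m in C2H2.finditer(seq):
    print(m.start(), m.group())   # prints: 7 CKICGKAFSRSSHLIRHQRTH
```

In a real protein, each hit marks one ββα finger unit; DNA-binding proteins typically carry 2-4 such units in tandem, as noted above.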

Treble clef fingers are a very diverse group of ZnF proteins both in terms of
structure and function. What makes them a family is a shared fold at their core
that looks a little like a musical treble clef, especially if you squint (Grishin,
2001). Most treble clef finger motifs have a β hairpin, a variable loop region,
a β hairpin, and an α helix. The “knuckle” of the β hairpin and the α helix contain
the Cys-x-x-Cys sequence necessary to coordinate the Zn ion. Treble clef
fingers often form the core of protein structures, for example the L24E and
S14 ribosomal proteins and the RING finger family.

Zinc ribbons are a little less structurally complex than the other two major groups.
Zinc ribbons contain two zinc knuckles, often β hairpins, coordinating a zinc ion via
two Cys residues separated by 2-4 other residues on one knuckle, and a Cys-x-x-
Cys on the other (Hahn and Roberts, 2000). Examples of zinc ribbon-containing
proteins include the basal transcription factors TFIIS and TFIIB, which form a
complex with RNAPII to bind DNA, and the Npl4 nuclear pore protein, which uses
a zinc ribbon to bind ubiquitin (Alam et al., 2004). Cys2His2, treble clef fingers,
and zinc ribbons form the majority of zinc fingers, but there are several other
smaller groups that don’t fit neatly into these three.

Metallothionein proteins expression, copper and zinc concentrations, and lipid
peroxidation level in a rodent model for amyotrophic lateral sclerosis
E Tokuda, Shin-Ichi Ono,  K Ishige, A Naganuma, Y Ito, T Suzuki
Toxicology Jan 2007; 229(1–2): 33–41

It has been hypothesized that copper-mediated oxidative stress contributes to the
pathogenesis of familial amyotrophic lateral sclerosis (ALS), a fatal motor neuron
disease in humans. To verify this hypothesis, we examined the copper and zinc
concentrations and the amounts of lipid peroxides, together with that of the
expression of metallothionein (MT) isoforms in a mouse model [superoxide
dismutase1 transgenic (SOD1 Tg) mouse] of ALS. The expression of the MT-I and
MT-II (MT-I/II) isoforms, measured by Western blotting, together with the copper
level and the amount of lipid peroxides, increased in an age-dependent manner in
the spinal cord, the region responsible for motor paralysis. A significant increase
was already seen in SOD1 Tg mice as early as 8 weeks of age, at which time the mice had not
yet exhibited motor paralysis, and showed a further increase at 16 weeks of age,
when paralysis was evident. Inversely, the spinal zinc level had significantly
decreased at both 8 and 16 weeks of age. The level of the third isoform, MT-III,
remained the same as in 8-week-old wild-type mice, finally increasing
significantly at 16 weeks of age. It has been believed that a mutant SOD1
protein, encoded by a mutant SOD1, gains a novel cytotoxic function while
maintaining its original enzymatic activity, and causes motor neuron death
(gain-of-toxic function). Copper-mediated oxidative stress seems to be a probable
underlying pathogenesis of gain-of-toxic function. Taking the above current
concepts and the classic functions of MT into account, MTs could have a disease
modifying property: the MT-I/II isoform for attenuating the gain-of-toxic function
at the early stage of the disease, and the MT-III isoform at an advanced stage.

Prion protein expression level alters regional copper, iron and zinc content in
the mouse brain
MJ Pushie,  IJ Pickering, GR Martin, S Tsutsui, FR Jirik and GN George
Metallomics, 2011, 3, 206-214. http://dx.doi.org/10.1039/C0MT00037J

The central role of the prion protein (PrP) in a family of fatal neurodegenerative
diseases has garnered considerable research interest over the past two decades.
Moreover, the role of PrP in neuronal development, as well as its apparent role
in metal homeostasis, is increasingly of interest. The host-encoded form of the
prion protein (PrPC) binds multiple copper atoms via its N-terminal domain
and can influence brain copper and iron levels. The importance of PrPC to the
regulation of brain metal homeostasis and metal distribution, however, is not
fully understood. We therefore employed synchrotron-based X-ray fluorescence
imaging to map the level and distributions of several key metals in the brains of
mice that express different levels of PrPC. Brain sections from wild-type, prion
gene knockout (Prnp−/−) and PrPC over-expressing mice revealed striking
variation in the levels of iron, copper, and even zinc in specific brain regions as
a function of PrPC expression. Our results indicate that one important function
of PrPC may be to regulate the amount and distribution of specific metals within
the central nervous system. This raises the possibility that PrPC levels, or its
activity, might regulate the progression of diseases in which altered metal
homeostasis is thought to play a pathogenic role such as Alzheimer’s,
Parkinson’s and Wilson’s diseases and disorders such as hemochromatosis.

Zinc & Copper Imbalances: Immense Biochemical Implications
Mar 27, 2013 by Michael McEvoy
http://metabolichealing.com/zinc-copper-imbalances-immense-biochemical-
implications/

The status of zinc and copper levels may have profound implications for
many people. Much has been written about the significance of these two
trace elements for many, many years. Many health conditions may be
directly caused by abnormal zinc and copper levels.

With all of the recent attention given to methylation status, gene mutations,
MTHFR, and the associated neurological and mental/behavioral disorders
that may ensue, the zinc-to-copper ratio remains pivotal in these regards.

While zinc toxicity and copper deficiency are possible, the subject of this
article is the more common imbalance: copper toxicity and zinc deficiency.

The Physiological Roles Of Zinc & Copper

Zinc and copper are antagonists. The balance between these two trace
elements is an example of the effects of biological dualism. While zinc
toxicity is possible, far more common are zinc deficiency and copper toxicity.
Both zinc and copper play essential roles in the body, and there can be a
number of causes for why imbalances ensue.

It may be easier to identify the roles that zinc doesn’t play in the body,
than the roles it does play. Zinc is an essential trace element that activates
several hundred enzymatic reactions. These reactions are fundamental
to life and biological activity. Some of the activities that zinc is involved in:

  • DNA & RNA synthesis
  • Gene expression
  • Nervous system function
  • Immune function & immune signaling such as cell
    apoptosis
  • Neuronal transmission
  • Brain function
  • Zinc possesses powerful anabolic activities in the cells
  • Formation of zinc proteins known as “zinc fingers”
  • Zinc is essential for blood clotting and platelet formation
  • Zinc is involved in Vitamin A synthesis
  • Folate is made available through zinc enzyme reactions
  • Along with copper, zinc makes up the antioxidant
    enzyme system, ZnCu superoxide dismutase
  • Steroidal hormone synthesis
  • Growth & development of children
  • Testosterone and semen formation
  • The highest concentration of zinc is found in the
    male prostate gland

Copper is an essential trace element serving many important functions
as well. However, copper is well documented to induce several toxic effects
in the body, when elevated. Because copper is a pro-oxidant when free and
unbound, it can quickly generate free radicals.

The major sources for copper toxicity are: exposure to industrial forms
of copper such as copper pipes, copper cookware, birth control, exposure
to copper-based fungicides. Diets high in copper and low in zinc may play
a role in copper toxicity. Pyrrole disorder, which causes depletion of zinc,
may result in elevated levels of copper.

Some of the essential roles copper plays in the body:

  • Connective tissue formation
  • ATP synthesis
  • Iron metabolism
  • Brain health via neurotransmitter synthesis
  • Gene transcription
  • Synthesis of the antioxidant superoxide dismutase
  • Skin pigmentation
  • Nerve tissue: myelin sheath formation
  • Copper tends to rise when estrogen is dominant

Perhaps one of the first reports that zinc and copper imbalances play
a role in human health and disease was their detection in mental
disorders made by Carl Pfeiffer, MD, PhD. Dr. Pfeiffer identified a
condition known as pyrrole disorder, sometimes referred to as
pyrroluria or “mauve factor”.

As it turns out, pyrrole disorder is a major biochemical imbalance
in many people with chronic illnesses such as chronic Lyme disease,
autism, schizophrenia, depression, bi-polar, and chronic fatigue
syndrome. Pyrroles are a byproduct of hemoglobin synthesis.
Apparently, some individuals are more predisposed towards producing
higher amounts of pyrroles. When pyrroles are excessive, they irreversibly
bind to zinc and vitamin B6, causing their excretion. Consequently,
it is common that once zinc levels become depleted, copper levels tend to rise.

Copper Toxicity

Problems associated with copper toxicity include: pyrrole disorder,
estrogen dominance, schizophrenia, depression, anxiety disorder,
chronic fatigue, migraines, liver toxicity, thyroid conditions, chronic
candida yeast infections, PMS, to name a few. Some research has
even implicated copper toxicity with Alzheimer’s Disease and with
cardiovascular disease. Perhaps one of the primary mechanisms
through which copper toxicity can damage tissues is through its
initiation of oxidative stress and free radical formation. Free copper
ions that are not bound to copper proteins such as ceruloplasmin,
are pro-oxidants, and are highly reactive.

Empirical research from clinicians, indicates that there are different
types of copper imbalances. For example, if there is a lot of free,
unbound copper present, this may cause a situation of nutritive
copper deficiency. Another copper imbalance is when high pyrroles
depress zinc levels, and copper levels concomitantly rise. If high
pyrroles are present, B6 will also be lost in high amounts. In a general
but very real sense, all forms of copper excess will affect zinc status,
due to the dualistic nature of zinc and copper.

Copper & Estrogen

It has been known for many years that copper can cause a rise in
estrogen, and conversely estrogen may raise copper. Estrogen
dominance has been extensively studied in its role in breast
cancer development. One possible mechanism by which estrogen becomes
carcinogenic is through its oxidation, induced by copper. Once oxidized,
estrogen forms highly reactive hydroxyl radicals, with associated DNA
damage and mutagenesis.

Zinc Deficiency

As mentioned previously, pyrrole disorder will directly depress
zinc status, causing high levels of its excretion. When zinc is
lost, copper rises. Because of their essential roles in neuro-
transmitter synthesis, low zinc and high copper levels can
directly affect cognition, behavior and thought processes.
Zinc has been studied in biochemical reactions involving
calcium-driven, synaptic neurotransmission, as well as in
glutamate/GABA balance and with limbic brain function.

Zinc & Reproduction

Zinc is essential for steroidal hormone synthesis, and is a
well known catalyst for testosterone synthesis, as well as
luteinizing hormone. Zinc has demonstrated its ability to
prevent miscarriage and toxicity during pregnancy. The male
prostate gland reportedly contains the highest concentration
of zinc in the body.

Zinc & Brain Function

Much attention has been given to excitotoxicity, such as the
effects induced by MSG (monosodium glutamate). Excess
stimulation of the excitatory neurotransmitter glutamate,
may cause severe physical and psychological reactions in
certain individuals. Zinc has been studied for its ability to
enhance the activity of GABA (glutamate’s antagonistic
neurotransmitter) and to suppress excess glutamate.

Studies on mice demonstrated that when depleted of zinc
for two weeks, the mice developed seizures, most likely due
to GABA deficiencies and glutamate excess.

There is an emerging body of evidence that demonstrates
that Alzheimer’s disease may involve copper toxicity and
zinc deficiency. Not only can excess copper cause zinc
depletion, but so can excess lead.

The hippocampus, a major part of the limbic brain, records
memories and is responsible for processing meaningful
experiences. Numerous studies cite evidence that if hippocampal
cells are deprived of zinc, they die. In addition to
hippocampal cell death induced by zinc deprivation, the
amygdala, the other major limbic structure, experiences
cell death as well when deprived of zinc.

Green Fluorescent Protein

Green fluorescent protein as a marker for gene expression
Chalfie M, Tu Y, Euskirchen G, Ward WW, Prasher DC.
Science. 1994 Feb 11;263(5148):802-5.
http://www.ncbi.nlm.nih.gov/pubmed/8303295

A complementary DNA for the Aequorea victoria green fluorescent protein (GFP)
produces a fluorescent product when expressed in prokaryotic (Escherichia coli)
or eukaryotic (Caenorhabditis elegans) cells. Because exogenous substrates and
cofactors are not required for this fluorescence, GFP expression can be used
to monitor gene expression and protein localization in living organisms.

http://en.wikipedia.org/wiki/Green_fluorescent_protein

The green fluorescent protein (GFP) is a protein composed of 238 amino acid
residues (26.9 kDa) that exhibits bright green fluorescence when exposed
to light in the blue to ultraviolet range. Although many other marine organisms
have similar green fluorescent proteins, GFP traditionally refers to the protein
first isolated from the jellyfish Aequorea victoria. The GFP from A. victoria
has a major excitation peak at a wavelength of 395 nm and a minor one at
475 nm. Its emission peak is at 509 nm, which is in the lower green portion
of the visible spectrum. The fluorescence quantum yield (QY) of GFP is 0.79.
The GFP from the sea pansy (Renilla reniformis) has a single major excitation
peak at 498 nm.
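
Two quick calculations on the figures just quoted (the helper variables are ours): the Stokes shift between the major excitation and emission peaks, and the average residue mass implied by 238 residues at 26.9 kDa:

```python
# Numerical companion to the GFP figures above.

excitation_nm, emission_nm = 395, 509
stokes_shift_nm = emission_nm - excitation_nm
print(stokes_shift_nm)        # 114 nm between major excitation and emission

avg_residue_da = 26900 / 238  # 26.9 kDa over 238 residues
print(round(avg_residue_da))  # ~113 Da, typical for an amino acid residue
```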

In cell and molecular biology, the GFP gene is frequently used as a reporter of
expression. In modified forms it has been used to make biosensors, and many
animals have been created that express GFP as a proof-of-concept that a gene
can be expressed throughout a given organism. The GFP gene can be introduced
into organisms and maintained in their genome through breeding, injection with a
viral vector, or cell transformation. To date, the GFP gene has been introduced
and expressed in many bacteria, yeast and other fungi, fish (such as zebrafish),
plant, fly, and mammalian cells, including human cells. Martin Chalfie, Osamu Shimomura,
and Roger Y. Tsien were awarded the 2008 Nobel Prize in Chemistry on 10 October
2008 for their discovery and development of the green fluorescent protein.

http://www.conncoll.edu/ccacad/zimmer/GFP-ww/GFP-1.htm

In Aequorea victoria a protein called aequorin releases blue light upon binding
with calcium. This blue light is then totally absorbed by the GFP, which in turn
gives off the green light.

In 1994 GFP was cloned. Now GFP is found in laboratories all over the world where
it is used in every conceivable plant and animal. Flatworms, algae, E. coli and pigs
have all been made to fluoresce with GFP.

The importance of GFP was recognized in 2008 when the Nobel Committee awarded
Osamu Shimomura, Marty Chalfie and Roger Tsien the Chemistry Nobel Prize “for
the discovery and development of the green fluorescent protein, GFP.”

Why is it so popular? Well, I like to think of GFP as the microscope of the twenty-
first century. Using GFP we can see when proteins are made, and where they go.
This is done by joining the GFP gene to the gene of the protein of interest so that
when the protein is made it will have GFP hanging off it. Since GFP fluoresces, one
can shine light at the cell and watch for the distinctive green fluorescence associated
with GFP to appear.
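The gene-fusion idea described above can be sketched in silico. This is a minimal illustration with toy placeholder sequences (not the real GFP coding sequence) and a hypothetical glycine-rich linker, showing only the reading-frame bookkeeping involved:

```python
# Illustrative sketch of assembling a GFP fusion construct in silico:
# remove the stop codon of the gene of interest, append an in-frame linker,
# then append the GFP coding sequence. All sequences are toy placeholders.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def make_fusion(gene_cds: str, gfp_cds: str, linker: str = "GGTGGAGGC") -> str:
    """Join gene + linker + GFP in frame, dropping the gene's stop codon."""
    if len(gene_cds) % 3 != 0:
        raise ValueError("gene CDS length must be a multiple of 3")
    if gene_cds[-3:].upper() in STOP_CODONS:
        gene_cds = gene_cds[:-3]          # drop stop so translation reads through
    return gene_cds + linker + gfp_cds    # GFP supplies the final stop codon

gene = "ATGGCTACCTAA"        # toy gene of interest ending in a stop codon
gfp = "ATGGTGAGCAAGTAA"      # toy stand-in for the GFP coding sequence
fusion = make_fusion(gene, gfp)
print(fusion)                # gene (minus stop) + linker + GFP, all in frame
```

The only real constraint the sketch captures is that the junction must preserve the reading frame, which is why the linker length is a multiple of three.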

A variant of yellow fluorescent protein with fast and efficient maturation for
cell-biological applications
T Nagai, K Ibata, E Sun Park, M Kubota, K Mikoshiba & A Miyawaki
Nature Biotechnology 20, 87–90 (2002)  http://dx.doi.org/10.1038/nbt0102-87

The green fluorescent protein (GFP) from the jellyfish Aequorea victoria
has provided a myriad of applications for biological systems. Over the last
several years, mutagenesis studies have improved folding properties of GFP.
However, slow maturation is still a big obstacle to the use of GFP variants for
visualization. These problems are exacerbated when GFP variants are expressed
at 37°C and/or targeted to certain organelles. Thus, obtaining GFP variants that
mature more efficiently is crucial for the development of expanded research
applications. Among Aequorea GFP variants, yellow fluorescent proteins (YFPs)
are relatively acid-sensitive and uniquely quenched by chloride ion (Cl−). For
YFP to be fully and stably fluorescent, mutations that decrease the sensitivity
to both pH and Cl− are desired. Here we describe the development of an
improved version of YFP named “Venus”. Venus contains a novel mutation,
F46L, which at 37°C greatly accelerates oxidation of the chromophore, the rate-
limiting step of maturation. As a result of other mutations, F64L/M153T/
V163A/S175G, Venus folds well and is relatively tolerant of exposure
to acidosis and Cl−. We succeeded in efficiently targeting a neuropeptide
Y-Venus fusion protein to the dense-core granules of PC12 cells. Its secretion
was readily monitored by measuring release of fluorescence into the medium.
The use of Venus as an acceptor allowed early detection of reliable signals of
fluorescence resonance energy transfer (FRET) for Ca2+ measurements in brain
slices. With the improved speed and efficiency of maturation and the increased
resistance to environment, Venus will enable fluorescent labelings that were not
possible before.

Rhodopsin-like Protein from the Purple Membrane of Halobacterium halobium
DIETER OESTERHELT &  WALTHER STOECKENIUS
Nature New Biology 29 Sep 1971; 233, 149-152  http://dx.doi.org/10.1038/newbio233149a0

HALOPHILIC bacteria require high concentrations of sodium chloride and lower
concentrations of KCl and MgCl2 for growth. The cell membrane dissociates into
fragments of varying size when the salt is removed. One characteristic fragment—
termed the “purple membrane” because of its characteristic deep purple colour—
has been isolated in relatively pure form from Halobacterium halobium. We can
now show that the purple colour is due to retinal bound to an opsin-like protein,
the only protein present in this membrane fragment.

References

Stoeckenius, W., and Rowen, R., J. Cell Biol., 34, 365 (1967).

Stoeckenius, W., and Kunau, W. H., J. Cell Biol., 38, 337 (1968).

Blaurock, A. E., and Stoeckenius, W., Nature New Biology, 233, 152 (1971).

Sehgal, S. N., and Gibbons, N. E., Canad. J. Microbiol., 6, 165 (1960).

Kelly, M., Norgård, S., and Liaaen-Jensen, S., Acta Chem. Scand., 24, 2169 (1970).

Shapiro, A. L., Viñuela, E., and Maizel, J. V., Jr., Biochem. Biophys. Res.
Commun., 28, 815 (1967).

The monomerization of the Purple protein, a member of the GFP-family
Corning, Brooke

Green fluorescent protein (GFP) has been used extensively since its discovery
in the 1960s to report and visualize gene expression. For years it has been the only
known naturally occurring fluorescent pigment that is encoded by a single gene,
making it extremely useful in various fields of biology, because the expression of
this gene directly leads to the appearance of the fluorescent green color. Recently,
however, many more proteins with similar properties to GFP, and available in a
variety of colors, have been isolated from the class of marine organisms called
Anthozoa, which includes the corals. This increase in the availability of colored
proteins in GFP family in turn has expanded the number of available biotech-
nology applications. However, some of these newly discovered GFP-like
proteins do not have wild-type forms that readily allow for the creation of
fusion proteins, particularly because of oligomerization. It is widely accepted
that almost all members of the GFP-family form dimers or tetramers in their
functional forms.

This study investigates a GFP-like protein, Purple, isolated from two species,
Galaxea fascicularis and Montipora efflorescens. Purple protein forms oligomers
when expressed, which would then interfere with the normal expression of a protein
to be tagged in gene fusion experiments. We selectively mutated 3 amino acids,
which we believed were responsible for oligomerization in Purple. These 3
residues were chosen based on sequence similarities to a very similar protein,
a mutant form of the Rtms5 chromoprotein from Montipora efflorescens. While
we had hoped that the resulting triple-mutant Purple protein would form
monomers in vivo while retaining its purple coloration, this turned out to
be incorrect. The resulting mutants had lost their ability to turn purple. However,
we also determined that we had successfully changed the oligomerization
state of Purple by examining the relative molecular mass of one of our
mutant proteins, which turned out to be half the size of the original
purple protein. It is possible that by adding additional mutations in
the future, the original spectral properties could be recovered. If
successful, this would further expand the utility of the GFP family.

Rhodopsin, also known as visual purple, from Ancient Greek ῥόδον
(rhódon, “rose”), due to its pinkish color, and ὄψις (ópsis, “sight”), is
a light-sensitive receptor protein. It is a biological pigment in photo-
receptor cells of the retina. Rhodopsin is the primary pigment found
in rod photoreceptors. Rhodopsins belong to the G-protein-coupled
receptor (GPCR) family. They are extremely sensitive to light, enabling
vision in low-light conditions. Exposed to light, the pigment
immediately photobleaches, and it takes about 45 minutes to regenerate
fully in humans. Its discovery was reported by German physiologist
Franz Christian Boll in 1876.

Read Full Post »

The Life and Work of Allan Wilson

Curator: Larry H. Bernstein, MD, FCAP

 

Allan Charles Wilson (18 October 1934 – 21 July 1991) was a Professor of Biochemistry at the University of California, Berkeley, a pioneer in the use of molecular approaches to understand evolutionary change and reconstruct phylogenies, and a revolutionary contributor to the study of human evolution. He was one of the most controversial figures in post-war biology; his work attracted a great deal of attention both from within and outside the academic world. He is the only New Zealander to have won the MacArthur Fellowship.

He is best known for the experimental demonstration, with his doctoral student Vincent Sarich, of the concept of the molecular clock, which had been theoretically postulated by Linus Pauling and Emile Zuckerkandl, and for revolutionary insights into the molecular anthropology of higher primates and human evolution, known as the Mitochondrial Eve hypothesis (developed with his doctoral students Rebecca L. Cann and Mark Stoneking).

Allan Wilson was born in Ngaruawahia, New Zealand, and raised on his family’s rural dairy farm at Helvetia, Pukekohe, about twenty miles south of Auckland. At his local Sunday School, the vicar’s wife was impressed by young Allan’s interest in evolution and encouraged Allan’s mother to enroll him at the elite King’s College secondary school in Auckland. There he excelled in mathematics, chemistry, and sports.

Wilson already had an interest in evolution and biochemistry, but intended to be the first in his family to attend university by pursuing studies in agriculture and animal science. Wilson met Professor Campbell Percy McMeekan, a New Zealand pioneer in animal science, who suggested that Wilson attend the University of Otago in southern New Zealand to further his study in biochemistry rather than veterinary science. Wilson gained a BSc from the University of Otago in 1955, majoring in both zoology and biochemistry.

The bird physiologist Donald S. Farner met Wilson as an undergraduate at Otago and invited him to Washington State University at Pullman as his graduate student. Wilson obliged and completed a master’s degree in zoology at WSU under Farner in 1957, where he worked on the effects of photoperiod on the physiology of birds.

Wilson then moved to the University of California, Berkeley, to pursue his doctoral research. At the time the family thought Allan would only be gone two years. Instead, Wilson remained in the United States, gaining his PhD at Berkeley in 1961 under the direction of biochemist Arthur Pardee for work on the regulation of flavin biosynthesis in bacteria. From 1961 to 1964, Wilson studied as a post-doc under biochemist Nathan O. Kaplan at Brandeis University in Waltham, Massachusetts. In Kaplan’s lab, working with lactate and malate dehydrogenases, Wilson was first introduced to the nascent field of molecular evolution. Nate Kaplan was one of the very earliest pioneers to address phylogenetic problems with evidence from protein molecules, an approach that Wilson later famously applied to human evolution and primate relationships. After Brandeis, Wilson returned to Berkeley where he set up his own lab in the Biochemistry department, remaining there for the rest of his life.

Wilson joined the UC Berkeley faculty of biochemistry in 1964, and was promoted to full professor in 1972. His first major scientific contribution was published as Immunological Time-Scale For Hominid Evolution in the journal Science in December 1967. With his student Vincent Sarich, he showed that evolutionary relationships of the human species with other primates, in particular the Great Apes (chimpanzees, gorillas, and orangutans), could be inferred from molecular evidence obtained from living species, rather than solely from fossils of extinct creatures.

Their microcomplement fixation method (see complement system) measured the strength of the immune reaction between an antigen (serum albumin) from one species and an antibody raised against the same antigen in another species. The strength of the antibody-antigen reaction was known to be stronger between more closely related species: their innovation was to measure it quantitatively among many species pairs as an “immunological distance”. When these distances were plotted against the divergence times of species pairs with well-established evolutionary histories, the data showed that the molecular difference increased linearly with time, in what was termed a “molecular clock”. Given this calibration curve, the time of divergence between species pairs with unknown or uncertain fossil histories could be inferred. Most controversially, their data suggested that divergence times between humans, chimpanzees, and gorillas were on the order of 3~5 million years, far less than the estimates of 9~30 million years accepted by conventional paleoanthropologists from fossil hominids such as Ramapithecus. This ‘recent origin’ theory of human/ape divergence remained controversial until the discovery of the “Lucy” fossils in 1974.
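The calibrate-then-infer logic described above can be sketched numerically. All distances, times, and the "unknown" pair below are invented for illustration and are not Sarich and Wilson's actual data:

```python
# Sketch of molecular-clock calibration: fit immunological distance (ID)
# against known divergence times as a line through the origin, then invert
# the fit to date a species pair with no fossil record. Numbers are invented.

calibration = [        # (divergence time in Myr, immunological distance)
    (10.0, 12.0),
    (25.0, 31.0),
    (40.0, 47.0),
    (60.0, 73.0),
]

# Least-squares slope for a line through the origin: k = sum(t*d) / sum(t*t)
k = (sum(t * d for t, d in calibration)
     / sum(t * t for t, _ in calibration))

def divergence_time(distance: float) -> float:
    """Invert the calibration: estimated divergence time in Myr."""
    return distance / k

unknown_pair_distance = 5.0   # hypothetical human/chimp-like distance
print(f"clock rate: {k:.3f} ID units per Myr")
print(f"estimated divergence: {divergence_time(unknown_pair_distance):.1f} Myr")
```

With these invented numbers the inferred date comes out near the low single-digit millions of years, illustrating how a small immunological distance translates into a recent divergence once the clock is calibrated.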

Wilson and another PhD student, Mary-Claire King, subsequently compared several lines of genetic evidence (immunology, amino acid differences, and protein electrophoresis) on the divergence of humans and chimpanzees, and showed that all methods agreed that the two species were >99% similar. Given the large organismal differences between the two species in the absence of large genetic differences, King and Wilson argued that it was not structural gene differences that were responsible for species differences, but the regulation of those genes, that is, the timing and manner in which near-identical gene products are assembled during embryology and development. In combination with the “molecular clock” hypothesis, this contrasted sharply with the accepted view that larger or smaller organismal differences were due to correspondingly larger or smaller rates of genetic divergence.

In the early 1980s, Wilson further refined traditional anthropological thinking with his work with PhD students Rebecca Cann and Mark Stoneking on the so-called “Mitochondrial Eve” hypothesis. In his efforts to identify informative genetic markers for tracking human evolutionary history, he focused on mitochondrial DNA (mtDNA), genes found in the mitochondria in the cytoplasm of the cell, outside the nucleus. Because of its location in the cytoplasm, mtDNA is passed exclusively from mother to child, the father making no contribution, and in the absence of genetic recombination it defines female lineages over evolutionary timescales. Because it also mutates rapidly, it is possible to measure the small genetic differences between individuals within a species by restriction endonuclease gene mapping. Wilson, Cann, and Stoneking measured differences among many individuals from different human continental groups, and found that humans from Africa showed the greatest inter-individual differences, consistent with an African origin of the human species (the so-called “Out of Africa” hypothesis). The data further indicated that all living humans shared a common maternal ancestor, who lived in Africa only a few hundred thousand years ago.
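The within-group diversity comparison underlying the African-origin inference can be sketched as follows. The sequences are tiny invented stand-ins for mtDNA restriction-map or sequence data, chosen only to show the calculation:

```python
# Sketch of the diversity comparison: mean pairwise differences within each
# continental sample. A higher mean indicates an older population. The toy
# sequences below are invented, not real human mtDNA data.

from itertools import combinations

def pairwise_diff(a: str, b: str) -> int:
    """Number of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def mean_diversity(seqs: list[str]) -> float:
    """Mean pairwise difference over all pairs in a sample."""
    pairs = list(combinations(seqs, 2))
    return sum(pairwise_diff(a, b) for a, b in pairs) / len(pairs)

groups = {
    "Africa":  ["ACGTACGT", "ACGTTCGA", "GCGTACGA", "ACTTACGT"],
    "Eurasia": ["ACGTACGT", "ACGTACGA", "ACGTACGT"],
}

for name, seqs in groups.items():
    print(name, round(mean_diversity(seqs), 2))
# The higher mean in the "Africa" sample mimics the pattern consistent with
# an older, ancestral population.
```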

This common ancestor became widely known in the media and popular culture as the Mitochondrial Eve. This had the unfortunate and erroneous implication that only a single female lived at that time, when in fact the occurrence of a coalescent ancestor is a necessary consequence of population genetic theory, and the Mitochondrial Eve would have been only one of many humans (male and female) alive at that time. This finding was, like his earlier results, not readily accepted by anthropologists. The conventional hypothesis was that the various human continental groups had evolved from diverse ancestors over several million years since the divergence from chimpanzees. The mtDNA data, however, strongly suggested that all humans descended from a common, quite recent, African mother.

Wilson became ill with leukemia, and after a bone marrow transplant, died on Sunday, 21 July 1991, at the Fred Hutchinson Cancer Research Center in Seattle. He had been scheduled to give the keynote address at an international conference the same day. He was 56, at the height of his scientific recognition and powers.

Wilson’s success can be attributed to his strong interest and depth of knowledge in biochemistry and evolutionary biology, his insistence on the quantification of evolutionary phenomena, and his early recognition of new molecular techniques that could shed light on questions of evolutionary biology. After developing quantitative immunological methods, his lab was the first to recognize restriction endonuclease mapping analysis as a quantitative evolutionary genetic method, which led to his early use of DNA sequencing and of the then-nascent technique of PCR to obtain large DNA sets for genetic analysis of populations. He trained scores of undergraduate, graduate (34 people, 17 men and 17 women, received their doctoral degrees in his lab), and post-doctoral students in molecular evolutionary biology, including sabbatical visitors from six continents. His lab published more than 300 technical papers, and was recognized as a mecca for those wishing to enter the field of molecular evolution in the 1970s and 1980s.

The Allan Wilson Centre for Molecular Ecology and Evolution was established in 2002 in his honour to advance knowledge of the evolution and ecology of New Zealand and Pacific plant and animal life, and human history in the Pacific. The Centre is under the Massey University, at Palmerston North, New Zealand, and is a national collaboration involving the University of Auckland, Victoria University of Wellington, the University of Otago, University of Canterbury and the New Zealand Institute for Plant and Food Research.

A 41-minute documentary film of his life entitled Allan Wilson, Evolutionary: Biochemist, Biologist, Giant of Molecular Biology was released by Films Media Group in 2008.

 

Allan Charles Wilson. 18 October 1934 — 21 July 1991

Rebecca L. Cann

Department of Cell and Molecular Biology, University of Hawaii at Manoa, Biomedical Sciences Building T514, 1960 East–West Rd, Honolulu, HI 96822, USA

Abstract

Allan Charles Wilson was born on 18 October 1934 at Ngaruawahia, New Zealand. He died in Seattle, Washington, on 21 July 1991 while undergoing treatment for leukemia. Allan was known as a pioneering and highly innovative biochemist, helping to define the field of molecular evolution and establish the use of a molecular clock to measure evolutionary change between living species. The molecular clock, a method of measuring the timescale of evolutionary change between two organisms on the basis of the number of mutations that they have accumulated since last sharing a common genetic ancestor, was an idea initially championed by Émile Zuckerkandl and Linus Pauling (Zuckerkandl & Pauling 1962), on the basis of their observations that the number of changes in an amino acid sequence was roughly linear with time in the aligned hemoglobin proteins of animals. Although it is now not unusual to see the words ‘molecular evolution’ and ‘molecular phylogeny’ together, when Allan formed his own biochemistry laboratory in 1964 at the University of California, Berkeley, many scientists in the field of evolutionary biology considered these ideas complete heresy. Allan’s death at the relatively young age of 56 years left behind his wife, Leona (deceased in 2009), a daughter, Ruth (b. 1961), and a son, David (b. 1964), as well as his mother, Eunice (deceased in 2002), a younger brother, Gary Wilson, and a sister, Colleen Macmillan, along with numerous nieces, nephews and cousins in New Zealand, Australia and the USA. In this short span of time, he trained more than 55 doctoral students and helped launch the careers of numerous postdoctoral fellows.

Allan Charles Wilson, Biochemistry; Molecular Biology: Berkeley

1934-1991

Professor

The sudden death of Allan Wilson, of leukemia, on 21 July 1991, at the age of 56, and at the height of his powers, robbed the Berkeley campus and the international scientific community of one of its most active and respected leaders.

Read Full Post »

Evolution of Myoglobin and Hemoglobin

Author and Curator: Larry H. Bernstein, MD, FCAP 

Nitric oxide dioxygenase function and mechanism of flavohemoglobin, hemoglobin, myoglobin and their associated reductases

Paul R. Gardner
Journal of Inorganic Biochemistry Jan 2005;  99(1): 247–266
http://dx.doi.org/10.1016/j.jinorgbio.2004.10.003

Microbial flavohemoglobins (flavoHbs) and hemoglobins (Hbs) show large •NO dioxygenation rate constants ranging from 745 to 2900 μM−1 s−1, suggesting a primal •NO dioxygenase (NOD) (EC 1.14.12.17) function for the ancient Hb superfamily. Indeed, modern O2-transporting and storing mammalian red blood cell Hb and related muscle myoglobin (Mb) show vestigial •NO dioxygenation activity with rate constants of 34–89 μM−1 s−1. In support of a NOD function, microbial flavoHbs and Hbs catalyze O2-dependent cellular •NO metabolism, protect cells from •NO poisoning, and are induced by •NO exposures. Red blood cell Hb, myocyte Mb, and flavoHb-like activities metabolize •NO in the vascular lumen, muscle, and other mammalian cells, respectively, decreasing •NO signalling and toxicity. HbFe(III)–OO•, HbFe(III)–OONO and protein-caged [HbFe(III)–O• •NO2] are proposed intermediates in a reaction mechanism that combines both O-atoms of O2 with •NO to form nitrate and HbFe(III). A conserved Hb heme pocket structure facilitates the dioxygenation reaction, and efficient turnover is achieved through the univalent reduction of HbFe(III) by associated reductases. High-affinity flavoHb and Hb heme ligands, and other inhibitors, may find application as antibiotics and antitumor agents that enhance the toxicity of immune cell-derived •NO, or as vasorelaxants that increase •NO signaling.

NO-NOD-NOR map:
http://ars.els-cdn.com/content/image/1-s2.0-S016201340400296X-gr1.sml
http://ars.els-cdn.com/content/image/1-s2.0-S016201340400296X-gr2.sml
http://ars.els-cdn.com/content/image/1-s2.0-S016201340400296X-gr3.sml
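As a back-of-envelope check on the kinetics quoted in the abstract above: with heme in excess over •NO, the decay of •NO is pseudo-first-order, so its half-life is ln 2 / (k·[heme]). The heme concentration below is a hypothetical value chosen only for illustration:

```python
# Pseudo-first-order half-life of NO from the second-order rate constants
# quoted in the abstract. The heme concentration is a hypothetical value.

import math

def no_half_life_s(k_per_uM_per_s: float, heme_uM: float) -> float:
    """NO half-life (s) given a second-order rate constant k (uM^-1 s^-1)
    and an excess heme concentration (uM): t1/2 = ln 2 / (k * [heme])."""
    return math.log(2) / (k_per_uM_per_s * heme_uM)

heme_uM = 10.0  # hypothetical heme concentration, for illustration only
for name, k in [("flavoHb (fast end, 2900 uM^-1 s^-1)", 2900.0),
                ("red-cell Hb (slow end, 34 uM^-1 s^-1)", 34.0)]:
    print(f"{name}: t1/2 = {no_half_life_s(k, heme_uM):.2e} s")
```

The roughly hundredfold spread in the rate constants translates directly into a hundredfold spread in •NO lifetime, which is the quantitative sense in which the mammalian activity is "vestigial" relative to the microbial flavoHbs.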

The evolution of the globin family genes: Concordance of stochastic and augmented maximum parsimony genetic distances for α hemoglobin, β hemoglobin and myoglobin phylogenies
R Holmquist, TH Jukes, H Moise, M Goodman, GW Moore
Journal of Molecular Biology Jul 1976; 105(1): 39–74
http://dx.doi.org/10.1016/0022-2836(76)90194-7

We compare the amino acid sequences of 70 globins, representing the following families: (a) α hemoglobin chains; (b) β hemoglobin chains; (c) myoglobins; (d) two lamprey, a mollusc, and two plant globins. The comparisons show a convergence of maximal and minimal estimates of genetic differences as calculated respectively by the stochastic and maximum parsimony procedures, thus demonstrating for the first time the logical consistency and complementarity of the two procedures. Evolutionary rates are non-constant, varying over a range of 1 to 75 nucleotide replacements per 100 codons per 10^8 years. These rate differentials are resolved into two components: (a) change in the number of codon sites free to fix mutations during the period of divergence of the species involved; (b) change in fixation intensity at each site. These two components also show non-uniformity along different lineages. Positive Darwinian natural selection can bring about an increase in either component, and negative or stabilizing selection in protein evolution can lead to decreases. Accelerated rates of globin evolution were found in lineages of cold-blooded vertebrates, some marsupials, and early placental mammals, while slower rates were found in warm-blooded vertebrates, especially higher primates. One manifestation of negative selection in the globins is that minimal 3-base type amino acid replacements occur less frequently than would be expected if base replacements had occurred and were accepted at random. The selection against these replacements is not due to atypical behavior with respect to the change in electrical charge involved in the replacements. Interestingly, the globins from the lamprey, sea hare and the legumes are as distant from one another as are α-hemoglobin and β-hemoglobin from myoglobin.

Hemoglobin Orthologs
http://www.bio.davidson.edu/Courses/Molbio/MolStudents/spring2005/Heiner/ortholog.html

Orthologs are sequences of genes that evolved from a common ancestor and can be traced evolutionarily through different species. By comparing the ortholog sequences of a specific gene between many species, the amino acid sequences which are conserved can be determined. These highly conserved sequences are important, because they provide information on which amino acids are essential to the protein structure and function.
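The ortholog-comparison idea above can be sketched as a small script: given aligned sequences of equal length, report the positions where every species carries the same residue. The aligned fragments below are invented toy data, not real globin sequences:

```python
# Minimal sketch of finding conserved sites in an alignment of orthologs.
# Input sequences must be pre-aligned (equal length, gaps as '-').
# The toy alignment is invented data for illustration only.

def conserved_positions(alignment: dict[str, str]) -> list[int]:
    """0-based indices where all aligned sequences share one residue."""
    seqs = list(alignment.values())
    length = len(seqs[0])
    assert all(len(s) == length for s in seqs), "sequences must be aligned"
    return [
        i for i in range(length)
        if seqs[0][i] != "-" and all(s[i] == seqs[0][i] for s in seqs)
    ]

toy_alignment = {     # invented fragments, mimicking an invariant histidine
    "human": "LSALSDLHAHK",
    "mouse": "LASVSTVHTSK",
    "fish":  "LTTLSDLHAYK",
}
print(conserved_positions(toy_alignment))   # invariant sites, e.g. the shared H
```

Sites that survive this filter across distant species are the candidates for being structurally or functionally essential, exactly the reasoning applied to the globin histidines below.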

Evolution of Hemoglobin

Hemoglobin is derived from the myoglobin protein, and ancestral species just had myoglobin for oxygen transport. 500 million years ago the myoglobin gene duplicated and part of the gene became hemoglobin. Lampreys are the most ancestral animal to have hemoglobin, and the ancestral version was composed of dimers instead of tetramers and was only weakly cooperative. 100 million years later, the hemoglobin gene duplicated again forming alpha and beta subunits. This form of derived hemoglobin is found in bony fish, reptiles, and mammals, which all have both alpha and beta subunits to form a tetramer (Mathews et al., 2000).

Conserved Sequences

When the amino acid sequences of myoglobin, the hemoglobin alpha subunit, and the hemoglobin beta subunit are compared, there are several amino acids that remain conserved between all three globins (Mathews et al., 2000). These amino acid sequences are considered truly essential, because they have remained unchanged throughout evolution, and therefore are fundamental to the function of the protein. These essential amino acids can be seen in Figure 1, which compares myoglobin, and the alpha and beta subunits of hemoglobin. The histidines in helix F8 and helix E7 are highly conserved. These histidines are located proximally and distally to the heme molecule and keep the heme molecule in place within the hemoglobin protein as seen in Figure 2 (Mathews et al., 2000). This shows that the position of the heme molecule within the globin protein is essential to its function. Likewise, the amino acids in the FG region are also highly conserved. This region of the protein is essential to the conformational change between the T and R states (Mathews et al., 2000). Additionally, the amino acids at the alpha-beta subunit interfaces are highly conserved, because they also affect the conformational change between the subunits, which regulates oxygen affinity and cooperativity. In general, the most highly conserved sequences are located within the interior of the hemoglobin protein where the subunits contact each other (Gribaldo et al., 2003).

http://www.bio.davidson.edu/Courses/Molbio/MolStudents/spring2005/Heiner/figure1.jpg

Figure 1: The amino acid sequences of myoglobin, alpha subunit of hemoglobin, and beta subunit of hemoglobin. The amino acid sequences highlighted in tan are conserved between all three globins and the amino acid sequences highlighted in gray are conserved between alpha and beta hemoglobin. http://www.aw-bc.com/mathews/ch07/fi7p11.htm (permission pending).

http://www.bio.davidson.edu/Courses/Molbio/MolStudents/spring2005/Heiner/figure2.jpg

Figure 2: A cartoon drawing of the structure of hemoglobin around the heme molecule. The histidines in helix F8 and E7 interact directly with the heme molecule. http://www.aw-bc.com/mathews/ch07/fi7p5.htm (permission pending).

Alpha Subunit of Hemoglobin

The alpha subunit of hemoglobin has several amino acid sequences that are conserved across many species and are essential to its function. The alpha subunit of hemoglobin is encoded by the 2 genes HBA1 and HBA2, both located on chromosome 16 (GeneCard, 2005). To determine which amino acid sequences are conserved, I compared the orthologs of HBA1 in Homo sapiens (humans) to 5 additional species including Xenopus tropicalis (African clawed frog), Danio rerio (Zebra fish), Gallus gallus (Red jungle fowl), Mus musculus (mouse), and Rattus norvegicus (rat) using the Ensembl program. Figure 3 shows the 6 orthologs aligned and the important conserved regions highlighted. The stars indicate amino acids that are conserved between all of the species. As a general observation, the mouse ortholog of HBA is the most similar to human HBA, because it is the most evolutionarily related. The amino acid sequences that are conserved in all globin proteins (highlighted in blue) can be seen in Figure 3. There are also several conserved amino acids that are specifically important to HBA structure (highlighted in red) including: the phenylalanine (F) at position 44, which is in direct contact with the heme group; tyrosine (Y) at position 142, which stabilizes the hemoglobin molecule by forming hydrogen bonds between two of the helices; and glycine (G) at position 26, which is small and therefore allows two of the helices to approach each other, which is important to the structure of hemoglobin (Natzke, 1998). Additionally, there are several residues in the alpha subunit that are involved in the movement of the alpha and beta subunits (also highlighted in red) including: the tyrosine (Y) at position 43, which interacts with the beta subunit during the R state, and the arginine (R) at position 143, which interacts with the beta subunit during the T state (Gribaldo et al., 2003).

Mutations

Looking at the effects of mutations in a gene is also a good way to determine the function of highly conserved sequences. In hemoglobin, deleterious mutations are most common in the heme pockets of the protein and in the alpha and beta subunit interfaces (Mathews et al., 2000). There are several key mutations in highly conserved portions of HBA (highlighted in yellow) including: the substitution of histidine (H) at position 88 to tyrosine (Y), which disrupts the heme molecule, leading to decreased oxygen affinity; the substitution of arginine (R) at position 143 to histidine (H), which eliminates a bond in the T state and therefore favors the R state, resulting in increased oxygen affinity; the substitution of proline (P) at position 97 to arginine (R), which alters the alpha-beta contact region and results in the disassociation of the hemoglobin complex; and the substitution of leucine (L) at position 138 for proline (P), which interrupts the helix formation and also results in the disassociation of the hemoglobin complex (Mathews et al., 2000).

Bar-headed Goose Hemoglobin

The bar-headed goose has hemoglobin that is specifically adapted to high altitudes. The bar-headed goose hemoglobin has an increased oxygen affinity which allows it to live in low oxygen pressure environments (Liang et al., 2001). This increased oxygen affinity is the result of a mutation at position 121 in the alpha subunit, which is highly conserved in other species, from proline to alanine, as seen in Figure 4 (Liang et al., 2001). This substitution leaves a two-carbon gap between the alpha-beta dimer, which relaxes the T structure and allows it to bind oxygen more readily under lower pressures (Jessen et al., 1991). Thus, comparing orthologs can also be used to explain differences in the oxygen binding capabilities of hemoglobin in different species.

References

Ensembl. Ensembl Genome Browser. http://www.ensembl.org/. Accessed March 2005.

GeneCard. 2005. GeneCard for HBA1. http://genome-www.stanford.edu/cgi-bin/genecards/carddisp?HBA1&search=HBA&suff=txt. Accessed March 2005.

Gribaldo, Simonetta, Didier Casane, Philippe Lopez and Herve Philippe. 2003. Functional Divergence Prediction from Evolutionary Analysis: A Case Study of Vertebrate Hemoglobin. Molecular Biology and Evolution 20 (11): 1754-1759.

Jessen, Timm H et al. 1991. Adaptation of bird hemoglobins to high altitudes: Demonstration of molecular mechanism by protein engineering. Proceedings of the National Academy of Sciences USA 88: 6519-6522.

Liang, Yuhe et al. 2001. The Crystal Structure of Bar-headed Goose Hemoglobin in Deoxy Form: The Alloseteric Mechanism of a Hemoglobin Species with High Oxygen Affinity. Journal of Molecular Biology 313: 123-137.

Mathews, Christopher, Kensal Van Holde and Kevin Ahern. 2000. Biochemistry, 3rd edition. http://www.aw-bc.com/mathews/ch07/c07emhp.htm. Accessed March 2005.

Natzke, Lisa. 1998. Hemoglobin. http://biology.kenyon.edu/BMB/Chime/Lisa/FRAMES/hemetext.htm. Accessed March 2005.

Divergence pattern and selective mode in protein evolution: the example of vertebrate myoglobins and hemoglobin chains.
Otsuka J1, Miyazaki K, Horimoto K.
J Mol Evol. 1993 Feb; 36(2):153-81.

The evolutionary relation of vertebrate myoglobin and the hemoglobin chains, including the agnathan hemoglobin chain, is investigated on the basis of a new view of amino acid changes that is developed by canonical discriminant analysis of amino acid residues at individual sites. In contrast to the clear discrimination of amino acid residues between myoglobin, hemoglobin alpha chain, and hemoglobin beta chain in warm-blooded vertebrates, the three types of globins in the lower class of vertebrates show so much variation that they are not well discriminated. This is seen particularly at the sites that are ascertained in mammals to carry the amino acid residues participating in stabilizing the monomeric structure in myoglobin and the residues forming the subunit contacts in hemoglobin. At these sites, agnathan hemoglobin chains are evaluated to be intermediate between the myoglobin and hemoglobin chains of gnathostomes. The variation in the phylogenetically lower class of globins is also seen in the internal region; there the amino acid residues of myoglobin and hemoglobin chains in the phylogenetically higher class exhibit an example of parallel evolution at the molecular level. New quantities, the distance of sequence property between discriminated groups and the variation within each group, are derived from the values of discriminant functions along the peptide chain, and this set of quantities simply describes an overall feature of globins such that the distinction between the three types of globins has become clearer as the vertebrates have evolved to become jawed, landed, and warm-blooded. This result strongly suggests that the functional constraint on the amino acid sequence of a protein is changed by living conditions and that severe conditions constitute a driving force that creates a distinctive protein from a less-constrained protein.

The globin gene repertoire of lampreys: Convergent evolution of hemoglobin and myoglobin in jawed and jawless vertebrates
K Schwarze, KL Campbell, T Hankeln, JF Storz, FG Hoffmann and T Burmester
Mol Biol Evol (2014). http://dx.doi.org/10.1093/molbev/msu216

Agnathans (jawless vertebrates) occupy a key phylogenetic position for illuminating the evolution of vertebrate anatomy and physiology. Evaluation of the agnathan globin gene repertoire can thus aid efforts to reconstruct the origin and evolution of the globin genes of vertebrates, a superfamily that includes the well-known model proteins hemoglobin and myoglobin. Here we report a comprehensive analysis of the genome of the sea lamprey (Petromyzon marinus), which revealed 23 intact globin genes and two hemoglobin pseudogenes. Analyses of the genome of the Arctic lamprey (Lethenteron camtschaticum) identified 18 full-length and five partial globin gene sequences. The majority of the globin genes in both lamprey species correspond to the known agnathan hemoglobins. Both genomes harbor two copies of globin X, an ancient globin gene that has a broad phylogenetic distribution in the animal kingdom. Surprisingly, we found no evidence for an ortholog of neuroglobin in the lamprey genomes. Expression and phylogenetic analyses identified an ortholog of cytoglobin in the lampreys; in fact, our results indicate that cytoglobin is the only orthologous vertebrate-specific globin that has been retained in both gnathostomes and agnathans. Notably, we also found two globins that are highly expressed in the heart of P. marinus, thus representing functional myoglobins. Both genes have orthologs in L. camtschaticum. Phylogenetic analyses indicate that these heart-expressed globins are not orthologous to the myoglobins of jawed vertebrates (Gnathostomata), but originated independently within the agnathans. The agnathan myoglobin and hemoglobin proteins form a monophyletic group to the exclusion of functionally analogous myoglobins and hemoglobins of gnathostomes, indicating that specialized respiratory proteins for O2 transport in the blood and O2 storage in the striated muscles evolved independently in both lineages.
This dual convergence of O2-transport and O2-storage proteins in agnathans and gnathostomes involved the convergent co-option of different precursor proteins in the ancestral globin repertoire of vertebrates.

Globin evolution
Kent Holsinger
http://darwin.eeb.uconn.edu/eeb348/lecturenotes/molevol-multigene/node2.html

I’ve just pointed out the distinction between myoglobin and hemoglobin. You may also remember that hemoglobin is a multimeric protein consisting of four subunits, 2 α subunits and 2 β subunits. What you may not know is that in humans there are actually two types of α hemoglobin and four types of β hemoglobin, each coded by a different genetic locus (see Table 1). The five α-globin loci (α1, α2, ζ, and two non-functional pseudogenes) are found in a cluster on chromosome 16. The six β-globin loci (ε, γG, γA, δ, β, and a pseudogene) are found in a cluster on chromosome 11. The myoglobin locus is on chromosome 22.

Table 1: Human hemoglobins arranged in developmental sequence. Adult hemoglobins composed of 2 α and 2 δ subunits typically account for less than 3% of hemoglobins in adults (http://sickle.bwh.harvard.edu/hbsynthesis.html).

Not only do we have all of these different types of globin genes in our bodies, they’re all related to one another. Comparative sequence analysis has shown that vertebrate myoglobin and hemoglobins diverged from one another about 450 million years ago. Figure 1 shows a phylogenetic analysis of globin genes from humans, mice, and a variety of Archaea. Focus your attention on the part of the tree that has human and mouse sequences. You’ll notice two interesting things:

Human and mouse neuroglobins (Ngb) are more closely related to one another than they are to other globins, even those from the same species. The same holds true for cytoglobins (Cyg) and myoglobins (Mb).

Within the hemoglobins, only mouse β-globin (Mouse HbB) is misplaced. All other α- and β-globins group with the corresponding mouse and human loci.

This pattern is exactly what we expect as a result of duplication and divergence. Up to the time that a gene becomes duplicated, its evolutionary history matches the evolutionary history of the organisms containing it. Once there are duplicate copies, each follows an independent evolutionary history. Each traces the history of speciation and divergence. And over long periods duplicate copies of the same gene share more recent common ancestry with copies of the same gene in a different species than they do with duplicate genes in the same genome.
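The duplication-and-divergence pattern above can be sketched with toy sequences. The fragments below are illustrative, not real alignments; the point is that orthologs (the same gene in two species) should differ at fewer sites than paralogs (two genes in one genome) whenever the duplication predates the species split.

```python
# Minimal sketch (toy, invented fragments): after an ancient duplication,
# orthologs should differ less than paralogs, because the alpha/beta
# duplication predates the human/mouse split.
def differences(a, b):
    """Count positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical alpha- and beta-globin fragments for two species
human_alpha = "VLSPADKTNV"
mouse_alpha = "VLSGEDKSNI"
human_beta  = "VHLTPEEKSA"

ortholog = differences(human_alpha, mouse_alpha)   # same locus, two species
paralog  = differences(human_alpha, human_beta)    # two loci, one genome

print(ortholog < paralog)  # the duplication is older than the species split
```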

Figure 1: Evolution of globin genes in Archaea and mammals (from [2]).

http://darwin.eeb.uconn.edu/eeb348/lecturenotes/molevol-multigene/img11.png


A history of duplication and divergence in multigene families makes it important to distinguish between two classes of related loci: those that represent the same locus in different species and between which divergence is a result of species divergence are orthologs. Those that represent different loci and between which divergence occurred after duplication of an ancestral gene are paralogs. The β-globin loci of humans and chickens are orthologous. The α- and β-globin loci of any pair of taxa are paralogous.

As multigene families go, the globin family is relatively simple and easy to understand. There are only about a dozen loci involved, one isolated locus (myoglobin) and two clusters of loci (α- and β-globins). You’ll find a diagram of the β-globin cluster in Figure 2. As you can see, the β-globins are not only evolutionarily related to one another, they occur relatively close to one another on chromosome 11 in humans.

Figure 2: Structure of the human β-globin gene cluster. % identity refers to similarity to the mouse β-globin sequence. From http://globin.cse.psu.edu/html/pip/betaglobin/iplot.ps (retrieved 28 Nov 2006).

Other families are far more complex. Class I and class II MHC loci, for example, are part of the same multigene family. Moreover, immunoglobulins, T-cell receptors, and MHC loci are part of a larger superfamily of genes, i.e., all are ultimately derived from a common ancestral gene by duplication and divergence. Table 2 lists a few examples of multigene families and superfamilies in the human genome and the number of proteins produced.

Table 2: A few gene families from the human genome (adapted from [5,6]).

Distribution and conservation of sequence

https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcRHcfUpQ09ufj8cleSgnhDfQVUEHsTvnYGNxKaPa5wxMqNFzFU6

Distribution and conservation of sequence motifs throughout mammalian beta-globin gene clusters. A detailed map of the gene cluster is shown on the numbered line.


the evolutionary history of three hypothetical living species (C, D, and E), inferred by comparing amino-acid differences in their myoglobin molecules.

http://media-1.web.britannica.com/eb-media/98/52998-004-A8682A5B.jpg

oxyhemoglobin dissociation curve


much higher affinity for oxygen than haemoglobin.


http://i.stack.imgur.com/WQJe9.jpg

myoglobin has a much higher oxygen affinity than Hb
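The affinity difference shown in the dissociation curves above can be sketched with the Hill equation. The P50 and Hill-coefficient values below are approximate textbook figures, not measurements from this article.

```python
# Hill-equation sketch of the dissociation curves above. P50 and Hill
# coefficient values are rough textbook approximations, not data.
def saturation(pO2, p50, n):
    """Fractional O2 saturation: Y = pO2^n / (p50^n + pO2^n)."""
    return pO2**n / (p50**n + pO2**n)

# Myoglobin: hyperbolic (n = 1), very high affinity (P50 ~ 2.8 torr)
# Hemoglobin: sigmoidal (n ~ 2.8), lower affinity (P50 ~ 26 torr)
for pO2 in (5, 26, 100):   # low-tissue, Hb half-saturation, arterial (torr)
    mb = saturation(pO2, 2.8, 1.0)
    hb = saturation(pO2, 26.0, 2.8)
    print(pO2, round(mb, 2), round(hb, 2))
```

At low oxygen tensions myoglobin remains largely saturated while hemoglobin unloads, which is exactly why the storage protein needs the higher affinity.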

Evolution of Myoglobin / Hemoglobin Proteins

Primitive Globin – Very primitive animals had only a myoglobin-like, single-chain ancestral globin for oxygen storage and were so small that they did not require a transport protein. Roughly 500 million years ago the ancestral myoglobin gene was duplicated. One copy became the ancestor of the myoglobin genes of all higher organisms. The other copy evolved into the gene for an oxygen transport protein and gave rise to the hemoglobins.

Most Primitive Hemoglobin – The most primitive animals to possess hemoglobin are the lampreys. Lamprey hemoglobin can form dimers but not tetramers and is only weakly cooperative. It represents a first step toward allosteric binding. Subsequently a second gene duplication must have occurred, giving rise to the ancestors of the present-day α and β hemoglobin chain families. This must have happened about 400 million years ago, at about the time of divergence of the sharks and bony fish. The evolutionary line of the bony fish led to the reptiles and eventually to the mammals, all carrying genes for both α and β globins and capable of forming tetrameric α2β2 hemoglobins. Further gene duplications have occurred in the hemoglobin line, leading to the embryonic forms ζ and ε, the fetal form γ, and the infant form δ (Figure 7.22).

Conserved Amino Acid Sequences – During the long evolution of the myoglobin/hemoglobin family of proteins, only a few amino acid residues have remained invariant (Figure 7.11). They include the histidines proximal and distal to the heme iron (F8 and E7; see Figure 7.5b) and Val FG5, which has been implicated in the hemoglobin deoxy/oxy conformation change. These may mark the truly essential positions in the molecule. Other regions highly conserved in hemoglobins are those near the α1–β2 and α2–β1 contacts. These parts of the molecule are most directly involved in the allosteric conformational change.

 http://web.squ.edu.om/med-lib/med_cd/e_cds/Electronic%20Study%20Guide%20of%20Biochemistry/ch07/c07emhp.htm
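Finding invariant positions of the kind described above amounts to scanning alignment columns for a residue shared by every sequence. A minimal sketch, using short invented fragments rather than real globin alignments:

```python
# Sketch of how invariant positions are found: scan each column of a
# (toy, invented) alignment and keep positions identical in every sequence.
alignment = [
    "HAKVELHISF",   # hypothetical aligned globin fragments; the fully
    "HGKVELHASF",   # conserved columns stand in for residues like the
    "HAKVELHTSF",   # proximal/distal histidines F8 and E7
]

invariant = [
    i for i in range(len(alignment[0]))
    if len({seq[i] for seq in alignment}) == 1
]
print(invariant)   # 0-based indices of fully conserved columns
```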

Read Full Post »

More Complexity in Protein Evolution

Author and Curator: Larry H. Bernstein, MD, FCAP 

Lactate dehydrogenase like crystallin: a potentially protective shield for Indian spiny-tailed lizard (Uromastix hardwickii) lens against environmental stress?
A Atta, A Ilyas, Z Hashim, A Ahmed and S Zarina
The Protein Journal 2014; 33(2), p. 128-34.
http://dx.doi.org/10.1007/s10930-014-9543-4

Taxon specific lens crystallins in vertebrates are either similar or identical to various metabolic enzymes. These bifunctional crystallins serve as structural proteins in the lens along with their catalytic role. In the present study, we have partially purified and characterized lens crystallin from the Indian spiny-tailed lizard (Uromastix hardwickii). We have found lactate dehydrogenase (LDH) activity in the lens, indicating the presence of an enzyme crystallin with dual functions. Taxon specific lens crystallins are products of gene sharing or gene duplication, phenomena in which a pre-existing enzyme is recruited as a lens crystallin in addition to its structural role. In gene sharing, the same gene adopts a refractive role in the lens without modification or loss of its pre-existing function. Apart from the conventional role of a structural protein, the LDH activity containing crystallin in the Uromastix hardwickii lens is likely to have adaptive characteristics, offering protection against the toxic effects of oxidative stress and ultraviolet light, hence justifying its recruitment. Taxon specific crystallins may serve as good models to understand the structure-function relationship of these proteins.

αB-Crystallin and 27-kd Heat Shock Protein Are Regulated by Stress Conditions in the Central Nervous System and Accumulate in Rosenthal Fibers
T Iwaki, A Iwaki, J Tateishi, Y Sakaki, and JE Goldman
Am J Pathol 1993; 143(2):487-495.

To understand the significance of the accumulation of αB-crystallin in Rosenthal fibers within astrocytes, the expression and metabolism of αB-crystallin in glioma cell lines were examined under the conditions of heat and oxidative stress. αB-crystallin mRNA was increased after both stresses, and αB-crystallin protein moved from a detergent-soluble to a detergent-insoluble form. In addition, Western blotting of Alexander’s disease brain homogenates revealed that the 27-kd heat shock protein (HSP27), which is related to αB-crystallin, accumulates along with αB-crystallin. The presence of HSP27 in Rosenthal fibers was directly demonstrated by immunohistochemistry. Our results suggest that astrocytes in Alexander’s disease may be involved in an as yet unknown kind of stress reaction that causes the accumulation of αB-crystallin and HSP27 and results in Rosenthal fiber formation.

α-Crystallin can function as a molecular chaperone
Joseph Horwitz
Proc. Natl. Acad. Sci. USA Nov 1992; 89: 10449-10453. Biochemistry

The α-crystallins (αA and αB) are major lens structural proteins of the vertebrate eye that are related to the small heat shock protein family. In addition, crystallins (especially αB) are found in many cells and organs outside the lens, and αB is overexpressed in several neurological disorders and in cell lines under stress conditions. Here I show that α-crystallin can function as a molecular chaperone. Stoichiometric amounts of αA and αB suppress thermally induced aggregation of various enzymes. In particular, α-crystallin is very efficient in suppressing the thermally induced aggregation of β- and γ-crystallins, the two other major mammalian structural lens proteins. α-Crystallin was also effective in preventing aggregation and in refolding guanidine hydrochloride-denatured γ-crystallin, as judged by circular dichroism spectroscopy. My results thus indicate that α-crystallin refracts light and protects proteins from aggregation in the transparent eye lens and that in nonlens cells α-crystallin may have other functions in addition to its capacity to suppress aggregation of proteins.

Gene sharing by δ-crystallin and argininosuccinate lyase
J Piatigorsky, WE O’Brien, BL Norman, K Kalumuck, GJ Wistow, T Borras, et al.
Proc. Natl. Acad. Sci. USA  May 1988; 85: 3479-3483. Evolution.

The lens structural protein δ-crystallin and the metabolic enzyme argininosuccinate lyase (ASL; L-argininosuccinate arginine-lyase, EC 4.3.2.1) have striking sequence similarity. We have demonstrated that duck δ-crystallin has enormously high ASL activity, while chicken δ-crystallin has lower but significant activity. The lenses of these birds had much greater ASL activity than other tissues, suggesting that ASL is being expressed at unusually high levels as a structural component. In Southern blots of human genomic DNA, chicken δ1-crystallin cDNA hybridized only to the human ASL gene; moreover, the two chicken δ-crystallin genes accounted for all the sequences in the chicken genome able to cross-hybridize with a human ASL cDNA, with preferential hybridization to the δ2 gene. Correlations of enzymatic activity and recent data on mRNA levels in the chicken lens suggest that ASL activity depends on expression of the δ2-crystallin gene. The data indicate that the same gene, at least in ducks, encodes two different functions, an enzyme (ASL) and a structural protein (δ-crystallin), although in chickens specialization and separation of functions may have occurred.

Gecko ι-crystallin: How cellular retinol-binding protein became an eye lens ultraviolet filter
PJ L Werten, Beate Roll, DMF van Aalten, and WW de Jong
PNAS Mar 2000; 97(7): 3282–3287. http://pnas.org/cgi/doi/10.1073/pnas.050500597

Eye lenses of various diurnal geckos contain up to 12% ι-crystallin. This protein is related to cellular retinol-binding protein type I (CRBP I) but has 3,4-didehydroretinol, rather than retinol, as a ligand. The 3,4-didehydroretinol gives the lens a yellow color, thus protecting the retina by absorbing short-wave radiation. ι-Crystallin could be either the gecko’s housekeeping CRBP I, recruited for an additional function in the lens, or the specialized product of a duplicated CRBP I gene. The finding of the same CRBP I-like sequence in lens and liver cDNA of the gecko Lygodactylus picturatus now supports the former option. Comparison with ι-crystallin of a distantly related gecko, Gonatodes vittatus, and with mammalian CRBP I, suggests that acquiring the additional lens function is associated with increased amino acid changes. Compared with the rat CRBP I structure, the ι-crystallin model shows reduced negative surface charge, which might facilitate the required tight protein packing in the lens. Other changes may provide increased stability, advantageous for a long-living lens protein, without frustrating its role as retinol transporter outside the lens. Despite a number of replacements in the ligand pocket, recombinant ι-crystallin binds 3,4-didehydroretinol and retinol with similar and high affinity (1.6 nM). Availability of ligand thus determines whether it binds 3,4-didehydroretinol, as in the lens, or retinol, in other tissues. ι-Crystallin presents a striking example of exploiting the potential of an existing gene without prior duplication.

Expression of βA3/A1-crystallin in the developing and adult rat eye
G Parthasarathy, Bo Ma, C Zhang, C Gongora, JS Zigler, MK Duncan, D Sinha
J Molec Histol 2011; 42(1): 59-69. http://dx.doi.org/10.1007/s10735-010-9307-1

Crystallins are very abundant structural proteins of the lens and are also expressed in other tissues. We have previously reported a spontaneous mutation in the rat βA3/A1-crystallin gene, termed Nuc1, which has a novel, complex, ocular phenotype. The current study was undertaken to compare the expression pattern of this gene during eye development in wild type and Nuc1 rats by in situ hybridization (ISH) and immunohistochemistry (IHC).
βA3/A1-crystallin expression was first detected in the eyes of both wild type and Nuc1 rats at embryonic (E) day 12.5 in the posterior portion of the lens vesicle, and remained limited to the lens fibers throughout fetal life.
After birth, βA3/A1-crystallin expression was also detected in the neural retina (specifically in the astrocytes and ganglion cells) and in the retinal pigmented epithelium (RPE).
This suggested that βA3/A1-crystallin is not only a structural protein of the lens, but has cellular function(s) in other ocular tissues.
In summary, expression of βA3/A1-crystallin is controlled differentially in various eye tissues with lens being the site of greatest expression.
Similar staining patterns, detected by ISH and IHC, in wild type and Nuc1 animals suggest that functional differences in the protein, rather than changes in mRNA/protein level of expression likely account for developmental abnormalities in Nuc1.

βA3/A1-crystallin controls anoikis-mediated cell death in astrocytes by modulating PI3K/AKT/mTOR and ERK survival pathways through the PKD/Bit1-signaling axis
B Ma, T Sen, L Asnaghi, M Valapala, F Yang, S Hose, DS McLeod, Y Lu, et al.
Cell Death and Disease 2011; 2(10). http://dx.doi.org/10.1038/cddis.2011.100

During eye development, apoptosis is vital to the maturation of highly specialized structures such as the lens and retina. Several forms of apoptosis have been described, including anoikis, a form of apoptosis triggered by inadequate or inappropriate cell–matrix contacts. The anoikis regulators, Bit1 (Bcl-2 inhibitor of transcription-1) and protein kinase-D (PKD), are expressed in developing lens when the organelles are present in lens fibers, but are downregulated as active denucleation is initiated.
We have previously shown that in rats with a spontaneous mutation in the Cryba1 gene, coding for βA3/A1-crystallin, normal denucleation of lens fibers is inhibited. In rats with this mutation (Nuc1), both Bit1 and PKD remain abnormally high in lens fiber cells. To determine whether βA3/A1-crystallin has a role in anoikis, we induced anoikis in vitro and conducted mechanistic studies on astrocytes, cells known to express βA3/A1-crystallin.
The expression pattern of Bit1 in retina correlates temporally with the development of astrocytes. Our data also indicate that loss of βA3/A1-crystallin in astrocytes results in a failure of Bit1 to be trafficked to the Golgi, thereby suppressing anoikis. This loss of βA3/A1-crystallin also induces insulin-like growth factor-II, which increases cell survival and growth by modulating the phosphatidylinositol-3-kinase (PI3K)/AKT/mTOR and extracellular signal-regulated kinase pathways. We propose that βA3/A1-crystallin is a novel regulator of both life and death decisions in ocular astrocytes.

βA3/A1-crystallin in astroglial cells regulates retinal vascular remodeling during development
D Sinha, A Klise, Y Sergeev, S Hose, IA Bhutto, L Hackler Jr., T Malpic-llanos, et al.
Molec Cell Neurosci 2008; 37(1): 85-95.

http://dx.doi.org/10.1016/j.mcn.2007.08.016

Vascular remodeling is a complex process critical to development of the mature vascular system. Astrocytes are known to be indispensable for initial formation of the retinal vasculature; our studies with the Nuc1 rat provide novel evidence that these cells are also essential in the retinal vascular remodeling process.
Nuc1 is a spontaneous mutation in the Sprague–Dawley rat originally characterized by nuclear cataracts in the heterozygote and microphthalmia in the homozygote. We report here that the Nuc1 allele results from mutation of the βA3/A1-crystallin gene, which in the neural retina is expressed only in astrocytes. We demonstrate striking structural abnormalities in Nuc1 astrocytes with profound effects on the organization of intermediate filaments. While vessels form in the Nuc1 retina, the subsequent remodeling process required to provide a mature vascular network is deficient. Our data implicate βA3/A1-crystallin as an important regulatory factor mediating vascular patterning and remodeling in the retina.

A developmental defect in astrocytes inhibits programmed regression of the hyaloid vasculature in the mammalian eye
C Zhang, L Asnaghi, C Gongora, B Patek, S Hose, Bo Ma, MA Fard, L Brako, et al.
Eur J Cell Biol 2011; 90(5): 440-448.
http://dx.doi.org/10.1016/j.ejcb.2011.01.003

Previously we reported the novel observation that astrocytes ensheath the persistent hyaloid artery, both in the Nuc1 spontaneous mutant rat, and in human PFV (persistent fetal vasculature) disease (Developmental Dynamics 234:36–47, 2005). We now show that astrocytes isolated from both the optic nerve and retina of Nuc1 rats migrate faster than wild type astrocytes. Aquaporin 4 (AQP4), the major water channel in astrocytes, has been shown to be important in astrocyte migration. We demonstrate that AQP4 expression is elevated in the astrocytes in PFV conditions, and we hypothesize that this causes the cells to migrate abnormally into the vitreous where they ensheath the hyaloid artery. This abnormal association of astrocytes with the hyaloid artery may impede the normal macrophage-mediated remodeling and regression of the hyaloid system.

βA3/A1-crystallin is required for proper astrocyte template formation and vascular remodeling in the retina.
D Sinha, WJ Stark, M Valapala, IA Bhutto, M Cano, S Hose, GA Lutty, et al. Transgenic Research 2012; 21(5):1033-42.

Nuc1 is a spontaneous rat mutant resulting from a mutation in the Cryba1 gene, coding for βA3/A1-crystallin. Our earlier studies with Nuc1 provided novel evidence that astrocytes, which express βA3/A1-crystallin, have a pivotal role in retinal remodeling. The role of astrocytes in the retina is only beginning to be explored. One of the limitations in the field is the lack of appropriate animal models to better investigate the function of astrocytes in retinal health and disease. We have now established transgenic mice that overexpress the Nuc1 mutant form of Cryba1, specifically in astrocytes. Astrocytes in wild type mice show normal compact stellate structure, producing a honeycomb-like network. In contrast, in transgenics over-expressing the mutant (Nuc1) Cryba1 in astrocytes, bundle-like structures with abnormal patterns and morphology were observed. In the nerve fiber layer of the transgenic mice, an additional layer of astrocytes adjacent to the vitreous is evident. This abnormal organization of astrocytes affects both the superficial and deep retinal vascular density and remodeling. Fluorescein angiography showed increased venous dilation and tortuosity of branches in the transgenic retina, as compared to wild type. Moreover, there appear to be fewer interactions between astrocytes and endothelial cells in the transgenic retina than in normal mouse retina. Further, astrocytes overexpressing the mutant βA3/A1-crystallin migrate into the vitreous, and ensheath the hyaloid artery, in a manner similar to that seen in the Nuc1 rat. Together, these data demonstrate that developmental abnormalities of astrocytes can affect the normal remodeling process of both fetal and retinal vessels of the eye and that βA3/A1-crystallin is essential for normal astrocyte function in the retina.

Ontogeny of oxytocin and vasopressin receptor binding in the lateral septum in prairie and montane voles
Z. Wang, L.J. Young
Developmental Brain Research 1997; 104:191–195.

Adult prairie voles (Microtus ochrogaster) and montane voles (M. montanus) differ in the distribution of oxytocin (OT) and vasopressin (AVP) receptor binding in the brain. The present study examined the ontogenetic pattern of these receptor bindings in the lateral septum in both species to determine whether adult differences in the receptor binding are derived from a common pattern in development. In both species, OT and AVP receptor binding in the lateral septum were detected neonatally, increased during development, and reached the adult level at weaning (third week). The progression of OT and AVP receptors differed, as OT receptor binding increased continually until weaning while AVP receptor binding did not change in the first week, increased rapidly in the second week, and was sustained thereafter. For both receptors, the binding increased more rapidly in montane than in prairie voles, resulting in species differences in receptor binding at weaning and in adulthood. Together, these data indicate that OT and AVP could affect the brain during development in a peptide- and species-specific manner in voles.

Evolution of the vasopressin/oxytocin superfamily: Characterization of a cDNA encoding a vasopressin-related precursor, preproconopressin, from the mollusc Lymnaea stagnalis
RE Van Kesteren, AB Smit, RW Dirksi, ND De With, WPM Geraerts, and J Joosse
Proc. Natl. Acad. Sci. USA May 1992; 89: 4593-4597. Neurobiology

Although the nonapeptide hormones vasopressin, oxytocin, and related peptides from vertebrates and some nonapeptides from invertebrates share similarities in amino acid sequence, their evolutionary relationships are not clear. To investigate this issue, we cloned a cDNA encoding a vasopressin-related peptide, Lys-conopressin, produced in the central nervous system of the gastropod mollusc Lymnaea stagnalis. The predicted preproconopressin has the overall architecture of vertebrate preprovasopressins, with a signal peptide, Lys-conopressin, that is flanked at the C terminus by an amidation signal and a pair of basic residues, followed by a neurophysin domain. The Lymnaea neurophysin and the vertebrate neurophysins share high sequence identity, which includes the conservation of all 14 cysteine residues. In addition, the Lymnaea neurophysin possesses unique structural characteristics. It contains a putative N-linked glycosylation site at a position in the vertebrate neurophysins where a strictly conserved tyrosine residue, which plays an essential role in binding of the nonapeptide hormones, is found. The C-terminal copeptin homologous extension of the Lymnaea neurophysin has low sequence identity with the vertebrate counterparts and is probably not cleaved from the prohormone, as are the mammalian copeptins. The conopressin gene is expressed in only a few neurons in both pedal ganglia of the central nervous system. The conopressin transcript is present in two sizes, due to alternative use of polyadenylylation signals. The data presented here demonstrate that the typical organization of the prohormones of the vasopressin/oxytocin superfamily must have been present in the common ancestors of vertebrates and invertebrates.

A common allele in the oxytocin receptor gene (OXTR) impacts prosocial temperament and human hypothalamic-limbic structure and function
H Tost, B Kolachana, S Hakimi, H Lemaitre, BA Verchinski, et al.
PNAS Aug 3, 2010; 107(31): 13936–13941
http://pnas.org/cgi/doi/10.1073/pnas.1003296107

The evolutionarily highly conserved neuropeptide oxytocin is a key mediator of social and emotional behavior in mammals, including humans. A common variant (rs53576) in the oxytocin receptor gene (OXTR) has been implicated in social-behavioral phenotypes, such as maternal sensitivity and empathy, and with neuropsychiatric disorders associated with social impairment, but the intermediate neural mechanisms are unknown. Here, we used multimodal neuroimaging in a large sample of healthy human subjects to identify structural and functional alterations in OXTR risk allele carriers and their link to temperament. Activation and interregional coupling of the amygdala during the processing of emotionally salient social cues was significantly affected by genotype. In addition, evidence for structural alterations in key oxytocinergic regions emerged, particularly in the hypothalamus. These neural characteristics predicted lower levels of reward dependence, specifically in male risk allele carriers. Our findings identify sex-dependent mechanisms impacting the structure and function of hypothalamic-limbic circuits that are of potential clinical and translational significance.

Test of Association Between 10 SNPs in the Oxytocin Receptor Gene and Conduct Disorder
JT Sakai, TJ Crowley, MC Stallings, M McQueen, JK Hewitt, C Hopfer, et al.
Psychiatr Genet. 2012 Apr; 22(2): 99–102. http://dx.doi.org/10.1097/YPG.0b013e32834c0cb2

Animal and human studies have implicated oxytocin (OXT) in affiliative and prosocial behaviors. We tested whether genetic variation in the OXT receptor (OXTR) gene is associated with conduct disorder (CD).
Utilizing a family-based sample of adolescent probands recruited from an adolescent substance abuse treatment program, control probands and their families (total sample n=1,750), we conducted three tests of association with CD and 10 SNPs (single nucleotide polymorphisms) in the OXTR gene: (1) a family-based comparison utilizing the entire sample; (2) a within-Whites case-control comparison of adolescent patients with CD and controls without CD; and (3) a within-Whites case-control comparison of parents of patients and parents of controls.
Family-based association tests failed to show significant results (no result with p<0.05). After strictly correcting for the number of tests (α=0.002), adolescent patients with CD did not differ significantly from adolescent controls in genotype frequency for the OXTR SNPs tested; similarly, comparison of OXTR genotype frequencies for parents failed to differentiate patient and control family types, apart from a trend association for rs237889 (p=0.004). In this sample, 10 SNPs in the OXTR gene were not significantly associated with CD.
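A per-test threshold like α=0.002 is what a Bonferroni-style correction produces. The sketch below assumes a family-wise α of 0.05 spread over 25 tests; the exact test count in the study is an illustrative guess, not taken from the paper.

```python
# Sketch of a Bonferroni-style multiple-testing correction like the strict
# threshold quoted above. The test count (25) is an assumption for
# illustration, not the study's actual accounting.
def bonferroni_threshold(alpha_family, n_tests):
    """Per-test significance threshold controlling the family-wise error rate."""
    return alpha_family / n_tests

# A family-wise alpha of 0.05 over 25 tests gives a per-test alpha of 0.002
threshold = bonferroni_threshold(0.05, 25)
p_values = {"rs237889": 0.004}          # trend association reported above
print({snp: p < threshold for snp, p in p_values.items()})
```

Under this correction the rs237889 p-value of 0.004 misses the 0.002 cutoff, which is why the abstract can only call it a trend.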

Leu55Pro transthyretin accelerates subunit exchange and leads to rapid formation of hybrid tetramers
CA Keetch, EHC Bromley, MG McCammon, N Wang, J Christodoulou, CV Robinson
JBC Oct 11, 2005; M508753200. http://jbc.org/cgi/doi/10.1074/jbc.M508753200

Transthyretin is a tetrameric protein associated with the commonest form of systemic amyloid disease. Using isotopically labeled proteins and mass spectrometry we compared subunit exchange in wild-type transthyretin with that of the variant associated with the most aggressive form of the disease, Leu55Pro. Wild-type subunit exchange occurs via both monomers and dimers, while exchange via dimers is the dominant mechanism for the Leu55Pro variant. Since patients with the Leu55Pro mutation are heterozygous, expressing both proteins simultaneously, we also analyzed the subunit exchange reaction between wild-type and Leu55Pro tetramers. We found that hybrid tetramers containing two or three Leu55Pro subunits dominate in the early stages of the reaction. Surprisingly we also found that in the presence of Leu55Pro transthyretin, the rate of dissociation of wild-type transthyretin is increased. This implies interactions between the two proteins that accelerate the formation of hybrid tetramers, a result with important implications for transthyretin amyloidosis.
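The dominance of hybrids with two or three variant subunits is roughly what random mixing would predict. As an idealized sketch (ignoring the dimer-mediated exchange kinetics the study actually reports), equal amounts of freely exchanging subunits give a Binomial(4, 0.5) distribution of tetramer compositions:

```python
# Idealized sketch: if wild-type and Leu55Pro subunits mixed completely at
# random in equal amounts, tetramer compositions would follow Binomial(4, 0.5),
# so hybrids carrying 2 or 3 variant subunits would indeed be the most common.
# (The paper shows exchange proceeds largely via dimers; this ignores that.)
from math import comb

def tetramer_fraction(k, p_variant=0.5):
    """Fraction of tetramers carrying k variant subunits (k = 0..4)."""
    return comb(4, k) * p_variant**k * (1 - p_variant)**(4 - k)

fractions = [tetramer_fraction(k) for k in range(5)]
print([round(f, 4) for f in fractions])   # 1/16, 4/16, 6/16, 4/16, 1/16
```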

Beyond Genetic Factors in Familial Amyloidotic Polyneuropathy: Protein Glycation and the Loss of Fibrinogen’s Chaperone Activity
G da Costa, RA Gomes, A Guerreiro, E Mateus, E Monteiro, et al.
PLoS ONE 2011; 6(10): e24850. http://dx.doi.org/10.1371/journal.pone.0024850

Familial amyloidotic polyneuropathy (FAP) is a systemic conformational disease characterized by extracellular amyloid fibril formation from plasma transthyretin (TTR). This is a crippling, fatal disease for which liver transplantation is the only effective therapy. More than 80 TTR point mutations are associated with amyloidotic diseases and the most widely accepted disease model relates TTR tetramer instability with TTR point mutations. However, this model fails to explain two observations. First, native TTR also forms amyloid in systemic senile amyloidosis, a geriatric disease. Second, age at disease onset varies by decades for patients bearing the same mutation and some mutation carrier individuals are asymptomatic throughout their lives. Hence, mutations only accelerate the process and non-genetic factors must play a key role in the molecular mechanisms of disease. One of these factors is protein glycation, previously associated with conformational diseases like Alzheimer’s and Parkinson’s. The glycation hypothesis in FAP is supported by our previous discovery of methylglyoxal-derived glycation of amyloid fibrils in FAP patients. Here we show that plasma proteins are differentially glycated by methylglyoxal in FAP patients and that fibrinogen is the main glycation target. Moreover, we also found that fibrinogen interacts with TTR in plasma. Fibrinogen has chaperone activity which is compromised upon glycation by methylglyoxal. Hence, we propose that methylglyoxal glycation hampers the chaperone activity of fibrinogen, rendering TTR more prone to aggregation, amyloid formation and ultimately, disease.

Aromatic Sulfonyl Fluorides Covalently Kinetically Stabilize Transthyretin to Prevent Amyloidogenesis while Affording a Fluorescent Conjugate
NP Grimster, S Connelly, A Baranczak, J Dong, …, JW Kelly
J Am Chem Soc. 2013 Apr 17; 135(15): 5656–5668. http://dx.doi.org/10.1021/ja311729d

Molecules that bind selectively to a given protein and then undergo a rapid chemoselective reaction to form a covalent conjugate have utility in drug development. Herein, a library of 1,3,4-oxadiazoles substituted at the 2 position with an aryl sulfonyl fluoride and at the 5 position with a substituted aryl known to have high affinity for the inner thyroxine binding subsite of transthyretin (TTR) was conceived by structure-based design principles and chemically synthesized. When bound in the thyroxine binding site, most of the aryl sulfonyl fluorides react rapidly and chemoselectively with the pKa-perturbed K15 residue, kinetically stabilizing TTR and thus preventing amyloid fibril formation, known to cause polyneuropathy. Conjugation t50s range from 1 to 4 min, ~1400 times faster than the hydrolysis reaction outside the thyroxine binding site. X-ray crystallography confirms the anticipated binding orientation and sheds light on the sulfonyl fluoride activation leading to the sulfonamide linkage to TTR. A few of the aryl sulfonyl fluorides efficiently form conjugates with TTR in plasma. A few of the TTR covalent kinetic stabilizers synthesized exhibit fluorescence upon conjugation and therefore could have imaging applications as a consequence of the environment-sensitive fluorescence of the chromophore.

Identification of S-sulfonation and S-thiolation of a novel transthyretin Phe33Cys variant from a patient diagnosed with familial transthyretin amyloidosis
A Lim, T Prokaeva, ME Mccomb, LH Connors, M Skinner, and CE Costello
Protein Science 2003; 12:1775–1786.
http://proteinscience.org/cgi/doi/10.1110/ps.0349703.

Familial transthyretin amyloidosis (ATTR) is an autosomal dominant disorder associated with a variant form of the plasma carrier protein transthyretin (TTR). Amyloid fibrils consisting of variant TTR, wild-type TTR, and TTR fragments deposit in tissues and organs. The diagnosis of ATTR relies on the identification of pathologic TTR variants in plasma of symptomatic individuals who have biopsy proven amyloid disease. Previously, we have developed a mass spectrometry-based approach, in combination with direct DNA sequence analysis, to fully identify TTR variants. Our methodology uses immunoprecipitation to isolate TTR from serum, and electrospray ionization and matrix-assisted laser desorption/ionization mass spectrometry (MS) peptide mapping to identify TTR variants and posttranslational modifications. Unambiguous identification of the amino acid substitution is performed using tandem MS (MS/MS) analysis and confirmed by direct DNA sequence analysis. The MS and MS/MS analyses also yield information about posttranslational modifications. Using this approach, we have recently identified a novel pathologic TTR variant. This variant has an amino acid substitution (Phe → Cys) at position 33. In addition, like the Cys10 present in the wild type and in this variant, the Cys33 residue was both S-sulfonated and S-thiolated (conjugated to cysteine, cysteinylglycine, and glutathione). These adducts may play a role in the TTR fibrillogenesis.

Evolutionary relationships of lactate dehydrogenases (LDHs) from mammals, birds, an amphibian, fish, barley, and bacteria: LDH cDNA sequences from Xenopus, pig, and rat
S Tsuji, MA Qureshi, EW Hou, WM Fitch, and S S.-L. Li
Proc. Natl. Acad. Sci. USA Sep 1994; 91: 9392-9396. Evolution

The nucleotide sequences of the cDNAs encoding LDH (EC 1.1.1.27) subunits LDH-A (muscle), LDH-B (liver), and LDH-C (oocyte) from Xenopus laevis, LDH-A (muscle) and LDH-B (heart) from pig, and LDH-B (heart) and LDH-C (testis) from rat were determined. These seven newly deduced amino acid sequences and 22 other published LDH sequences, and three unpublished fish LDH-A sequences kindly provided by G. N. Somero and D. A. Powers, were used to construct the most parsimonious phylogenetic tree of these 32 LDH subunits from mammals, birds, an amphibian, fish, barley, and bacteria. There have been at least six LDH gene duplications among the vertebrates. The Xenopus LDH-A, LDH-B, and LDH-C subunits are most closely related to each other, and are then more closely related to vertebrate LDH-B than to LDH-A. Three fish LDH-As, as well as a single LDH of lamprey, also seem to be more related to vertebrate LDH-B than to land vertebrate LDH-A. The mammalian LDH-C (testis) subunit appears to have diverged very early, prior to the divergence of vertebrate LDH-A and LDH-B subunits, as reported previously.
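The "most parsimonious tree" criterion used above minimizes the number of substitutions needed to explain the observed sequences. For a fixed tree, the per-site minimum is given by Fitch's small-parsimony algorithm; a toy sketch with hypothetical residues (the subunit names and states are illustrative, not the paper's data):

```python
def fitch_score(tree, leaf_states):
    """Minimum number of substitutions at one sequence site on a fixed
    rooted binary tree (Fitch's small-parsimony algorithm). `tree` is a
    nested tuple of leaf names; `leaf_states` maps leaf -> residue.
    A toy illustration, not the full tree search used in the paper."""
    changes = 0
    def post(node):
        nonlocal changes
        if isinstance(node, str):                   # leaf: its observed state
            return {leaf_states[node]}
        left, right = post(node[0]), post(node[1])  # internal node
        common = left & right
        if common:                                  # states can be shared
            return common
        changes += 1                                # a substitution is forced
        return left | right
    post(tree)
    return changes

# One alignment column for four LDH subunits (hypothetical residues):
tree = (("LDH-A_pig", "LDH-A_fish"), ("LDH-B_pig", "LDH-B_duck"))
states = {"LDH-A_pig": "A", "LDH-A_fish": "A",
          "LDH-B_pig": "S", "LDH-B_duck": "S"}
print(fitch_score(tree, states))  # 1 substitution separates the two clades
```

A full parsimony analysis repeats this scoring over every alignment column and over many candidate topologies, keeping the tree with the lowest total.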

Evidence for neutral and selective processes in the recruitment of enzyme-crystallins in avian lenses
Graeme Wistow, Andrea Anderson, and Joram Piatigorsky
Proc. Natl. Acad. Sci. USA Aug 1990; 87: 6277-6280, Evolution

In apparent contrast to most other tissues, the ocular lenses in vertebrates show striking differences in protein composition between taxa, most notably in the recruitment of different enzymes as major structural proteins. This variability appears to be the result of at least partially neutral evolutionary processes, although there is also evidence for selective modification in molecular structure. Here we describe a bird, the chimney swift (Chaetura pelagica), that lacks δ-crystallin/argininosuccinate lyase, usually the major crystallin of avian lenses. Clearly, δ-crystallin is not specifically required for a functionally effective avian lens. Furthermore, the lens composition of the swift is more similar to that of the related hummingbirds than to that of the barn swallow (Hirundo rustica), suggesting that phylogeny is more important than environmental selection in the recruitment of crystallins. However, differences in ε-crystallin/lactate dehydrogenase-B sequence between swift and hummingbird and other avian and reptilian species suggest that selective pressures may also be working at the molecular level. These differences also confirm the close relationship between swifts and hummingbirds.

Enzyme/crystallins and extremely high pyridine nucleotide levels in the eye lens.
Zigler, J. S., Jr.; Rao, P. V.
FASEB J. 1991; 3: 223-225.

Taxon-specific crystallins are proteins present in high abundance in the lens of phylogenetically restricted groups of animals. Recently it has been found that these proteins are actually enzymes which the lens has apparently adopted to serve as structural proteins. Most of these proteins have been shown to be identical to, or related to, oxidoreductases. In guinea pig lens, which contains zeta-crystallin, a protein with an NADPH-dependent oxidoreductase activity, the levels of both NADPH and NADP+ are extremely high and correlate with the concentration of zeta-crystallin. We report here nucleotide assays on lenses from vertebrates containing other enzyme/crystallins. In each case where the enzyme/crystallin is a pyridine nucleotide-binding protein, the level of that particular nucleotide is extremely high in the lens. The presence of an enzyme/crystallin does not affect the lenticular concentrations of those nucleotides which are not specifically bound. The possibility that nucleotide binding may be a factor in the selection of some enzymes to serve as enzyme/crystallins is considered.

Comparison of stability properties of lactate dehydrogenase B4/ε-crystallin from different species
CEM Voorter, LTM Wintjes, PWH Heinstra, H Bloemendal and WW De Jong
Eur. J. Biochem. 1993; 211: 643-648

ε-Crystallin occurs as an abundant lens protein in many birds and in crocodiles and has been identified as heart-type lactate dehydrogenase (LDH-B4). Lens proteins have, due to their longevity and environmental conditions, extraordinary requirements for structural stability. To study lens protein stability, we compared various parameters of LDH-B4/ε-crystallin from lens and/or heart of duck, which has abundant amounts of this enzyme in its lenses, and of chicken and pig, which have no ε-crystallin. Measuring the thermostability of LDH-B4 from the different sources, the t50 values (temperature at which 50% of the enzyme activity remains after a 20-min period) for LDH-B4 from duck heart, duck lens and chicken heart were all found to be around 76°C, whereas pig heart LDH-B4 was less thermostable, having a t50 value of 62.5°C. A similar tendency was found with urea inactivation studies. Plotting the first-order rate constants obtained from inactivation kinetic plots against urea concentration, it was clear that LDH-B4 from pig heart was less stable in urea than the homologous enzymes from duck heart, chicken heart and duck lens. The duck and chicken enzymes were also much more resistant against proteolysis than the porcine enzyme. Therefore, it is concluded that avian LDH-B4 is structurally more stable than the homologous enzyme in mammals. This greater stability might make it suitable to function as an ε-crystallin, as in duck, but is not necessarily associated with high lens expression, as in chicken.
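The kinetic quantities used in this abstract (t50 and first-order inactivation rate constants) come from fitting activity decay curves. A minimal sketch with simulated data showing how a first-order rate constant, and from it a half-life, would be extracted (the numbers are hypothetical, not the paper's measurements):

```python
import math

def first_order_k(times_min, activities):
    """Least-squares slope of ln(relative activity) vs time gives the
    first-order inactivation rate constant k (min^-1). The data below
    are simulated, illustrating the analysis described in the abstract."""
    ys = [math.log(a) for a in activities]
    n = len(times_min)
    mx, my = sum(times_min) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(times_min, ys)) / \
            sum((x - mx) ** 2 for x in times_min)
    return -slope  # decay: slope is negative, k is positive

# Simulated activity decay with k = 0.05 min^-1:
times = [0, 5, 10, 15, 20]
acts = [math.exp(-0.05 * t) for t in times]
k = first_order_k(times, acts)
print(round(k, 3))                 # 0.05
print(round(math.log(2) / k, 1))   # half-life ~13.9 min
```

Repeating this fit at a series of urea concentrations, then plotting k against [urea], is the comparison the abstract describes between the pig and avian enzymes.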

Duck lens ε-crystallin and lactate dehydrogenase B4 are identical: A single-copy gene product with two distinct functions
W Hendriks, JWM Mulders, MA Bibby, C Slingsby, H Bloemendal, and WW De Jong
Proc. Natl. Acad. Sci. USA Oct 1988; 85: 7114-7118. Biochemistry

To investigate whether or not duck lens ε-crystallin and duck heart lactate dehydrogenase (LDH) B4 are the product of the same gene, we have isolated and sequenced cDNA clones of duck ε-crystallin. By using these clones we demonstrate that there is a single-copy Ldh-B gene in duck and in chicken. In the duck lens this gene is overexpressed, and its product is subject to posttranslational modification. Reconstruction of the evolutionary history of the LDH protein family reveals that the mammalian Ldh-C gene most probably originated from an ancestral Ldh-A gene and that the amino acid replacement rate in LDH-C is approximately 4 times the rate in LDH-A. Molecular modeling of LDH-B sequences shows that the increased thermostability of the avian tetramer might be explained by mutations that increase the number of ion pairs. Furthermore, the replacement of bulky side chains by glycines on the corners of the duck protein suggests an adaptation to facilitate close packing in the lens.

Lactate Dehydrogenase A as a Highly Abundant Eye Lens Protein in Platypus (Ornithorhynchus anatinus): Upsilon (υ)-Crystallin
T van Rheede,  R Amons, N Stewart, and WW de Jong
Mol. Biol. Evol. 2003; 20(06): 994–998. http://dx.doi.org/10.1093/molbev/msg116

Vertebrate eye lenses mostly contain two abundant types of proteins, the α-crystallins and the β/γ-crystallins. In addition, certain housekeeping enzymes are highly expressed as crystallins in various taxa. We now observed an unusual approximately 41-kDa protein that makes up 16% to 18% of the total protein in the platypus eye lens. Its cDNA sequence was determined, which identified the protein as muscle-type lactate dehydrogenase A (LDH-A). It is the first observation of LDH-A as a crystallin, and we designate it upsilon (υ)-crystallin. Interestingly, the related heart-type LDH-B occurs as an abundant lens protein, known as ε-crystallin, in many birds and crocodiles. Thus, two members of the ldh gene family have independently been recruited as crystallins in different higher vertebrate lineages, suggesting that they are particularly suited for this purpose in terms of gene regulatory or protein structural properties. To establish whether platypus LDH-A/υ-crystallin has been under different selective constraints as compared with other vertebrate LDH-A sequences, we reconstructed the vertebrate Ldh-A gene phylogeny. No conspicuous rate deviations or amino acid replacements were observed.

Isozymes, moonlighting proteins and promiscuous enzymes
M Nath Gupta, M Kapoor, AB Majumder and V Singh
Current Science Apr 2011; 100(8): 1152-1162.

The structures of isoenzymes differ, and yet they catalyse the same type of reaction. These structures evolved to suit physiological needs and are located in different parts of cells or tissues. Moonlighting proteins represent the same structure performing very different biological functions. Biological promiscuity reveals that the same active site can catalyse different types of reactions. These three different phenomena all illustrate similar evolutionary strategies. Viewed together, it emerges that biologists need to take a hard look at the ‘structure–function’ paradigm as well as the notions of biological specificity. Meanwhile, biotechnologists continue to exploit the opportunities which ‘nonspecificity’ offers.

Read Full Post »

Larry H Bernstein, MD, Reporter

Leaders in Pharmaceutical Intelligence

Lasker~Koshland
Special Achievement Award in Medical Science

Award Description

Mary-Claire King
For bold, imaginative, and diverse contributions to medical science and human rights — she discovered the BRCA1 gene locus that causes hereditary breast cancer and deployed DNA strategies that reunite missing persons or their remains with their families.

The 2014 Lasker~Koshland Award for Special Achievement in Medical Science honors a scientist who has made bold, imaginative, and diverse contributions to medical science and human rights. Mary-Claire King (University of Washington, Seattle) discovered the BRCA1 gene locus that causes hereditary breast cancer and deployed DNA strategies that reunite missing persons or their remains with their families. Her work has touched families around the world.

As a statistics graduate student in the late 1960s, King took the late Curt Stern’s genetics course just for fun. The puzzles she encountered there—problems posed by Stern—enchanted her. She was delighted to learn that people could be paid to solve such problems, and that mathematics holds their key. She decided to study genetics and never looked back.

During her Ph.D. work with the late Allan Wilson (University of California, Berkeley), King discovered that the sequences of human and chimpanzee proteins are, on average, more than 99 percent identical; DNA sequences that do not code for proteins differ only a little more. The two primates therefore are much closer cousins than suggested by fossil studies of the time. The genetic resemblance seemed to contradict obvious distinctions: Human brains outsize those of chimps; their limbs dwarf ours; and modes of communication, food gathering, and other lifestyle features diverge dramatically. King and Wilson proposed that these contrasts arise not from disparities in DNA sequences that encode proteins, but from a small number of differences in DNA sequences that turn the protein-coding genes on and off.

Just as genetic changes drive species in new directions, they also can propel cells toward malignancy. From an evolutionary perspective, the topic of breast cancer began to intrigue King. The illness runs in families and is clearly inherited, yet many affected women have no close relatives with the disease. It is especially deadly for women whose mothers succumbed to it—and risk increases for those who have a mother or sister with breast cancer, particularly if the cancer struck bilaterally or before menopause. Unlike the situation with lung cancer, no environmental exposure distinguishes sisters who get breast cancer from those who remain disease free.

By studying a rare familial cancer, Alfred Knudson (Lasker Clinical Medical Research Award, 1998) had shown in the early 1970s how an inherited genetic defect could increase vulnerability to cancer. In the model he advanced, some families harbor a damaged version of a gene that normally encourages proper cellular behavior. Genetic mishaps occur during a person’s lifetime, and a second “hit” in a cell with the first physiological liability nudges the injured cell toward malignancy. A similar story might play out in families with a high incidence of breast cancer, King reasoned. She began to hunt for the theoretical pernicious gene in 1974.

The hunt
Many geneticists doubted that susceptibility to breast cancer would map to a single gene; even if it did, finding the culprit seemed unlikely for numerous reasons. First, most cases are not familial and the disease is common—so common that inherited and non-inherited cases could occur in the same families. Furthermore, the malady might not strike all women who carry a high-risk gene, and different families might carry different high-risk genes. Prevailing views held that the ailment arises from the additive effects of multiple undefined genetic and environmental insults and from complicated interactions among them. No one had previously tackled such complexities, and an attempt to unearth a breast cancer gene seemed woefully naïve.

To test whether she could find evidence that particular genes increase the odds of getting breast cancer, King applied mathematical methods to data from more than 1500 families of women younger than 55 years old with newly diagnosed breast cancer. The analysis, published in 1988, suggested that four percent of the families carry a single gene that predisposes individuals to the illness.

The most convincing way to validate this idea was to track down the gene. Toward this end, King analyzed DNA from 329 participating relatives with 146 cases of invasive breast cancer. In many of the 23 families to which the participants belonged, the scourge struck young women, often in both breasts, and in some families, even men.

In late 1990, King (by then a professor at the University of California, Berkeley) hit her quarry. She had zeroed in on a suspicious section of chromosome 17 that carried particular genetic markers in women with breast cancer in the most severely affected families. Somewhere in that stretch of DNA lay the gene, which she named BRCA1.
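Mapping a trait to a chromosomal region in this way rests on linkage statistics: the LOD score compares the likelihood of a candidate recombination fraction between marker and disease locus against the no-linkage null. A textbook sketch for phase-known meioses with hypothetical counts (King's actual likelihood model also handled incomplete penetrance and phenocopies):

```python
from math import log10

def lod(n_meioses, n_recombinants, theta):
    """LOD score: log10 likelihood ratio of recombination fraction `theta`
    versus the no-linkage null (theta = 0.5), for phase-known meioses.
    A simplified classroom statistic, not King's full analysis."""
    r, n = n_recombinants, n_meioses
    l_theta = theta**r * (1 - theta)**(n - r)  # likelihood under linkage
    l_null = 0.5**n                            # likelihood under free recombination
    return log10(l_theta / l_null)

# 20 informative meioses, 1 recombinant between marker and disease locus:
print(round(lod(20, 1, 0.05), 2))  # 4.3 — conventionally significant (LOD > 3)
```

A LOD score above 3 (odds of 1000:1 in favor of linkage) is the conventional threshold for declaring that a marker and a disease locus travel together.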

This discovery spurred an international race to find the gene. Four years later, scientists at Myriad Genetics, Inc. isolated it. Alterations in either BRCA1 or a second breast-cancer susceptibility gene, BRCA2, found by Michael Stratton and colleagues (Institute of Cancer Research, UK), increase risk of ovarian as well as breast cancer. The proteins encoded by these genes help maintain cellular health by repairing marred DNA. When the BRCA1 or BRCA2 proteins fail to perform their jobs, genetic integrity is compromised, thus setting the stage for cancer.

About 12 percent of women in the general population get breast cancer at some point in their lives. In contrast, 65 percent of women who inherit an abnormal version of BRCA1 and about 45 percent of women who inherit an abnormal version of BRCA2 develop breast cancer by the time they are 70 years old. Individuals with troublesome forms of BRCA1 and BRCA2 can now be identified, monitored, counseled, and treated appropriately.

Harmful versions of other genes also predispose women to breast cancer, ovarian cancer, or both. Several years ago, King devised a scheme to screen for all of these genetic miscreants. This strategy allows genetic testing and risk determination for breast and ovarian cancer; it is already in clinical practice.

Genetic tools, human rights
King has applied her expertise to aid people who suffer from ills perpetrated by humans as well as genes. She helped find the “lost children” of Argentina—those who had been kidnapped as infants or born while their mothers were in prison during the military regime of the late 1970s and early 1980s. Some of these youngsters had been illegally adopted, many by military families. In 1983, King began identifying individuals, first with a technique that was originally designed to match potential organ transplant donors and recipients. She then developed an approach that relies on analysis of DNA from mitochondria—a cellular component that passes specifically from mother to child, and is powerful for connecting people to their female forebears. King helped prove genetic relationships and thus facilitated the reunion of more than 100 of the children with their families.

Later, the Argentinian government asked if she could help identify dead bodies of individuals thought to have been murdered. King harnessed the same method to figure out who had been buried in mass graves. She established that teeth, whose enamel coating protects DNA in the dental pulp from degradation, offer a valuable resource when attempting to trace remains in situations where long periods have elapsed since the time of death.

This and related approaches have been used to identify soldiers who went missing in action, including the remains of an American serviceman who was buried beneath the Tomb of the Unknowns in Arlington National Cemetery for 14 years, as well as victims of natural disasters and man-made tragedies such as 9/11.

Mary-Claire King has employed her intellect, dedication, and ethical sensibilities to generate knowledge that has catalyzed profound changes in health care, and she has applied her expertise to promote justice where nefarious governments have terrorized their citizens.

by Evelyn Strauss

Read Full Post »

Biochemical Insights of Dr. Jose Eduardo de Salles Roselino

Larry H. Bernstein, MD, FCAP, Interviewer, Curator

Leaders in Pharmaceutical Intelligence


http://pharmaceuticalintelligence.com/12/24/2014/larryhbern/Biochemical_Insights_of_Dr._Jose_Eduardo_de_Salles_Roselino/

Article ID #165: Biochemical Insights of Dr. Jose Eduardo de Salles Roselino. Published on 12/17/2014

WordCloud Image Produced by Adam Tubman


How is it that developments late in the 20th century diverted attention from a dynamic view of biological processes, one of interacting chemical reactions under rapidly changing external conditions affecting tissue and cell function, to a rigid construct determined unilaterally by the genome, drawing attention away from mechanisms essential for seeing the cellular construct as a whole?

Larry, I assume that if you read the article by Denis Noble titled "Neo-Darwinism, the Modern Synthesis and Selfish Genes," which argues that these concepts bear no necessary relationship to physiology or molecular biology (J Physiol 2011; 589(5): 1007-11), you might find that it was the key factor required in order to understand the dislodgment of physiology as a foundation of medical reasoning. The near-unilateral emphasis on genomic activity as the determinant of cellular activity removed the general support required for understanding my reasoning. The DNA-to-protein link goes from triplet sequence to amino acid sequence; that is the realm of genetics. Protein conformation, activity and function further require that environmental and micro-environmental factors be considered (biochemistry). If that were not the case, we would have no way to bridge the gap between the genetic code and the evolution of cells, tissues, organs, and organisms.

  • Consider this example of hormonal function. In the cAMP-dependent hormonal response, I would like to stress the transfer of information that occurs through conformational changes after protein interactions. This mechanism therefore requires that proteins must not have their conformation determined by sequence alone. Regulatory protein conformation is determined by its sequence plus the interactions it has in its micro-environment. For instance, a scheme that takes into account what happens inside the membrane, before cAMP production is increased by hormone action, will show an initial effect on the hormone receptor (hormone binding changing its conformation), followed by a change in GTPase conformation caused by receptor interaction, and finally a change in adenylate cyclase conformation and activity after GTPase protein binding, in a complex system that depends on self-assembly and on changes in conformation in response to hormonal signals (see R.A. Kahn and A.G. Gilman, J. Biol. Chem. 1984; 259(10): 6235-6240; in this case, trimeric or dimeric G does not matter). Furthermore, after the step of increased cAMP production we can also see changes in protein conformation. The effect of increased cAMP levels on the inhibitor protein-protein kinase complex is likewise an effect on protein conformation: increased cAMP levels lead to the separation of the inhibitor protein (R) from the cAMP-dependent protein kinase (C), removing the inhibitor R and increasing C activity. R stands for the regulatory subunit and C for the catalytic subunit of the protein complex.
  • This cAMP effect on the quaternary structure of the enzyme complex (C, the protein kinase, plus R, the inhibitor) may be better understood as environmental information producing an effect in opposition to what may be considered a tendency toward a conformation “determined” by the genetic code. This “ideal” conformation “determined” by the genome would only be seen in crystalline protein. In carbohydrate metabolism in the liver, the hormonal signal causes a biochemical regulatory response that preserves homeostatic levels of glucose (one function); in muscle, it causes a biochemical regulatory response that preserves intracellular levels of ATP (another function).
  • Therefore, sequence alone does not explain the conformation, activity and function of regulatory proteins. If this important regulatory mechanism were not ignored, the work of S.B. Prusiner (Prion diseases and the BSE crisis, Science 1997; 278: 245-251) would be easily understood. We would be accustomed to reasoning about changes in protein conformation caused by protein interaction with other proteins, lipids, small molecules and even ions.
  • If this flawed biochemical reasoning is applied to microorganisms, it is still wrong, but it will cause only a minor error most of the time, since we may reduce almost all activity of a microorganism’s proteins to a single function: the production of another microorganism. Even so, microorganisms respond differently to their micro-environment despite a single genome (see the Mucor rouxii dimorphic fungus work, later). The reason for the reasoning error is that proteins are proteins and DNA is DNA, quite different in chemical terms: proteins must change their conformation to allow fast regulatory responses, and DNA must preserve its sequence to allow genetic inheritance.
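The cascade sketched in these bullets ends with cAMP releasing the catalytic (C) subunit from its complex with the regulatory/inhibitor (R) subunit. As a caricature, the fraction of free, active C can be modeled as a Hill-type function of cAMP concentration; the constants below are illustrative placeholders, not measured values:

```python
def pka_active_fraction(camp_um, k_half_um=0.3, hill=2.0):
    """Fraction of catalytic (C) subunit released from the R·C holoenzyme
    as a Hill-type function of cAMP concentration (µM). k_half_um and the
    Hill coefficient are hypothetical placeholders for illustration."""
    return camp_um**hill / (k_half_um**hill + camp_um**hill)

# A tenfold step in cAMP around k_half swings C from mostly bound to mostly free:
for camp in (0.03, 0.3, 3.0):
    print(f"[cAMP] = {camp} uM -> active C fraction = {pka_active_fraction(camp):.2f}")
```

The steep transition is the point of the bullet above: a small change in an environmental signal (cAMP) produces a large change in protein quaternary structure and hence in activity, something no static, sequence-only picture captures.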

Read Full Post »

The History of Infectious Diseases and Epidemiology in the late 19th and 20th Century

Curator: Larry H Bernstein, MD, FCAP

 

Infectious diseases are a part of the history of English, French, and Spanish colonization of the Americas, and of the slave trade. The many plagues of the New and Old Worlds that have affected the course of history from ancient to modern times were known to the Egyptians, Greeks, Chinese, crusaders, explorers, and Napoleon, and carried the familiar ties of war, pestilence, and epidemic. Our coverage is mainly concerned with the scientific and public health consequences of the events that preceded WWI and extended to the Vietnam War, highlighted by the invention of a worldwide public health system.

The Armed Forces Institute of Pathology (AFIP) closed its doors on September 15, 2011. It was founded as the Army Medical Museum on May 21, 1862, to collect pathological specimens along with their case histories.

The information from the case files of the pathological specimens from the Civil War was compared with Army pensions records and compiled into the six-volume Medical and Surgical History of the War of the Rebellion, an early study of wartime medicine.

In 1900, museum curator Walter Reed led the commission which proved that a mosquito was the vector for Yellow Fever, beginning the mosquito eradication campaigns throughout most of the twentieth century.

Walter Reed

Another museum curator, Frederick Russell, conducted clinical trials on the typhoid vaccine in 1907, making the U.S. Army the first army to be vaccinated against typhoid.

Increased emphasis on pathology during the twentieth century turned the museum, renamed the Armed Forces Institute of Pathology in 1949, into an international resource for pathology and the study of disease. AFIP’s pathological collections have been used, for example, in the characterization of the 1918 influenza virus in 1997.

Prior to moving to the Walter Reed Army Medical Center, the AFIP was located at the Army Medical Museum and Library on the Mall (1887-1969), and earlier as Army Medical Museum in Ford’s Theatre (1867-1886).

Army Medical Museum and Library on the Mall

This institution, originally the Library of the Surgeon General’s Office (U.S. Army), gained its present name and was transferred from the Army to the Public Health Service in 1956. In 1962, it moved to its own Bethesda site after sharing space for nearly 100 years with other Army units, first at the former Ford’s Theatre building and then at the Army Medical Museum and Library on the Mall. Rare books and other holdings that had been sent to Cleveland for safekeeping during World War II were also reunited with the main collection at that time.

The National Museum of Health and Medicine, established in 1862, inspires interest in and promotes the understanding of medicine — past, present, and future — with a special emphasis on tri-service American military medicine. As a National Historic Landmark recognized for its ongoing value to the health of the military and to the nation, the Museum identifies, collects, and preserves important and unique resources to support a broad agenda of innovative exhibits, educational programs, and scientific, historical, and medical research. NMHM is a headquarters element of the U.S. Army Medical Research and Materiel Command. NMHM’s newest exhibit installations showcase the institution’s 25-million object collection, focusing on topics as diverse as innovations in military medicine, traumatic brain injury, anatomy and pathology, military medicine during the Civil War, the assassination of Abraham Lincoln (including the bullet that killed him), human identification and a special exhibition on the Museum’s own major milestone—the 150th anniversary of the founding of the Army Medical Museum. Objects on display will include familiar artifacts and specimens: the bullet that killed Lincoln and a leg showing the effects of elephantiasis, as well as recent finds in the collection—all designed to astound visitors to the new Museum.

Today, the National Library of Medicine houses the largest collection of print and non-print materials in the history of the health sciences in the United States, and maintains an active program of exhibits and public lectures. Most of the archival and manuscript material dates from the 17th century onward; however, the Library owns about 200 pre-1601 Western and Islamic manuscripts. Holdings include pre-1914 books, pre-1871 journals, archives and modern manuscripts, medieval and Islamic manuscripts, a collection of printed books, manuscripts, and visual material in Japanese, Chinese, and Korean; historical prints, photographs, films, and videos; and pamphlets, dissertations, theses, college catalogs, and government documents.

The oldest item in the Library is an Arabic manuscript on gastrointestinal diseases from al-Razi’s The Comprehensive Book on Medicine (Kitab al-Hawi fi al-tibb) dated 1094. Significant modern collections include the papers of U.S. Surgeons General, including C. Everett Koop, and the papers of Nobel Prize-winning scientists, particularly those connected with NIH.

As part of its Profiles in Science project, the National Library of Medicine has collaborated with the Churchill Archives Centre to digitize and make available over the World Wide Web a selection of the Rosalind Franklin Papers for use by educators and researchers. The site provides access to portions of the Rosalind Franklin Papers, ranging from 1920 to 1975. The collection contains photographs, correspondence, diaries, published articles, lectures, laboratory notebooks, and research notes.

Rosalind Franklin

“Science and everyday life cannot and should not be separated. Science, for me, gives a partial explanation of life. In so far as it goes, it is based on fact, experience, and experiment. . . . I agree that faith is essential to success in life, but I do not accept your definition of faith, i.e., belief in life after death. In my view, all that is necessary for faith is the belief that by doing our best we shall come nearer to success and that success in our aims (the improvement of the lot of mankind, present and future) is worth attaining.”

–Rosalind Franklin in a letter to Ellis Franklin, ca. summer 1940

Smallpox

Although some disliked mandatory smallpox vaccination measures, coordinated efforts against smallpox went on in the United States after 1867, and the disease continued to diminish in the wealthy countries. By 1897, smallpox had largely been eliminated from the United States. In Northern Europe a number of countries had eliminated smallpox by 1900, and by 1914, the incidence in most industrialized countries had decreased to comparatively low levels. Vaccination continued in industrialized countries until the mid-to-late 1970s as protection against reintroduction. Australia and New Zealand are two notable exceptions; neither experienced endemic smallpox, and neither vaccinated widely, relying instead on protection by distance and strict quarantines.

In 1966 an international team, the Smallpox Eradication Unit, was formed under the leadership of an American, Donald Henderson. In 1967, the World Health Organization intensified the global smallpox eradication effort by contributing $2.4 million annually and adopting the new disease surveillance method promoted by Czech epidemiologist Karel Raška. Two-year-old Rahima Banu of Bangladesh was the last person infected with naturally occurring Variola major, in 1975.

The global eradication of smallpox was certified, based on intense verification activities in countries, by a commission of eminent scientists on 9 December 1979 and subsequently endorsed by the World Health Assembly on 8 May 1980. The first two sentences of the resolution read:

Having considered the development and results of the global program on smallpox eradication initiated by WHO in 1958 and intensified since 1967 … Declares solemnly that the world and its peoples have won freedom from smallpox, which was a most devastating disease sweeping in epidemic form through many countries since earliest time, leaving death, blindness and disfigurement in its wake and which only a decade ago was rampant in Africa, Asia and South America.

—World Health Organization, Resolution WHA33.3

Anthrax

Anthrax is an acute disease caused by the bacterium Bacillus anthracis. Most forms of the disease are lethal, and it affects both humans and other animals. Effective vaccines against anthrax are now available, and some forms of the disease respond well to antibiotic treatment.

Like many other members of the genus Bacillus, B. anthracis can form dormant endospores (often referred to as “spores” for short, but not to be confused with fungal spores) that are able to survive in harsh conditions for decades or even centuries. Such spores can be found on all continents, even Antarctica. When spores are inhaled, ingested, or come into contact with a skin lesion on a host, they may become reactivated and multiply rapidly.

Anthrax commonly infects wild and domesticated herbivorous mammals that ingest or inhale the spores while grazing. Ingestion is thought to be the most common route by which herbivores contract anthrax. Carnivores living in the same environment may become infected by consuming infected animals. Diseased animals can spread anthrax to humans, either by direct contact (e.g., inoculation of infected blood to broken skin) or by consumption of a diseased animal’s flesh.

Anthrax does not spread directly from one infected animal or person to another; it is spread by spores. These spores can be transported by clothing or shoes. The body of an animal that had active anthrax at the time of death can also be a source of anthrax spores. Owing to the hardiness of anthrax spores, and their ease of production in vitro, they are extraordinarily well suited to use (in powdered and aerosol form) as biological weapons.

Bacillus anthracis is a rod-shaped, Gram-positive, aerobic bacterium about 1 by 9 μm in size. It was shown to cause disease by Robert Koch in 1876 when he took a blood sample from an infected cow, isolated the bacteria and put them into a mouse. The bacterium normally rests in endospore form in the soil, and can survive for decades in this state. Once ingested or placed in an open wound, the bacterium begins multiplying inside the animal or human and typically kills the host within a few days or weeks. The endospores germinate at the site of entry into the tissues and then spread by the circulation to the lymphatics, where the bacteria multiply.

Robert Koch

Veterinarians can often tell a possible anthrax-induced death by its sudden occurrence, and by the dark, nonclotting blood that oozes from the body orifices. Bacteria that escape the body via oozing blood or through the opening of the carcass may form hardy spores. One spore forms per one vegetative bacterium. Once formed, these spores are very hard to eradicate.

The lethality of the anthrax disease is due to the bacterium’s two principal virulence factors: the poly-D-glutamic acid capsule, which protects the bacterium from phagocytosis by host neutrophils, and the tripartite protein toxin, called anthrax toxin. Anthrax toxin is a mixture of three protein components: protective antigen (PA), edema factor (EF), and lethal factor (LF). PA plus LF produces lethal toxin, and PA plus EF produces edema toxin. These toxins cause death and tissue swelling (edema), respectively.

To enter the cells, the edema and lethal factors use another protein produced by B. anthracis called protective antigen, which binds to two surface receptors on the host cell. A cell protease then cleaves PA into two fragments: PA20 and PA63. PA20 dissociates into the extracellular medium, playing no further role in the toxic cycle. PA63 then oligomerizes with six other PA63 fragments forming a heptameric ring-shaped structure named a prepore.

Once in this shape, the complex can competitively bind up to three EFs or LFs, forming a resistant complex. Receptor-mediated endocytosis occurs next, providing the newly formed toxic complex access to the interior of the host cell. The acidified environment within the endosome triggers the heptamer to release the LF and/or EF into the cytosol.
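
As a rough illustration only, the assembly steps described above (cleavage of PA, oligomerization into a heptameric prepore, binding of up to three EF/LF molecules) can be sketched as a toy accounting model. The function names and the molecule counts below are invented for illustration; this is an abstraction of the text, not a biochemical simulation.

```python
# Toy model of the anthrax toxin assembly steps described in the text.
# All counts are illustrative, not measured quantities.

HEPTAMER = 7      # seven PA63 fragments form the ring-shaped prepore
MAX_LIGANDS = 3   # each prepore competitively binds up to three EF/LF

def cleave_pa(pa83_count):
    """A cell protease cleaves each PA83 into PA20 (released into the
    extracellular medium) and PA63 (which stays and oligomerizes)."""
    return pa83_count  # one PA63 fragment per cleaved PA

def form_prepores(pa63_count):
    """Seven PA63 fragments oligomerize into one heptameric prepore."""
    return pa63_count // HEPTAMER

def load_ligands(prepores, ef_lf_available):
    """Prepores bind EF/LF up to their combined capacity; the loaded
    complex is then taken up by receptor-mediated endocytosis."""
    return min(prepores * MAX_LIGANDS, ef_lf_available)

pa63 = cleave_pa(70)                              # 70 PA83 cleaved -> 70 PA63
pores = form_prepores(pa63)                       # -> 10 heptameric prepores
toxins = load_ligands(pores, ef_lf_available=25)  # capacity 30, so all 25 bind
print(pores, toxins)
```

The point of the sketch is the stoichiometry the text states: seven PA63 per pore, at most three toxic factors per pore.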

Edema factor is a calmodulin-dependent adenylate cyclase. Adenylate cyclase catalyzes the conversion of ATP into cyclic AMP (cAMP) and pyrophosphate. By complexing with calmodulin, the adenylate cyclase sequesters it, preventing it from stimulating calcium-triggered signaling. LF inactivates neutrophils so they cannot phagocytose bacteria. Anthrax causes vascular leakage of fluid and cells, and ultimately hypovolemic shock and septic shock.

Occupational exposure to infected animals or their products (such as skin, wool, and meat) is the usual pathway of exposure for humans. Workers who are exposed to dead animals and animal products are at the highest risk, especially in countries where anthrax is more common. Anthrax in livestock grazing on open range where they mix with wild animals still occasionally occurs in the United States and elsewhere. Many workers who deal with wool and animal hides are routinely exposed to low levels of anthrax spores, but most exposure levels are not sufficient to develop anthrax infections; the body’s natural defenses presumably destroy such low-level exposures. Those who do become infected usually contract cutaneous anthrax.

Throughout history, the most dangerous form of inhalational anthrax was called woolsorters’ disease because it was an occupational hazard for people who sorted wool. Today, this form of infection is extremely rare, as almost no infected animals remain. The last fatal case of natural inhalational anthrax in the United States occurred in California in 1976, when a home weaver died after working with infected wool imported from Pakistan. Gastrointestinal anthrax is exceedingly rare in the United States, with only one case on record, reported in 1942, according to the Centers for Disease Control and Prevention.

Various techniques are used for the direct identification of B. anthracis in clinical material. Firstly, specimens may be Gram stained. Bacillus spp. are quite large in size (3 to 4 μm long), they grow in long chains, and they stain Gram-positive. To confirm the organism is B. anthracis, rapid diagnostic techniques such as polymerase chain reaction-based assays and immunofluorescence microscopy may be used.

All Bacillus species grow well on 5% sheep blood agar and other routine culture media. Polymyxin-lysozyme-EDTA-thallous acetate can be used to isolate B. anthracis from contaminated specimens, and bicarbonate agar is used as an identification method to induce capsule formation. Bacillus spp. usually grow within 24 hours of incubation at 35 °C, in ambient air or in 5% CO2. If bicarbonate agar is used for identification, then the medium must be incubated in 5% CO2.

B. anthracis colonies are medium-large, gray, flat, and irregular with swirling projections, often referred to as having a “medusa head” appearance, and are not hemolytic on 5% sheep blood agar. The bacteria are not motile, are susceptible to penicillin, and produce a wide zone of lecithinase on egg yolk agar. Confirmatory testing to identify B. anthracis includes gamma bacteriophage testing, indirect hemagglutination, and enzyme-linked immunosorbent assay to detect antibodies. The best confirmatory precipitation test for anthrax is the Ascoli test.

Vaccines against anthrax for use in livestock and humans have had a prominent place in the history of medicine, from Pasteur’s pioneering 19th-century work with cattle (the second effective vaccine ever) to the controversial 20th century use of a modern product (BioThrax) to protect American troops against the use of anthrax in biological warfare. Human anthrax vaccines were developed by the Soviet Union in the late 1930s and in the US and UK in the 1950s. The current FDA-approved US vaccine was formulated in the 1960s.

If a person is suspected as having died from anthrax, every precaution should be taken to avoid skin contact with the potentially contaminated body and fluids exuded through natural body openings. The body should be put in strict quarantine and then incinerated. A blood sample should then be collected and sealed in a container and analyzed in an approved laboratory to ascertain if anthrax is the cause of death. Microscopic visualization of the encapsulated bacilli, usually in very large numbers, in a blood smear stained with polychrome methylene blue (McFadyean stain) is fully diagnostic, though culture of the organism is still the gold standard for diagnosis.

Full isolation of the body is important to prevent possible contamination of others. Protective, impermeable clothing and equipment such as rubber gloves, rubber apron, and rubber boots with no perforations should be used when handling the body. Disposable personal protective equipment and filters should be autoclaved, and/or burned and buried.

Anyone working with anthrax in a suspected or confirmed victim should wear respiratory equipment capable of filtering particles the size of anthrax spores or smaller. A US National Institute for Occupational Safety and Health- and Mine Safety and Health Administration-approved high-efficiency respirator, such as a half-face disposable respirator with a high-efficiency particulate air filter, is recommended.

All possibly contaminated bedding or clothing should be isolated in double plastic bags and treated as possible biohazard waste. The victim should be sealed in an airtight body bag. Bodies that are opened rather than burned provide an ideal source of anthrax spores, so cremation is the preferred method of body disposal.

Until the 20th century, anthrax infections killed hundreds of thousands of animals and people worldwide each year. French scientist Louis Pasteur developed the first effective vaccine for anthrax in 1881.

Louis Pasteur

As a result of over a century of animal vaccination programs, sterilization of raw animal waste materials, and anthrax eradication programs in the United States, Canada, Russia, Eastern Europe, Oceania, and parts of Africa and Asia, anthrax infection is now relatively rare in domestic animals. Anthrax is especially rare in dogs and cats, as evidenced by a single reported case in the United States in 2001.

Anthrax outbreaks occur in some wild animal populations with some regularity. The disease is more common in countries without widespread veterinary or human public health programs. In the 21st century, anthrax is still a problem in less developed countries.

B. anthracis spores are soil-borne. Because of their long lifespan, spores are present globally and remain at the burial sites of animals killed by anthrax for many decades. Disturbed grave sites of infected animals have caused reinfection over 70 years after the animal’s interment.

Cholera

This is an acute diarrheal infection that can kill within a matter of hours if untreated. The standard treatment is oral rehydration therapy: drinking water mixed with salts and sugar. But researchers at EPFL, the Swiss Federal Institute of Technology in Lausanne, say using rice starch instead of sugar with the rehydration salts could reduce bacterial toxicity by almost 75 percent. That would make the microbe less likely to infect a patient’s family and friends if they are exposed to any body fluids.

The World Health Organization says cholera, a water-borne bacterium, infects three to five million people every year, and the severe dehydration it causes leads to as many as 120,000 deaths.

Cholera is an acute diarrheal disease caused by the waterborne bacterium Vibrio cholerae, serogroup O1 or O139 (V. cholerae). Infection occurs mainly through ingestion of contaminated water or food. V. cholerae passes through the stomach, colonizes the upper part of the small intestine, penetrates the mucus layer, and secretes cholera toxin, which acts on the small intestine.

Clinically, the majority of cholera episodes are characterized by a sudden onset of massive diarrhea and vomiting accompanied by the loss of profuse amounts of protein-free fluid with electrolytes. The resulting dehydration produces tachycardia, hypotension, and vascular collapse, which can lead to sudden death. The diagnosis of cholera is commonly established by isolating the causative organism from the stools of infected individuals.

There are an estimated 3–5 million cholera cases and 100,000–120,000 deaths due to cholera every year.

Up to 80% of cases can be successfully treated with oral rehydration salts.

Effective control measures rely on prevention, preparedness and response.

Provision of safe water and sanitation is critical in reducing the impact of cholera and other waterborne diseases.

Oral cholera vaccines are considered an additional means to control cholera, but should not replace conventional control measures.
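
The case and death estimates above imply a rough global case-fatality rate. The 2–4% bounds below are derived arithmetic, not figures quoted in the source:

```python
# Rough case-fatality-rate (CFR) bounds implied by the estimates quoted
# above: 3-5 million cases and 100,000-120,000 deaths per year.
cases_low, cases_high = 3_000_000, 5_000_000
deaths_low, deaths_high = 100_000, 120_000

# Lowest CFR: fewest deaths spread over the most cases;
# highest CFR: most deaths over the fewest cases.
cfr_min = deaths_low / cases_high
cfr_max = deaths_high / cases_low

print(f"Implied global CFR: {cfr_min:.0%} to {cfr_max:.0%}")
```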

During the 19th century, cholera spread across the world from its original reservoir in the Ganges delta in India. Six subsequent pandemics killed millions of people across all continents. The current (seventh) pandemic started in South Asia in 1961, and reached Africa in 1971 and the Americas in 1991. Cholera is now endemic in many countries.

INDIA-ENVIRONMENT-POLLUTION

In its extreme manifestation, cholera is one of the most rapidly fatal infectious illnesses known. Within 3–4 hours of onset of symptoms, a previously healthy person may become severely dehydrated and if not treated may die within 24 hours (WHO, 2010). The disease is one of the most researched in the world today; nevertheless, it is still an important public health problem despite more than a century of study, especially in developing tropical countries. Cholera is currently listed as one of three internationally quarantinable diseases by the World Health Organization (WHO), along with plague and yellow fever (WHO, 2000a).

Two serogroups of V. cholerae – O1 and O139 – cause outbreaks. V. cholerae O1 causes the majority of outbreaks, while O139 – first identified in Bangladesh in 1992 – is confined to South-East Asia.

Non-O1 and non-O139 V. cholerae can cause mild diarrhoea but do not generate epidemics.

The main reservoirs of V. cholerae are people and aquatic sources such as brackish water and estuaries, often associated with algal blooms. Recent studies indicate that global warming creates a favorable environment for the bacteria.

Socioeconomic and demographic factors enhance the vulnerability of a population to infection and contribute to epidemic spread; they determine the extent to which the disease will reach epidemic proportions and modulate the size of the epidemic. Known population-level (local-level) risk factors for cholera include poverty, lack of development, high population density, low education, and lack of previous exposure. Cholera diffuses rapidly in environments that lack basic infrastructure for access to safe water and proper sanitation. The cholera vibrios can survive and multiply outside the human body and can spread rapidly where living conditions are overcrowded and where there is no safe disposal of solid waste, liquid waste, and human feces.

By mapping the locations of cholera victims in the London outbreak of 1854, John Snow was able to trace the cause of the disease to a contaminated water source. Remarkably, this was done two decades before Koch and Pasteur established the beginnings of microbiology, and three decades before Koch isolated the cholera vibrio (Koch, 1884).

John Snow’s map

Yellow Fever

Yellow fever virus was probably introduced into the New World via ships carrying slaves from West Africa. Throughout the 18th and 19th centuries, regular and devastating epidemics of yellow fever occurred across the Caribbean, Central and South America, the southern United States and Europe. The Yellow Fever Commission, founded as a consequence of excessive disease mortality during the Spanish–American War (1898), concluded that the best way to control the disease was to control the mosquito. William Gorgas successfully eradicated yellow fever from Havana by destroying larval breeding sites, and this strategy of source reduction was then successfully used to reduce disease, finally permitting construction of the Panama Canal, begun in 1904. Success was due largely to a top-down, military approach involving strict supervision and discipline (Gorgas, 1915). In 1946, an intensive Aedes aegypti eradication campaign was initiated in the Americas, which succeeded in reducing vector populations to undetectable levels throughout most of the species’ range.

The production of an effective vaccine in the 1930s led to a change of emphasis from vector control to vaccination for the control of yellow fever. Vaccination campaigns almost eliminated urban yellow fever but incomplete coverage, as with incomplete anti-vectorial measures previously, meant the disease persisted, and outbreaks occurred in remote forest areas.

It was acknowledged by the Health Organization of the League of Nations (the forerunner to the World Health Organization (WHO)) that yellow fever was a severe burden on endemic countries. The work of Soper and the Brazilian Cooperative Yellow Fever Service (Soper, 1934, 1935a, b) began to determine the geographical extent of the disease, specifically in Brazil. Regional maps of disease outbreaks were published by Sawyer (1934), but it was not until after the formation of the WHO that a global map of yellow fever endemicity was first constructed (van Rooyen and Rhodes, 1948). This map was based on expert opinion (United Nations Relief and Rehabilitation Administration/Expert Commission on Quarantine) and serological surveys. The present-day distribution map for yellow fever is still essentially a modified version of this map.

global yellow fever risk map

Yellow fever is conspicuously absent from Asia. Although there is some evidence that other flaviviruses may offer cross-protection against yellow fever (Gordon-Smith et al., 1962), why yellow fever does not occur in Asia is still unexplained.

It has been estimated that the currently circulating strains of YFV arose in Africa within the last 1,500 years and emerged in the Americas following the slave trade approximately 300–400 years ago. These viruses then spread westwards across the continent and persist there to this day in the jungles of South America.

The 17D live-attenuated vaccine still in use today was developed in 1936, and a single dose confers immunity for at least ten years in 95% of the cases. In a bid to contain the spread of the disease, travellers to countries within endemic areas or those thought to be ‘at risk’ require a certificate of vaccination. The yellow fever certificate is the only internationally regulated certification supported by the WHO. The effectiveness of the vaccine reduces the need for anti-vectorial campaigns directed specifically against yellow fever. As the same major vector is involved, control of Aedes aegypti for dengue reduction will also reduce yellow fever transmission where both diseases co-occur, especially within urban settings.

Dengue

Probable epidemics of dengue fever have been recorded from Africa, Asia, Europe and the Americas since the early 19th century (Armstrong, 1923). Although it is rarely fatal, up to 90% of the population of an infected area can be incapacitated during the course of an epidemic (Armstrong, 1923; Siler et al., 1926).

Widespread movements of troops and refugees during and after World War II introduced vectors and viruses into many new areas. Dengue fever has unsurprisingly been mistaken for yellow fever as well as other diseases including influenza, measles, typhoid and malaria. Survivors appear to have lifelong immunity to the homologous serotype.

Far more serious is dengue haemorrhagic fever (DHF), where additional symptoms develop, including haemorrhaging and shock. The mortality from DHF can exceed 30% if appropriate care is unavailable. The most significant risk factor for DHF is when secondary infection with a different serotype occurs in people who have already had, and recovered from, a primary dengue infection.

Dengue has adapted to changes in human demography very effectively. The main vector of dengue is the anthropophilic Aedes aegypti, which is found in close association with human settlements throughout the tropics, breeding mainly in water-holding containers in and around dwellings, and feeding almost exclusively on humans. As a result, dengue is essentially a disease of tropical urban areas. Before 1970, only nine countries had experienced DHF epidemics, but by 1995 this number had increased fourfold (WHO, 2001). Dengue case numbers have increased considerably since the 1960s; by the end of the 20th century an estimated 50 million cases of dengue fever and 500,000 cases of DHF were occurring every year (WHO, 2001).

The appearance of DHF stimulated large amounts of dengue research, which established the existence of the four serotypes and the range of competent vectors, and led to the adoption of Aedes aegypti control programs in some areas (particularly South-East Asia) (Kilpatrick et al., 1970).

There have been several attempts to estimate the economic impact of dengue: the 1977 epidemic in Puerto Rico was thought to have cost between $6.1 and $15.6 million ($26–$31 per clinical case) (Von Allmen et al., 1979), while the 1981 Cuban epidemic (with a total of 344,203 reported cases) cost about $103 million (around $299 per case) (Kouri et al., 1989).
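
The per-case figures can be cross-checked against the totals; the Puerto Rico case-count range below is back-calculated from the quoted cost figures and is only illustrative:

```python
# Cross-check of the per-case cost figures quoted above.
cuba_total_cost = 103_000_000   # 1981 Cuban epidemic, USD
cuba_cases = 344_203            # reported cases
cuba_per_case = cuba_total_cost / cuba_cases
print(f"Cuba 1981: ${cuba_per_case:.0f} per case")  # consistent with ~$299

# Puerto Rico 1977: a total cost of $6.1-15.6 million at $26-31 per
# clinical case implies roughly 200,000-600,000 clinical cases
# (a derived range, not a figure quoted in the text).
pr_cases_low = 6_100_000 / 31
pr_cases_high = 15_600_000 / 26
print(f"Puerto Rico 1977: ~{pr_cases_low:,.0f} to {pr_cases_high:,.0f} cases")
```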

There is no cure for dengue fever or for DHF. Currently, the only treatment is symptomatic, but this can reduce mortality from DHF to less than 1% (WHO, 2002). Unfortunately, the extent of dengue epidemics means that local public health services are often overwhelmed by the demands for treatment.

Malaria

Malaria is a serious and sometimes fatal disease caused by a parasite transmitted through the bite of an infected mosquito. People who get malaria are typically very sick with high fevers, shaking chills, and flu-like illness. About 1,500 cases of malaria are diagnosed in the United States each year. The vast majority of cases in the United States are in travelers and immigrants returning from countries where malaria transmission occurs, many from sub-Saharan Africa and South Asia. Malaria has been noted for more than 4,000 years. It became widely recognized in Greece by the 4th century BCE, and it was responsible for the decline of many of the city-state populations. Hippocrates noted the principal symptoms. In the Susruta Samhita, a Sanskrit medical treatise, the symptoms of malarial fever were described and attributed to the bites of certain insects. A number of Roman writers attributed malarial diseases to the swamps.

Following their arrival in the New World, Spanish Jesuit missionaries learned from indigenous Indian tribes of a medicinal bark used for the treatment of fevers. With this bark, the Countess of Chinchón, the wife of the Viceroy of Peru, was cured of her fever. The bark from the tree was then called Peruvian bark and the tree was named Cinchona after the countess. The medicine from the bark is now known as the antimalarial, quinine. Along with artemisinins, quinine is one of the most effective antimalarial drugs available today.

Cinchona calisaya

Cinchona officinalis is a medicinal plant, one of several Cinchona species used for the production of quinine, which is an anti-fever agent. It is especially useful in the prevention and treatment of malaria. Cinchona calisaya is the tree most cultivated for quinine production.

There are a number of other alkaloids that are extracted from this tree, including cinchonine, cinchonidine and quinidine (Wikipedia).

Charles Louis Alphonse Laveran, a French army surgeon stationed in Constantine, Algeria, was the first to notice parasites in the blood of a patient suffering from malaria in 1880. Laveran was awarded the Nobel Prize in 1907.

Alphonse Laveran

Camillo Golgi, an Italian neurophysiologist, established that there were at least two forms of the disease, one with tertian periodicity (fever every other day) and one with quartan periodicity (fever every third day). He also observed that the forms produced differing numbers of merozoites (new parasites) upon maturity and that fever coincided with the rupture and release of merozoites into the blood stream. He was awarded a Nobel Prize in Medicine for his discoveries in neurophysiology in 1906.

Malaria life cycle

Ookinete, sporozoite, merozoite

The Italian investigators Giovanni Batista Grassi and Raimondo Filetti first introduced the names Plasmodium vivax and P. malariae for two of the malaria parasites that affect humans in 1890. Laveran had believed that there was only one species, Oscillaria malariae. William H. Welch reviewed the subject and, in 1897, named the malignant tertian malaria parasite P. falciparum. In 1922, John William Watson Stephens described the fourth human malaria parasite, P. ovale. P. knowlesi was first described by Robert Knowles and Biraj Mohan Das Gupta in 1931 in a long-tailed macaque, but the first documented human infection with P. knowlesi was in 1965.

Anopheles mosquito

Ronald Ross, a British officer in the Indian Medical Service, was the first to demonstrate that malaria parasites could be transmitted from infected patients to mosquitoes, in 1897. In further work with bird malaria, Ross showed that mosquitoes could transmit malaria parasites from bird to bird, and that this required a sporogonic cycle (the time interval during which the parasite develops in the mosquito). Ross was awarded the Nobel Prize in 1902.

Ronald Ross, 1899

A team of Italian investigators led by Giovanni Batista Grassi collected Anopheles claviger mosquitoes and fed them on malarial patients, demonstrating the complete sporogonic cycles of Plasmodium falciparum, P. vivax, and P. malariae. Mosquitoes infected by feeding on a patient in Rome were sent to London in 1900, where they fed on two volunteers, both of whom developed malaria.

The construction of the Panama Canal was made possible only after yellow fever and malaria were controlled in the area. These two diseases were a major cause of death and disease among workers in the area. In 1906, there were over 26,000 employees working on the Canal. Of these, over 21,000 were hospitalized for malaria at some time during their work. By 1912, there were over 50,000 employees, and the number of hospitalized workers had decreased to approximately 5,600. Through the leadership and efforts of William Crawford Gorgas, Joseph Augustin LePrince, and Samuel Taylor Darling, yellow fever was eliminated and malaria incidence markedly reduced through an integrated program of insect and malaria control.
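
From the employment and hospitalization figures above, the hospitalization rates work out as follows (the percentages are derived arithmetic, not figures quoted in the text):

```python
# Malaria hospitalization rates among Panama Canal workers,
# computed from the counts quoted above.
rate_1906 = 21_000 / 26_000   # over 80% hospitalized at some point
rate_1912 = 5_600 / 50_000    # about 11% after control measures

print(f"1906: {rate_1906:.0%} of employees hospitalized for malaria")
print(f"1912: {rate_1912:.0%} of employees hospitalized for malaria")
```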

William Crawford Gorgas, MD

During the U.S. military occupation of Cuba and the construction of the Panama Canal at the turn of the 20th century, U.S. officials made great strides in the control of malaria and yellow fever. In 1914 Henry Rose Carter and Rudolph H. von Ezdorf of the USPHS requested and received funds from the U.S. Congress to control malaria in the United States. Various activities to investigate and combat malaria in the United States followed from this initial request and reduced the number of malaria cases in the United States. USPHS established malaria control activities around military bases in the malarious regions of the southern United States to allow soldiers to train year round.

U.S. President Franklin D. Roosevelt signed a bill that created the Tennessee Valley Authority (TVA) on May 18, 1933. The law gave the federal government a centralized body to control the Tennessee River’s potential for hydroelectric power and improve the land and waterways for development of the region. An organized and effective malaria control program stemmed from this new authority in the Tennessee River valley. Malaria affected 30 percent of the population in the region when the TVA was incorporated in 1933. The Public Health Service played a vital role in the research and control operations and by 1947, the disease was essentially eliminated. Mosquito breeding sites were reduced by controlling water levels and insecticide applications.

Chloroquine was discovered by a German, Hans Andersag, in 1934 at the Bayer I.G. Farbenindustrie A.G. laboratories in Elberfeld, Germany. He named his compound resochin. Through a series of lapses and confusion brought about during the war, chloroquine was finally recognized and established as an effective and safe antimalarial in 1946 by British and U.S. scientists.

Felix Hoffmann, Gerhard Domagk, Hermann Schnell_BAYER

A German chemistry student, Othmar Zeidler, synthesized DDT in 1874 for his thesis. The insecticidal property of DDT was not discovered until 1939, by Paul Müller in Switzerland. Various militaries in WWII utilized the new insecticide initially for control of louse-borne typhus. DDT was used for malaria control at the end of WWII after British, Italian, and American scientists had proven it effective against malaria-carrying mosquitoes. Müller won the Nobel Prize in Physiology or Medicine in 1948.

Paul Muller

Malaria Control in War Areas (MCWA) was established to control malaria around military training bases in the southern United States and its territories, where malaria was still problematic. Many of the bases were established in areas where mosquitoes were abundant. MCWA aimed to prevent reintroduction of malaria into the civilian population by mosquitoes that would have fed on malaria-infected soldiers, in training or returning from endemic areas. During these activities, MCWA also trained state and local health department officials in malaria control techniques and strategies.

The National Malaria Eradication Program, a cooperative undertaking by state and local health agencies of 13 Southeastern states and the CDC, originally proposed by Louis Laval Williams, commenced operations on July 1, 1947. By the end of 1949, over 4,650,000 house-spray applications had been made. In 1947, 15,000 malaria cases were reported. By 1950, only 2,000 cases were reported. By 1951, malaria was considered eliminated from the United States.

With the success of DDT, the advent of less toxic, more effective synthetic antimalarials, and the enthusiastic and urgent belief that time and money were of the essence, the World Health Organization (WHO) submitted an ambitious proposal for the worldwide eradication of malaria at the World Health Assembly in 1955. Eradication efforts began and focused on house spraying with residual insecticides, antimalarial drug treatment, and surveillance, and would be carried out in 4 successive steps: preparation, attack, consolidation, and maintenance. Successes included elimination in nations with temperate climates and seasonal malaria transmission.

Some countries such as India and Sri Lanka had sharp reductions in the number of cases, followed by increases to substantial levels after efforts ceased; other nations had negligible progress (such as Indonesia, Afghanistan, Haiti, and Nicaragua); and still others were excluded completely from the eradication campaign (sub-Saharan Africa). The emergence of drug resistance, widespread resistance to available insecticides, wars and massive population movements, difficulties in obtaining sustained funding from donor countries, and lack of community participation made the long-term maintenance of the effort untenable.

The goal of most current National Malaria Prevention and Control Programs and most malaria activities conducted in endemic countries is to reduce the number of malaria-related cases and deaths. Malaria “control” refers to reducing transmission to a level at which it is no longer a public health problem.

The natural ecology of malaria involves malaria parasites infecting successively two types of hosts: humans and female Anopheles mosquitoes. In humans, the parasites grow and multiply first in the liver cells and then in the red cells of the blood. In the blood, successive broods of parasites grow inside the red cells and destroy them, releasing daughter parasites (“merozoites”) that continue the cycle by invading other red cells.

Anopheles mosquito

The blood stage parasites are those that cause the symptoms of malaria. When certain forms of blood stage parasites (“gametocytes”) are picked up by a female Anopheles mosquito during a blood meal, they start another, different cycle of growth and multiplication in the mosquito.

After 10-18 days, the parasites are found (as “sporozoites”) in the mosquito’s salivary glands. When the Anopheles mosquito takes a blood meal on another human, the sporozoites are injected with the mosquito’s saliva and start another human infection when they parasitize the liver cells.

Malaria. Wikipedia

A Plasmodium from the saliva of a female mosquito moving across a mosquito cell

Thus the mosquito carries the disease from one human to another (acting as a “vector”). Unlike the human host, the mosquito vector does not suffer from the presence of the parasites.

All the clinical symptoms associated with malaria are caused by the asexual erythrocytic or blood stage parasites. As the parasite develops in the erythrocyte, numerous known and unknown waste substances, such as hemozoin pigment and other toxic factors, accumulate in the infected red blood cell. These are dumped into the bloodstream when the infected cells lyse and release invasive merozoites. The hemozoin and other toxic factors such as glycosylphosphatidylinositol (GPI) stimulate macrophages and other cells to produce cytokines and other soluble factors, which act to produce the fever and rigors associated with malaria.

Ookinete,_sporozoite,_merozoite

Plasmodium falciparum-infected erythrocytes, particularly those with mature trophozoites, adhere to the vascular endothelium of venular blood vessel walls. When they become sequestered in the vessels of the brain, this is a factor in causing the severe disease syndrome known as cerebral malaria, which is associated with high mortality.

Following the infective bite by the Anopheles mosquito, a period of time (the “incubation period”) goes by before the first symptoms appear. The incubation period in most cases varies from 7 to 30 days. The shorter periods are observed most frequently with P. falciparum and the longer ones with P. malariae.

malaria_lifecycle.

Antimalarial drugs taken for prophylaxis by travelers can delay the appearance of malaria symptoms by weeks or months, long after the traveler has left the malaria-endemic area. (This can happen particularly with P. vivax and P. ovale, both of which can produce dormant liver stage parasites; the liver stages may reactivate and cause disease months after the infective mosquito bite.)

The Influenza Pandemic of 1918

The Nation’s Health

If you had lived in the early twentieth century, your life expectancy would
have been much shorter than it is today. Today, life expectancy for men is 75 years;
for women, it is 80 years. In 1918, life expectancy for men was only 53 years.

Women’s life expectancy at 54 was only marginally better.

Why was life expectancy so much shorter?

During the early twentieth century, communicable diseases—that is, diseases
which can spread from person to person—were widespread. Influenza and
pneumonia, along with tuberculosis and gastrointestinal infections such
as diarrhea, killed Americans at an alarming rate, but
non-communicable diseases such as cancer and heart disease also
exacted a heavy toll. Accidents, especially in the nation’s unregulated factories
and workshops, were also responsible for maiming and killing many workers.

High infant mortality further shortened life expectancy. In 1918, one in
five American children did not live beyond their fifth birthday. In some
cities, the situation was even worse, with thirty percent of all infants dying
before their first birthday. Childhood diseases such as diphtheria, measles,
scarlet fever and whooping cough contributed significantly to these high
death rates.

osler_at_a_bedside

By 1900, an increasing number of physicians were receiving clinical
training. This training provided doctors with new insights into disease
and specific types of diseases. [Credit: National Library of Medicine]

scarlet_fever

Quarantine signs such as this one warned visitors away from homes
with scarlet fever and other infectious diseases. [Credit: National
Library of Medicine]

Rat Proofing

Cities often sponsored Clean-Up Days. Here, Public Health Service
employees clean up San Francisco’s streets in a campaign to
eradicate bubonic plague. [Credit: Office of the Public Health
Service Historian]

cleanup days

nurse_helps_with_baby_formula

A public health nurse teaches a young mother how to sterilize
a bottle. [Credit: National Library of Medicine]

Seeking Medical Care

Feeling Sick in 1918?

If you became sick in nineteenth-century America, you might consult
a doctor, a druggist, a midwife, a folk healer, a nurse or even
your neighbor. Most of these practitioners would visit you in your home.

By 1918, these attitudes toward health care were beginning to
change. Some physicians had begun to set up offices where patients
could receive medical care and hospitals, which emphasized sterilization
and isolation, were also becoming popular.

However, these changes were not yet universal and many Americans
still lived their entire lives without visiting a doctor.

How Did Ordinary People View Disease?

Folk Medicine:

In 1918, folk healers could be found all over America. Some of these
healers believed that diseases had a physical cause, such as cold
weather, but others believed they had a supernatural cause, such as a curse.

Treatments advocated by these healers ran the gamut. Herbal remedies
were especially popular. Other popular remedies included cupping,
which entailed attaching a heated cup to the surface of the skin,
and acupuncture. Many people also wore magical objects which they
believed protected the wearer from illness.

During the influenza pandemic of 1918 when scientific medicine
failed to provide Americans with a cure or preventative, many people
turned to folk remedies and treatments.

Scientific Medicine

In the 1880s, building on developments which had been in the
making since the 1830s, a growing number of scientists and
physicians came to believe that disease was spread by
minute pathogenic organisms or germs.

Often called the bacteriological revolution, this new theory
radically transformed the practice of medicine. But while this was a
major step forward in understanding disease, doctors and scientists
continued to have only a rudimentary understanding of the differences
between different types of microbes. Many practicing physicians
did not understand the differences between bacteria and viruses
and this sharply limited their ability to understand disease
causation and disease prevention.

Drugs and Druggists:

Although the early twentieth century witnessed growing attempts
to regulate the practice of medicine, many druggists assumed
duties we associate today with physicians. Some druggists, for
example, diagnosed and prescribed treatments which they
then sold to the patient. Some of these treatments included opiates;
few actually cured diseases.

Desperate times called for desperate remedies and during the
influenza pandemic, many patients turned to these and other drugs
in the hopes that they would provide a cure.

Nurses:

Between 1890 and 1920, nursing schools multiplied and trained
nurses began to replace practical nurses. Isolation practices,
sterility, and strict routines, all associated with professionally
trained nurses, increasingly became standard during this period. In 1918, nurses served as the physician’s hand, assisting doctors as
they made the rounds. During the pandemic, many nurses acted
independently of doctors, treating and prescribing for patients.

Physicians:

Throughout the eighteenth and much of the nineteenth centuries,
pretty much anyone had the right to call themselves a physician. By the
late nineteenth century, growing calls for reform had begun to
transform the profession.

In 1900, every state in the Union had some type of medical registration
law with about half of all states requiring physicians to possess a
medical diploma and pass an exam before they received a license
to practice. However, grandfather clauses which exempted many older
physicians meant that many physicians who practiced in 1918
had been poorly trained.

quack_doctor

Poor training and loose regulations meant that some doctors were
little more than quacks. [Credit: National Library of Medicine]

drug_ad

Drug advertisers routinely promised quick and painless cures.
[Credit: National Library of Medicine]

While access to the profession was tightening, women and minorities,
including African-Americans, entered the profession in growing
numbers during the early twentieth century.

What Did Doctors Really Know?

Growing understanding of bacteriology enabled early twentieth-
century physicians to diagnose diseases more effectively than their
predecessors but diagnosis continued to be difficult. Influenza was
especially tricky to diagnose and many physicians may have incorrectly
diagnosed their patients, especially in the early stages of the pandemic.

Bacteriology did not revolutionize the treatment of disease. In the
pre-antibiotic era of 1918, physicians continued to rely heavily
on traditional therapeutics. During the pandemic, many physicians
used traditional treatments such as sweating which had their
roots in humoral medicine.

Reflecting the uneven structure of medical education, the level and
quality of care which physicians provided varied wildly.

The Public Health Service

Founded in 1798, the Marine Hospital Service originally provided
health care for sick and disabled seamen. By the late nineteenth
century, the growth of trade, travel and immigration networks
had led the Service to expand its mission to include protecting
the health of all Americans.

In a nation where federal and state authorities had consistently
battled for supremacy, the powers of the Public Health Service
were limited. Viewed with suspicion by many state and local
authorities, PHS officers often found themselves fighting state
and local authorities as well as epidemics—even when they had
been called in by these authorities.

chelsea marine hospital in 1918

A network of hospitals in the nation’s ports provided seamen with
access to healthcare. [Credit: Office of the Public Health Service Historian]

In 1918, there were fewer than 700 commissioned officers in the PHS.
Charged with the daunting task of protecting the health of some
106 million Americans, PHS officers were stationed not only in
the United States but also abroad.

Because few diseases could be cured, the prevention of disease
was central to the PHS mission. Under the leadership of Surgeon
General Rupert Blue, the PHS advocated the use of scientific
research, domestic and foreign quarantine, marine hospitals
and statistics to accomplish this mission. When an epidemic emerged,
the Public Health Service’s epidemiologists tracked the disease,
house by house. The 1918 influenza pandemic occurred too
rapidly for the PHS to develop a detailed study of the pandemic.

typhoid_map

This map was used to trace a smaller typhoid epidemic which erupted in
Washington, DC in 1906. [Credit: Office of the Public Health Service Historian]

The spread of disease within the US was a serious concern. However,
PHS officers were most concerned about the importation of disease into
the United States. To prevent this, ships could be, and often were,
quarantined by the PHS.

fever-quaranteen-station-1880

Travelers and immigrants to the United States were also required
to undergo a medical exam when entering the country. In 1918 alone,
700,000 immigrants underwent a medical exam at the hands of PHS
officers. Within the United States, PHS officers worked directly with
state and local departments of health to track, prevent and arrest
epidemics as they emerged. During 1918, PHS officers found themselves
battling not only influenza but also polio, typhus, typhoid, smallpox
and a range of other diseases. In 1918, the PHS operated research
laboratories stretching from Hamilton, Montana to Washington DC.
Scientific researchers at these laboratories ultimately discovered
both the causes and cures of diseases ranging from Rocky Mountain
Spotted Fever to pellagra.

Sewers and Sanitation:

In the nineteenth century, most physicians and public health experts
believed that disease was caused not by microorganisms but rather by dirt itself.

Sanitarians, as these people were called, argued that cleaning dirt-
infested cities and building better sewage systems would both prevent
and end many epidemics. At their urging, cities and towns across the United
States built better sewage systems and provided citizens with access to
clean water. By 1918, these improved water and sewage systems had greatly
contributed to a decline in gastrointestinal infections and a significant
reduction in mortality rates among infants, children and young adults.

But because diseases are caused by microorganisms, not dirt, these
tactics were not completely effective in ending all epidemics.

Sanitation: Controlling problems at source

Box 1: Sharing toilets in Uganda

A recent survey by the Ministry of Health in Uganda suggested that there is only one toilet for every 700 Ugandan pupils, compared to one for every 328 pupils in 1995. Of the 8,000 schools surveyed, only 33% had separate latrines for girls. The deterioration in sanitary conditions was attributed to increased enrolment in schools. UNICEF surveyed 90 primary schools in crisis-affected districts of north and west Uganda: only 2% had adequate latrine facilities (IRIN, 1999).

Box 2: Sanitation and diarrhoeal disease

Gwatkin and Guillot (1999) have claimed that diarrhoea accounts for 11% of all deaths in the poorest 20% of all countries. This toll could be reduced by key measures: better sanitation to reduce the cause of water-linked diarrhoea, and more widespread use of oral rehydration therapy (ORT) to treat its effects. Improving water supplies, sanitation facilities and hygiene practices reduces diarrhoea incidence by 26%. Even more impressive, deaths due to diarrhoea are reduced by 65% with these same improvements (Esrey et al., 1991). Of the 2.2 million people that die from diarrhoea each year, many deaths are caused by one bacterium, Shigella. Simple hand washing with soap and water reduces Shigella and other diarrhoea transmission by 35% (Kotloff et al., 1999; Khan, 1982). ORT is effective in reducing deaths due to diarrhoea but does not prevent it.
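
Taking the cited figures at face value, the implied scale of preventable deaths can be sketched. This is a rough illustration only: it applies the 65% reduction from Esrey et al. (1991), as quoted above, naively to the worldwide total.

```python
# Figures as cited in Box 2 above.
annual_diarrhoea_deaths = 2_200_000  # deaths per year worldwide
death_reduction = 0.65               # with improved water, sanitation, hygiene

deaths_averted = annual_diarrhoea_deaths * death_reduction
print(f"Deaths potentially averted per year: {deaths_averted:,.0f}")
# roughly 1,430,000 per year
```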

http://www.who.int/water_sanitation_health/sanitproblems/en/index1.html

Garbage-A-polluted-creek

Influenza Strikes

Throughout history, influenza viruses have mutated and caused
pandemics or global epidemics. In 1890, an especially virulent influenza
pandemic struck, killing many Americans. Those who survived that
pandemic and lived to experience the 1918 pandemic tended to be
less susceptible to the disease.

Influenza ward

When it came to treating influenza patients, doctors, nurses and
druggists were at a loss. [Credit: Office of the Public Health Service Historian]

The influenza pandemic of 1918-1919 killed more people than the
Great War, known today as World War I (WWI), at somewhere
between 20 and 40 million people. It has been cited as the most
devastating epidemic in recorded world history. More people died of
influenza in a single year than in the four years of the Black Death
(bubonic plague) of 1347 to 1351. Known as “Spanish Flu” or “La Grippe,”
the influenza of 1918-1919 was a global disaster.

Grim Reaper

The Grim Reaper by Louis Raemaekers

In the fall of 1918 the Great War in Europe was winding down and
peace was on the horizon. The Americans had joined in the fight,
bringing the Allies closer to victory against the Germans. Deep within
the trenches these men lived through some of the most brutal conditions
of life, which it seemed could not be any worse. Then, in pockets
across the globe, something erupted that seemed as benign as the
common cold. The influenza of that season, however, was far more
than a cold. In the two years that this scourge ravaged the earth,
a fifth of the world’s population was infected. The flu was most deadly
for people ages 20 to 40. This pattern of morbidity was unusual for
influenza, which is usually a killer of the elderly and young children.
It infected 28% of all Americans (Tice). An estimated 675,000
Americans died of influenza during the pandemic, ten times as
many as in the world war. Of the U.S. soldiers who died in Europe,
half of them fell to the influenza virus and not to the enemy (Deseret
News). An estimated 43,000 servicemen mobilized for WWI died
of influenza (Crosby). 1918 would go down as unforgettable year
of suffering and death and yet of peace. As noted in the Journal
of the American Medical Association final edition of 1918:   “The 1918
has gone: a year momentous as the termination of the most cruel war
in the annals of the human race; a year which marked, the end at
least for a time, of man’s destruction of man; unfortunately a year in
which developed a most fatal infectious disease causing the death
of hundreds of thousands of human beings. Medical science for
four and one-half years devoted itself to putting men on the firing
line and keeping them there. Now it must turn with its whole might to
combating the greatest enemy of all–infectious disease,” (12/28/1918).

From Kansas to Europe and Back Again:

scourge ravaged the earth

Where did the 1918 influenza come from? And why was it so lethal?

In 1918, the Public Health Service had just begun to require state
and local health departments to provide them with reports about
diseases in their communities. The problem? Influenza wasn’t
a reportable disease.

But in early March of 1918, officials in Haskell County in Kansas
sent a worrisome report to the Public Health Service. Although
these officials knew that influenza was not a reportable disease,
they wanted the federal government to know that “18 cases
of influenza of a severe type” had been reported there.

By May, reports of severe influenza trickled in from Europe. Young
soldiers, men in the prime of life, were becoming ill in large
numbers. Most of these men recovered quickly but some developed
a secondary pneumonia of “a most virulent and deadly type.”

Within two months, influenza had spread from the military to the
civilian population in Europe. From there, the disease spread outward—to Asia, Africa, South America and, back again, to North America.

Wave After Wave:

In late August, the influenza virus probably mutated again and
epidemics now erupted in three port cities: Freetown, Sierra
Leone; Brest, France, and Boston, Massachusetts. In Boston,
dockworkers at Commonwealth Pier reported sick in massive
numbers during the last week in August. Suffering from fevers
as high as 105 degrees, these workers had severe muscle and
joint pains. For most of these men, recovery quickly followed. But
5 to 10% of these patients developed severe and massive
pneumonia. Death often followed.

Public health experts had little time to register their shock at the
severity of this outbreak. Within days, the disease had spread
outward to the city of Boston itself. By mid-September, the epidemic
had spread even further with states as far away as California, North
Dakota, Florida and Texas reporting severe epidemics.

The Unfolding of the Pandemic:

The pandemic of 1918-1919 occurred in three waves. The first
wave had occurred when mild influenza erupted in the late
spring and summer of 1918. The second wave occurred with an
outbreak of severe influenza in the fall of 1918 and the final wave
occurred in the spring of 1919.

In its wake, the pandemic would leave about twenty million dead
across the world. In America alone, about 675,000 people in
a population of 105 million would die from the disease.

Mobilizing to Fight Influenza:

Although taken unaware by the pandemic, federal, state and local
authorities quickly mobilized to fight the disease.

On September 27th, influenza became a reportable disease. However,
influenza had become so widespread by that time that most states
were unable to keep accurate records. Many simply failed to
report to the Public Health Service during the pandemic, leaving
epidemiologists to guess at the impact the disease may have
had in different areas.

World War I had left many communities with a shortage of trained
medical personnel. As influenza spread, local officials urgently
requested the Public Health Service to send nurses and doctors.
With fewer than 700 officers on duty, the Public Health Service was
unable to meet most of these requests. On the rare occasions when
the PHS was able to send physicians and nurses, they often became
ill en route. Those who did reach their destination safely often found
themselves both unprepared and unable to provide real assistance.

In October, Congress appropriated a million dollars for the Public
Health Service. The money enabled the PHS to recruit and pay
for additional doctors and nurses. The existing shortage of doctors
and nurses, caused by the war, made it difficult for the PHS to locate and hire qualified practitioners. The virulence of the disease also meant that many nurses and doctors contracted influenza
within days of being hired.

Confronted with a shortage of hospital beds, many local officials
ordered that community centers and local schools be transformed
into emergency hospitals. In some areas, the lack of doctors meant
that nursing and medical students were drafted to staff these
makeshift hospitals.

The Pandemic Hits:

Entire families became ill. In Philadelphia, a city especially hard hit,
so many children were orphaned that the Bureau of Child Hygiene
found itself overwhelmed and unable to care for them.

As the disease spread, schools and businesses emptied. Telegraph
and telephone services collapsed as operators took to their
beds. Garbage went uncollected as garbage men reported sick.
The mail piled up as postal carriers failed to come to work.

State and local departments of health also suffered from high
absentee rates. No one was left to record the pandemic’s spread
and the Public Health Service’s requests for information went
unanswered.

As the bodies accumulated, funeral parlors ran out of caskets
and bodies went uncollected in morgues.

Protecting Yourself From Influenza:

In the absence of a sure cure, fighting influenza seemed an
impossible task.

In many communities, quarantines were imposed to prevent
the spread of the disease. Schools, theaters, saloons, pool
halls and even churches were all closed. As the bodies
mounted, even funerals were held outdoors to protect mourners
against the spread of the disease.

An Emergency Hospital for Influenza Patients

The effect of the influenza epidemic was so severe that the
average life span in the US was depressed by 10 years.
The influenza virus had a profound virulence, with a mortality
rate of 2.5%, compared with less than 0.1% in previous influenza
epidemics. The death rate from influenza and pneumonia among
15- to 34-year-olds was 20 times higher in 1918 than in
previous years (Taubenberger). People were struck
with illness on the street and died rapid deaths.
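
The US figures quoted in this article can be cross-checked against one another. A rough consistency sketch (treating the cited 28% attack rate and 2.5% mortality rate as exact, which they are not):

```python
us_population = 105_000_000  # US population in 1918, as cited in this article
attack_rate = 0.28           # fraction of Americans infected (Tice)
case_fatality = 0.025        # mortality rate cited above

infected = us_population * attack_rate     # about 29.4 million infected
implied_deaths = infected * case_fatality  # about 735,000 deaths

# Same order of magnitude as the article's figure of ~675,000 US deaths.
print(f"Implied US deaths: {implied_deaths:,.0f}")
```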

One anecdote shared from 1918 was of four women playing bridge
together late into the night. Overnight, three of the women died
from influenza (Hoagg). Others told stories of people on their way
to work suddenly developing the flu and dying within hours
(Henig). One physician writes that patients with seemingly
ordinary influenza would rapidly “develop the most vicious
type of pneumonia that has ever been seen” and later when
cyanosis appeared in the patients, “it is simply a struggle for air
until they suffocate,” (Grist, 1979). Another physician recalls
that the influenza patients “died struggling to clear their airways
of a blood-tinged froth that sometimes gushed from their nose
and mouth,” (Starr, 1976). The physicians of the time were
helpless against this powerful agent of influenza. In 1918 children
would skip rope to the rhyme (Crawford):

I had a little bird,

Its name was Enza.

I opened the window,

And in-flu-enza.

schools inspected -

The influenza pandemic circled the globe. Most of humanity felt the
effects of this strain of the influenza virus. It spread following
the path of its human carriers, along trade routes and shipping lines.
Outbreaks swept through North America, Europe, Asia, Africa, Brazil
and the South Pacific (Taubenberger). In India the mortality rate was
extremely high at around 50 deaths from influenza per 1,000
people (Brown). The Great War, with its mass movements of men
in armies and aboard ships, probably aided in its rapid diffusion
and attack. The origins of the deadly flu disease were unknown but
widely speculated upon. Some of the Allies thought of the epidemic as a
biological warfare tool of the Germans. Many thought it was a result of
the trench warfare, the use of mustard gases and the generated “smoke
and fumes” of the war. A national campaign began using the ready
rhetoric of war to fight the new enemy of microscopic proportions. A
study attempted to reason why the disease had been so devastating
in certain localized regions, looking at the climate, the weather and
the racial composition of cities. They found humidity to be linked with
more severe epidemics as it “fosters the dissemination of the bacteria,”
(Committee on Atmosphere and Man, 1923). Meanwhile the new
sciences of infectious agents and immunology were
racing to come up with a vaccine or therapy to stop the epidemics.

First-hand experiences of people in military camps during the
influenza pandemic:

An excerpt from the memoirs of a survivor of the pandemic at Camp Funston

A letter to a fellow physician describing conditions during the
influenza epidemic at Camp Devens

A collection of letters of a soldier stationed at Camp Funston

The origins of this influenza variant are not precisely known. It is thought
to have originated in China in a rare genetic shift of the influenza virus.
The recombination of its surface proteins created a virus novel to
almost everyone and a loss of herd immunity. Recently the virus
has been reconstructed from the tissue of a dead soldier and is
now being genetically characterized.

The name of Spanish Flu came from the early affliction and large
mortalities in Spain (BMJ, 10/19/1918) where it allegedly killed 8
million in May (BMJ, 7/13/1918). However, a first wave of influenza
appeared early in the spring of 1918 in Kansas and in military
camps throughout the US. Few noticed the epidemic in the midst of
the war. Wilson had just given his Fourteen Points address. There was
virtually no response or acknowledgment to the epidemics in March
and April in the military camps. It was unfortunate that no steps were
taken to prepare for the usual recrudescence of the virulent influenza
strain in the winter. The lack of action was later criticized when the
epidemic could not be ignored in the winter of 1918 (BMJ, 1918).
These first epidemics at training camps were a sign of what was
coming in greater magnitude in the fall and winter of 1918 to the
entire world.

The war brought the virus back into the US for the second wave
of the epidemic. It first arrived in Boston in September of 1918
through the port busy with war shipments of machinery and supplies.
The war also enabled the virus to spread and diffuse. Men across
the nation were mobilizing to join the military and the cause. As they
came together, they brought the virus with them and to those they
contacted. The virus killed almost 200,000 in October of 1918
alone. On November 11, 1918, the end of the war enabled a resurgence.
As people celebrated Armistice Day with parades and large parties, a
complete disaster from the public health standpoint, a rebirth of
the epidemic occurred in some cities. The flu that winter was beyond
imagination as millions were infected and thousands died. Just as
the war had affected the course of influenza, influenza affected
the war. Entire fleets were ill with the disease and men on the front
were too sick to fight. The flu was devastating to both sides, killing
more men than their own weapons could.

With the military patients coming home from the war with battle wounds
and mustard gas burns, hospital facilities and staff were taxed
to the limit. This created a shortage of physicians, especially in the
civilian sector as many had been lost for service with the military.
Since the medical practitioners were away with the troops, only
the medical students were left to care for the sick. Third and fourth
year classes were closed and the students assigned jobs as
interns or nurses (Starr,1976). One article noted that “depletion has
been carried to such an extent that the practitioners are brought
very near the breaking point,” (BMJ, 11/2/1918). The shortage was
further confounded by the added loss of physicians to the epidemic.
In the U.S., the Red Cross had to recruit more volunteers to contribute
to the new cause at home of fighting the influenza epidemic. To respond
with the fullest utilization of nurses, volunteers and medical supplies, the
Red Cross created a National Committee on Influenza. It was involved
in both military and civilian sectors to mobilize all forces to fight Spanish
influenza (Crosby, 1989). In some areas of the US, the nursing shortage
was so acute that the Red Cross had to ask local businesses to
allow workers to have the day off if they volunteered in the hospitals
at night (Deseret News). Emergency hospitals were created to
take in the patients from the US and those arriving sick from overseas.

[Image: Chelsea Marine Hospital in 1918]

[Image: Red Cross public health nurse]

The pandemic affected everyone. With one-quarter of the US and
one-fifth of the world infected with influenza, it was impossible
to escape from the illness. Even President Woodrow Wilson suffered
from the flu in early 1919 while negotiating the crucial treaty of
Versailles to end the World War (Tice). Those who were
lucky enough to avoid infection had to deal with the public health
ordinances to restrain the spread of the disease.

The public health departments distributed gauze masks to be worn
in public. Stores could not hold sales, funerals were limited
to 15 minutes. Some towns required a signed certificate to
enter and railroads would not accept passengers without
them. Those who ignored the flu ordinances had to pay steep
fines enforced by extra officers (Deseret News). Bodies piled up
as the massive deaths of the epidemic ensued. Besides the
lack of health care workers and medical supplies, there was a shortage
of coffins, morticians and gravediggers (Knox). The conditions in 1918
were not so far removed from the Black Death in the era of the
bubonic plague of the Middle Ages.

[Image: iowa_flu]

In 1918-19 this deadly influenza pandemic erupted during the final
stages of World War I. Nations were already attempting to deal with
the  effects and costs of the war. Propaganda campaigns and war
restrictions and rations had been implemented by governments.
Nationalism pervaded as people accepted government authority.
This allowed the public health departments to easily step in and
implement their restrictive measures. The war also gave science
greater importance as governments relied on scientists, now armed
with the new germ theory and the development of antiseptic surgery,
to design vaccines and reduce mortalities of disease and battle
wounds. Their new technologies could preserve the men on
the front and ultimately save the world. These conditions
created by World War I, together with the current social attitudes
and ideas, led to the relatively calm response of the public and
application of scientific ideas. People allowed for strict measures
and loss of freedom during the war as they submitted to the
needs of the nation ahead of their personal needs. They had
accepted the limitations placed with rationing and drafting.
The responses of the public health officials reflected the new
allegiance to science and the wartime society. The medical
and scientific communities had developed new theories and
applied them to prevention, diagnostics and treatment of the
influenza patients.

The Medical and Scientific Conceptions of Influenza

Scientific ideas about influenza, the disease and its origins,
shaped the public health and medical responses. In 1918
infectious diseases were beginning to be unraveled. Pasteur
and Koch had solidified the germ theory of disease through
clear experiments and clever science. The bacillus responsible
for many infections such as tuberculosis and anthrax had
been visualized, isolated and identified. Koch’s postulates
had been developed to clearly link a disease to a specific
microbial agent.

[Image: Robert Koch]

The petri dish was widely used to grow sterile cultures of bacteria
and investigate bacterial flora. Vaccines had been created for
bacterial infections and even the unseen rabies virus by
serial passage techniques. The immune system was explained by
Paul Ehrlich and his side-chain theory. Tests of antibodies such as
the Wassermann reaction and coagulation experiments were becoming commonplace.
Science and medicine were on their way to their complete entanglement
and fusion as scientific principles and methodologies made their way
into clinical practice, diagnostics and therapy.

The Clinical Descriptions of Influenza

Patients with the influenza disease of the epidemic were generally
characterized by common complaints associated with the flu. They had
body aches, muscle and joint pain, headache, a sore throat and an
unproductive cough with occasional harsh breathing (JAMA, 1/25/1919).

The most common sign of infection was the fever, which ranged from
100 to 104 F and lasted for a few days. The onset of the epidemic influenza
was peculiarly sudden, as people were struck down with dizziness, weakness
and pain while on duty or in the street (BMJ, 7/13/1918). After the
disease was established the mucous membranes became reddened
with sneezing. In some cases there was a hemorrhage of the
mucous membranes of the nose and bloody noses were commonly
seen. Vomiting occurred on occasion, and also sometimes diarrhea
but more commonly there was constipation (JAMA, 10/3/1918).

The danger of an influenza infection was its tendency to progress into
the often fatal secondary bacterial infection of pneumonia. In the
patients that did not rapidly recover after three or four days of fever, there
was an “irregular pyrexia” due to bronchitis or bronchopneumonia (BMJ,
7/13/1918). The pneumonia would often appear after a period of
normal temperature with a sharp spike and expectorant of bright
red blood. The lobes of the lung became speckled with “pneumonic
consolidations.” The fatal cases developed toxemia and vasomotor
depression (JAMA, 10/3/1918). It was this tendency for secondary
complications that made this influenza infection so deadly.

[Image: pneumonia]

[Image: A military hospital ward in 1918]

In the medical literature characterizing the influenza disease, new
diagnostic techniques are frequently used to describe the clinical
appearance. The most basic clinical guideline was the temperature,
a record of which was kept in a table over time. Also closely
monitored was the pulse rate. One clinical account said that
“the pulse was remarkably slow,” (JAMA, 4/12/1919) while others
noted that the pulse rate did not increase as expected. With the
pulse, the respiration rate was measured and reported to provide
clues of the clinical progression.
Patients were also occasionally “roentgenographed” or chest x-rayed,
(JAMA, 1/25/1919). The discussion of clinical influenza also often
included analysis of the blood. The number of white blood cells were
counted for many patients. Leukopenia was commonly associated
with influenza. The albumin was also measured, since it was noted that
transient albuminuria was frequent in influenza patients. This was
done by urine analysis. The Wassermann reaction was another
added new test of the blood for antibodies (JAMA, 10/3/1918).
These new measurements enabled physicians to have an
image of action and knowledge using scientific instruments. They
could record precisely the progress of the influenza infection and perhaps
were able to forecast its outcome.

The most novel of these tests were the blood and sputum cultures.
Building on the germ theory of disease, the physicians and their
associated research scientists attempted to find the culprit for this
deadly infection. Physicians would commonly order both blood and sputum
cultures of their influenza and pneumonia patients mostly for research
and investigative purposes. At the military training camp
Camp Lewis during an influenza epidemic, “in all cases of pneumonia,
a sputum study, white blood and differential count, blood culture
and urine examinations were made as routine,” (JAMA, 1/25/1919).

The bacterial flora of the nasopharynx of some patients was also cultured,
since droplet infection was how the disease disseminated. The
collected swabs and specimens were inoculated onto blood agar in
petri dishes. The resulting bacterial colonies were closely studied to
find the causal organism. Commonly found were pneumococcus,
streptococcus, staphylococcus and Bacillus influenzae (JAMA, 4/12/1919).

[Image: pneumonia]

These new laboratory tests used in the clinical setting brought in a solid
scientific, biological link to the practice of medicine. Medicine had
become fully scientific and technologic in its understanding and
characterization of the influenza epidemic.

Treatment and Therapy

The therapeutic remedies for influenza patients varied from the
newly developed drugs to oils and herbs. The therapy was much less
scientific than the diagnostics, as the drugs had no clear explanatory
theory of action. The treatment was largely symptomatic, aiming to
reduce fever or pain. Aspirin, or acetylsalicylic acid was a common remedy.
For secondary pneumonia doses of epinephrine were given. To
combat the cyanosis physicians gave oxygen by mask or some
injected it under the skin (JAMA, 10/3/1918). Others used salicin which
reduced pain, discomfort and fever and claimed to reduce the infectivity
of the patient. Another popular remedy was cinnamon in powder or oil form
with milk to reduce temperature (BMJ, 10/19/1918). Finally, salt of quinine
was suggested as a treatment. Most physicians agreed that the patient should
be kept in bed (BMJ, 7/13/1918). With that came the advice of plenty of
fluids and nourishment. The application of cold to the head, with
warm packs or warm drinks was also advised. Warm baths were used
as a hydrotherapeutic method in hospitals but were discarded for
lack of success (JAMA, 10/3/1918). These treatments, like the
suggested prophylactic measures of the public health officials, seemed to
originate in the common social practices and not in the growing field of
scientific medicine. It seems that as science was entering the medical
field, it served only for explanatory, diagnostic and preventative
measures such as vaccines and technical tests. This science had
little use once a person was ill.

However, a few proposed treatments did incorporate scientific ideas
of germ theory and the immune system. O’Malley and Hartman
suggested treating influenza patients with the serum of convalescent
patients. They utilized the theorized antibodies to boost the immune
system of sick patients. Other treatments were “digitalis,” the
administration of isotonic glucose and sodium bicarbonate intravenously
which was done in military camps (JAMA, 1/4/1919). Ross and
Hund also utilized ideas about the immune system and properties of the
blood to neutralize toxins and circulate white blood cells. They believed
that the best treatment for influenza should aim to: “…neutralize or render
the intoxicant inert…and prevent the blood destruction with its destructive
leukopenia and lessened coagulability,” (JAMA, 3/1/1919). They tried
to create a therapeutic immune serum to fight infection. These therapies
built on current scientific ideas and represented the highest
biomedical, technological treatment like the antitoxin to diphtheria.

[Image: influenza]

In July, an American soldier said that while influenza caused a heavy
fever, it “usually only confines the patient to bed for a few days.” The
mutation of the virus changed all that. [Credit: National Library of Medicine]

[Image: recovering_from_influenza]

An old cliché maintained that influenza was a wonderful disease as
it killed no one but provided doctors with lots of patients. The 1918
pandemic turned this saying on its head. [Credit: The Etiology of
Influenza in 1918]

During the 1890 influenza epidemic, Pfeiffer found what he
determined to be the microbial agent to cause influenza.
In the sputum and respiratory tract of influenza patients in 1892,
he isolated the bacterium Bacillus influenzae, which was
accepted as the true “virus” though it was not found in localized
outbreaks (BMJ, 11/2/1918). However, in studies of the 1907-8
epidemic in the US, Lord had found the bacillus in only 3 of 20 cases.
He also found the bacillus in 30% of cultures of sputum from TB patients.
Rosenthal further refuted the finding when he found the bacillus in 1 of 6
healthy people in 1900 (JAMA, 1/18/1919). The bacillus was also
found to be present in all cases of whooping cough and many cases
of measles, chronic bronchitis and scarlet fever (JAMA, 10/5/1918).
The influenza pandemic provided scientists the opportunity to confirm
or refute this contested microbe as the cause of influenza. The sputum
studies from the Camp Lewis epidemic found only a few influenza cases
harboring the influenza bacilli and mostly type IV pneumococcus. They
concluded that “the recent epidemic at Camp Lewis was an acute
respiratory infection and not an epidemic due to Bacillus influenzae,”
(JAMA, 1/25/1919). This finding along with others suggested to most
scientists that Pfeiffer’s bacillus was not the cause of influenza.

In the 1918-19 influenza pandemic, there was a great drive to find the
etiological agent responsible for the deadly scourge. Scientists in their
labs were working hard, using the cultures obtained from physician clinics,
to isolate the etiological agent for influenza. As a report early in the
epidemic said, “the ‘influence’ of influenza is still veiled in mystery, ”
(JAMA, 10/5/1918). The nominated Bacillus influenzae
seemed to be incorrect and scientists scrambled to isolate the true cause.
In the journals, many authors speculated on the type of agent: was
it a new microbe, a bacterium, or a virus? One journal offered
that “the severity of the present pandemic, the suddenness of onset…
led to the suggestion that the disease cannot be influenza but some other
and more lethal infection,” (BMJ, 11/2/1918). However, most accepted that
the epidemic disease was influenza based on the familiar symptoms
and known pattern of disease. The respiratory disease of influenza was
understood to give warning in the late spring of its potential effects
upon its recrudescence once the weather turned cold in the winter
(BMJ, 10/19/1918). One article with foresight stated that “there can
be no question that the virus of influenza is a living organism…

[Image: flu virus EM]

it is possibly beyond the range of microscopic vision,” (BMJ, 11/16/1918). Another
article confirmed the idea of an “undiscovered virus” and noted that pneumococci
and streptococci were responsible for “the gravity of the secondary pulmonary
complications,” (BMJ, 11/2/1918). The article went on to offer the idea of a
symbiosis of virus and secondary bacterial infection combining to make it
such a severe disease.

The investigators as they attempted to find the responsible agent for the influenza
pandemic were developing ideas of infectious microbes and the concept of the
virus. The idea of the virus as an infectious agent had been around for years.
The articles of the period refer to the “virus” in their discussion but do not
consistently use it to be an infectious microbe, distinctive from bacteria. The
term virus has the same usage and application as bacillus. In 1918, a virus
was defined scientifically to be a submicroscopic infectious entity which could
be filtered but not grown in vitro. In the 1880s Pasteur developed an attenuated
vaccine for the rabies virus by serial passage, well ahead of his time. Ivanovsky’s
work on the tobacco mosaic virus in the 1890s led to the discovery of the virus.
He found an infectious agent that acted as a micro-organism as it multiplied
yet which passed through the sterilizing filter as a nonmicrobe. By the 1910s
several viruses, defined as filterable infectious microbes, had been identified
as causing infectious disease (Hughes). However, the scientists were still
conceptually behind in defining a virus; they distinguished it only by size
from bacteria and not as an obligate parasite with a distinct life cycle
dependent on infecting a host cell.

The influenza epidemic afforded the opportunity to research the etiological
agent and develop the idea of the virus. Experiments by Nicolle and Le Bailly in
Paris were the earliest suggestions that influenza was caused by a “filter-passing
virus,” (BMJ, 11/2/1918). They filtered out the bacteria from bronchial expectoration
of an influenza patient and injected the filtrate into the eyes and nose of two monkeys.
The monkeys developed a fever and a marked depression. The filtration was later
administered to a volunteer subcutaneously who developed typical signs of influenza.
They reasoned that the inoculated person developed influenza from the filtrate since
no one else in their quarters developed influenza (JAMA, 12/28/1918). These scientists
followed Koch’s postulates as they isolated the causal agent from patients with the
illness and used it to reproduce the same illness in animals. Through these studies,
the scientists proved that influenza was due to a submicroscopic infectious agent
and not a bacteria, refuting the claims of Pfeiffer and advancing virology. They were
on their way to discerning the virus and characterizing the
orthomyxoviruses that cause the disease of influenza.

These scientific experiments, which unraveled the cause of influenza, had immediate
preventative applications. They would assist in the effort to create an effective
vaccine to prevent influenza. This was the ultimate goal of most studies, since
vaccines were thought to be the best preventative solution in the early 20th century.
Several experiments attempted to produce vaccines, each with a different
understanding of the etiology of fatal influenza infection. A Dr. Rosenow invented
a vaccine to target the multiple bacterial agents involved from the serum of patients.
He aimed to raise immunity against the bacteria, the “common causes of death,”
and not the cause of the initial symptoms, by inoculating with the proportions found
in the lungs and sputum (JAMA, 1/4/1919). The vaccines made for the British forces
took a similar approach and were “mixed vaccines” of pneumococcus and
lethal streptococcus. The vaccine development therefore focused on the culture
results of what could be isolated from the sickest patients and lagged behind the
scientific progress.

Fading of the Pandemic:

In November, two months after the pandemic had erupted, the Public Health Service
began reporting that influenza cases were declining.

Communities slowly lifted their quarantines. Masks were discarded. Schools were
re-opened and citizens flocked to celebrate the end of World War I.

But the disease continued to be a threat throughout the spring of 1919.

By the time the pandemic had ended, in the summer of 1919, nearly 675,000
Americans were dead from influenza. Hundreds of thousands more were orphaned
and widowed.

The Legacy of the Pandemic

No one knows exactly how many people died during the 1918-1919 influenza
pandemic. During the 1920s, researchers estimated that 21.5 million people died
as a result of the 1918-1919 pandemic. More recent estimates place
global mortality from the 1918-1919 pandemic at anywhere between 30 and 50
million. An estimated 675,000 Americans were among the dead.

Twentieth-Century Influenza Pandemics or Global Epidemics:

The pandemic which occurred in 1918-1919 was not the only influenza pandemic
of the twentieth century. Influenza returned in a pandemic form in 1957-1958
and, again, in 1968-1969. These two later pandemics were much less severe than the 1918-1919 pandemic.
Estimated deaths within the United States for these two later pandemics were 70,000 excess deaths (1957-1958) and 33,000 excess deaths (1968-1969).

The Influenza Pandemic occurred in three waves in the United States throughout
1918 and 1919.

More Americans died from influenza than died in World War I. [Credit: National Library of Medicine]

All of these deaths caused a severe disruption in the economy. Claims against life
insurance policies skyrocketed, with one insurance company reporting a 745 percent
rise in the number of claims made. Small businesses, many of which had been unable to operate during the pandemic, went bankrupt.

[Image: Joseph Goldberger]

Joseph Goldberger, one of the leading researchers in the PHS, studied influenza
during the pandemic. But Goldberger had multiple interests and influenza research
became less important to him in the years following 1918. [Credit: Office of the Public
Health Service Historian]

In the summer and fall of 1919, Americans called for the government to research
both the causes and impact of the pandemic. In response, both the federal government
and private companies, such as Metropolitan Life Insurance, dedicated money
specifically for flu research.

In an attempt to determine the effect influenza had on different communities, the Public
Health Service conducted several small epidemiological studies. These studies,
however, were conducted after the pandemic and most PHS officers
admitted that the data which was collected was probably inaccurate.

PHS scientists continued to search for the causative agent of influenza in their
laboratories as did their fellow scientists in and outside the United States.

But while there was a burst of enthusiasm for funding flu research in
1918-1919, the funds allocated for this research were actually fairly meager.
As time passed, Americans became less interested in the pandemic and its
causes. And even when funding for medical research dramatically increased
after World War II, funding for research on the 1918-1919 pandemic remained
limited.

Forgetting the 1918-1919 Pandemic:

In the years following 1919, Americans seemed eager to forget the pandemic.
Given the devastating impact of the pandemic, the reasons for this forgetfulness
are puzzling.

It is possible, however, that the pandemic’s close association with World War I
may have caused this amnesia. While more people died from the pandemic than
from World War I, the war had lasted longer than the pandemic and caused
greater and more immediate changes in American society.

Influenza also hit communities quickly. Often it disappeared within a few weeks of
its arrival. As one historian put it, “the disease moved too fast, arrived, flourished
and was gone before…many people had time to fully realize just how great
was the danger.” Small wonder, then, that many Americans forgot about the
pandemic in the years which followed.

Scientific Milestones in Understanding and Preventing Influenza:

In the early stages of the pandemic, many scientists believed that the agent
responsible for influenza was Pfeiffer’s bacillus. Autopsies and research conducted
during the pandemic ultimately led many scientists to discard this theory.

In late October of 1918, some researchers began to argue that influenza was
caused by a virus. Although scientists had understood that viruses could cause
diseases for more than two decades, virology was still very much in its infancy at
this time.

It was not until 1933 that the influenza A virus, which causes almost every type
of endemic and pandemic influenza, was isolated. Seven years later, in 1940,
the influenza B virus was isolated. The influenza C virus was finally isolated in 1950.

Influenza vaccine was first introduced as a licensed product in the United States in
1944. Because of the rapid rate of mutation of the influenza virus, the
effectiveness of a given vaccine usually lasts for only a year or two.

By the 1950s, vaccine makers were able to prepare and routinely release vaccines
which could be used in the prevention or control of future pandemics. During the
1960s, increased understanding of the virus enabled scientists to develop both
more potent and purer vaccines.

Mass production of influenza vaccines continued, however, to require several
months lead time.


Tuberculosis

Mycobacterium tuberculosis was first discovered in 1882 by Robert Koch and is one of almost 200 mycobacterial species which have been detected by molecular techniques. The genus Mycobacterium (given its own family, the Mycobacteriaceae, within the phylum Actinobacteria) includes pathogens known to cause serious diseases in mammals, including tuberculosis (the M. tuberculosis complex, MTBC) and leprosy (M. leprae). Mycobacteria are grouped neither as Gram-positive nor Gram-negative bacteria. MTBC consists of M. tuberculosis, M. bovis, M. bovis BCG (bacillus Calmette-Guérin), M. africanum, M. caprae, M. microti, M. canettii and M. pinnipedii, all of which share genetic homology, with no significant variation between sequences (∼0.01 to 0.03%), although differences in phenotype are present. Cells in the genus have a typical rod or slightly curved shape, with dimensions of 0.2 to 0.6 μm by 1 to 10 μm.

Mycobacterium tuberculosis has a waxy mycolic acid lipid complex coating on its cell surface. The cells are impervious to Gram staining, so a common staining procedure used is Ziehl-Neelsen (ZN) staining. The outer compartment of the cell wall contains lipid-linked polysaccharides, is water-soluble, and interacts with the immune system. The inner wall is impermeable. Mycobacteria have some unique qualities that are divergent from members of the Gram-positive group, such as the presence of mycolic acids in the cell wall.

MTBC and M. leprae replication occurs in the tissues of warm-blooded human hosts. This air-borne pathogen is transmitted from a patient with active pulmonary tuberculosis by coughing. Droplet nuclei, approximately 1 to 5 μm in size, “meander” in the air and are transmitted to susceptible individuals by inhalation. Mycobacteria are incapable of replicating in or on inanimate objects. The risk of infection depends on the load of bacilli inhaled, the level of infectiousness of the source case, the closeness of contact and the immune competency of potential hosts. Because of the small size of the inhaled droplets, the infection penetrates the defense systems of the bronchi and enters the terminal alveoli. Invading bacteria are then engulfed by alveolar macrophages and dendritic cells.

The cell-mediated immune response limits the multiplication of M. tuberculosis and halts infection. Infected individuals with strong immune systems are generally able to combat the infection within 2 to 8 weeks post-infection, when the active cell-mediated immune response stops further multiplication of M. tuberculosis. Tuberculosis infection shows several significant clinical manifestations in pulmonary and extra-pulmonary sites. Prolonged coughing, severe weight loss, night sweats, low-grade fever, dyspnoea and chest pain are the clinical symptoms of pulmonary infection.

Fort Bayard, N.M., T.B. service assignment

[Image: Fort Bayard, NM]

[Image: Fort Bayard, NM Post Hospital circa 1890]

[Image: U.S. Army, General Hospital, Fort Bayard, New Mexico, General View]

Tuberculosis, (Pvt.) Richard Johnson said, was “regarded as a much dreaded disease that was easily contracted by association.” In fact, so many hospital corpsmen requested transfers out that the Surgeon General established a policy that no such requests would be considered until after two years of service. Consequently, Johnson noted, “During my time there we had a high percentage of desertions.” All four of the men who arrived with Johnson deserted within a year—“two of them,” he dryly observed, “owing me money.”

Four years later another young man arrived at Fort Bayard. He, too, remarked on the long journey by rail through the “desert waste of New Mexico,” and then the wagon ride over “dry desolate foothills,” to the post. But his reaction was different from Johnson’s. Capt. Earl Bruns moved from being a patient to a physician at the hospital. For Bruns Fort Bayard was “a veritable oasis in the desert, studded with shade trees, green lawns, shrubbery, and flowers.” He credited the hospital commander, Colonel (Col.) George E. Bushnell, writing that, “[i]n this one spot one man had made the desert bloom like a rose.”

Johnson’s and Bruns’ different views from 1904 and 1908, respectively, may reflect the fact that Johnson was healthy and assigned grudgingly to work at the tuberculosis hospital, whereas Bruns had few other options and came in hopes of regaining his health—or it may reflect the improvements Bushnell made during his first years in command. But every week for the more than twenty years that Fort Bayard was an Army tuberculosis hospital, workers and patients arrived with dread and foreboding, or joy and relief—or a mix of them all.

The approach Fort Bayard and George Bushnell took to tuberculosis was similar to how physicians manage the disease today in that it involved isolating the patient, treating the disease, and educating the patient and his family on how to maintain their health. The hospital offered patients sanctuary from the demands, fears, and prejudices regarding tuberculosis in the outside world. Fort Bayard treated tuberculosis patients with prolonged bed rest, fresh air, and a healthy diet, but undertaking this “rest treatment”—confining oneself to bed for months—proved difficult if not impossible for many patients. Fort Bayard involved patients’ adaptation to new lifestyles as people with tuberculosis. Finally, Fort Bayard managed patients’ transition back to the outside world.

One of the most striking aspects of Fort Bayard was that many of the medical staff had tuberculosis themselves, including George Bushnell. Tuberculosis weakened Bushnell’s lungs and shaped his life in numerous ways. He tired easily, had to monitor his health carefully, and, as Earl Bruns observed, “was never a well man.” Bushnell had active tuberculosis five times in his life: the fourth time in 1919, with a breakdown from the strain of wartime work, and the fifth and final time in 1924, the illness that led to his death at age 70. In 1911 he advised his superiors that, “I did not consider myself strong enough to carry on the work of commanding this Hospital and keeping myself in condition for active duty.” The War Department generally required officers in poor physical condition to retire, but the Surgeon General secured a waiver for Bushnell because “the interests of the service would suffer by his retirement.” After a leave of absence in 1909–10, Bushnell’s annual reports on the competency of his officers included his own name on the list of those competent for hospital duty but “unfit for active field service.”

“What would our sanatorium movement and our anti-tuberculosis crusade amount to,” wrote tuberculosis expert Adolphus Knopf, “were it not for the labors of tuberculous physicians, or one-time tuberculous physicians, who, because of their infirmity, had become interested in tuberculosis?” Well-known leaders in the antituberculosis movement such as Edward Trudeau and Lawrence Flick established their sanatoriums after they recovered from tuberculosis in order to offer others the treatment. Twenty-one of the first thirty recipients of the Trudeau Medal, established in 1926 for outstanding work in tuberculosis, had the disease. James Waring, a tuberculosis physician who arrived at a Colorado Springs sanatorium on a stretcher in 1908, later wrote, “It has been my good fortune to serve three separate and extended ‘hitches’ as a ‘bed patient,’ the time so spent numbering in all about nine years.” He, like many physicians, saw his personal experience as an asset in his practice. The three key figures in the Army tuberculosis program during World War I were Bushnell, Bruns, and Gerald Webb of Colorado Springs who started a tuberculosis sanatorium after his wife died of the disease.

Bushnell turned tuberculosis into an asset for the Army Medical Department, making Fort Bayard a center of national expertise on the disease. His personal experience with chronic pulmonary tuberculosis gave him good rapport and credibility with many of his patients. Medical officer Earl Bruns wrote that, “[H]e went among the patients and talked to them individually” and thereby provided “a living example of a cure due to rational treatment.” Bruns described how Bushnell spent his days attending to patients, carrying out administrative duties, and devoting hours to supervising the work in the gardens and grounds of Fort Bayard.

(Who’s Who in America, 1924–25. E. H. Bruns in American Review of Tuberculosis, June 1925. G. B. Webb in Outdoor Life, Sept. 1924. Lancet, Lond., 1924. Jour. Am. Med. Ass’n, 1924, p. 374.)

General George M. Sternberg

In addition to being an Army surgeon, Sternberg was also a noted bacteriologist who, in 1880, had translated Antoine Magnin’s The Bacteria, which presented the latest research in germ theory. Sternberg’s work helped prepare American understanding of Robert Koch’s pronouncement in 1882 of the existence of the tubercle bacillus (Ott 1996:55). Over the next two decades Koch’s analysis gained converts, leading to the universally accepted belief that tuberculosis represented a bacterial infection that could be diagnosed and then monitored by microscopic inspection of patients’ sputum.

Sternberg was no doubt aware of the efforts of Edward Livingston Trudeau. Beginning in the 1870s, when he undertook his own recovery from consumption by withdrawing to the Adirondack Mountains, Trudeau had become an advocate of extended bed rest in remote, healthful environments. Quickly accepting Koch’s research, Trudeau argued that those afflicted by the tubercle bacillus could best be healed when removed from cities and placed under the care of physicians who carefully monitored their weight and sputum and who prescribed constant bed rest with exposure to fresh air. Preferring the term “sanatorium,” derived from the Latin word “to heal,” to “sanitarium,” derived from the Latin term for health, Trudeau founded his Adirondack Cottage Sanatorium at Saranac, New York, in 1885. This spawned the opening of hundreds of similar institutions throughout the country (Caldwell 1988:70).

In 1899, Fort Bayard remained within the Army under the auspices of the Army Medical Department. The Army’s decision to retain the fort, even after it had outlived its military usefulness, grew from the strong interest that General George M. Sternberg, Surgeon General of the Army, had in pulmonary tuberculosis and its treatment.
Sternberg was also aware of the relatively good health that the Army’s soldiers had enjoyed while serving at the higher elevations of the American West. Members of Zebulon Pike’s expedition of 1810 and of Fremont’s exploratory parties of the 1840s had witnessed their health improve while in the Rocky Mountains.

………………………………………………………………………………………………………………………………………………..

Upon assuming command in 1904, Bushnell, who had studied botany for years, immediately began to plant flowers, shrubs, and trees. When President Theodore Roosevelt created the Gila Forest Reserve in 1905, Bushnell ensured that Fort Bayard, which adjoined the Reserve, was part of a government reforestation project. The first year alone the Forest Service gave the hospital 250 seedlings of Himalayan cedar and yellow pine. Bushnell also got approval to fence in land for pasturing dairy cattle and arranged to recultivate long-neglected garden plots. The first year he predicted that the garden would generate “about 1300 dollars worth of produce.” After the quartermaster located an underground water source, Bushnell redoubled his cultivation efforts, planting trees, flowers, and grass to mitigate the wind and dust, and “to beautify the Post.” In later years Bushnell successfully grew beans descended from those of ancient cave dwellers (Anasazi beans), and made a less successful effort to grow Giant Sequoia from California.28 By 1910 Fort Bayard had four acres of vegetable gardens, a greenhouse, an orchard of 200 fruit trees, and alfalfa and hay fields for the dairy herd of 115 Holsteins, which the Silver City Enterprise proclaimed “one of the finest in the west.” The hospital also raised all of its own beef (thereby avoiding Daniel Appel’s purchasing problems) and produced pork at small expense by feeding the pigs the waste food. The hospital laboratory raised its own Belgian hares and guinea pigs for experiments.

Bushnell oversaw years of construction at Fort Bayard. In the wake of Florence Nightingale’s writings, nineteenth-century sanitation practices stressed cleanliness and ventilation, giving rise to pavilion style hospitals, narrow one- or two-story buildings lined with windows to provide patients with ample ventilation. In March 1904, Bushnell sent the Surgeon General plans for an “open court building” in modified pavilion style (Figure 2-1).

Plan for tuberculosis patient ward, as designed by George E. Bushnell, providing fresh air porches for each patient, United States Army Tuberculosis Hospital in New Mexico.

The building consisted of a quadrangle of long, narrow dressing rooms around an open court with porches along both the exterior and interior of the building. The rooms could be used for sleeping in inclement weather and the porches allowed patients to seek sun or shade as they wished. Wide doors enabled the easy movement of beds between the rooms and the porches. “The object of this style of building is to facilitate sleeping out of doors, which is now considered so important in modern sanatoria for the treatment of tuberculosis,” Bushnell explained.

The United States escaped the cauldron of World War I until April 1917. But after years of trying to maintain neutrality, President Woodrow Wilson’s administration mobilized the nation to fight in the most deadly enterprise the world had ever seen. Modern industrialized warfare would kill millions of soldiers, sailors, and civilians and unleash disease and famine across the globe. Typhus flourished in Eastern Europe and a lethal strain of influenza exploded out of the Western Front in 1918, producing one of the worst pandemics in history. Although eclipsed by such fierce epidemics, tuberculosis also fed on the war.

Bushnell was ordered to the office of The Surgeon General on June 2, 1917, and placed in charge of the Division of Internal Medicine; on June 13 there appeared S. G. O. Circular No. 20, Examinations for pulmonary tuberculosis in the military service, establishing a standard method of examination of the lungs for tuberculosis. Through his efforts all personnel already in the service were reexamined by tuberculosis examiners, and about 24,000 were rejected on that score. He had charge of the location, construction, and administration of all Army tuberculosis hospitals, of which eight were built with a capacity of 8,000 patients.

With his relief from service in 1919 he took up his residence on a small farm at Bedford, Mass., where he prepared his Study of the Epidemiology of Tuberculosis (1920) and later Diseases of the Chest (1925) in collaboration with Dr. Joseph H. Pratt of Boston. As chief delegate of the National Tuberculosis Association he attended the first meeting of the International Union Against Tuberculosis in London in 1921. During the winter of 1922-23 he delivered a series of lectures on military medicine at Harvard University. In the summer of 1923 he moved to California and took up his residence at Pasadena.

………………………………………………………………………………………………………………………………………………..

In eighteen months the Selective Service registered twenty-five million men for the draft, examined ten million for military service, and enlisted more than four million soldiers, sailors, and Marines. To the dismay of many people, medical screening boards across the nation soon discovered that American men were not as strong and healthy as they had assumed. Of those eligible for military service, 30 percent were physically unfit; a number of them deemed ineligible to serve had tuberculosis. Therefore, in 1917 Surgeon General William Gorgas called George Bushnell to Washington, DC, to establish the Office of Tuberculosis in the Division of Internal Medicine, leaving Bushnell’s protégé, Earl Bruns, in charge of Fort Bayard. Given the Medical Department’s mission to maintain a strong and healthy fighting force, Bushnell’s new job was to minimize the incidence of tuberculosis among active-duty soldiers and avoid the high cost of disability pensions for men who incurred the disease during military service. It was a tall order.

Wartime tuberculosis had already received attention in 1916, when reports circulated that the French army had sent home 86,000 men with the disease, raising the specter that life in the trenches would generate hundreds of thousands of cases. One investigator found that tuberculosis rates in the British army were double those in peacetime, reversing the prewar downward trend. The head of the New York City Public Health Department, Hermann Biggs, declared that “tuberculosis
offers a problem of stupendous magnitude in France.” Subsequent studies revealed that only 20 percent or less of the French soldiers sent home with tuberculosis actually had the disease; others were either misdiagnosed or had had tuberculosis prior to entering the military and therefore had not contracted it in the trenches. The reports nevertheless galvanized public health officials to address the tuberculosis problem. The Rockefeller Foundation, for example, in cooperation with the American Red Cross, established a Commission for the Prevention of Tuberculosis in France to help the French and protect any Americans from contracting tuberculosis “over there.”

Bushnell established four “tuberculosis screens” by (1) examining all volunteers and draftees before enlistment, (2) checking recruits again in the training camps, (3) examining soldiers already in the Army for tuberculosis, and (4) screening military personnel at discharge to ensure they returned to civil life in sound condition. To implement these activities, Bushnell developed a protocol under which physicians could quickly examine men for tuberculosis as part of the larger physical examination process. He standardized the procedures for examinations throughout the Army, and crafted a narrow definition of what constituted a tuberculosis diagnosis to enable the Army to enlist as many young men as possible. Despite these efforts, soldiers developed active cases of tuberculosis throughout the war. Bushnell’s office also created eight more tuberculosis hospitals in the United States and designated three hospitals with the American Expeditionary Forces (AEF) in France to care for soldiers who developed active tuberculosis in the camps and trenches. Short of resources and knowledge, however, the Army Medical Department at times struggled just to provide beds for tuberculosis patients, let alone deliver the individual care Bushnell and his staff had provided at Fort Bayard before the war.

Overburdened medical personnel worked long hours, often in poor conditions. Thousands of tuberculosis patients resented the diagnosis and protested the conditions in which at times they were virtually warehoused. The draft, which brought millions of young men into government control and responsibility, also exposed the Army Medical Department to public scrutiny. Congress launched an investigation in 1919. World War I, which so dramatically changed the world, profoundly altered the Army’s tuberculosis program as well. It also challenged George Bushnell’s expertise. The Army’s tuberculosis expert had founded his policies on assumptions that, although widely held at the time, proved to be inaccurate and costly in lives and treasure. Wartime tuberculosis, therefore, shows the power of disease to overwhelm both knowledge and institutions.

Bushnell and his contemporaries were familiar with the concept of immunity and the power of vaccination, and the Army Medical Department vaccinated soldiers for smallpox and typhoid. Extending this concept of immunity to tuberculosis, medical officers differentiated between primary infection in childhood and secondary infection later in life. Observing that tuberculosis was often fatal for infants and young children, they reasoned that for survivors, an early infection of tuberculosis bacilli immunized a person against the disease later in life.
A “primary infection,” wrote Bushnell, gave a person some immunity, which “while not sufficient in many cases to prevent extension of disease [within the body]…is sufficient to counteract new infections from without.”8 In an article on “The Tuberculous Soldier,” the revered physician William Osler agreed. For years autopsies had uncovered healed tuberculosis lesions in people who had died in accidents or of other diseases. Although it was not known how many men between the ages of eighteen and forty harbored the tubercle bacillus, Osler wrote, “We do know that it is exceptional not to find a few [lesions] in the bodies of men between these ages dead of other diseases.” Thus, he argued, “In a majority of cases the germ enlists with the soldier. A few, very few, catch the disease in infected billets or barracks.”9 Bushnell reasoned if adults developed tuberculosis, “they do it on account of failure of their resistance.”

At one point Bushnell told the chief surgeon of the AEF, “Personally I have no fear of the contagion of tuberculosis between adults and see no reason why patients of this kind should not be treated in the ordinary hospital.” He asserted that the “really cruel persecution of the consumptive…through the fear that he will infect others, is based on what I must characterize as highly exaggerated notions of the danger of such infection.” This, too, was the prevailing view. Boston bacteriologist Edward O. Otis, who served as a medical officer during the war, wrote that “Undue fear of the communicability of pulmonary tuberculosis from one adult to another is unwarranted in the present state of our knowledge.”
Bushnell reasoned that if men infected with tuberculosis could indeed easily spread it to others, there would be much more tuberculosis in the Army than there was. British physician Leslie Murry reasoned that although the crowded and damp conditions of trench warfare would have unfavorable effects on soldiers’ health, living outside with plenty of fresh air, good food, and hygienic practices would improve their resistance to tuberculosis. Public health specialist George Thomas Palmer countered that although reactivation may not be higher in the military than in civil life, the United States had enough men without tuberculosis to bar anyone suspected of it from the military and thereby avoid an “added financial burden to the nation.” The challenge was to keep tuberculosis out of the Army and tuberculars off the disability rolls, but not to exclude so many men as to impair the nation’s ability to amass an army.

Bushnell’s views of tuberculosis immunity, contagion, interaction with military life, and the risk of overdiagnosis shaped the Army Medical Department programs for screening recruits. He knew he could not guarantee that all tuberculosis could be eliminated from the Army, but asserted that, “a sufficiently rigid selection of promising material in itself practically excludes tuberculosis.” In addition to enlisting the strongest men, Bushnell believed that a massive screening program would pay for itself by eliminating those who would later cost the government in medical services and disability benefits.

But the nation at war did not have the time or resources for the meticulous one-hour examination practiced at Fort Bayard, so Bushnell developed a protocol for civilian and military physicians to examine volunteers, draftees, trainees, and soldiers for tuberculosis in a matter of minutes. Circular No. 20 detailed how physicians should examine recruits, and became the single most important Army tuberculosis document during the war. The circular explained that the apices, or the tops of lungs, were the most common location for tuberculosis lesions, and that “the only trustworthy sign of activity in apical tuberculosis is the presence of persistent moist rales.” Circular No. 20 directed that “the presence of tubercle bacilli in the sputum is a cause for rejection,” and that “no examination for tuberculosis is complete without auscultation following a cough.” It recommended that a sputum sample “be coughed up in [the examiner’s] presence,” to ensure that it was actually from the examinee.

The last one-third of the document detailed X-ray examinations, summarizing eight different kinds of conditions that might appear and specifying which were grounds for rejection and which were not. By 1915, a Fort Bayard medical officer stated that X-ray technology “has become one of the most valued procedures in the diagnosis of pulmonary tuberculosis.” Medical officers F. E. Diemer and R. D. MacRae at Camp Lewis, Washington, argued in the pages of JAMA that X-rays should be the primary diagnostic tool, not an “adjunct.” Ultimately, however, World War I did encourage X-ray technology by revealing its power to thousands of physicians, stimulating the search for technical advances, and demonstrating the importance of specialization in reading X-rays. By the end of the war, the Army Medical Department had shipped hundreds of X-ray machines to France for use in Army hospitals and at the bedside, and had developed various modes of X-ray equipment, including X-ray ambulances.

Calculating that it would require 600 examiners for the screening process, the Medical Department turned to training general practitioners from civil life who knew little about tuberculosis. Bushnell’s office established a six-week tuberculosis course to prepare physicians. The first course at the Army Medical School in Washington, DC, was so popular that instructors offered it at several other training camps in the country. General Hospital No. 16, operating in conjunction with Yale Medical School, also offered a course on hospital administration to train medical officers to run tuberculosis hospitals.

Public health officials and the National Tuberculosis Association asked to be informed of any tuberculous individuals being sent to their communities, including the name and address of the “party assuming responsibility for such continued treatment and care.” The journal American Medicine published an article by British tuberculosis specialist Halliday Sutherland, who expressed concern that if men declined treatment and returned home they could spread tuberculosis to their families. He suggested that the U.S. Army retain men diagnosed with tuberculosis so that the government could provide treatment and discipline them if they resisted. Members of Congress opposed simply discharging men with tuberculosis. Representative Carl Hayden of Arizona argued that such men had given up their civilian lives upon induction into the Army, only to discover “that they were afflicted with a dread disease which prevents them from earning a livelihood.” He suggested that “some provision should be made for the care of such men until they are able to provide for themselves.”

While Bushnell’s policies succeeded in suppressing tuberculosis rates in the Army, the narrow definition of a tuberculosis diagnosis explicitly allowed men with healed lesions in their lungs to serve, and the rapid screening system caused some examiners to miss cases of active disease. Bushnell recognized that “a standard, though imperfect, is believed to be an indispensable adjunct in Army tuberculosis work not only to support the examiner but also to secure the necessary uniformity of practice in the matter of discharge for tuberculosis.” Nationwide, local draft boards and training camps rejected more than 88,000 men for tuberculosis, about 2.3 percent of the 3.8 million men examined. Postwar assessments calculated that of the more than two million soldiers who went to France to serve in the AEF, only 8,717 were evacuated with a diagnosis of tuberculosis, an incidence of only 0.4 percent.

In early 1918 a strep infection in the training camps in the United States caused medical officers to send hundreds of trainees to Army hospitals misdiagnosed with tuberculosis, crowding hospitals and generating paperwork and confusion. For a time, therefore, the Office of The Surgeon General ordered that no one should be discharged for tuberculosis from the training camps unless he had bacilli in his sputum—meaning only the very severe cases. More than 50 percent of the patients being sent back to the United States from France with a diagnosis of tuberculosis did not actually have the disease. Bushnell viewed such overdiagnoses as “evil” because they took men out of the AEF and overburdened tuberculosis hospitals and naval transports, which had to segregate suspected tuberculosis cases in isolation rooms or on open decks.

Faced with what he called “leaking” of soldiers from the AEF due to erroneous tuberculosis diagnoses, Bushnell turned to a specialist for assistance, Gerald B. Webb (Figure 4-3), from Colorado Springs.61 An Englishman by birth, Webb had married an American, and when she developed tuberculosis the couple traveled to Colorado Springs, Colorado, for treatment. His wife struggled with the disease for ten years until her death in 1903, and afterward Webb stayed on in Colorado Springs, remarrying and building a medical practice specializing in tuberculosis. In addition to his medical practice, Webb pioneered research into the body’s immune function, searched for a tuberculosis vaccine, and was a founder of the American Association of Immunologists (1913). Still somewhat bored in Colorado Springs, Webb volunteered for the Medical Corps soon after the United States declared war and helped organize and run tuberculosis screening boards at Camp Russell, Wyoming, and Camp Bowie, Texas. Bushnell
appointed him senior tuberculosis consultant for the AEF. After meeting with Bushnell in Washington and attending the Army War Course for senior officers at Columbia University, Webb sailed to France in March 1918.

Gerald B. Webb, World War I, Gerald B. Webb Papers.

Photograph courtesy of Special Collections, Tutt Library, Colorado College, Colorado Springs, Colorado.

Immunity in Tuberculosis: Further Experiments (1914).

Webb instituted a screening process similar to that in the United States, distributing Circular No. 20 and preparing an illustrated version for medical officers in the field. He established a policy directing that only patients with sputum positive for tuberculosis should be sent back to the United States. Others would be tagged “tuberculosis observation” and sent to one of three hospitals designated as tuberculosis observation centers. There, specialists—Bushnell’s “good tuberculosis men”—would distinguish tuberculosis signs from other lung problems such as bronchitis and pneumonia, clearing each man found free of disease and sending only patients who were indeed positive for tuberculosis back to the homeland.

Webb traveled to field and base hospitals throughout France. He would typically spend three days at a hospital, examining patients, leading conferences, giving lectures, and, according to his biographer, Helen Clapesattle, “preaching his gospel of fresh air and absolute rest.” He recruited a radiologist to teach the proper reading of X-ray plates, and advocated the early detection of tuberculosis, explaining, “Just as the wounded do better if they are got to the surgeons quickly, so the tuberculosis-wounded are more likely to recover if they are spotted and sent to the doctors early.”

In the 1930s, as Webb had concluded in 1919, scientists came to recognize that early tuberculosis infections did not provide protection and that adults could be reinfected with tuberculosis and develop active disease. In the meantime, with his AEF work done, in January 1919 Webb returned to his family and medical practice in Colorado Springs. The National Tuberculosis Association recognized Webb’s war work by electing him president in 1920, and Webb set the Association on a course of tuberculosis research on the immunity question and the standardization of X-ray diagnostics. He did not return to military service, but was a mentor for young physicians Esmond Long and James Waring, who would be leaders in the Army Medical Department’s tuberculosis program during the next war.

In May 1941, as the United States stood on the brink of another world war, Benjamin Goldberg, president of the American College of Chest Physicians, recited some stunning figures at the association’s annual meeting in Cleveland, Ohio. He calculated that from 1919 to 1940 the Veterans Administration had admitted 293,761 tuberculosis patients to its hospitals. These patients had received government care and benefits for a total of 1,085,245 patient-years, at a cost of $1,185,914,489.56. Goldberg’s remarks reveal that although tuberculosis rates in the United States were declining 3 to 4 percent annually during the interwar years, the government’s burden to care for tuberculosis patients remained heavy. The Army was only three-quarters the size it had been before World War I (131,000 versus 175,000 strength) and experienced no major epidemics, so that suicide and automobile accidents became the leading causes of death in the peacetime Army. Although hospital admissions of active duty personnel for tuberculosis declined during the decade, tuberculosis admissions at Fitzsimons Hospital in Denver remained constant due to a steady stream of patients who were veterans of the war. Tuberculosis, in fact, became a leading cause of disability discharges from the Army and, along with nervous and mental disorders, generated the greatest amount of veterans’ benefits between the wars.

The story of tuberculosis in the Army after World War I, then, is one of increasing demand and decreasing resources, a dynamic that left Fitzsimons financially strapped even before the country entered the Great Depression. An examination of Fitzsimons’ postwar environment—the modern hospital and technology, the ever-changing landscape of veterans’ benefits, and new, invasive treatments for tuberculosis—illuminates these stresses.

President Franklin Delano Roosevelt proclaimed a “limited national emergency” on 8 September 1939, a week after Germany invaded Poland. But due to underfunding during the interwar period, one observer wrote that, “to prepare for war the Medical Department had to start almost from scratch.”1 Given the lean years of the 1920s and 1930s and the Army Medical Department’s policy of discharging officers with tuberculosis from duty, Surgeon General James C. Magee had to turn to the civilian sector for a tuberculosis expert. He recruited Esmond R. Long, M.D., Ph.D., director of the Henry Phipps Institute for the Study, Prevention and Treatment of Tuberculosis in Philadelphia. He could not have made a better choice. Long was also professor of pathology at the University of Pennsylvania, director of medical research for the National Tuberculosis Association, and the youngest person to be awarded the Trudeau Medal at age forty-two years (in 1932) for his tuberculosis research.2 He would now become the Army’s point man on the disease and stand at the front lines of the Medical Department’s struggle with tuberculosis beginning before Pearl Harbor to well after V-J (Victory-Japan) Day.

His mission to reduce the effect of tuberculosis on the Army differed from that of Colonel (Col.) George Bushnell in the previous war because disease was less of a threat. In fact, World War II would be the first war in which more American personnel died of battle wounds than of disease. Of 405,399 recorded fatalities, battle deaths outnumbered those from disease and nonbattle injuries more than two to one: 291,557 to 113,842.3 Malaria, sexually transmitted diseases, and respiratory infections did sicken millions of soldiers, sailors, Marines, and airmen, but most survived. Thanks in part to sulfa drugs and, beginning in 1943, penicillin to treat bacterial infections, the Army Medical Department had only 14,904 deaths of 14,998,369 disease admissions worldwide, a 0.1 percent death rate.4 Tuberculosis declined, too, representing only 1 percent of Army hospital admissions for diseases—1.2 per 1,000 cases per year—a rate much lower than the 12 per 1,000 cases per year during World War I. The Medical Department concluded that “tuberculosis was not a major cause of non-effectiveness during the war.”

But Sir Arthur S. McNalty, chief medical officer of the British Ministry of Health (1935–40), called tuberculosis “one of the camp followers of war.” War abetted tuberculosis, he explained, because of the “lowering of bodily resistance and increased physical or mental strain or both.”6 It also found fertile ground in crowded barracks and camps, and ran rampant in the World War II prison camps and Nazi concentration camps. And just one active case of tuberculosis per thousand in the Army meant thousands of tuberculosis sufferers among the 11 million Americans in uniform, each of whom consumed Medical Department resources: the average hospital stay per case during the war was 113 days.7

But if tuberculosis was a camp follower, Esmond Long (Figure 8-1) was a tuberculosis follower.8 He tracked it down, studied it, and tried to prevent its spread at every stage of American involvement in the war. With war looming in 1940, the National Research Council asked Long to chair the Division of Medical Sciences, Subcommittee for Tuberculosis, to advise the government on preventing and controlling tuberculosis in both civilian and military populations during war mobilization. Once the United States entered the war, Long received a commission as a colonel in the Medical Corps and moved his family from Philadelphia to Washington, DC. Working out of the Office of The Surgeon General, Long set up a screening process with the Selective Service to keep tuberculosis out of the Army and then traveled to more than ninety induction camps to ensure adherence to the procedures. He also oversaw the expansion of tuberculosis treatment facilities in the United States, inspected Fitzsimons and other Army tuberculosis hospitals, advised medical officers on treating patients, kept abreast of research developments in the labs, monitored outbreaks of tuberculosis in the theaters of war, and wrote articles for medical and lay periodicals to publicize the Army’s antituberculosis program.

In 1945 Long traveled to the European theater to inspect hospitals caring for tubercular refugees and liberated prisoners of war (POWs). There he saw the horrors of the concentration camps at Buchenwald and Dachau where Army medical personnel cared for thousands of former prisoners sick and dying of typhus, starvation, and tuberculosis. After the war Long organized the tuberculosis control program for the Allied occupation of Germany, and returned annually in the 1950s to assess its progress. He split his time between the Army Medical Department and the Veterans Administration (VA) to supervise the transition of the federal tuberculosis treatment program from the War Department to the VA. He also helped organize and evaluate the antibiotic trials, which ultimately led to an effective cure for tuberculosis. After returning to civilian life Long continued to study tuberculosis in the Army, and he wrote the key tuberculosis chapters for the Army Medical Department’s official history of the war.

With Long as a guide, this chapter shows how war once again served as handmaiden to disease around the globe. This time the Army Medical Department assumed not only national but international responsibilities for the control of tuberculosis in military and civilian populations, among friend and foe. Long and the Army Medical Department did succeed in demoting tuberculosis from the leading cause of disability discharge for American World War I personnel (13.5 percent of discharges), to thirteenth position during the years 1942–45 (1.9 percent of all discharges), behind conditions such as psychoneuroses, ulcers, respiratory diseases, arthritis, and other diseases.9 But this achievement required continued vigilance, an Army-wide surveillance program, and dedicated personnel and resources. The first step was to keep tuberculosis out of the Army.

After war broke out in Europe, Congress passed the Selective Training and Service Act of 1940, which established the first peacetime military draft in U.S. history, increasing Army strength eightfold from 210,000 in September 1939 to almost 1.7 million (1,686,403) by December 1941. This resulted in a 75 percent rise in the number of patients in military hospitals, straining the Medical Department, which had only seven general hospitals and 119 station hospitals in 1939.

Figure 8-1. Esmond R. Long, who directed the Army tuberculosis program during World War II. Photograph courtesy of the National Library of Medicine, Image #B017302.

“Good Tuberculosis Men”

With Congress soon appropriating freely and pledging “all of the resources of the country” to meet the crisis, the War Department constantly readjusted to the escalating emergency.

The National Research Council Committee on Medicine, Subcommittee on Tuberculosis, chaired by Long, met for the first time on 24 July 1940 and prioritized its responsibilities: first, develop recommendations on how to screen draft registrants for tuberculosis; second, screen civilians in federal service and wartime industries; third, figure out how to care for people rejected by the draft for the disease; and finally, help civilian and military agencies prepare for tuberculosis in war refugee populations. In its first nine-hour meeting, the subcommittee decided on centralized tuberculosis screening centers at 200 recruiting stations and generated a list of tuberculosis specialists nationwide to evaluate recruits and interpret X-rays at those centers. Subcommittee members stressed the importance of maintaining good records for processing any subsequent benefits claims and, most importantly, called for X-ray screening of all inductees—not just those who looked like they might have tuberculosis.

The War Department leadership initially rejected such comprehensive screening of inductees as expensive and time-consuming. The fact that tuberculosis death rates in the country had fallen two-thirds from 140 per 100,000 people in 1917 to 45 per 100,000 people in 1941, and in the Army from 4.6 per 1,000 in 1922 to 1.4 per 1,000 in 1940, may have led to complacency. But Long, his colleagues, and the national tuberculosis community, mindful of the cost to the nation in sickness, death, and disability benefits in the previous war, persisted. The American College of Chest Physicians asked in July 1940, “Shall We Spread or Eliminate Tuberculosis in the Army?” and its president, Benjamin Goldberg, reported that the VA had spent almost $1.2 billion on tuberculosis patients through 1940. One medical officer calculated that 31 percent of all veterans who died as a result of World War I service and whose dependents received benefits had died of tuberculosis. Even the lay press chimed in with a TIME magazine article, “TB Warning,” that stressed the importance of chest X-rays.16 Advocates pointed out that X-ray technology was more available and less expensive than in the previous war, and radiologists were more plentiful and skillful. They were also confident that new technology, such as the development of a lens that allowed the direct and rapid photography of a fluoroscopic image and new 4 x 5 inch films, which made storage and transport easier than that of the 11 x 14 inch films, rendered screening more practical than in 1917–18.

The Army Medical Department agreed with the National Research Council subcommittee. Since 1934 it had required X-rays for all Army personnel assigned overseas, but it had not yet convinced the War Department on universal screening. In June 1941, Brigadier General (Brig. Gen.) Charles Hillman, Chief, Office of The Surgeon General Professional Service Division, told the National Tuberculosis chairman, C. M. Hendricks, that “the desirability of routine X-rays had long been recognized by the Surgeon General’s Office,” but “considerations other than medical entered the picture and the character of induction examinations had to be adapted to the limitations of time, place, and available equipment.” When Fitzsimons informed Hillman later that new recruits were arriving at the hospital with tuberculosis, he responded almost plaintively: “I am working with the Adjutant General to devise some method by which every volunteer for enlistment in the Regular Army will have a chest X-ray and serological test before acceptance.” He asked for all available evidence of sick recruits, explaining that “data on Regular Army men of short service now in Fitzsimons with tuberculosis will help me get the thing across.” As the data and advice accumulated, in January 1942 the Adjutant General required that all voluntary applicants and reenlisting men be given chest X-rays. Finally, on 15 March 1942, mobilization regulations made chest X-rays mandatory in all induction physicals.

With universal screening in place, Long, as chief of the tuberculosis branch in the Office of The Surgeon General, oversaw the screening process and faced a task similar to that of George Bushnell in 1917–18, finding that fine line between excluding as much tuberculosis as possible from the Army without rejecting too few or too many men. Conscious of his predecessor’s miscalculations, Long was careful not to criticize Bushnell’s tuberculosis program, at one point noting that World War I medical officers were “not to be reproached for not having knowledge that came into existence only later, any more than the chief of the Army air service in 1917 is to be reproached because more efficient airplanes are available now than then.”

The wartime emergency produced a public health campaign regarding tuberculosis and other disease threats. A War Department pamphlet, What Every Citizen Should Know about Wartime Medicine, presented the issue as one of maintaining troop health and limiting public costs. “The strenuous activity of soldiering is likely to cause extension of an incipient (early) tuberculous invasion of the lungs, or to precipitate the breakdown and reactivation of arrested cases,” it explained. Such illness could result in disability “and the necessity of providing long care of these patients in military hospitals where they must remain isolated from nontuberculous patients.” The Public Health Service also created a tuberculosis office to handle the expected increase in tuberculosis, and, as the National Research Council Subcommittee recommended, gave war industry workers chest examinations.

As military and civilian screening boards found thousands of people with active tuberculosis and sent many of them to tuberculosis sanatoriums and hospitals, they generated what a public health nurse referred to as “potentially the greatest case finding program that workers in tuberculosis control have ever known.” At the same time, however, war mobilization drew civilian medical personnel into the military, reducing staffing in home front institutions. Army medical personnel ultimately numbered more than 688,000, including 48,000 physicians in the Medical Corps, 14,000 dentists in the Dental Corps, and 56,000 nurses in the Army Nurse Corps—a large portion of the nation’s medical professionals.27 To maintain his nursing staff, VA Director Frank Hynes even asked the Army Nurse Corps in May 1942 not to hire VA nurses away from his hospitals.

Army tuberculosis rates during World War II, while lower than during World War I, showed a similar “U” curve (Figure 8-2): rates were high at the beginning of the war, as the Selective Service built up the military forces and cases that had eluded screening became active during training or combat; they fell as radiologists became more proficient at identifying tuberculosis infections; and they rose sharply again at the end of the war, as discharge examinations found people who had developed active tuberculosis during their service. Postwar studies also revealed a seemingly paradoxical phenomenon: during the war, military personnel serving overseas had lower tuberculosis rates than those serving in the United States, yet higher rates when they returned home.

Figure 8-2. Chart comparing the incidence curves of tuberculosis in the Army during World War I and World War II. From Esmond R. Long, “Tuberculosis,” in John Boyd Coates, Robert S. Anderson, and W. Paul Havens, eds., Internal Medicine in World War II, Medical Department, U.S. Army in World War II, vol. 2, Infectious Diseases (Washington, DC: Office of The Surgeon General, Department of the Army, 1961), chart 17, p. 335. Available at http://history.amedd.army.mil/booksdocs/wwii/infectiousdisvolii/chapter11chart17.pdf.

The Medical Department of the United States Army in the World War. Communicable and Other Diseases. Washington: U.S. Government Printing Office, 1928, vol. IX, pp. 171–202.
Letter, The Adjutant General, to Commanding Generals of all Corps Areas and Departments, 25 Oct. 1940, subject: Chest X-rays on Induction Examinations.
M.R. No. 1-9, Standards of Physical Examination During Mobilization, 31 Aug. 1940 and 15 Mar. 1942.
Long, E. R.: Exclusion of Tuberculosis. Physical Standards for Induction and Appointment. [Official record.]
Long, E. R., and Stearns, W. H.: Physical Examination at Induction; Standards With Respect to Tuberculosis Induction and Their Application as Illustrated by a Review of 53,400 X-ray Films of Men in the Army of the United States. Radiology 41: 144–150, August 1943.
Long, Esmond R., and Jablon, Seymour: Tuberculosis in the Army of the United States in World War II. An Epidemiological Study with an Evaluation of X-ray Screening. Washington: U.S. Government Printing Office, 1955.

It is estimated that, before roentgen examination became mandatory (MR No. 1-9, 15 March 1942), one million men had been accepted without this form of examination. Where roentgen examination was practiced, it resulted in a rejection rate of about 1 percent for tuberculosis. Applying this figure, it can be estimated that some 10,000 men were accepted who would have been rejected if they had been subjected to chest roentgen-ray study. Various studies have shown that approximately one-half of these would have been cases of active tuberculosis.

http://history.amedd.army.mil/booksdocs/wwii/PM4/CH14.Tuberculosis.htm

Troops who developed tuberculosis were often not diagnosed until their separation examinations, conducted when they were once again in the United States.

In the end, the screening process rejected 171,300 men for tuberculosis as the primary cause (thousands more had tuberculosis in addition to the disqualifying condition), and Long calculated that this saved the government millions of dollars in hospitalization costs. After the war, however, Long identified two factors that allowed tuberculous men into the Army: the failure to screen all inductees until March 1942, and the 4 x 5 inch stereoscopic (fluorographic) films, which were used in the interest of speed but which Long believed caused examiners to miss about 10 percent of minimal tuberculosis lesions in recruits. To better understand the latter problem he had two radiologists read the same X-rays and found substantial disagreement between their findings. Long therefore concluded that “if the induction films had each been read by two different radiologists, undoubtedly many more of the men who had tuberculosis at entry could have been excluded from service.” The Army ultimately discharged 15,387 enlisted men for tuberculosis during the war, which earned it thirteenth position as a cause of disability discharge.

American military forces fought in nine theaters of war—five in the Pacific and Asia, the other four in North Africa, the Mediterranean, Europe, and the Middle East. The Allies gave priority to defeating Germany and Italy in Europe beginning with operations in North Africa and the Mediterranean. After fighting in Tunisia in 1942–43, the Allies invaded Sicily on 10 July 1943, and moved up the Italian peninsula. By April 1944—in preparation for the D-Day invasion on 6 June 1944—the United States had more than 3 million soldiers in Europe, supported by 258,000 medical personnel managing a total of 318 hospitals with 252,050 beds. The war against Japan got off to a slower start as U.S. military forces developed the means to execute an island war across vast expanses of ocean. After fighting began in the Southwest Pacific, military forces grew from 62,500 troops in March 1942 to 670,000 in the summer of 1944 with 60,140 medical personnel. Even though military personnel developed tuberculosis in all of the nine theaters, the numbers were not high and tuberculosis was not a major military problem. In the Southwest Pacific theater, for example, only sixty-four of more than 40,000 hospital admissions were for the disease.

Tuberculosis was of the greatest consequence in the North Africa and Mediterranean theaters, in part due to poor screening early in the war, but also because, according to historian Charles Wiltse, it was the theater “in which the lessons of ground combat were learned by the Medical Department as much as by the line troops.” In general, medical personnel learned the importance of treating battle casualties as promptly as possible and keeping hospitals and clearing stations mobile and far forward to shorten evacuation and turnaround times. With regard to tuberculosis, the Medical Department had to relearn the World War I lesson of the importance of having skilled practitioners—or “good tuberculosis men”—in theater. They also ascertained which treatments were appropriate close to the battle lines and which were not, and when and how best to evacuate tubercular patients to the United States.

When soldiers with tuberculosis began to appear at Army medical stations in North Africa in late 1942, Major General (Maj. Gen.) Paul R. Hawley, chief of medical services for the European theater of operations, called for a tuberculosis specialist. On Long’s recommendation, Hawley appointed Col. Theodore Badger (Figure 8-3) as a senior consultant in tuberculosis on 2 January 1943. A professor of medicine at Harvard Medical School, Badger had served in the Navy during World War I and then attended Yale and Harvard, where he earned his medical degree. As chief of medical service of the 5th General Hospital (GH), organized out of Harvard, Badger would play a role similar to that played by Gerald Webb during World War I—medical specialist, teacher, and troubleshooter.

Assessing the tuberculosis situation in the Mediterranean theater, Badger identified five hazards: (1) the development of active disease in American troops who had not been X-rayed upon induction; (2) association with British troops and civilians who had not been screened for tuberculosis; (3) drinking of nonpasteurized and possibly infected milk that could transmit tuberculosis; (4) battlefield conditions that could activate soldiers’ latent infections; and (5) the undetermined effects of other respiratory infections.41 Badger soon got the Army to use pasteurized milk and to establish X-ray centers with the proper equipment and trained staff, but he was not able to examine the thousands of American soldiers in the war zone. To gauge the extent of the tuberculosis problem he therefore arranged for a mobile X-ray unit to conduct spot surveys of troops in the field. Three examinations of some 3,000 troops each found only about 1 percent with signs of tuberculosis. To avoid losing manpower, Badger reported in mid-1943 that “up to the present time no individual has been removed from duty because of X-ray findings, and follow-up study has, so far, not indicated the necessity for it.” He planned to recheck those with suspicious films every few months to see if the signs had advanced. He recommended that patients with pleural effusion, the accumulation of fluid between the layers of the membranes that line the lungs and chest cavity that often indicates tuberculosis, be evacuated back to the United States. He also ended the practice of transporting some tuberculosis patients sitting up.

As the first true air war, World War II saw the introduction of air evacuation when Army aeromedical squadrons deployed in early 1943. After successful trials in the Pacific and North Africa, air evacuation increased so that during the Battle of the Bulge (1944–45), some patients arrived in U.S. hospitals within three days of being wounded. Some medical officers were concerned about the effects of transporting tuberculosis patients by air, where they would be exposed to high speeds, jolting, and reduced air pressure. Tuberculosis specialists in New Mexico and Colorado therefore studied 143 white, male military patients, twenty-two to twenty-eight years old, with active tuberculosis flown to Army hospitals in nonpressurized air ambulances for any signs of trouble. Fearing the worst, they instead found that “severe discomfort, pulmonary hemorrhage, and spontaneous pneumothorax did not occur in the series either during or following the flight,” and concluded that air transport up to 10,000 feet was safe and preferable to time-consuming travel by water. By the end of the war the consensus was that rapid air evacuation to the United States also reduced the need to give a tuberculosis patient a pneumothorax in the field.

From the roof of Fitzsimons’ new building in April 1943, Rocky Mountain News reporter John Stephenson could see the Rocky Mountain Arsenal, the Denver Ordnance Plant, and Lowry Field, “places where the Army studies how to kill people.” But, he wrote, “The Army is merciful. It lets the right-hand of justice know what the left hand of mercy is doing at Fitzsimons General Hospital.” The largest Army hospital in the world, Fitzsimons had 322 buildings on 600 acres, paved streets with traffic lights, a post office, barbershop, pharmacy school, dental school, print shop, bakery, fire department, and chapel. It was, wrote Stephenson, “a city of 10,000.”61 No longer a liability, Fitzsimons was the pride of the Army Medical Department. One Army inspector reported that “it is apparent that no expense has been spared in this extraordinary building or in the general equipment and maintenance of the whole hospital plant.”62 As Congressman Lawrence Lewis had hoped, Fitzsimons’ mission now extended beyond caring for tuberculosis patients to meeting the general medical and surgical needs of the wider military community in the Denver region.

During the war the hospital maintained about 3,500 beds, reaching its highest daily patient population after the war—3,719 on 3 February 1946. Annual occupancy, calculated in patient days, increased from 603,683 in 1942 to a high of 1,097,760 in 1945, about 85 percent of capacity.

With the reduction of tuberculosis in the Army over the years, the proportion of tuberculosis patients among all those at Fitzsimons had declined from 80 to 90 percent in the 1920s to 40 to 50 percent in the late 1930s; as the Army grew, it now rose again. During the war Fitzsimons admitted more than 8,100 patients with tuberculosis. In fact, in 1943, only eighteen patients had battle injuries; the rest were in the hospital for illness and noncombat injuries. Unlike during the previous war, however, the Medical Department now had a network of more than fifty veterans’ hospitals to which it could transfer patients too disabled by tuberculosis or other disease or injury to return to duty. Instead of allowing patients to stay in the service and receive the benefit of hospitalization in the hope that they would recover and return to duty, the Medical Department discharged patients to VA hospitals as soon as they were determined to be unfit for military service, thereby reserving capacity for active-duty personnel. Maj. D. P. Greenlee returned from a training course in penicillin therapy at Bushnell General Hospital in Utah to supervise the administration of the new drug for a variety of infections. He soon reported a cure rate of 93 percent. There were fewer victories in tuberculosis treatment.

During the war about one-quarter of all tuberculosis patients were treated with pneumothorax. Fitzsimons surgeon Col. John B. Grow and other surgeons also tried lung resection to treat tuberculosis, with few patient deaths. In 1946, however, when Grow’s staff contacted thirty patients who had had such surgery, they found that half were doing well, but three had died, seven were seriously ill, and the rest were still under treatment. “It was felt that pulmonary resection in the presence of positive sputum was extremely hazardous and the indications were consequently narrowed down.”

Outside the operating rooms, the “City of 10,000” had a rich social life with people arriving at the post from all corners of the country. With Congressman Lewis’s acquisition of the School for Medical Technicians, Fitzsimons assumed the role of medical trainer, offering six- to twelve-week courses in technical training for dental, laboratory, X-ray, surgical, clinical, and pharmacy assistants. By 1946 the School had graduated more than 28,000 such technicians to serve around the world. The Women’s Army Corps arrived at Fitzsimons in February 1944 when 165 women attended the medical technicians school as part of the first coeducational class.74 Members of the Women’s Army Corps, rehabilitation aides, Education Department staff, dietitians, as well as nurses increased the female presence at Fitzsimons, as did activities of welfare organizations such as the Gold Star Mothers, the Red Cross, and the Junior League. Fitzsimons’ patients and staff also enjoyed visits from celebrities, including Jack Benny, Miss America, Gary Cooper, Dorothy Lamour, and other entertainers such as the big band leader Fred Waring and his Pennsylvanians, the Denver Symphony Orchestra, and an African American Methodist Church children’s choir from Denver. Like communities across the country, the hospital participated in war bond campaigns and had a huge war garden that produced thousands of ears of sweet corn and bushels of other vegetables.

Despite national mobilization and generous congressional funding, the Army could not escape the strain on its hospitals. By July 1944, Fitzsimons had reached capacity so the Medical Department designated two more hospitals as specialty centers for tuberculosis. Earl Bruns’ widow Caroline, who lived in Denver at the time, was no doubt pleased when the department named Bruns General Hospital in Santa Fe, New Mexico, in honor of her husband. Bruns along with Moore General Hospital in Swannanoa, North Carolina, cared for enlisted patients with minimal or suspected tuberculosis.

As Allied troops liberated France in 1944 and crossed into Germany they encountered thousands of refugees or “displaced persons”—escaped prisoners from Nazi concentration camps, exhausted and terrified Jews, slave laborers, political prisoners, Allied POWs, and other victims. The Nazi camps that held these people served as incubators for diseases such as tuberculosis and typhus, and the frightened, sick, and starved refugees inundated Army hospitals in late 1944 and early 1945. Theodore Badger reported one of the first waves that arrived on 18 December 1944 when 304 men, most of them Russians, came to the 50th GH in Commercy, France. They had been in the Nazi labor camps for the mines and heavy industries, where thousands died and survivors were malnourished and sick. All of the 304 had tuberculosis, 90 percent with moderate or advanced disease. Four were dead on arrival, eight more died in the first week, and one-third of the patients would die by May.96 Alarmed, Gen. Hawley, Chief Surgeon of the European Theater of Operations, ordered that all displaced civilians and recovered military personnel be examined for signs of tuberculosis “to establish the gravity of the situation.” The situation was dire. At one time the 46th GH had more than 1,000 tuberculosis patients, all recovered Allied POWs, causing Esmond Long to remark that the hospital “had the largest number of tuberculosis patients of any Army hospital in the world.”

The 46th GH from Portland, Oregon, which had cared for tuberculosis patients in the Mediterranean theater, also stood on the front lines of the tuberculosis problem in Europe. Serving at Besançon, France, the hospital would receive the Meritorious Service Unit Plaque and Col. J. G. Strohm, the commanding officer, the Bronze Star Medal for service during the liberation of France. During the spring of 1945, the 46th GH admitted 2,472 Russian, forty-one Polish, and 128 Yugoslav POWs and former slave laborers freed by American forces. The influx began on 12 March, and within four days the 46th GH had admitted 1,200 such patients.

“The hospital staff was agast [sic] at the terrible physical condition of these people,” reported the hospital commander.99 When Badger visited the 46th GH in March 1945 he said the patients “constitute one of the most seriously affected groups with tuberculosis and malnutrition that I have ever seen,” explaining that most of them suffered “acute fulminating, rapidly fatal disease, mixed with chronic, slowly progressive, fibrotic tuberculosis.” Medical personnel (Figure 8-4) cared for these patients as best they could, comforting many of them as they died. They began the rest treatment with some men but, as Badger reported, convincing Allied POWs to submit to absolute bed rest after months of confinement was “practically impossible.” Badger was able to report that after a month “those men who did not die of acute tuberculosis showed marked improvement.”

Figure 8-4. 46th General Hospital nurses who cared for former prisoners of war. Photograph courtesy of Oregon Health Sciences University, Historical Collections and Archives, Portland, Oregon.


In late 1944 Hawley requested 100,000 additional hospital beds for the displaced persons and POWs he expected to encounter after the German surrender, but Gen. George Marshall and Secretary of War Henry L. Stimson denied the request, believing they could not spare resources of that magnitude. The European Theater, they decided, must use German medical personnel and hospitals to care for the prisoners. Only after the war did American hospital units transfer their equipment and supplies to German civilians and Allies for their use.

The liberation of Europe also freed American POWs, who, not surprisingly, had higher rates of tuberculosis than other American military personnel. Captured British medical officer Capt. A. L. Cochrane cared for some of them in the prison where he was confined and noted sardonically that imprisonment was “an excellent place to study tuberculosis; [and] to learn the vast importance of food in human health and happiness.” German prison guards gave POWs only 1,000 to 1,500 calories per day, so Red Cross food parcels, which provided an additional 1,500 daily calories per person, were critical to preventing malnutrition and physical breakdown. Cochrane observed that the American and British POWs received the most parcels and had the lowest tuberculosis rates in the camp, while the Russians received nothing at all and had the highest rates. During the eighteen months that French POWs received the Red Cross parcels, he noted, just two men of 1,200 developed tuberculosis, but when parcels for the French ceased to arrive in 1945, their tuberculosis rate rose to equal that of the Russians. The situation, he concluded, showed the “vast importance of nutrition in the incidence of tuberculosis.” Not all Americans got their parcels, though. William H. Balzer, with an American artillery unit, was captured in February 1943 and remembered how German guards stole the Americans’ packages. Balzer survived imprisonment but never recovered from the ordeal. Severely disabled (70 percent), he died in 1960 on his forty-sixth birthday.

Exact tuberculosis rates among American POWs are not known because the rush of events surrounding the liberation of prisoners from German and Japanese control prevented a systematic X-ray survey. Rates did appear to be higher, though, for prisoners of the Japanese than for prisoners of the Germans. Long reported that about 0.6 percent of recovered troops from European POW camps had tuberculosis, whereas data from the Pacific theater suggested that 1 percent of recovered prisoners had tuberculosis. Moreover, an analysis of the chest X-rays done at West Coast debarkation hospitals revealed that 101 (or 2.7 percent) of 3,742 former POWs of the Japanese showed evidence of active tuberculosis. John R. Bumgarner was a tuberculosis ward officer at Sternberg General Hospital in Manila, the Philippines, before the war. A POW for forty-two months after the Japanese invasion, he described his experience in Parade of the Dead. Bumgarner did what he could to care for many of the 13,000 prisoners in the camp, but knew that “my patients were poorly diagnosed and poorly treated.” The narrow cots were so close together, he wrote, “the crowding and the breathing of air loaded with this bacilliary miasma from coughing ensured that those mistakenly segregated would be infected.”

Bumgarner was able to stay relatively healthy throughout his imprisonment. His luck ran out on the voyage home: "on my way home across the Pacific I had the first symptoms of tuberculosis." Severe chest pain and subsequent X-rays at Letterman Hospital in San Francisco revealed active disease. "I had gone through more than four years of hell—now this!" Discharged on disability for tuberculosis in September 1946, he began to work at the Medical College of Virginia but soon had a lung hemorrhage. This time it took eight years of rest, surgery, and new antibiotic treatment for him to recover. By 1956, however, Bumgarner had married his sweetheart, Evelyn, and begun a medical career in cardiology that lasted for thirty years.

Tuberculosis continued to take its toll on POWs for years after the war. The VA followed POWs as a special group because, explained Long, of “the hardships that many of these men endured, and the notorious tendency for tuberculosis to make its appearance years after the acquisition of infection.” A follow-up study published in 1954 reported that for American POWs during the six years after liberation tuberculosis was the second highest cause of death, after accidents.

If the challenges Army medical personnel faced in caring for sick and starving POWs and refugees were unprecedented, the scale of disease and suffering they encountered in the Nazi concentration camps was almost unimaginable. Allied troops had heard about secret and deadly camps but were not prepared for what they found. As the Allies converged on Berlin from the East and the West, the Nazis evacuated thousands of prisoners—most of them Jews seized from across Europe, as well as POWs—to interior camps to hide their crimes and prevent the inmates from falling into Allied hands. These evacuations became death marches as SS (abbreviation of Schutzstaffel, which stood for “defense squadron”) guards beat and murdered people, and failed to feed them for days on end. Survivors were crowded into camps such as Buchenwald and Dachau making them even more chaotic and deadly. Americans, therefore, liberated camps that were riven with disease, especially typhus, tuberculosis, and malnutrition.

The Allies liberated Buchenwald on 11 April 1945. The following day the world learned that Franklin Roosevelt had died. Americans then liberated Dachau on 29 April, the day Italian partisans executed Mussolini in Milan, and the next day Hitler killed himself in his bunker. Dachau (Figure 8-5) had been the first of hundreds of concentration camps in the German Reich to which the Nazis sent political enemies, the disabled, people accused of socially deviant behavior, and, increasingly after the Kristallnacht pogroms of 1938, Jewish men, women, and children. In January 1945 Dachau held 67,000 prisoners, but with troops of the Seventh U.S. Army approaching, the SS began evacuating and killing prisoners. Capt. Marcus J. Smith, a medical officer in his thirties, arrived at Dachau on 30 April 1945, the day after liberation, as part of a small team trained to treat persons displaced by the war. Horror greeted him outside the camp in a train of forty boxcars loaded with more than two thousand corpses. Smith called the frost that had formed on the bodies in the intense cold "Nature's shroud." Inside Dachau he encountered more grotesque piles of naked, skeletal bodies of prisoners and scattered, mutilated bodies of German guards.

Figure 8-5. Dachau survivors gather by the moat to greet American liberators, 29 April 1945. Photograph courtesy of the United States Holocaust Memorial Museum, Washington, DC.
Smith found more than 30,000 prisoners, mostly Jews of forty nationalities, and all men except for about 300 women the SS had kept in a brothel. They were in desperate condition. Typhus and dysentery raged, at least half of the prisoners were starving, and hundreds had advanced tuberculosis. “The well, the sick, the dying, and the dead lie next to each other in these poorly ventilated, unheated, dark, stinking buildings,” Smith told his wife. The men were “malnourished and emaciated, their diseases in all stages of development: early, late, and terminal.” He wondered, “What am I going to write in my notebook?” and then started a list of needed supplies: clothes, shoes, socks, towels, bedding, beds, soap, toilet paper, more latrines, and new quarters. He almost despaired. “What are we going to do with the starving patients? How will we care for them without sterile bandages, gloves, bedpans, urinals, thermometers, and all the basic material? How do we manage without an organization? No interns, no nursing staff, no ambulances, no bathtubs, no laboratories, no charts, and no orderlies, no administrator, and no doctors.… I feel helpless and empty. I cannot think of anything like this in modern medical history.”

American efforts did prevent a deadly typhus epidemic from sweeping postwar Europe and helped contain tuberculosis rates in Germany, but the Nazis had created a human catastrophe so immense that even the most dedicated efforts would at times fall short.

Faced with horror on such a scale, Smith and other Army Medical Department personnel assigned to the concentration camps threw themselves into the work of cleansing, comforting, treating, and nurturing their patients. American commanders called in at least six Army evacuation hospitals (EH) to care for the sick and dying in the liberated camps. EH No. 116 and EH No. 127 began arriving at Dachau on 2 May with some forty medical officers, forty nurses, and 220 enlisted men. Consulting with Smith and his team, the units set up in the former SS guard barracks. They tore out partitions to create larger wards, scrubbed the walls and floors with Cresol solution, sprayed them with dichloro-diphenyl-trichloroethane (DDT), and then set up cots to create two hospitals of 1,200 beds each. Medical staff also discovered physician-prisoners who had cared for the sick and injured as well as they could, and could now advise and assist, and in some cases translate for the medical staff. In two days the hospitals were ready to admit patients by triage, segregating them by disease and prognosis. Laurence Ball, the EH No. 116 commander, noted that more than 900 patients had “two or more diseases, such as malnutrition, typhus, diarrhea, and tuberculosis.” Staff bathed and deloused them, gave them clean pajamas, and put them to bed.

Death by overeating was but one of the dangers that the prisoners faced. During May 1945, American hospitals at Dachau had more than 4,000 typhus patients and lost 2,226 to typhus and other diseases. Typhus, a rickettsial disease transmitted by body lice, had a mortality rate as high as 40 percent. With no medical cure, treatment consisted of supportive care—keeping patients clean and nourished—to mitigate effects of prolonged fever, such as the breakdown of tissue into gangrene. The Americans knew that typhus had taken three million lives in Eastern Europe after World War I, but now they had a means of prevention and better weapons—a typhus vaccine and DDT. On 2 May, the day the evacuation hospitals arrived, the commander of the Seventh Army imposed quarantines for typhus and tuberculosis, and summoned the U.S. Typhus Commission, which had controlled a typhus outbreak in Naples, Italy. A typhus team arrived the next day and began to immunize American personnel and dust them with DDT. On 7 May staff began to vaccinate inmates but kept typhus patients isolated for at least twenty-one days from the onset of illness to prevent transmission to others. This meant that the Americans did not immediately enter the inner camp barracks—the worst, most typhus-infested part of the camp—nor did they quickly relieve crowding there for fear of spreading typhus-bearing lice. It took over a week for personnel to prepare more spacious and clean quarters.

Smith wrote his lists, reported to his wife, and kept track of the daily death toll, finding comfort as the number of people who died daily fell from 200 during the first week to twenty by the end of May. Another medical officer performed autopsies. He chose ten of the dead bodies, five from the death train and five from the camp yard, to see what had caused their deaths. All had typhus and extreme malnutrition, eight had advanced tuberculosis, and some bodies had signs of fractures and head injuries.

Survivors in Dachau, 1 May 1945

By the end of May, conditions at Dachau had improved. Typhus was abating and American officials began to release groups of inmates by nationality. Beyond Dachau, the U.S. Typhus Commission tracked down new cases of typhus in civilian and military populations, deloused one million people, sprayed fifteen tons of DDT, and created a cordon sanitaire on the Rhine requiring all who crossed from Germany to be vaccinated and dusted to prevent the spread of disease. Thus the Army averted a broader typhus epidemic. The tuberculosis situation was more complicated and presented the Americans with a conundrum. What to do with thousands of people suffering from a long-term, infectious, and deadly disease?

As with the American POWs, tuberculosis continued to follow Dachau survivors into their new lives. Thousands of Jewish survivors emigrated to what would become the state of Israel. Fifteen years after liberation, the Israeli Minister of Health reported that although concentration camp survivors comprised only 25 percent of the population, they accounted for 65 percent of the tuberculosis cases in the country. Tuberculosis continued to thrive in Europe as well.
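Those ministry figures imply a striking disparity. As a rough illustration (the arithmetic here is mine, not the ministry's), the two population shares can be converted into a relative risk of tuberculosis for survivors versus everyone else:

```python
# Relative tuberculosis burden implied by the Israeli Ministry of Health
# figures: camp survivors were 25% of the population but 65% of TB cases.
survivor_pop_share = 0.25
survivor_case_share = 0.65

# Cases per unit of population, for survivors vs. the rest of the population.
survivor_rate = survivor_case_share / survivor_pop_share           # 2.6
other_rate = (1 - survivor_case_share) / (1 - survivor_pop_share)  # ~0.467

relative_risk = survivor_rate / other_rate
print(f"Relative risk for survivors: {relative_risk:.1f}")  # prints 5.6
```

In other words, fifteen years after liberation a survivor was still roughly five to six times as likely to appear in the tuberculosis statistics as any other resident.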

Historian Albert Cowdrey has credited the American actions with preventing a number of postwar scourges: “No one can prove that a great typhus epidemic, mass deaths of prisoners of war, or widespread outbreaks of disease among the German population would have taken place without the efforts of Army doctors of the field forces and the military government.” But, he continued, “conditions were ripe for such tragedies to occur, and Army medics brought both professional knowledge and military discipline to forestalling what might have been the last calamities of the war in Europe.” Thus, as usual, in public health the good news is no news at all.

Thousands of men survived the Vietnam War because of the quality of their hospital care. US hospitals in Vietnam were the best that could be deployed, incorporating several improvements from previous field hospitals. Army doctors were better trained, and they had good facilities at the semi-permanent base camps. As a result, more advanced surgical procedures were possible: more laparotomies, thoracotomies, vascular repairs (including even aortic and carotid repairs), advanced neurosurgery for head wounds, and other medical procedures. Blood transfusions were performed, with massive quantities of blood available for seriously wounded patients; some patients received as many as 50 units of blood. Advances in equipment resulted in the development of intensive care units with mechanical ventilators. There were far more medications available for particular diseases than in earlier conflicts.

With about 30 physicians assigned, the 12th Evacuation Hospital could keep four or five operating tables going all day, and two or three all night. A common practice was delayed primary closure for wounds with a high likelihood of infection. Instead of stitching the wound closed immediately, dirt and contaminants were flushed out, bleeding was controlled, dead flesh was removed (debrided), the wound was packed with sterile gauze, and antibiotics were administered. For a few days the patient healed, while nurses changed the bandages and made sure the wound did not get worse. Then doctors removed any remaining contaminants or dead flesh and stitched up the wound. This procedure reduced the incidence of infection compared with immediate wound closure, at the risk of a larger scar.

In any given year in Vietnam, about one soldier in three was hospitalized for disease. The main causes for hospitalization were malaria, psychiatric problems, and ordinary fevers. Although many men fell sick, competent care was available and most recovered quickly and returned to duty.

The war spurred advances in surgery and medical trauma research. New surgical techniques allowed limbs that previously would have been amputated to remain functional. Nurse anesthetist Rosemary Sue Smith recalled the development of new blood-handling procedures:

We started separating blood into its components, because we were getting a lot of aggregates that were causing a lot of disseminated intravascular coagulopathy in patients, and causing a lot of blood clots, and pulmonary thrombosis, and a lot of ARDS, Adult Respiratory Distress Syndrome, which started in Da Nang and was called Da Nang Lung initially. It has developed into today being called Adult Respiratory Distress Syndrome, and they did a lot of research on this, and they were having us separate our blood into its components, into fresh frozen plasma and into platelets, and then we started doing blood tests to see which the patients would need. If their platelets were low, or if their blood clotting factors were low, we would just give them the particular products. We actually started breaking these products down and administering them in the Vietnam War, and it’s carried over into civilian life now. They’re used today in acute trauma to prevent disseminated intravascular coagulopathy and prevent Adult Respiratory Distress Syndrome on massive traumas that have to be naturally resuscitated with blood and blood products.

In the 1960s, intensive care was still quite new, and the 12th had only one fully equipped and staffed intensive care ward (later two). A key piece of equipment was the ventilator, then called a "respirator." Ventilators ran on pure oxygen until 1969, when research revealed physiological problems from prolonged breathing of pure oxygen. Early ventilators required considerable maintenance; valves needed frequent cleaning or the machines broke down.

Antibiotics were important because of the wide variety of bacteria and large number of penetrating wounds; in the face of a possible systemic infection (the development of sepsis), antibiotics were delivered through an IV. Nurse Rosie Parmeter recalled having to prepare antibiotics to be delivered through an IV several times a day for each patient, a necessary but time-consuming task.

About two-thirds of patients cared for by the 12th were US military; the other third were mainly Vietnamese but also included nonmilitary Americans and Free World Military Assistance Forces personnel. Staff regularly dealt with the Vietnamese, both military and civilian, enemy and friendly. There were wards set aside for enemy prisoners (who were stabilized, then transferred to hospitals at POW camps) and civilians. Wounded South Vietnamese Army soldiers were also stabilized and transferred to hospitals run by the Army of the Republic of Vietnam (ARVN). Civilian patients often stayed longer because the war swamped the available hospitals for Vietnamese civilians.

Through the years of the Vietnam War, US forces sustained 313,616 wounded in action; at peak strength, there were 26 American hospitals. The 12th Evacuation Hospital was at Cu Chi for 4 years and treated just over 37,000 patients. Records for the 12th are incomplete, but the average died-of-wounds rate in Vietnam was about 2.8% of patients who reached a hospital alive. Applied to the 12th, that rate amounted to about 1,036 patients, including prisoners and Vietnamese as well as Americans. But over 36,000 people survived and could return home because of the treatment they received at the 12th Evac.
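The estimate above is simple arithmetic and can be checked directly. This sketch applies the theater-wide 2.8% died-of-wounds rate to the 12th Evac's patient count; the figures come from the text, and the rounding is mine:

```python
# Estimate deaths at the 12th Evacuation Hospital by applying the
# Vietnam-wide died-of-wounds rate to its total patient count at Cu Chi.
patients_treated = 37_000      # "just over 37,000 patients"
died_of_wounds_rate = 0.028    # ~2.8% of patients who reached a hospital alive

estimated_deaths = round(patients_treated * died_of_wounds_rate)
estimated_survivors = patients_treated - estimated_deaths

print(f"Estimated deaths:    {estimated_deaths:,}")     # prints 1,036
print(f"Estimated survivors: {estimated_survivors:,}")  # prints 35,964
```

The survivor figure is necessarily approximate, since the 12th's own records are incomplete and its actual died-of-wounds rate may have differed from the theater average.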

Sources:

Fort Bayard, by David Kammer, Establishment of Fort Bayard Army Post
http://newmexicohistory.org/places/fort-bayard
George Ensign Bushnell, Colonel, Medical Corps, U. S. Army
THE ARMY MEDICAL BULLETIN, NUMBER 50 (OCTOBER 1939)
http://history.amedd.army.mil/biographies/bushnell
Chapter One, The Early Years: Fort Bayard, New Mexico
http://www.cs.amedd.army.mil/borden/FileDownloadpublic.aspx?docid=332041d7-dbd2-4edf-823f-29a66c0b65ef
Dachau concentration camp (Wikipedia)
http://en.wikipedia.org/wiki/Dachau_concentration_camp
Office of Medical History – United States Army
Esmond R. Long, M.D., Tuberculosis in World War II
Chapter 14 – Tuberculosis
http://history.amedd.army.mil/booksdocs/wwii/PM4/CH14.Tuberculosis.htm

Chapter Four, Tuberculosis in World War I
Chapter Five, “A Gigantic Task”: Treating and Paying for Tuberculosis in the Interwar Period
Chapter Six, “Good Tuberculosis Women”: Tuberculosis Nursing during the Interwar Period
Chapter Seven, Surviving the Great Depression: Fitzsimons and the New Deal
Chapter Eight, Camp Follower: Tuberculosis in World War II
http://www.cs.amedd.army.mil/FileDownload.aspx?
"Good Tuberculosis Men": The Army Medical Department's Struggle with Tuberculosis, by Carol R. Byerly
http://www.cs.amedd.army.mil/borden/FileDownloadpublic.aspx?docid=986faf8a-b833-46a8-a251-00f72c91da2f

The Global Distribution of Yellow Fever and Dengue
D.J. Rogers, A.J. Wilson, S.I. Hay, and A.J. Graham
Adv Parasitol. 2006; 62: 181–220. http://dx.doi.org/10.1016/S0065-308X(05)62006-4

http://www.historyofvaccines.org/content/timelines/yellow-fever

History of yellow fever
http://en.wikipedia.org/wiki/History_of_yellow_fever

Additional Reading:

Open Wound: The Tragic Obsession of William Beaumont.
Jason Karlawish.
Univ Mich Press. 2011.

The Great Influenza.
John M. Barry.
Penguin. 2004.

Flu. The story of the great influenza pandemic of 1918 and
the search for the virus that caused it.
Gina Kolata.
Touchstone. 1999

Pestilence. A Medieval Tale of Plague.
Jeani Rector.
The Horror Zine. 2012.

The Knife Man: The Extraordinary Life of John Hunter, Father of Modern Surgery.
Wendy Moore.
Broadway Books. 2005

Hospital.
Julie Salamon.
Penguin Press. 2008.

Overdosed America.

John Abramson.
Harper. 2004.

Sick.
Jonathan Cohn.
Harper Collins. 2007.
