
Where is the most promising avenue to success in Pharmaceuticals with CRISPR-Cas9?

Author: Larry H. Bernstein, MD, FCAP

 

2.1.2.3

Where is the most promising avenue to success in Pharmaceuticals with CRISPR-Cas9?  Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 2: CRISPR for Gene Editing and DNA Repair

There has been rapid development of methods for genetic engineering based on initial work on bacterial resistance to viral invasion.  The engineering called RNA interference (RNAi) has gone through several stages, leading to more rapid and more specific application with minimal error.

It is a different matter to consider this application with respect to bacterial, viral, fungal, or parasitic invasion than with respect to complex human metabolic conditions and human cancer. The difference is that humans and other multi-organ species are well-differentiated systems with organ-specific translation of the genome into function.

I would expect the use of genomic alteration to be most promising in the near term for the enormous battle against antimicrobial, antifungal, and antiparasitic drug resistance.  This could well be a long-term battle because of the invading organisms’ innate propensity to develop resistance.

A CRISPR/Cas system mediates bacterial innate immune evasion and virulence

Timothy R. Sampson, Sunil D. Saroj, Anna C. Llewellyn, Yih-Ling Tzeng & David S. Weiss


Nature 497, 254–257 (09 May 2013), http://dx.doi.org/10.1038/nature12048

CRISPR/Cas (clustered regularly interspaced palindromic repeats/CRISPR-associated) systems are a bacterial defence against invading foreign nucleic acids derived from bacteriophages or exogenous plasmids (refs 1–4). These systems use an array of small CRISPR RNAs (crRNAs) consisting of repetitive sequences flanking unique spacers to recognize their targets, and conserved Cas proteins to mediate target degradation (refs 5–8). Recent studies have suggested that these systems may have broader functions in bacterial physiology, and it is unknown if they regulate expression of endogenous genes (refs 9, 10). Here we demonstrate that the Cas protein Cas9 of Francisella novicida uses a unique, small, CRISPR/Cas-associated RNA (scaRNA) to repress an endogenous transcript encoding a bacterial lipoprotein. As bacterial lipoproteins trigger a proinflammatory innate immune response aimed at combating pathogens (refs 11, 12), CRISPR/Cas-mediated repression of bacterial lipoprotein expression is critical for F. novicida to dampen this host response and promote virulence. Because Cas9 proteins are highly enriched in pathogenic and commensal bacteria, our work indicates that CRISPR/Cas-mediated gene regulation may broadly contribute to the regulation of endogenous bacterial genes, particularly during the interaction of such bacteria with eukaryotic hosts.

Figure panels from the paper: http://www.nature.com/nature/journal/v497/n7448/carousel/nature12048-f1.2.jpg, http://www.nature.com/nature/journal/v497/n7448/carousel/nature12048-f2.2.jpg, http://www.nature.com/nature/journal/v497/n7448/carousel/nature12048-f4.2.jpg

Zhang lab unlocks crystal structure of new CRISPR/Cas9 genome editing tool

Paul Goldsmith,  2015 Aug

In a paper published today in Cell, researchers from the Broad Institute and the University of Tokyo revealed the crystal structure of the Staphylococcus aureus Cas9 complex (SaCas9)—a highly efficient enzyme that overcomes one of the primary challenges to in vivo mammalian genome editing.

First identified as a potential genome-editing tool by Broad Institute core member Feng Zhang and his colleagues (and published by the Zhang lab in April 2015), SaCas9 is expected to expand scientists’ ability to edit genomes in vivo. This new structural study will help researchers refine and further engineer this promising tool to accelerate genomic research and bring the technology closer to use in the treatment of human genetic disease.

“SaCas9 is the latest addition to our Cas9 toolbox, and the crystal shows us its blueprint,” said co-senior author Feng Zhang, who in addition to his Broad role, is also an investigator at the McGovern Institute for Brain Research, and an assistant professor at MIT.

The engineered CRISPR-Cas9 system adapts a naturally-occurring system that bacteria use as a defense mechanism against viral infection. The Zhang lab first harnessed this system as an effective genome-editing tool in mammalian cells using the Cas9 enzymes from Streptococcus thermophilus (StCas9) and Streptococcus pyogenes (SpCas9). Now, Zhang and colleagues have detailed the molecular structure of SaCas9, providing scientists with a high-resolution map of this enzyme. By comparing the crystal structure of SaCas9 to the crystal structure of the more commonly-used SpCas9 (published by the Zhang lab in February 2014), the team was able to focus on aspects important to Cas9 function—potentially paving the way to further develop the experimental and therapeutic potential of the CRISPR-Cas9 system.

Paper cited: Nishimasu H et al. “Crystal Structure of Staphylococcus aureus Cas9.” Cell, http://dx.doi.org/10.1016/j.cell.2015.08.007

Advances in CRISPR-Cas9 genome engineering: lessons learned from RNA interference

Rodolphe Barrangou, Amanda Birmingham, Stefan Wiemann, Roderick L. Beijersbergen, Veit Hornung and Anja van Brabant Smith
Nucleic Acids Research, 2015 Mar 23. http://dx.doi.org/10.1093/nar/gkv226

RNAi and CRISPR-Cas9 have many clear similarities. Indeed, the mechanisms of both use small RNAs with an on-target specificity of ∼18–20 nt. Both methods have been extensively reviewed recently (3–5) so we only highlight their main features here. RNAi operates by piggybacking on the endogenous eukaryotic pathway for microRNA-based gene regulation (Figure 1A). microRNAs (miRNAs) are small, ∼22-nt-long molecules that cause cleavage, degradation and/or translational repression of RNAs with adequate complementarity to them (6). RNAi reagents for research aim to exploit the cleavage pathway using perfect complementarity to their targets to produce robust downregulation of only the intended target gene. The CRISPR-Cas9 system, on the other hand, originates from the bacterial CRISPR-Cas system, which provides adaptive immunity against invading genetic elements (7). Generally, CRISPR-Cas systems provide DNA-encoded (7), RNA-mediated (8), DNA- (9) or RNA-targeting (10) sequence-specific targeting. Cas9 is the signature protein for Type II CRISPR-Cas systems (11).

Figure 1. (not shown) The RNAi and CRISPR-Cas9 pathways in mammalian cells. (A) miRNA genes code for primary miRNAs that are processed by the Drosha/DGCR8 complex to generate pre-miRNAs with a hairpin structure. These molecules are exported from the nucleus to the cytoplasm, where they are further processed by Dicer to generate ∼22-nt-long double-stranded mature miRNAs. The RNA duplex associates with an Argonaute (Ago) protein and is then unwound; the strand with a more unstable 5′ end (known as the guide strand) is loaded into Ago to create the RNA-induced silencing complex (RISC) while the unloaded strand is discarded. Depending on the degree of complementarity to their targets, miRNAs cause either transcript cleavage and/or translational repression and mRNA degradation. siRNAs directly mimic mature miRNA duplexes, while shRNAs enter the miRNA pathway at the pre-miRNA hairpin stage and are processed into such duplexes. (B) CRISPR-Cas9-mediated genome engineering in mammalian cells requires crRNA, tracrRNA and Cas9. crRNA and tracrRNA can be provided exogenously through a plasmid for expression of a sgRNA, or chemically synthesized crRNA and tracrRNA molecules can be transfected along with a Cas9 expression plasmid. The crRNA and tracrRNA are loaded into Cas9 to form an RNP complex which targets complementary DNA adjacent to the PAM. Using the RuvC and HNH nickases, Cas9 generates a double-stranded break (DSB) that can be either repaired precisely (resulting in no genetic change) or imperfectly repaired to create a mutation (indel) in the targeted gene. There are a myriad of mutations that can be generated; some mutations will have no effect on protein function while others will result in truncations or loss of protein function. Shown are mutations that will induce a frame shift in the coding region of the mRNA (indicated by red X’s), resulting in either a truncated, non-functional protein or loss of protein expression due to nonsense-mediated decay of the mRNA.

Both RNAi and CRISPR-Cas9 have experienced significant milestones in their technological development, as highlighted in Figure 2 (7–14,16–22,24–51) (highlighted topics have been detailed in recent reviews (2,4,52–58)). The CRISPR-Cas9 milestones to date have mimicked a compressed version of those for RNAi, underlining the practical benefit of leveraging similarities to this well-trodden research path. While RNAi has already influenced many advances in the CRISPR-Cas9 field, other applications of CRISPR-Cas9 have not yet been attained but will likely continue to be inspired by the corresponding advances in the RNAi field (Table 1). Of particular interest are the potential parallels in efficiency, specificity, screening and in vivo/therapeutic applications, which we discuss further below.

Figure 2. Timeline of milestones for RNAi and CRISPR-Cas9. Milestones in the RNAi field are noted above the line and milestones in the CRISPR-Cas9 field are noted below the line. These milestones have been covered in depth in recent reviews (2,4,52–58).
Table 1. Summary of improvements in the CRISPR-Cas9 field that can be anticipated by corresponding RNAi advances

Work performed during the first few years of intensive RNAi investigations demonstrated that, when taking 70–75% reduction in RNA levels as a heuristic threshold for efficiency (59), only a small majority of siRNAs and shRNAs function efficiently (24,60) when guide strand sequences are chosen randomly. This observation led to the development in 2004 of rational design algorithms for siRNA molecules (Figure 2), followed later by similar algorithms for shRNAs. These methods have been able to achieve ∼75% correlation and >80% positive predictive power in identifying functional siRNAs (61) but have been somewhat less effective for shRNAs (62) (perhaps because in most cases, shRNAs produce less knockdown than do siRNAs, likely due to a smaller number of active molecules in each cell). crRNAs also vary widely in efficiency: reports have demonstrated indel (insertion and deletion) creation rates between 5 and 65% (20,25), though the average appears to be between 10 and 40% in unenriched cell populations. Indeed, a growing amount of evidence suggests a wide range of crRNA efficiency between genes and even between exons of the same gene, yielding some ‘super’ crRNAs that are more functional (26,27).

Perhaps in no other area are the lessons of RNAi as obvious as in that of specificity. While RNAi was originally hailed as exquisitely specific (64), subsequent research has shown that in some circumstances it can trigger non-specific effects and/or sequence-specific off-target effects (65). Many non-specific effects seen with this approach are mediated by the inadvertent activation of pattern recognition receptors (PRRs) of the innate immune system that have evolved to sense the presence of nucleic acids in certain sub-cellular compartments. siRNA length, certain sequence motifs, the absence of 2-nt 3′ overhangs and cell type are important factors for induction of the mammalian interferon response (66–68). Additionally, the general perturbation of cellular or tissue homeostasis by the delivery process itself can also trigger unwanted responses (most likely secondary to innate immune damage-sensing pathways) such as the widespread alteration of gene expression caused by cationic lipids, especially when used at high concentrations (69). Such nonspecific effects associated with delivery will still exist for CRISPR-Cas9 but can likely be overcome by minimizing lipid concentration as is now routinely done in RNAi studies. Similarly, the introduction of chemical modifications into the backbone of an siRNA duplex (e.g. 2′-O-methyl ribosyl) can block the recognition of RNA molecules by PRRs (66,70–71).

RNAi can also produce sequence-specific off-target effects, which were initially described in early 2003 (31), but whose potential impact was not fully appreciated until well after the method had become a widely used research and screening technique (e.g. (74)). Cleavage-based off-targeting, which occurs when RISC encounters an unintended transcript target with perfect or near-perfect complementarity to its guide strand, can induce knockdown equivalent to that of intended target down-regulation and was originally hypothesized to be the main cause of sequence-specific off-target effects. It took several years to determine that these effects were in fact primarily caused by RNAi reagents acting in a ‘miRNA-like’ fashion, down-regulating unintended targets by small (usually <2-fold) amounts primarily through seed-based interactions with the 3′ UTR of those unintended targets. Because miRNA-like off-targeting is generally seed-based and all transcripts contain matches to a variety of 6–8-base motifs, such off-targeting can affect tens to hundreds of transcripts. Furthermore, if the RNAi reagent contains a seed mimicking that of an endogenous miRNA, the off-targeting may affect the pathway or family of targets evolutionarily selected for regulation by that miRNA. It is not possible to design RNAi reagents that do not contain seed regions found in the transcriptome’s 3′ UTRs, and the non-seed factors that conclusively determine whether or not a seed-matched transcript is in fact off-targeted have not yet been identified. Both rational design and chemical modifications such as 2′-O-methyl ribosyl substitutions can mitigate seed-based off-target effects (32), but without a full solution, specificity remains a well-known pain point for RNAi users.
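
To make the seed-match idea concrete, here is a minimal, purely illustrative Python sketch (the guide and 3′ UTR sequences are invented) that flags transcripts whose 3′ UTR contains a perfect complement to the guide-strand seed, taken here as positions 2–8; real off-target prediction also weighs pairing stability, site context and conservation.

def revcomp_rna(seq):
    # reverse complement of an RNA sequence
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

guide = "UAGCUUAUCAGACUGAUGUUGA"   # hypothetical 22-nt guide strand
seed = guide[1:8]                   # seed region, positions 2-8
site = revcomp_rna(seed)            # the complementary seed-match motif

utrs = {                            # invented 3' UTR fragments
    "GENE_A": "AAGCUACCUCAACAUCAGUCUGAUAAGCUA",
    "GENE_B": "UUUUAGGGCCCAAAUUU",
}

for name, utr in utrs.items():
    if site in utr:
        print(name, "carries a potential seed-matched off-target site at position", utr.find(site))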

Of particular importance is evaluating whether the lower efficiencies seen using CRISPR-Cas9 are sufficient to generate a desired phenotype in the screening assay––that is, determining whether the phenotype is detectable in the targeted cell population. In this regard, two factors are of special concern: the ploidy of the gene locus of interest (as tumor cell lines are often aneuploid) and the likelihood of disrupting the reading frame by the induced mutation (since +3 or −3 indels would not serve this purpose). Taking these factors into account, the chance of obtaining a high percentage of cells that have a functional knockout in a bulk cell culture is relatively low under typical screening conditions. Consequently, it is unlikely that traditional arrayed loss-of-signal screens such as those common in RNAi will be widely feasible in bulk-transfected cells using CRISPR-Cas9.
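
As a rough, back-of-the-envelope illustration of this point (the numbers below are assumptions drawn from the ranges quoted above, not from any specific experiment), the fraction of cells in a bulk culture carrying a frameshifting indel on every allele can be estimated by raising the per-allele probability to the power of the ploidy, assuming roughly two-thirds of random indels shift the reading frame:

def knockout_fraction(indel_rate, ploidy, frameshift_fraction=2/3):
    # probability that every allele carries a frameshifting indel,
    # assuming alleles are edited independently
    return (indel_rate * frameshift_fraction) ** ploidy

for indel_rate in (0.10, 0.40, 0.65):      # crRNA indel rates in the range reported above
    for ploidy in (2, 3, 4):               # aneuploid tumour lines may carry extra copies
        f = knockout_fraction(indel_rate, ploidy)
        print("indel rate {:.0%}, ploidy {}: ~{:.1%} of cells fully knocked out".format(indel_rate, ploidy, f))

Even at a 40% indel rate, a triploid locus leaves only about 2% of unenriched cells as complete knockouts under these assumptions, which illustrates why loss-of-signal screens in bulk-transfected cells are expected to struggle.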

RNAi has demonstrated tremendous value as a functional genomics tool, especially with the technological advances described above that enhance efficiency and decrease off-target effects (118). Likewise, CRISPR-Cas9 has already proven to be a valuable tool for functional genomics studies. Although we have highlighted many points on which the RNAi field can offer pertinent guidance for the effective development and exploitation of CRISPR-Cas9, it is important to remember the fundamental differences that underlie these techniques (Table 3). These contrasts must be considered when selecting the most appropriate method for studying a particular gene or genome.
Molecular consequences. One such fundamental difference between the two is the molecular consequences of their actions. RNAi results in knockdown at the RNA level while CRISPR-Cas9 causes a change in the DNA of the genome; as a corollary, RNAi happens predominantly in the cytoplasm, while CRISPR-Cas9 acts in the nucleus. These contrasts highlight the differing applicability of the techniques: for example, circRNAs (119,120) that differ from their linear counterparts by splice order in the final transcript can be interrogated by RNAi but not CRISPR-Cas9, while intron functionality can be investigated by CRISPR-Cas9 but not RNAi. For more prosaic targets of interest, in some cases the resulting phenotype associated with either knockdown or knockout may be similar but in others there may be significant differences that result from repression of gene expression compared to a complete null genotype. Although CRISPR-Cas9-based approaches for drug target identification have been developed (121), repression of gene expression may better model a potential drug’s means of activity and thus be more relevant for drug discovery efforts.

Duration of effect. Because of differences in their mode of action, CRISPR-Cas9 and RNAi also differ in their duration of effect. siRNA knockdown is typically transient (lasting 2–7 days), while genome engineering with CRISPR-Cas9 induces a permanent effect that, if all alleles are affected, sustainably removes gene function and activity. shRNA knockdown can be either short- or long-term depending on whether the shRNA is continuously expressed, providing some middle ground; shRNA activity can also be turned on and off with inducible vectors (122,123) although some leakage can occur even in the off state, depending on the inducible system. Inducible or transient systems will also likely be necessary for studying essential genes via CRISPR-Cas9.

Modulation of non-coding genes. Most protein-coding genes will be easily down-modulated by either RNAi or CRISPR-Cas9. For permanent disruption of protein-coding genes using CRISPR-Cas9, frameshift mutations in a critical coding exon (i.e. an early protein-coding exon that is used by all relevant transcript variants) must occur, while RNAi reagents can be targeted essentially anywhere within the transcript. However, knockdown or knockout of non-coding RNAs is more nuanced. The study of small non-coding genes, particularly, is complicated for both RNAi and CRISPR-Cas9 by the limited design space for targeting the non-coding gene without affecting nearby genes.

The fact that CRISPR-Cas9 is not an endogenous mammalian system provides the opportunity for innovative protein evolution studies that are not possible with RNAi. Given this, we anticipate that the CRISPR-Cas9 field will expand beyond the canonical S. pyogenes SpyCas9 in combination with the NGG PAM that has been the focus of virtually all mammalian applications to date. Indeed, other Cas9 proteins are being increasingly characterized (145) with their respective PAMs (of various sizes and sequences) in order to expand targeting specificity.

The new frontier of genome engineering with CRISPR-Cas9
GENOME EDITING
Jennifer A. Doudna* and Emmanuelle Charpentier
Science 346, 1258096 (2014). http://dx.doi.org/10.1126/science.1258096

Fig. 1.Timeline of CRISPR-Cas and genome engineering research fields. Key developments in both fields are shown. These two fields merged in 2012 with the discovery that Cas9 is an RNA-programmable DNA endonuclease, leading to the explosion of papers beginning in 2013 in which Cas9 has been used to modify genes in human cells as well as many other cell types and organisms.

Functionality of CRISPR-Cas9. Bioinformatic analyses first identified Cas9 (formerly COG3513, Csx12, Cas5, or Csn1) as a large multifunctional protein (36) with two putative nuclease domains, HNH (38, 43, 44) and RuvC-like (44). Genetic studies showed that S. thermophilus Cas9 is essential for defense against viral invasion (45, 66), might be responsible for introducing DSBs into invading plasmids and phages (67), enables in vivo targeting of temperate phages and plasmids in bacteria (66, 68), and requires the HNH and RuvC domains to interfere with plasmid transformation efficiency (68). In 2011 (66), trans-activating crRNA (tracrRNA)—a small RNA that is trans-encoded upstream of the type II CRISPR-Cas locus in Streptococcus pyogenes—was reported to be essential for crRNA maturation by ribonuclease III and Cas9, and tracrRNA-mediated activation of crRNA maturation was found to confer sequence-specific immunity against parasite genomes. In 2012 (64), the S. pyogenes CRISPR-Cas9 protein was shown to be a dual-RNA–guided DNA endonuclease that uses the tracrRNA:crRNA duplex (66) to direct DNA cleavage (64) (Fig. 2). Cas9 uses its HNH domain to cleave the DNA strand that is complementary to the 20-nucleotide sequence of the crRNA; the RuvC-like domain of Cas9 cleaves the DNA strand opposite the complementary strand (64, 65) (Fig. 2). Mutating either the HNH or the RuvC-like domain in Cas9 generates a variant protein with single-stranded DNA cleavage (nickase) activity, whereas mutating both domains (dCas9; Asp10 → Ala, His840 → Ala) results in an RNA-guided DNA binding protein (64, 65). DNA target recognition requires both base pairing to the crRNA sequence and the presence of a short sequence (PAM) adjacent to the targeted sequence in the DNA (64, 65) (Fig. 2). The dual tracrRNA:crRNA was then engineered as a single guide RNA (sgRNA) that retains two critical features: the 20-nucleotide sequence at the 5′ end of the sgRNA that determines the DNA target site by Watson-Crick base pairing, and the double-stranded structure at the 3′ side of the guide sequence that binds to Cas9 (64) (Fig. 2). This created a simple two-component system in which changes to the guide sequence (20 nucleotides in the native RNA) of the sgRNA can be used to program CRISPR-Cas9 to target any DNA sequence of interest as long as it is adjacent to a PAM (64).
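
The geometry described above (20-nt guide, PAM immediately 3′ of the protospacer, blunt cut 3 bp 5′ of the PAM, HNH cutting the crRNA-complementary strand and RuvC the other) can be sketched in a few lines of Python; the genomic sequence and guide below are invented purely for illustration:

def revcomp(seq):
    # reverse complement of a DNA sequence
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

dna   = "GGACATCGATGTCACCTCCAATGACGGTTAA"   # hypothetical genomic region (protospacer strand)
guide = "ATCGATGTCACCTCCAATGA"              # 20-nt spacer programmed into the sgRNA

i = dna.find(guide)                          # protospacer start
pam = dna[i + 20 : i + 23]                   # 3 nt immediately 3' of the protospacer
assert pam[1:] == "GG", "SpCas9 needs an NGG PAM"

cut = i + 20 - 3                             # blunt DSB 3 bp 5' of the PAM
print("protospacer:", dna[i:i + 20], " PAM:", pam)
print("non-target strand (RuvC cut):", dna[:cut] + " / " + dna[cut:])
print("target strand (HNH cut):     ", revcomp(dna[cut:]) + " / " + revcomp(dna[:cut]))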

Fig. 2. Biology of the type II-A CRISPR-Cas system.The type II-A system from S. pyogenes is shown as an example. (A) The cas gene operon with tracrRNA and the CRISPR array. (B) The natural pathway of antiviral defense involves association of Cas9 with the antirepeat-repeat RNA (tracrRNA: crRNA) duplexes, RNA co-processing by ribonuclease III, further trimming, R-loop formation, and target DNA cleavage. (C) Details of the natural DNA cleavage with the duplex tracrRNA:crRNA

Mechanism of CRISPR-Cas9–mediated genome targeting. Structural analysis of S. pyogenes Cas9 has revealed additional insights into the mechanism of CRISPR-Cas9 (Fig. 3). Molecular structures of Cas9 determined by electron microscopy and x-ray crystallography show that the protein undergoes large conformational rearrangement upon binding to the guide RNA, with a further change upon association with a target double-stranded DNA (dsDNA). This change creates a channel, running between the two structural lobes of the protein, that binds to the RNA-DNA hybrid as well as to the coaxially stacked dual-RNA structure of the guide corresponding to the crRNA repeat–tracrRNA antirepeat interaction (77, 78). An arginine-rich α helix (77–79) bridges the two structural lobes of Cas9 and appears to be the hinge between them.

Fig. 4. CRISPR-Cas9 as a genome engineering tool. (A) Different strategies for introducing blunt double-stranded DNA breaks into genomic loci, which become substrates for endogenous cellular DNA repair machinery that catalyze nonhomologous end joining (NHEJ) or homology-directed repair (HDR). (B) Cas9 can function as a nickase (nCas9) when engineered to contain an inactivating mutation in either the HNH domain or RuvC domain active sites. When nCas9 is used with two sgRNAs that recognize offset target sites in DNA, a staggered double-strand break is created. (C) Cas9 functions as an RNA-guided DNA binding protein when engineered to contain inactivating mutations in both of its active sites. This catalytically inactive or dead Cas9 (dCas9) can mediate transcriptional down-regulation or activation, particularly when fused to activator or repressor domains. In addition, dCas9 can be fused to fluorescent domains, such as green fluorescent protein (GFP), for live-cell imaging of chromosomal loci. Other dCas9 fusions, such as those including chromatin or DNA modification domains, may enable targeted epigenetic changes to genomic DNA.

The programmable binding capability of dCas9 can also be used for imaging of specific loci in live cells. An enhanced green fluorescent protein–tagged dCas9 protein and a structurally optimized sgRNA were shown to produce robust imaging of repetitive and nonrepetitive elements in telomeres and coding genes in living cells (131). This CRISPR imaging tool has the potential to improve the current technologies for studying conformational dynamics of native chromosomes in living cells, particularly if multicolor imaging can be developed using multiple distinct Cas9 proteins. It may also be possible to couple fluorescent proteins or small molecules to the guide RNA, providing an orthogonal strategy for multicolor imaging using Cas9. Novel technologies aiming to disrupt proviruses may be an attractive approach to eliminating viral genomes from infected individuals and thus curing viral infections. An appeal of this strategy is that it takes advantage of the primary native functions of CRISPR-Cas systems as antiviral adaptive immune systems in bacteria. The targeted CRISPR-Cas9 technique was shown to efficiently cleave and mutate the long terminal repeat sites of HIV-1 and also to remove internal viral genes from the chromosome of infected cells (132, 133). CRISPR-Cas9 is also a promising technology in the field of engineering and synthetic biology. A multiplex CRISPR approach referred to as CRISPRm was developed to facilitate directed evolution of biomolecules (134). CRISPRm consists of the optimization of CRISPR-Cas9 to generate quantitative gene assembly and DNA library insertion into the fungal genomes, providing a strategy to improve the activity of biomolecules. In addition, it has been possible to induce Cas9 to bind single-stranded RNA in a programmable fashion by using short DNA oligonucleotides containing PAM sequences (PAMmers) to activate the enzyme, suggesting new ways to target transcripts without prior affinity tagging (135). Several groups have developed algorithmic tools that predict the sequence of an optimal sgRNA with minimized off-target effects (for example, http://tools.genome-engineering.org, http://zifit.partners.org, and www.e-crisp.org) (141–145).
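
As a toy illustration of what such design tools do at their core (real tools score mismatch position, allow alternative PAMs, and search both strands of the whole genome; the sequences below are invented and only one strand is scanned), one can count NGG-adjacent sites that match a candidate guide within a small mismatch budget:

def hamming(a, b):
    # number of mismatched positions between two equal-length strings
    return sum(x != y for x, y in zip(a, b))

def count_candidate_sites(genome, guide, max_mismatches=2):
    hits = 0
    for i in range(len(genome) - 23 + 1):
        site, pam = genome[i:i + 20], genome[i + 20:i + 23]
        if pam[1:] == "GG" and hamming(site, guide) <= max_mismatches:
            hits += 1
    return hits

genome = "ATCGATGTCACCTCCAATGACGGTTAA" + "ATCGATGACACCTCTAATGATGG" + "CC"   # invented sequence
guide  = "ATCGATGTCACCTCCAATGA"
# finds the perfect on-target site plus one 2-mismatch near-match
print("NGG-adjacent sites within 2 mismatches:", count_candidate_sites(genome, guide))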

Our understanding of how genomes direct development, normal physiology, and disease in higher organisms has been hindered by a lack of suitable tools for precise and efficient gene engineering. The simple two-component CRISPR-Cas9 system, using Watson-Crick base pairing by a guide RNA to identify target DNA sequences, is a versatile technology that has already stimulated innovative applications in biology. Understanding the CRISPR-Cas9 system at the biochemical and structural level allows the engineering of tailored Cas9 variants with smaller size and increased specificity. A crystal structure of the smaller Cas9 protein from Actinomyces, for example, showed how natural variation created a streamlined enzyme, setting the stage for future engineered Cas9 variants (77). A deeper analysis of the large panel of naturally evolving bacterial Cas9 enzymes may also reveal orthologs with distinct DNA binding specificity, will broaden the choice of PAMs, and will certainly reveal shorter variants more amenable for delivery in human cells.

Furthermore, specific methods for delivering Cas9 and its guide RNA to cells and tissues should benefit the field of human gene therapy. For example, recent experiments confirmed that the Cas9 protein-RNA complex can be introduced directly into cells using nucleofection or cell-penetrating peptides to enable rapid and timed editing (89,152), and transgenic organisms that express Cas9 from inducible promoters are being tested. An exciting harbinger of future research in this area is the recent demonstration that Cas9–guide RNA complexes, when injected into adult mice, provided sufficient editing in the liver to alleviate a genetic disorder (153). Understanding the rates of homology-directed repair after Cas9-mediated DNA cutting will advance the field by enabling efficient insertion of new or corrected sequences into cells and organisms. In addition, the rapid advance of the field has raised excitement about commercial applications of CRISPR-Cas9.

CRISPR Needle with DNA Nanoclews 

GEN 2015 Aug

A team of researchers from North Carolina State University (NC State) and the University of North Carolina at Chapel Hill (UNC-CH) has created and utilized a nanoscale vehicle composed of DNA to deliver the CRISPR-Cas9 gene editing complex into cells both in vitro and in vivo.

When the nanoclew comes into contact with a cell, the cell absorbs the nanoclew completely—swallowing it and wrapping it in an endosome. Nanoclews are coated with a positively charged polymer that breaks down the endosome, setting the nanoclew free inside the cell, thus allowing CRISPR-Cas9 to make its way to the nucleus. [North Carolina State University]

  • “Traditionally, researchers deliver DNA into a targeted cell to make the CRISPR RNA and Cas9 inside the cell itself—but that limits control over its dosage,” explained co-senior author Chase Beisel, Ph.D., assistant professor in the department of chemical and biomolecular engineering at NC State. “By directly delivering the Cas9 protein itself, instead of turning the cell into a Cas9 factory, we can ensure that the cell receives the active editing system and can reduce problems with unintended editing.”
  • The findings from this study were published recently in Angewandte Chemie through an article entitled “Self-Assembled DNA Nanoclews for the Efficient Delivery of CRISPR-Cas9 for Genome Editing.”
  • The nanoclews are made of a single, tightly-wound strand of DNA. The DNA is engineered to partially complement the relevant CRISPR RNA it will carry, allowing the CRISPR-Cas9 complex to loosely attach itself to the nanoclew. “Multiple CRISPR-Cas complexes can be attached to a single nanoclew,” noted lead author Wujin Sun, a Ph.D. student in Dr. Gu’s laboratory.
  • When the nanoclew comes into contact with a cell, the cell absorbs the nanoclew completely through typical endocytic mechanisms. The nanoclews are coated with a positively charged polymer, in order to break down the endosomal membrane and set the nanoclew free inside the cell. The CRISPR-Cas9 complexes will then free themselves from the nanoclew structure to make their way to the nucleus. Once the CRISPR-Cas9 complex reaches the nucleus, gene editing can begin.
  • In order to test their delivery method, the investigators created fluorescently labeled cancer cells in culture and within mice. The CRISPR nanoclew was then designed to target the gene generating fluorescent protein in the cells—if the glowing stopped, then the nanoclews worked. “And they did work. More than one-third of cancer cells stopped expressing the fluorescent protein,” Dr. Beisel stated.

Imitating Viruses to Deliver Drugs to Cells

2015 Aug – by CNRS (Délégation Paris Michel-Ange)

Figure (not shown). Assembly of the artificial virus and protein delivery: the virus consists of an initial polymer (pGi-Ni2+, left) on which the proteins to be delivered bind. It is encapsulated (right) by a second polymer (πPEI), which binds to the cell surface.

Viruses are able to redirect the functioning of cells in order to infect them. Inspired by their mode of action, scientists from the CNRS and Université de Strasbourg have designed a “chemical virus” that can cross the lipid bilayer that surrounds cells, and then disintegrate in the intracellular medium in order to release active compounds. To achieve this, the team used two polymers they had designed, which notably can self-assemble or dissociate, depending on the conditions. This work, the result of collaborative efforts by chemists, biologists and biophysicists, is published in the 1st September issue of Angewandte Chemie International Edition.

Biotechnological advances have offered access to a wealth of compounds with therapeutic potential.  Many of these compounds are only active inside human cells but remain unusable because the lipid membrane surrounding these cells is a barrier they cannot cross. The challenge is therefore to find transfer solutions that can cross this barrier.

By imitating the ability of viruses to penetrate cells, chemists in the Laboratoire de Conception et Application de Molécules Bioactives (CNRS/Université de Strasbourg) sought to design particles capable of releasing macromolecules that are only active inside cells. To achieve this, these particles must comply with several, often contradictory, constraints. They must remain stable in the extracellular medium, they must be able to bind to the cells so that they can be internalized, but they must be more fragile inside the cells so that they can release their content. Using two polymers designed by the team, the scientists succeeded in creating a “chemical virus” that meets the conditions necessary for the direct delivery of active proteins into cells.

In practice, the first polymer (pGi-Ni2+) serves as a substrate for the proteins that bind to it. The second, recently patented polymer (πPEI), encapsulates this assembly thanks to its positive charges, which bind to the negative charges of pGi-Ni2+. The particles obtained (30-40 nanometers in diameter) are able to recognize the cell membrane and bind to it. This binding activates a cellular response: the nanoparticle is surrounded by a membrane fragment and enters the intracellular compartment, called the endosome. Although they remain stable outside the cell, the assemblies are attacked by the acidity that prevails within this new environment.  Furthermore, this drop in pH allows the πPEI to burst the endosome, releasing its content of active compounds.

Thanks to this assembly, the scientists were able to concentrate enough active proteins within the cells to achieve a notable biological effect. Thus, by delivering a protein called caspase 3 into cancer cell lines, they succeeded in inducing 80% cell death.

The in vitro results are encouraging, particularly since this “chemical virus” only becomes toxic at a dose ten times higher than that used during the study. Furthermore, preliminary results in the mouse have not revealed any excess mortality. However, elimination of the two polymers by the body remains an open question. The next stage will consist in testing this method in depth and in vivo, in animals. In the short term, this system will serve as a research tool to vectorize recombinant and/or chemically modified proteins into cells. In the longer term, this work could make it possible to apply pharmaceutical proteins to intracellular targets and contribute to the development of innovative drugs.

This work was made possible by the collaboration of biophysicists and biologists. The skills in electron cryomicroscopy available at the Institut de Génétique et de Biologie Moléculaire et Cellulaire (CNRS/Université de Strasbourg/Inserm), and the expertise in atomic force microscopy of the Laboratoire de Biophotonique et Pharmacologie (CNRS/Université de Strasbourg) enabled highly precise characterization of the molecular assemblies. The Laboratoire Biotechnologie et Signalisation Cellulaire (CNRS/Université de Strasbourg) supplied the recombinant proteins encapsulated in the artificial virus.

A CRISPR view of development

Melissa M. Harrison,1 Brian V. Jenkins,2 Kate M. O’Connor-Giles,3,4 and Jill Wildonger2
1Department of Biomolecular Chemistry, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53706, USA; 2Biochemistry Department, University of Wisconsin-Madison, Madison, Wisconsin 53706, USA; 3Laboratory of Genetics, 4Laboratory of Cell and Molecular Biology, University of Wisconsin-Madison, Madison, Wisconsin 53706, USA
GENES & DEVELOPMENT 2014; 28:1859–1872
http://www.genesdev.org/cgi/doi/10.1101/gad.248252.114.

The CRISPR (clustered regularly interspaced short palindromic repeat)–Cas9 (CRISPR-associated nuclease 9) system is poised to transform developmental biology by providing a simple, efficient method to precisely manipulate the genome of virtually any developing organism. This RNA-guided nuclease (RGN)-based approach already has been effectively used to induce targeted mutations in multiple genes simultaneously, create conditional alleles, and generate endogenously tagged proteins. Illustrating the adaptability of RGNs, the genomes of >20 different plant and animal species as well as multiple cell lines and primary cells have been successfully modified. Here we review the current and potential uses of RGNs to investigate genome function during development.

Through the regulated process of development, a single cell divides and differentiates into the multitude of specialized cells that compose a mature organism. This process is controlled in large part by differential gene expression, which generates cells with distinct identities and phenotypes despite nearly identical genomes. Recent advances in genome engineering provide the opportunity to efficiently introduce almost any targeted modification in genomic DNA and, in so doing, the unprecedented ability to probe genome function during development in a diverse array of systems.

The CRISPR–Cas9 system has propelled genome editing from being a technical possibility to a practical reality for developmental biology studies due to the simplicity with which the Cas9 nuclease is recruited to a specific DNA sequence by a small, easily generated guide RNA (gRNA) that recognizes its genomic target via standard Watson-Crick base-pairing.

Cas9 enzymes from type II CRISPR–Cas systems are emerging as the sequence-specific nucleases of choice for genome engineering for several reasons. Most notably, as an RNA-guided nuclease (RGN), Cas9 is guided by a single gRNA that is readily engineered. In the case of the most commonly used Cas9, derived from Streptococcus pyogenes, the gRNA targeting sequence comprises 20 nucleotides (nt) that can be ordered as a pair of oligonucleotides and rapidly cloned. In contrast, generating an effective ZFN or TALEN is labor-intensive (see Box 1). ZFNs and TALENs are proteins that combine uniquely designed and generated DNA-binding sequences with the FokI nuclease cleavage domain. FokI is an obligate dimer, necessitating the generation of two novel proteins per editing experiment compared with a single gRNA for CRISPR–Cas9-mediated targeting.

Figure 1. (not shown) The flexibility and adaptability of the CRISPR–Cas9 system offers vast potential for genome manipulations. (A) Overview of the CRISPR–Cas9 system. At its simplest, the system consists of the chimeric gRNA (purple), which guides the Cas9 nuclease to the genomic target site (red). The genomic target site is composed of 20 base pairs (bp) of homology with the gRNA (red) and a PAM sequence (white). Cleavage (scissors) occurs 3 bp 5′ of the PAM. (B) Components required for RGN-mediated genome editing. The CRISPR–Cas9 components can be delivered as DNA, RNA, or protein, as indicated, and introduced into the cell or embryo through injection, transfection, electroporation, or infection. Organisms and cells expressing transgenic Cas9 are available, and in Drosophila, both the transgenic Cas9-expressing strains and those expressing transgenic gRNA have been shown to increase targeting efficacy. To introduce designer mutations and/or exogenous sequence, a ssDNA or dsDNA donor template is included. (C) Genome engineering outcomes. Cas9-induced DSBs can be repaired by either NHEJ or HDR. (Top left) The DSB generated by a single gRNA can be repaired by NHEJ to generate indels. (Bottom left, dashed box) With the use of two gRNAs, NHEJ can result in larger deletions. If the gRNAs target sequences on different chromosomes, it is possible to generate chromosomal translocations and inversions. (Right) With the inclusion of a researcher-designed donor template, HDR makes it possible to generate conditional alleles (top), fluorescently or epitope tagged proteins (middle), specific mutations (bottom), or any combination thereof. The donor template can also be designed to correct a mutation in the organism or cell or replace a gene. (D) Catalytically inactive dCas9 provides a platform for probing genomic function. dCas9 can be fused to any number of different effectors to allow for the visualization of where specific DNA sequences localize, the repression or activation of transcription, or the immunoprecipitation of the bound chromatin.

Box 1. A miniguide to genome engineering techniques

Zinc finger nucleases (ZFNs), transcriptional activator-like effector nucleases (TALENs), and CRISPR (clustered regularly interspaced short palindromic repeat)–Cas9 (CRISPR-associated nuclease 9) all function on a similar principle: A nuclease is guided to a specific sequence within the genome to induce a double strand DNA break (DSB). Once a DSB is generated, the cell’s intrinsic DNA repair machinery is set in motion, and it is during the repair of the DSB that the genome is modified. DSBs are typically repaired by either non-homologous end joining (NHEJ) or homology-directed repair (HDR) (Fig. 1C). In NHEJ, the two cleaved ends of the DSB are ligated together. During this process, DNA of varying sizes, generally on the order of a few base pairs, is occasionally inserted and/or deleted randomly. When a DSB is targeted to a coding exon, these insertions or deletions (indels) can result in a truncated gene product. If two DSBs are induced, NHEJ can generate deletions, eliminating an entire gene or region. HDR uses homologous sequence as a template to repair the DSB. Researchers can take advantage of this repair pathway to introduce designer mutations or exogenous sequence, such as genetically encoded tags, by supplying the cell with a donor DNA template that has homology with the sequence flanking the DSB. Note that cells can also use endogenous DNA as a template, in which case the DSB is repaired without incorporation of the donor-supplied edits. It is important to keep in mind that although the researcher directs where the DSB occurs in the genome, the cell is in control of how the DSB is repaired, which determines the ultimate outcome of a genome-editing experiment.
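
The frame-shift logic in Box 1 is easy to see with a small, invented coding fragment: deleting a number of bases that is not a multiple of 3 shifts every downstream codon (and in this example happens to expose a premature TGA stop), whereas a 3-bp deletion simply removes one codon:

def codons(seq):
    # split a sequence into complete codons, dropping any trailing partial codon
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

cds = "ATGGCTGAAGTTCTGAAACGT"        # hypothetical in-frame exon fragment

for deletion in (0, 1, 3):           # possible NHEJ outcomes: no indel, -1 bp, -3 bp
    edited = cds[:9] + cds[9 + deletion:]
    status = "frameshift" if deletion % 3 else "in frame"
    print("-{} bp ({}): {}".format(deletion, status, " ".join(codons(edited))))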

ZFNs

ZFNs are fusion proteins comprised of DNA-binding C2H2 zinc fingers fused to the nonspecific DNA cleavage domain of the nuclease FokI (for review, see Carroll 2011). Each zinc finger can be engineered to recognize a nucleotide triplet, and multiple (typically three to six) zinc fingers are joined in tandem to target specific genome sequences. Because the FokI cleavage domain must dimerize to be active, two ZFNs are required to create a DSB. This technique, which was first successfully used in fruit flies more than a decade ago (Bibikova et al. 2002), has since been used to modify the genomes of many different organisms, including those that had not previously been developed as genetic model systems.

TALENs

Similar to ZFNs, TALENs are chimeric proteins comprised of a programmable DNA-binding domain fused to the FokI nuclease domain (for review, see Joung and Sander 2013). TALEs are naturally occurring proteins that are secreted by the bacterium Xanthomonas and bind to sequences in the host plant genome, activating transcription. The TALE DNA-binding domain is composed of multiple repeats, each of which is 33–35 amino acids long. Each repeat recognizes a single nucleotide in the target DNA sequence. Nucleotide specificity is conferred by a two-amino-acid hypervariable region present in each repeat. Sequence-specific TALENs are generated by modifying the two residues in the hypervariable region and concatenating multiple TALE repeats together. Because the TALE DNA-binding domain is fused to FokI, TALENs, like ZFNs, must also be used as dimers to generate DSBs.
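
For orientation, the commonly cited repeat-variable diresidue (RVD) code is NI for A, HD for C, NG for T and NN for G; this code is not spelled out in the passage above, so the toy decoder below should be read as an illustration of the principle rather than a design tool:

RVD_CODE = {"NI": "A", "HD": "C", "NG": "T", "NN": "G"}   # simplified RVD-to-base code (assumed here)

def tale_target(rvds):
    # read off the DNA base recognized by each TALE repeat, in order
    return "".join(RVD_CODE[r] for r in rvds)

print(tale_target(["NI", "HD", "NG", "NN", "NG", "HD"]))   # prints ACTGTC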

RGNs hold great potential for dissecting how the genome functions during development. Since the CRISPR–Cas9 system has been recently described in detail elsewhere (Hsu et al. 2014; Sander and Joung 2014), we provide just a brief overview of the system (Box 1; Fig. 1A–C) and focus here on a few practical considerations for using RGNs to edit the genome of a developing organism.

The CRISPR–Cas9 system

The CRISPR–Cas9 genome-editing method is derived from a prokaryotic RNA-guided defense system (Gasiunas et al. 2012; Jinek et al. 2012, 2013; Cong et al. 2013; Mali et al. 2013c). CRISPR repeats were first discovered in the Escherichia coli genome as an unusual repeat locus (Ishino et al. 1987). The significance of this structure was appreciated later when investigators realized that phage and plasmid sequences are similar to the spacer sequences in CRISPR loci (Bolotin et al. 2005; Mojica et al. 2005; Pourcel et al. 2005). Soon afterward, it was shown that spacers are derived from viral genomic sequence (Barrangou et al. 2007). In the CRISPR–Cas system, short sequences (referred to as ‘‘protospacers’’) from an invading viral genome are copied as ‘‘spacers’’ between repetitive sequences in the CRISPR locus of the host genome. The CRISPR locus is transcribed and processed into short CRISPR RNAs (crRNAs) that guide the Cas proteins to the complementary genomic target sequence. There are at least eleven different CRISPR–Cas systems, which have been grouped into three major types (I–III). In the type I and II systems, nucleotides adjacent to the protospacer in the targeted genome comprise the protospacer adjacent motif (PAM). The PAM is essential for Cas to cleave its target DNA, enabling the CRISPR–Cas system to differentiate between the invading viral genome and the CRISPR locus in the host genome, which does not incorporate the PAM. For additional details on this fascinating prokaryotic adaptive immune response, see recent reviews (Sorek et al. 2013; Terns and Terns 2014). Type II CRISPR–Cas systems have been adapted as a genome-engineering tool. In this system, crRNA teams up with a second RNA, called trans-acting CRISPR RNA (tracrRNA), which is critical for crRNA maturation and recruiting the Cas9 nuclease to DNA (Deltcheva et al. 2011; Jinek et al. 2012). The RNA that guides Cas9 uses a short (∼20-nt) sequence to identify its genomic target. This three-component system was simplified by fusing together crRNA and tracrRNA, creating a single chimeric ‘‘guide’’ RNA (abbreviated as sgRNA or simply gRNA) (Gasiunas et al. 2012; Jinek et al. 2012). While some early experiments indicated that a gRNA may not cleave a subset of targets as efficiently as a crRNA in combination with tracrRNA (Mali et al. 2013c), the ease of using a single RNA has led to the widespread adoption of gRNAs for genome engineering. A number of resources for designing experiments using the CRISPR–Cas9 system are freely available online. (A comprehensive list is available at http://www.geewisc.wisc.edu.)

The current methods of producing the CRISPR–Cas9 components provide great flexibility in terms of expression and delivery, and biologists can exploit these options to control when and where DSBs are generated in an organism. To introduce DSBs and generate modifications early in development, the CRISPR–Cas9 components can be injected as DNA, RNA, or protein into most developing organisms. This approach, which has been widely used, generates mosaic organisms for analysis. To gain control over which tissues are affected, a plasmid expressing Cas9 under the control of tissue-specific enhancers can be used. Since each cell has a choice of whether to repair a break through NHEJ or HDR, a variety of different repair events will be present in the injected organism (and in individual cells). The frequency at which both alleles of a gene are affected has been reported to be high enough to visualize null phenotypes in developing mice and zebrafish (Jao et al. 2013; Wang et al. 2013a; Yasue et al. 2014; Yen et al. 2014).

Genome engineering with RGNs enables the direct manipulation of nearly any sequence in the genome to determine its role in development. The major limitation as to which genomic loci can be targeted is the requirement of a specific protospacer adjacent motif (PAM). The PAM is a short DNA motif adjacent to the Cas9 recognition sequence in the target DNA and is essential for cleavage. The most commonly used S. pyogenes Cas9 requires the PAM sequence 5′-NGG (in cell lines, other PAMs are recognized, including 5′-NAG, but at a lower frequency) (Jinek et al. 2012; Esvelt et al. 2013; Hsu et al. 2013; Jiang et al. 2013a; Zhang et al. 2014). The PAM is critical for cleavage and increases target specificity but, conversely, can also make some segments of the genome refractory to Cas9 cleavage. For example, AT-rich genomic sequences may contain fewer PAM sites that would be recognized and cleaved by S. pyogenes Cas9. Thus, some poly(dA-dT) tracts, which are implicated in nucleosome positioning (for review, see Struhl and Segal 2013), may be difficult to manipulate using S. pyogenes Cas9.
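
A rough way to see why AT-rich stretches can be refractory is simply to count, on either strand, how many positions offer the NGG motif (or CCN, which is an NGG on the opposite strand); the two sequences below are invented, and the count ignores overlaps and guide quality:

def pam_adjacent_sites(seq):
    # count NGG on this strand plus CCN (an NGG on the opposite strand)
    return sum(seq[i + 1:i + 3] == "GG" or seq[i:i + 2] == "CC"
               for i in range(len(seq) - 2))

gc_rich = "GCCGGATCCGGTGGCACGGCTAGGCCGT"   # invented GC-rich stretch
at_rich = "ATTTAAATTATAAATTTTATAATATTTA"   # invented AT-rich, poly(dA-dT)-like stretch
print("GC-rich:", pam_adjacent_sites(gc_rich), "PAM-adjacent positions")
print("AT-rich:", pam_adjacent_sites(at_rich), "PAM-adjacent positions")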

With RGNs, a variety of genomic manipulations are brought within reach of developmental biologists studying a diversity of organisms (Table 1 [not shown]). This approach also makes it possible to readily generate mutations in different genetic strains, making it easier to control genetic background and eliminating the need to carry out multigenerational mating schemes to bring different mutations together in the same animal. While the CRISPR–Cas9 system has been widely used to introduce indels and deletions, HDR makes it possible to introduce more precise gene mutations, deletions, and exogenous sequences, such as loxP sites and green fluorescent protein (GFP).

Multiplexing advantages

Genes that have essential roles in development are often functionally redundant, and thus the effects of mutating a single gene can be masked by the presence of another gene. Due to the ease and efficiency with which gRNAs can be generated, multiple gRNAs can be used in a single experiment to simultaneously mutate multiple genes, overcoming issues of redundancy. Recent technical innovations now make it possible to express multiple gRNAs from a single transcript (Nissim et al. 2014; Tsai et al. 2014), making RGN multiplexing experiments even easier to carry out. Such multiplexing experiments will also facilitate multifaceted experiments, including epistasis tests and manipulating genes that are physically very close together in the genome. Multiplexing has already been used successfully to simultaneously disrupt both Tet1 and Tet2 in developing mice following injection into zygotes (Wang et al. 2013a). The CRISPR–Cas9 system has also been used to eliminate two genes in monkeys (Niu et al. 2014b).

Many gene products of interest to developmental biologists are essential early in development, and mutations in these genes are lethal to an animal before it reaches later developmental stages. Conditional alleles provide spatial and temporal control over gene inactivation and therefore have been invaluable tools for working with genes that cause early lethality. Conditional alleles have also been used to determine where and when a gene is acting during development. The utility of exerting conditional control over gene activity is widely recognized, and an international consortium is currently working to create a library of conditional alleles for ~20,000 genes in the mouse genome (Skarnes et al. 2011). Since the expression of the conditional allele reflects the expression pattern of the recombinase, it is advantageous to have a variety of lines that express recombinase in specific tissues or at discrete developmental stages. The CRISPR–Cas9 system was recently used to generate two different Cre recombinase-expressing lines in rats (Ma et al. 2014b). Thus, RGNs are being used to rapidly generate the tools necessary to probe gene function in a tissue- and time-dependent manner.

RGNs open the door to quickly and easily tagging endogenous genes for developmental studies. Furthermore, because the CRISPR–Cas9 system is amenable to multiplexing, tags could be added simultaneously to multiple genes or different splice isoforms of a single gene. There is an ever-growing number of genetically encoded molecular tags that can be used for functional analysis, protein purification, or protein and RNA localization studies.

One of the first reports of the use of RGNs for genome engineering demonstrated success in induced pluripotent stem cells (iPSCs) with a frequency of between 2% and 4% when assayed by deep sequencing of bulk culture (Mali et al. 2013c). Recovery of engineered cells is increased when Cas9-expressing cells are marked with a fluorescent marker and selected by cell sorting (Ding et al. 2013). Using this strategy, it was reported that clones containing at least one mutant allele could be isolated at frequencies between 51% and 79%. In comparison, TALENs designed against the same set of genes resulted in between 0% and 34% of clones containing at least one mutant allele.

The relative ease of generating mutant animals will yield many additional animal models of disease and supply a means of testing whether specific polymorphisms are the proximal cause of disease in vivo. Additionally, the CRISPR–Cas9 system is amenable to application in organisms not widely used for genetic studies. Organisms that may be better suited to mimic human disease can now be more easily used to generate disease models. For example, mouse models of the bleeding disorder von Willebrand disease fail to fully recapitulate the human disease.

Apart from point mutations and gene deletions, large chromosomal rearrangements can drive specific cancers. By simultaneously introducing gRNAs targeting two different chromosomes or two widely separated regions of the same chromosome, RGNs have been used to introduce targeted inversions and translocations into otherwise wild-type human cells (Choi and Meyerson 2014; Torres et al. 2014). These engineered cells will ultimately allow for studies of the causative role of these gene fusions in cancer progression. Translocations that drive lung adenocarcinoma (Choi and Meyerson 2014), acute myeloid leukemia, and Ewing’s sarcoma (Torres et al. 2014) have been generated in both HEK293 cells and more physiologically relevant cell types (nontransformed immortalized lung epithelial cells and human mesenchymal stem cells). Additionally, cell lines harboring chromosomal inversions found in lung adenocarcinoma have also been created (Choi and Meyerson 2014).

The first RGN-based genetic screens were recently carried out in cultured mammalian cells (Koike-Yusa et al. 2014; Shalem et al. 2014; Wang et al. 2014; Zhou et al. 2014). When carrying out such a screen, it is important to consider both the number of genes targeted by the library and the degree of coverage of each gene. The largest library reported to date is comprised of 90,000 gRNAs designed to target 19,000 genes, which equates to about four to five gRNAs per targeted gene (Koike-Yusa et al. 2014). The screens identified targets affecting the DNA mismatch repair pathway (Koike-Yusa et al. 2014; Wang et al. 2014), resistance to bacterial and chemical toxins (Koike-Yusa et al. 2014; Wang et al. 2014; Zhou et al. 2014), and cell survival and proliferation (Shalem et al. 2014; Wang et al. 2014). The Zhang group (Shalem et al. 2014) also compared the results of their screen for genes involved in resistance to a drug that inhibits B-Raf with a prior RNAi screen that used the same cell line and drug. This comparison revealed that gRNAs identified targets that could be validated more consistently and efficiently than shRNAs, pointing to the potential advantages of using gRNAs to knock out, rather than knock down, gene function in genetic screens.

The question remains whether similar screens can be performed in a developing organism. Excitingly, two recent proof-of-principle studies using worms and mice indicate that RGNs will likely be useful for in vivo genetic screens, including unbiased forward genetic screens (Liu et al. 2014a; Mashiko et al. 2014).

In regards to knocking down gene expression, it remains to be determined how effective CRISPRi and dCas9 chimeras are in comparison with RNAi. Notably, CRISPRi and the dCas9 chimeras designed to inhibit gene expression are reportedly less effective in cultured mammalian cells than in bacteria (Gilbert et al. 2013). Nonetheless, given the ease with which dCas9 and TALE platforms can be programmed and their versatility, the potential application of these approaches to investigating genome dynamics in vivo is enticing to consider.

The majority of RGN-editing experiments have taken advantage of NHEJ to create small indels and larger deletions, which are useful for disrupting gene expression. However, to introduce specific mutations or other tailored modifications (e.g., genetically encoded tags), the HDR pathway must be activated. In most eukaryotic cells, DSBs are repaired more frequently through NHEJ than HDR (for review, see Lieber et al. 2003; Carroll 2014).

Pharma IQ (PiQ), 2015 Sep 1

Pharma IQ spoke to Bhuvaneish, a postdoctoral fellow in neurodegenerative disorders.

Bhuvaneish T.S joined the Scottish Centre for Regenerative Medicine at the University of Edinburgh almost two years ago to establish and drive the use of CRISPR Cas9 within the University’s lab and to apply it in modelling different disorders.

Aim: To model motor neuron diseases using human pluripotent stem cells

Bhuvaneish notes: “The disease modelling of neurodegenerative disorders, using human IPS (Induced Pluripotent Stem Cells), is quite challenging because of the technical variability in generating the IPS lines between different patient samples and also the varied genetic background between the donors. So this is a complex problem and leads to [difficulties when] interpreting the results and it’s also possible to generate erroneous results rather than proper scientific results because of the variations.

“One way to overcome this problem is using multiple lines for our study. So instead of using two or three patient donors, we increase the sample number to five or six, which is a tedious process.

“The other option, which [is] the ideal scenario, is to generate isogenic stem cells that differ only in the disease-causing genetic variant. So that’s where CRISPR Cas9 comes in, and it’s quite a handy tool for us.

“In a nutshell what you could do is take patients’ stem cells and then perform a gene correction in CRISPR Cas9. So now we have two types of cell, one is the mutant and the other is the gene corrected. Both are pretty much identical apart from the disease variant. It could be either a point mutation, [or] an expansion repeat, etc. This allows us to nail different phenotypes for motor neurone disorders.

“So generally we generate motor neurones from these two lines and model the disease in a dish, which also helps us to understand the mechanism of the disease.”

Bhuvaneish’s lab also generates different knockouts, which is highly efficient with the CRISPR technique.

Challenges with CRISPR Cas9

With Bhuvaneish leading the use of this technique in the lab, he encountered various challenges regarding the delivery system into the stem cells.

These challenges include off target effects and the efficiency of CRISPR Cas9.

On the latter point, he explains: “Although people say that the efficiency of CRISPR is much better than other gene editing systems like TAL effectors or zinc fingers, it is still pretty low. I mean, the efficiency you are talking about is 2%, so it is still low.”

He continues: “These are the two challenges which we have and I think it’s a challenge the entire world has at the moment with this technology. And we’ve been trying to increase efficiencies with certain drugs, which has also been published recently. I haven’t got any data to back it up myself, but it looks promising.”

“So that itself is a really good thing because now I can dissect the disease causing phenotypes which we see in our culture and that has been reversed after gene correction. You can completely reverse the phenotype. So that itself is proof of concept that the disease causing the mutation is causing this phenotype.”

“In the research field it’s a really, really important tool but for gene therapy as a therapeutic we are still very behind because of the ethical issues.  The big challenge is in how to deliver these Cas9 proteins and the guide RNAs to the required donor. It could be that the disease has affected only one particular organ rather than the whole body so you would try to target those particular organs. And it’s a challenge in delivering those Cas9 and the guide RNAs to the particular organ because it’s quite a huge protein compared to conventional proteins which have been used for gene therapy.

“Although it’s highly efficient when compared to the others, for therapeutics we need precise targeting with very, very minimal off target mutations. So that would be CRISPR’s bottleneck coming into the medicine field as a therapeutic.

“For the research it is great at the moment. It has enabled most of the researchers to do the genome editing in human stem cells, which was virtually impossible before.”


Treatment of Lymphomas [2.4.4C]

Larry H. Bernstein, MD, FCAP, Author, Curator, Editor

http://pharmaceuticalinnovation.com/2015/8/11/larryhbern/Treatment-of-Lymphomas-[2.4.4C]

 

Lymphoma treatment

Overview

http://www.emedicinehealth.com/lymphoma/page8_em.htm#lymphoma_treatment

The most widely used therapies are combinations of chemotherapy and radiation therapy.

  • Biological therapy, which targets key features of the lymphoma cells, is used in many cases nowadays.

The goal of medical therapy in lymphoma is complete remission. This means that all signs of the disease have disappeared after treatment. Remission is not the same as cure. In remission, one may still have lymphoma cells in the body, but they are undetectable and cause no symptoms.

  • When in remission, the lymphoma may come back. This is called recurrence.
  • The duration of remission depends on the type, stage, and grade of the lymphoma. A remission may last a few months, a few years, or may continue throughout one’s life.
  • Remission that lasts a long time is called durable remission, and this is the goal of therapy.
  • The duration of remission is a good indicator of the aggressiveness of the lymphoma and of the prognosis. A longer remission generally indicates a better prognosis.

Remission can also be partial. This means that the tumor shrinks after treatment to less than half its size before treatment.

The following terms are used to describe the lymphoma’s response to treatment:

  • Improvement: The lymphoma shrinks but is still greater than half its original size.
  • Stable disease: The lymphoma stays the same.
  • Progression: The lymphoma worsens during treatment.
  • Refractory disease: The lymphoma is resistant to treatment.

The following terms are used to refer to therapy:

  • Induction therapy is designed to induce a remission.
  • If this treatment does not induce a complete remission, new or different therapy will be initiated. This is usually referred to as salvage therapy.
  • Once in remission, one may be given yet another treatment to prevent recurrence. This is called maintenance therapy.

Chemotherapy

Many different types of chemotherapy may be used for Hodgkin lymphoma. The most commonly used combination of drugs in the United States is called ABVD. Another combination of drugs, known as BEACOPP, is now widely used in Europe and is being used more often in the United States. There are other combinations that are less commonly used and not listed here. The drugs that make up these two more common combinations of chemotherapy are listed below.

ABVD: Doxorubicin (Adriamycin), bleomycin (Blenoxane), vinblastine (Velban, Velsar), and dacarbazine (DTIC-Dome). ABVD chemotherapy is usually given every two weeks for two to eight months.

BEACOPP: Bleomycin, etoposide (Toposar, VePesid), doxorubicin, cyclophosphamide (Cytoxan, Neosar), vincristine (Vincasar PFS, Oncovin), procarbazine (Matulane), and prednisone (multiple brand names). There are several different treatment schedules, but different drugs are usually given every two weeks.

The type of chemotherapy, number of cycles of chemotherapy, and the additional use of radiation therapy are based on the stage of the Hodgkin lymphoma and the type and number of prognostic factors.

Adult Non-Hodgkin Lymphoma Treatment (PDQ®)

http://www.cancer.gov/cancertopics/pdq/treatment/adult-non-hodgkins/Patient/page1

Key Points for This Section

Adult non-Hodgkin lymphoma is a disease in which malignant (cancer) cells form in the lymph system.

Because lymph tissue is found throughout the body, adult non-Hodgkin lymphoma can begin in almost any part of the body. Cancer can spread to the liver and many other organs and tissues.

Non-Hodgkin lymphoma in pregnant women is the same as the disease in nonpregnant women of childbearing age. However, treatment is different for pregnant women. This summary includes information on the treatment of non-Hodgkin lymphoma during pregnancy.

Non-Hodgkin lymphoma can occur in both adults and children. Treatment for children, however, is different than treatment for adults. (See the PDQ summary on Childhood Non-Hodgkin Lymphoma Treatment for more information.)

There are many different types of lymphoma.

Lymphomas are divided into two general types: Hodgkin lymphoma and non-Hodgkin lymphoma. This summary is about the treatment of adult non-Hodgkin lymphoma. For information about other types of lymphoma, see the following PDQ summaries:

Age, gender, and a weakened immune system can affect the risk of adult non-Hodgkin lymphoma.

If cancer is found, the following tests may be done to study the cancer cells:

  • Immunohistochemistry : A test that uses antibodies to check for certain antigens in a sample of tissue. The antibody is usually linked to a radioactive substance or a dye that causes the tissue to light up under a microscope. This type of test may be used to tell the difference between different types of cancer.
  • Cytogenetic analysis : A laboratory test in which cells in a sample of tissue are viewed under a microscope to look for certain changes in the chromosomes.
  • Immunophenotyping : A process used to identify cells, based on the types of antigens or markers on the surface of the cell. This process is used to diagnose specific types of leukemia and lymphoma by comparing the cancer cells to normal cells of the immune system.

Certain factors affect prognosis (chance of recovery) and treatment options.

The prognosis (chance of recovery) and treatment options depend on the following:

  • The stage of the cancer.
  • The type of non-Hodgkin lymphoma.
  • The amount of lactate dehydrogenase (LDH) in the blood.
  • The amount of beta-2-microglobulin in the blood (for Waldenström macroglobulinemia).
  • The patient’s age and general health.
  • Whether the lymphoma has just been diagnosed or has recurred (come back).

Stages of adult non-Hodgkin lymphoma may include E and S.

Adult non-Hodgkin lymphoma may be described as follows:

E: “E” stands for extranodal and means the cancer is found in an area or organ other than the lymph nodes or has spread to tissues beyond, but near, the major lymphatic areas.

S: “S” stands for spleen and means the cancer is found in the spleen.

Stage I adult non-Hodgkin lymphoma is divided into stage I and stage IE.

  • Stage I: Cancer is found in one lymphatic area (lymph node group, tonsils and nearby tissue, thymus, or spleen).
  • Stage IE: Cancer is found in one organ or area outside the lymph nodes.

Stage II adult non-Hodgkin lymphoma is divided into stage II and stage IIE.

  • Stage II: Cancer is found in two or more lymph node groups either above or below the diaphragm (the thin muscle below the lungs that helps breathing and separates the chest from the abdomen).
  • Stage IIE: Cancer is found in one or more lymph node groups either above or below the diaphragm. Cancer is also found outside the lymph nodes in one organ or area on the same side of the diaphragm as the affected lymph nodes.

Stage III adult non-Hodgkin lymphoma is divided into stage III, stage IIIE, stage IIIS, and stage IIIE+S.

  • Stage III: Cancer is found in lymph node groups above and below the diaphragm (the thin muscle below the lungs that helps breathing and separates the chest from the abdomen).
  • Stage IIIE: Cancer is found in lymph node groups above and below the diaphragm and outside the lymph nodes in a nearby organ or area.
  • Stage IIIS: Cancer is found in lymph node groups above and below the diaphragm, and in the spleen.
  • Stage IIIE+S: Cancer is found in lymph node groups above and below the diaphragm, outside the lymph nodes in a nearby organ or area, and in the spleen.

In stage IV adult non-Hodgkin lymphoma, the cancer:

  • is found throughout one or more organs that are not part of a lymphatic area (lymph node group, tonsils and nearby tissue, thymus, or spleen), and may be in lymph nodes near those organs; or
  • is found in one organ that is not part of a lymphatic area and has spread to organs or lymph nodes far away from that organ; or
  • is found in the liver, bone marrow, cerebrospinal fluid (CSF), or lungs (other than cancer that has spread to the lungs from nearby areas).

Adult non-Hodgkin lymphomas are also described based on how fast they grow (indolent or aggressive) and where the affected lymph nodes are in the body.

The treatment plan depends mainly on the following:

  • The type of non-Hodgkin’s lymphoma
  • Its stage (where the lymphoma is found)
  • How quickly the cancer is growing
  • The patient’s age
  • Whether the patient has other health problems
  • If there are symptoms present such as fever and night sweats (see above)


Treatment for Chronic Leukemias [2.4.4B]

Larry H. Bernstein, MD, FCAP, Author, Curator, Editor

http://pharmaceuticalintelligence.com/2015/8/11/larryhbern/Treatment-for-Chronic-Leukemias-[2.4.4B]

2.4.4B1 Treatment for CML

Chronic Myelogenous Leukemia Treatment (PDQ®)

http://www.cancer.gov/cancertopics/pdq/treatment/CML/Patient/page4

Treatment Option Overview

Key Points for This Section

There are different types of treatment for patients with chronic myelogenous leukemia.

Six types of standard treatment are used:

  1. Targeted therapy
  2. Chemotherapy
  3. Biologic therapy
  4. High-dose chemotherapy with stem cell transplant
  5. Donor lymphocyte infusion (DLI)
  6. Surgery

New types of treatment are being tested in clinical trials.

Patients may want to think about taking part in a clinical trial.

Patients can enter clinical trials before, during, or after starting their cancer treatment.

Follow-up tests may be needed.

There are different types of treatment for patients with chronic myelogenous leukemia.

Different types of treatment are available for patients with chronic myelogenous leukemia (CML). Some treatments are standard (the currently used treatment), and some are being tested in clinical trials. A treatment clinical trial is a research study meant to help improve current treatments or obtain information about new treatments for patients with cancer. When clinical trials show that a new treatment is better than the standard treatment, the new treatment may become the standard treatment. Patients may want to think about taking part in a clinical trial. Some clinical trials are open only to patients who have not started treatment.

Six types of standard treatment are used:

Targeted therapy

Targeted therapy is a type of treatment that uses drugs or other substances to identify and attack specific cancer cells without harming normal cells. Tyrosine kinase inhibitors are targeted therapy drugs used to treat chronic myelogenous leukemia.

Imatinib mesylate, nilotinib, dasatinib, and ponatinib are tyrosine kinase inhibitors that are used to treat CML.

See Drugs Approved for Chronic Myelogenous Leukemia for more information.

Chemotherapy

Chemotherapy is a cancer treatment that uses drugs to stop the growth of cancer cells, either by killing the cells or by stopping them from dividing. When chemotherapy is taken by mouth or injected into a vein or muscle, the drugs enter the bloodstream and can reach cancer cells throughout the body (systemic chemotherapy). When chemotherapy is placed directly into the cerebrospinal fluid, an organ, or a body cavity such as the abdomen, the drugs mainly affect cancer cells in those areas (regional chemotherapy). The way the chemotherapy is given depends on the type and stage of the cancer being treated.

See Drugs Approved for Chronic Myelogenous Leukemia for more information.

Biologic therapy

Biologic therapy is a treatment that uses the patient’s immune system to fight cancer. Substances made by the body or made in a laboratory are used to boost, direct, or restore the body’s natural defenses against cancer. This type of cancer treatment is also called biotherapy or immunotherapy.

See Drugs Approved for Chronic Myelogenous Leukemia for more information.

High-dose chemotherapy with stem cell transplant

High-dose chemotherapy with stem cell transplant is a method of giving high doses of chemotherapy and replacing blood-forming cells destroyed by the cancer treatment. Stem cells (immature blood cells) are removed from the blood or bone marrow of the patient or a donor and are frozen and stored. After the chemotherapy is completed, the stored stem cells are thawed and given back to the patient through an infusion. These reinfused stem cells grow into (and restore) the body’s blood cells.

See Drugs Approved for Chronic Myelogenous Leukemia for more information.

Donor lymphocyte infusion (DLI)

Donor lymphocyte infusion (DLI) is a cancer treatment that may be used after stem cell transplant. Lymphocytes (a type of white blood cell) from the stem cell transplant donor are removed from the donor’s blood and may be frozen for storage. The donor’s lymphocytes are thawed if they were frozen and then given to the patient through one or more infusions. The lymphocytes see the patient’s cancer cells as not belonging to the body and attack them.

Surgery

Splenectomy

What’s new in chronic myeloid leukemia research and treatment?

http://www.cancer.org/cancer/leukemia-chronicmyeloidcml/detailedguide/leukemia-chronic-myeloid-myelogenous-new-research

Combining the targeted drugs with other treatments

Imatinib and other drugs that target the BCR-ABL protein have proven to be very effective, but by themselves these drugs don’t help everyone. Studies are now in progress to see if combining these drugs with other treatments, such as chemotherapy, interferon, or cancer vaccines (see below) might be better than either one alone. One study showed that giving interferon with imatinib worked better than giving imatinib alone. The 2 drugs together had more side effects, though. It is also not clear if this combination is better than treatment with other tyrosine kinase inhibitors (TKIs), such as dasatinib and nilotinib. A study going on now is looking at combining interferon with nilotinib.

Other studies are looking at combining other drugs, such as cyclosporine or hydroxychloroquine, with a TKI.

New drugs for CML

Because researchers now know the main cause of CML (the BCR-ABL gene and its protein), they have been able to develop many new drugs that might work against it.

In some cases, CML cells develop a change in the BCR-ABL oncogene known as a T315I mutation, which makes them resistant to many of the current targeted therapies (imatinib, dasatinib, and nilotinib). Ponatinib is the only TKI that can work against T315I mutant cells. More drugs aimed at this mutation are now being tested.

Other drugs called farnesyl transferase inhibitors, such as lonafarnib and tipifarnib, seem to have some activity against CML and patients may respond when these drugs are combined with imatinib. These drugs are being studied further.

Other drugs being studied in CML include the histone deacetylase inhibitor panobinostat and the proteasome inhibitor bortezomib (Velcade).

Several vaccines are now being studied for use against CML.

2.4.4B2 Chronic Lymphocytic Leukemia

Chronic Lymphocytic Leukemia Treatment (PDQ®)

General Information About Chronic Lymphocytic Leukemia

Key Points for This Section

  1. Chronic lymphocytic leukemia is a type of cancer in which the bone marrow makes too many lymphocytes (a type of white blood cell).
  2. Leukemia may affect red blood cells, white blood cells, and platelets.
  3. Older age can affect the risk of developing chronic lymphocytic leukemia.
  4. Signs and symptoms of chronic lymphocytic leukemia include swollen lymph nodes and tiredness.
  5. Tests that examine the blood, bone marrow, and lymph nodes are used to detect (find) and diagnose chronic lymphocytic leukemia.
  6. Certain factors affect treatment options and prognosis (chance of recovery).
  7. Chronic lymphocytic leukemia is a type of cancer in which the bone marrow makes too many lymphocytes (a type of white blood cell).

Chronic lymphocytic leukemia (also called CLL) is a blood and bone marrow disease that usually gets worse slowly. CLL is one of the most common types of leukemia in adults. It often occurs during or after middle age; it rarely occurs in children.

http://www.cancer.gov/images/cdr/live/CDR755927-750.jpg

Anatomy of the bone; drawing shows spongy bone, red marrow, and yellow marrow. A cross section of the bone shows compact bone and blood vessels in the bone marrow. Also shown are red blood cells, white blood cells, platelets, and a blood stem cell.

Anatomy of the bone. The bone is made up of compact bone, spongy bone, and bone marrow. Compact bone makes up the outer layer of the bone. Spongy bone is found mostly at the ends of bones and contains red marrow. Bone marrow is found in the center of most bones and has many blood vessels. There are two types of bone marrow: red and yellow. Red marrow contains blood stem cells that can become red blood cells, white blood cells, or platelets. Yellow marrow is made mostly of fat.

Leukemia may affect red blood cells, white blood cells, and platelets.

Normally, the body makes blood stem cells (immature cells) that become mature blood cells over time. A blood stem cell may become a myeloid stem cell or a lymphoid stem cell.

A myeloid stem cell becomes one of three types of mature blood cells:

  1. Red blood cells that carry oxygen and other substances to all tissues of the body.
  2. White blood cells that fight infection and disease.
  3. Platelets that form blood clots to stop bleeding.

A lymphoid stem cell becomes a lymphoblast cell and then one of three types of lymphocytes (white blood cells):

  1. B lymphocytes that make antibodies to help fight infection.
  2. T lymphocytes that help B lymphocytes make antibodies to fight infection.
  3. Natural killer cells that attack cancer cells and viruses.

Blood cell development. CDR526538-750

http://www.cancer.gov/images/cdr/live/CDR526538-750.jpg

Blood cell development; drawing shows the steps a blood stem cell goes through to become a red blood cell, platelet, or white blood cell. A myeloid stem cell becomes a red blood cell, a platelet, or a myeloblast, which then becomes a granulocyte (the types of granulocytes are eosinophils, basophils, and neutrophils). A lymphoid stem cell becomes a lymphoblast and then becomes a B-lymphocyte, T-lymphocyte, or natural killer cell.

Blood cell development. A blood stem cell goes through several steps to become a red blood cell, platelet, or white blood cell.

In CLL, too many blood stem cells become abnormal lymphocytes and do not become healthy white blood cells. The abnormal lymphocytes may also be called leukemia cells. The lymphocytes are not able to fight infection very well. Also, as the number of lymphocytes increases in the blood and bone marrow, there is less room for healthy white blood cells, red blood cells, and platelets. This may cause infection, anemia, and easy bleeding.

This summary is about chronic lymphocytic leukemia. See the following PDQ summaries for more information about leukemia:

  • Adult Acute Lymphoblastic Leukemia Treatment.
  • Childhood Acute Lymphoblastic Leukemia Treatment.
  • Adult Acute Myeloid Leukemia Treatment.
  • Childhood Acute Myeloid Leukemia/Other Myeloid Malignancies Treatment.
  • Chronic Myelogenous Leukemia Treatment.
  • Hairy Cell Leukemia Treatment

Older age can affect the risk of developing chronic lymphocytic leukemia.

Anything that increases your risk of getting a disease is called a risk factor. Having a risk factor does not mean that you will get cancer; not having risk factors doesn’t mean that you will not get cancer. Talk with your doctor if you think you may be at risk. Risk factors for CLL include the following:

  • Being middle-aged or older, male, or white.
  • A family history of CLL or cancer of the lymph system.
  • Having relatives who are Russian Jews or Eastern European Jews.

Signs and symptoms of chronic lymphocytic leukemia include swollen lymph nodes and tiredness.

Usually CLL does not cause any signs or symptoms and is found during a routine blood test. Signs and symptoms may be caused by CLL or by other conditions. Check with your doctor if you have any of the following:

  • Painless swelling of the lymph nodes in the neck, underarm, stomach, or groin.
  • Feeling very tired.
  • Pain or fullness below the ribs.
  • Fever and infection.
  • Weight loss for no known reason.

Tests that examine the blood, bone marrow, and lymph nodes are used to detect (find) and diagnose chronic lymphocytic leukemia.

The following tests and procedures may be used:

Physical exam and history : An exam of the body to check general signs of health, including checking for signs of disease, such as lumps or anything else that seems unusual. A history of the patient’s health habits and past illnesses and treatments will also be taken.

Complete blood count (CBC) with differential : A procedure in which a sample of blood is drawn and checked for the following:

The number of red blood cells and platelets.

The number and type of white blood cells.

The amount of hemoglobin (the protein that carries oxygen) in the red blood cells.

The portion of the blood sample made up of red blood cells.

Results from the Phase 3 Resonate™ Trial

Significantly improved progression free survival (PFS) vs ofatumumab in patients with previously treated CLL

  • Patients taking IMBRUVICA® had a 78% statistically significant reduction in the risk of disease progression or death compared with patients who received ofatumumab [1]
  • In patients with previously treated del 17p CLL, median PFS was not yet reached with IMBRUVICA® vs 5.8 months with ofatumumab (HR 0.25; 95% CI: 0.14, 0.45) [1]

Significantly prolonged overall survival (OS) with IMBRUVICA® vs ofatumumab in patients with previously treated CLL

  • In patients with previously treated CLL, those taking IMBRUVICA® had a 57% statistically significant reduction in the risk of death compared with those who received ofatumumab (HR 0.43; 95% CI: 0.24, 0.79; P<0.05) [1] (see the conversion sketch below)
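
The percentage reductions quoted in these bullets are restatements of the hazard ratios: the reduction equals (1 − HR) × 100%. The minimal Python sketch below is not part of the RESONATE publication; the 0.43 value is the hazard ratio quoted above, while the 0.22 value is inferred here from the quoted 78% figure rather than stated in the text.

```python
def percent_risk_reduction(hazard_ratio: float) -> float:
    """Convert a hazard ratio (HR) into the 'percent reduction in risk' phrasing used above."""
    return (1.0 - hazard_ratio) * 100.0

print(f"{percent_risk_reduction(0.43):.0f}%")  # 57%: matches the overall-survival bullet (HR 0.43)
print(f"{percent_risk_reduction(0.22):.0f}%")  # 78%: an HR of about 0.22 implies the quoted PFS figure
```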

Typical treatment of chronic lymphocytic leukemia

http://www.cancer.org/cancer/leukemia-chroniclymphocyticcll/detailedguide/leukemia-chronic-lymphocytic-treating-treatment-by-risk-group

Treatment options for chronic lymphocytic leukemia (CLL) vary greatly, depending on the person’s age, the disease risk group, and the reason for treating (for example, which symptoms it is causing). Many people live a long time with CLL, but in general it is very difficult to cure, and early treatment hasn’t been shown to help people live longer. Because of this and because treatment can cause side effects, doctors often advise waiting until the disease is progressing or bothersome symptoms appear, before starting treatment.

If treatment is needed, factors that should be taken into account include the patient’s age, general health, and prognostic factors such as the presence of chromosome 17 or chromosome 11 deletions or high levels of ZAP-70 and CD38.

Initial treatment

Patients who might not be able to tolerate the side effects of strong chemotherapy (chemo) are often treated with chlorambucil alone or with a monoclonal antibody targeting CD20, such as rituximab (Rituxan) or obinutuzumab (Gazyva). Other options include rituximab alone or a corticosteroid such as prednisone.

In stronger and healthier patients, there are many options for treatment. Commonly used treatments include:

  • FCR: fludarabine (Fludara), cyclophosphamide (Cytoxan), and rituximab
  • Bendamustine (sometimes with rituximab)
  • FR: fludarabine and rituximab
  • CVP: cyclophosphamide, vincristine, and prednisone (sometimes with rituximab)
  • CHOP: cyclophosphamide, doxorubicin, vincristine (Oncovin), and prednisone
  • Chlorambucil combined with prednisone, rituximab, obinutuzumab, or ofatumumab
  • PCR: pentostatin (Nipent), cyclophosphamide, and rituximab
  • Alemtuzumab (Campath)
  • Fludarabine (alone)

Other drugs or combinations of drugs may also be used.

If the only problem is an enlarged spleen or swollen lymph nodes in one region of the body, localized treatment with low-dose radiation therapy may be used. Splenectomy (surgery to remove the spleen) is another option if the enlarged spleen is causing symptoms.

Sometimes very high numbers of leukemia cells in the blood cause problems with normal circulation. This is called leukostasis. Chemo may not lower the number of cells until a few days after the first dose, so before the chemo is given, some of the cells may be removed from the blood with a procedure called leukapheresis. This treatment lowers blood counts right away. The effect lasts only for a short time, but it may help until the chemo has a chance to work. Leukapheresis is also sometimes used before chemo if there are very high numbers of leukemia cells (even when they aren’t causing problems) to prevent tumor lysis syndrome (this was discussed in the chemotherapy section).

Some people who have very high-risk disease (based on prognostic factors) may be referred for possible stem cell transplant (SCT) early in treatment.

Second-line treatment of CLL

If the initial treatment is no longer working or the disease comes back, another type of treatment may help. If the initial response to the treatment lasted a long time (usually at least a few years), the same treatment can often be used again. If the initial response wasn’t long-lasting, using the same treatment again isn’t as likely to be helpful. The options will depend on what the first-line treatment was and how well it worked, as well as the person’s health.

Many of the drugs and combinations listed above may be options as second-line treatments. For many people who have already had fludarabine, alemtuzumab seems to be helpful as second-line treatment, but it carries an increased risk of infections. Other purine analog drugs, such as pentostatin or cladribine (2-CdA), may also be tried. Newer drugs such as ofatumumab, ibrutinib (Imbruvica), and idelalisib (Zydelig) may be other options.

If the leukemia responds, stem cell transplant may be an option for some patients.

Some people may have a good response to first-line treatment (such as fludarabine) but may still have some evidence of a small number of leukemia cells in the blood, bone marrow, or lymph nodes. This is known as minimal residual disease. CLL can’t be cured, so doctors aren’t sure if further treatment right away will be helpful. Some small studies have shown that alemtuzumab can sometimes help get rid of these remaining cells, but it’s not yet clear if this improves survival.

Treating complications of CLL

One of the most serious complications of CLL is a change (transformation) of the leukemia to a high-grade or aggressive type of non-Hodgkin lymphoma called diffuse large cell lymphoma. This happens in about 5% of CLL cases, and is known as Richter syndrome. Treatment is often the same as it would be for lymphoma (see our document called Non-Hodgkin Lymphoma for more information), and may include stem cell transplant, as these cases are often hard to treat.

Less often, CLL may transform to prolymphocytic leukemia. As with Richter syndrome, these cases can be hard to treat. Some studies have suggested that certain drugs such as cladribine (2-CdA) and alemtuzumab may be helpful.

In rare cases, patients with CLL may have their leukemia transform into acute lymphocytic leukemia (ALL). If this happens, treatment is likely to be similar to that used for patients with ALL (see our document called Leukemia: Acute Lymphocytic).

Acute myeloid leukemia (AML) is another rare complication in patients who have been treated for CLL. Drugs such as chlorambucil and cyclophosphamide can damage the DNA of blood-forming cells. These damaged cells may go on to become cancerous, leading to AML, which is very aggressive and often hard to treat (see our document called Leukemia: Acute Myeloid).

CLL can cause problems with low blood counts and infections. Treatment of these problems was discussed in the section “Supportive care in chronic lymphocytic leukemia.”


Treatments other than Chemotherapy for Leukemias and Lymphomas

Author, Curator, Editor: Larry H. Bernstein, MD, FCAP

2.5.1 Radiation Therapy 

http://www.lls.org/treatment/types-of-treatment/radiation-therapy

Radiation therapy, also called radiotherapy or irradiation, can be used to treat leukemia, lymphoma, myeloma and myelodysplastic syndromes. The type of radiation used for radiotherapy (ionizing radiation) is the same that’s used for diagnostic x-rays. Radiotherapy, however, is given in higher doses.

Radiotherapy works by damaging the genetic material (DNA) within cells, which prevents them from growing and reproducing. Although the radiotherapy is directed at cancer cells, it can also damage nearby healthy cells. However, current methods of radiotherapy have been improved upon, minimizing “scatter” to nearby tissues. Therefore its benefit (destroying the cancer cells) outweighs its risk (harming healthy cells).

When radiotherapy is used for blood cancer treatment, it’s usually part of a treatment plan that includes drug therapy. Radiotherapy can also be used to relieve pain or discomfort caused by an enlarged liver, lymph node(s) or spleen.

Radiotherapy, either alone or with chemotherapy, is sometimes given as conditioning treatment to prepare a patient for a blood or marrow stem cell transplant. The most common types used to treat blood cancer are external beam radiation (see below) and radioimmunotherapy.
External Beam Radiation

External beam radiation is the type of radiotherapy used most often for people with blood cancers. A focused radiation beam is delivered outside the body by a machine called a linear accelerator, or linac for short. The linear accelerator moves around the body to deliver radiation from various angles. Linear accelerators make it possible to decrease or avoid skin reactions and deliver targeted radiation to lessen “scatter” of radiation to nearby tissues.

The dose (total amount) of radiation used during treatment depends on various factors regarding the patient, disease and reason for treatment, and is established by a radiation oncologist. You may receive radiotherapy during a series of visits, spread over several weeks (from two to 10 weeks, on average). This approach, called dose fractionation, lessens side effects. External beam radiation does not make you radioactive.
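
Dose fractionation is simple arithmetic: the prescribed total dose is divided into equal fractions delivered across the treatment visits described above. The sketch below is purely illustrative; the dose, visit frequency, and duration are placeholder numbers chosen for the example, not values taken from the text or clinical recommendations.

```python
def fractionation(total_dose_gy: float, fractions_per_week: int, weeks: int):
    """Split a prescribed total dose into equal fractions over a course of several weeks."""
    n_fractions = fractions_per_week * weeks
    return n_fractions, total_dose_gy / n_fractions

# Hypothetical example: a 30 Gy course over 3 weeks at 5 visits per week
# works out to 15 fractions of 2 Gy each.
n, dose_per_fraction = fractionation(30.0, 5, 3)
print(f"{n} fractions of {dose_per_fraction:.1f} Gy")
```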

2.5.2  Bone marrow (BM) transplantation

http://www.nlm.nih.gov/medlineplus/ency/article/003009.htm

There are three kinds of bone marrow transplants:

Autologous bone marrow transplant: The term auto means self. Stem cells are removed from you before you receive high-dose chemotherapy or radiation treatment. The stem cells are stored in a freezer (cryopreservation). After high-dose chemotherapy or radiation treatments, your stem cells are put back in your body to make (regenerate) normal blood cells. This is called a rescue transplant.

Allogeneic bone marrow transplant: The term allo means other. Stem cells are removed from another person, called a donor. Most times, the donor’s genes must at least partly match your genes. Special blood tests are done to see if a donor is a good match for you. A brother or sister is most likely to be a good match. Sometimes parents, children, and other relatives are good matches. Donors who are not related to you may be found through national bone marrow registries.

Umbilical cord blood transplant: This is a type of allogeneic transplant. Stem cells are removed from a newborn baby’s umbilical cord right after birth. The stem cells are frozen and stored until they are needed for a transplant. Umbilical cord blood cells are very immature so there is less of a need for matching. But blood counts take much longer to recover.

Before the transplant, chemotherapy, radiation, or both may be given. This may be done in two ways:

Ablative (myeloablative) treatment: High-dose chemotherapy, radiation, or both are given to kill any cancer cells. This also kills all healthy bone marrow that remains, and allows new stem cells to grow in the bone marrow.

Reduced intensity treatment, also called a mini transplant: Patients receive lower doses of chemotherapy and radiation before a transplant. This allows older patients and those with other health problems to have a transplant.

A stem cell transplant is usually done after chemotherapy and radiation are complete. The stem cells are delivered into your bloodstream, usually through a tube called a central venous catheter. The process is similar to getting a blood transfusion. The stem cells travel through the blood into the bone marrow. Most times, no surgery is needed.

Donor stem cells can be collected in two ways:

  • Bone marrow harvest. This minor surgery is done under general anesthesia. This means the donor will be asleep and pain-free during the procedure. The bone marrow is removed from the back of both hip bones. The amount of marrow removed depends on the weight of the person who is receiving it.
  • Leukapheresis. First, the donor is given 5 days of shots to help stem cells move from the bone marrow into the blood. During leukapheresis, blood is removed from the donor through an IV line in a vein. The part of white blood cells that contains stem cells is then separated in a machine and removed to be later given to the recipient. The red blood cells are returned to the donor.

Why the Procedure is Performed

A bone marrow transplant replaces bone marrow that either is not working properly or has been destroyed (ablated) by chemotherapy or radiation. Doctors believe that for many cancers, the donor’s white blood cells can attach to any remaining cancer cells, similar to when white cells attach to bacteria or viruses when fighting an infection.

Your doctor may recommend a bone marrow transplant if you have:

Certain cancers, such as leukemia, lymphoma, and multiple myeloma

A disease that affects the production of bone marrow cells, such as aplastic anemia, congenital neutropenia, severe immunodeficiency syndromes, sickle cell anemia, thalassemia

Had chemotherapy that destroyed your bone marrow

2.5.3 Autologous stem cell transplantation

Phase II trial of 131I-B1 (anti-CD20) antibody therapy with autologous stem cell transplantation for relapsed B cell lymphomas

O.W Press,  F Appelbaum,  P.J Martin, et al.
http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(95)92225-3/abstract

25 patients with relapsed B-cell lymphomas were evaluated with trace-labelled doses (2·5 mg/kg, 185-370 MBq [5-10 mCi]) of 131I-labelled anti-CD20 (B1) antibody in a phase II trial. 22 patients achieved 131I-B1 biodistributions delivering higher doses of radiation to tumor sites than to normal organs and 21 of these were treated with therapeutic infusions of 131I-B1 (12·765-29·045 GBq) followed by autologous hemopoietic stem cell reinfusion. 18 of the 21 treated patients had objective responses, including 16 complete remissions. One patient died of progressive lymphoma and one died of sepsis. Analysis of our phase I and II trials with 131I-labelled B1 reveals a progression-free survival of 62% and an overall survival of 93% with a median follow-up of 2 years. 131I-anti-CD20 (B1) antibody therapy produces complete responses of long duration in most patients with relapsed B-cell lymphomas when given at maximally tolerated doses with autologous stem cell rescue.
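
The bracketed activities in this abstract are unit conversions between megabecquerels and millicuries (1 mCi = 37 MBq by definition of the curie). A minimal Python sketch of that conversion, shown only to make the quoted equivalences explicit:

```python
MBQ_PER_MCI = 37.0  # 1 millicurie = 37 megabecquerels (exact, since 1 Ci = 3.7e10 Bq)

def mbq_to_mci(mbq: float) -> float:
    return mbq / MBQ_PER_MCI

for activity_mbq in (185.0, 370.0):
    print(f"{activity_mbq:.0f} MBq = {mbq_to_mci(activity_mbq):.0f} mCi")  # 5 and 10 mCi

# The therapeutic infusions of 12,765-29,045 MBq (12.765-29.045 GBq) correspond to ~345-785 mCi.
print(f"{mbq_to_mci(12765):.0f}-{mbq_to_mci(29045):.0f} mCi")
```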

Autologous (Self) Transplants

http://www.leukaemia.org.au/treatments/stem-cell-transplants/autologous-self-transplants

An autologous transplant (or rescue) is a type of transplant that uses the person’s own stem cells. These cells are collected in advance and returned at a later stage. They are used to replace stem cells that have been damaged by high doses of chemotherapy, used to treat the person’s underlying disease.

In most cases, stem cells are collected directly from the bloodstream. While stem cells normally live in your marrow, a combination of chemotherapy and a growth factor (a drug that stimulates stem cells) called Granulocyte Colony Stimulating Factor (G-CSF) is used to expand the number of stem cells in the marrow and cause them to spill out into the circulating blood. From here they can be collected from a vein by passing the blood through a special machine called a cell separator, in a process similar to dialysis.

Most of the side effects of an autologous transplant are caused by the conditioning therapy used. Although they can be very unpleasant at times it is important to remember that most of them are temporary and reversible.

Procedure of Hematopoietic Stem Cell Transplantation

Hematopoietic stem cell transplantation (HSCT) is the transplantation of multipotent hematopoietic stem cells, usually derived from bone marrow, peripheral blood, or umbilical cord blood. It may be autologous (the patient’s own stem cells are used) or allogeneic (the stem cells come from a donor).

Hematopoietic Stem Cell Transplantation

Author: Ajay Perumbeti, MD, FAAP; Chief Editor: Emmanuel C Besa, MD
http://emedicine.medscape.com/article/208954-overview

Hematopoietic stem cell transplantation (HSCT) involves the intravenous (IV) infusion of autologous or allogeneic stem cells to reestablish hematopoietic function in patients whose bone marrow or immune system is damaged or defective.

The image below illustrates an algorithm for typically preferred hematopoietic stem cell transplantation cell source for treatment of malignancy.

An algorithm for typically preferred hematopoietic stem cell transplantation cell source for treatment of malignancy: If a matched sibling donor is not available, then a MUD is selected; if a MUD is not available, then choices include a mismatched unrelated donor, umbilical cord donor(s), and a haploidentical donor.

Supportive Therapies

2.5.4  Blood transfusions – risks and complications of a blood transfusion

  • Allogeneic transfusion reaction (acute or delayed hemolytic reaction)
  • Allergic reaction
  • Viruses Infectious Diseases

The risk of catching a virus from a blood transfusion is very low.

HIV. Your risk of getting HIV from a blood transfusion is lower than your risk of getting killed by lightning. Only about 1 in 2 million donations might carry HIV and transmit HIV if given to a patient.

Hepatitis B and C. The risk of having a donation that carries hepatitis B is about 1 in 205,000. The risk for hepatitis C is 1 in 2 million. If you receive blood that carries a hepatitis virus during a transfusion, you’ll likely develop the infection.

Variant Creutzfeldt-Jakob disease (vCJD). This disease is the human version of Mad Cow Disease. It’s a very rare, yet fatal brain disorder. There is a possible risk of getting vCJD from a blood transfusion, although the risk is very low. Because of this, people who may have been exposed to vCJD aren’t eligible blood donors.

  • Fever
  • Iron Overload
  • Lung Injury
  • Graft-Versus-Host Disease

Graft-versus-host disease (GVHD) is a condition in which white blood cells in the new blood attack your tissues.

2.5.5 Erythropoietin

Erythropoietin, also known as EPO, is a glycoprotein hormone that controls erythropoiesis, or red blood cell production. It is a cytokine (protein signaling molecule) for erythrocyte (red blood cell) precursors in the bone marrow. Human EPO has a molecular weight of 34 kDa.

Also called hematopoietin or hemopoietin, it is produced by interstitial fibroblasts in the kidney in close association with peritubular capillary and proximal convoluted tubule. It is also produced in perisinusoidal cells in the liver. While liver production predominates in the fetal and perinatal period, renal production is predominant during adulthood. In addition to erythropoiesis, erythropoietin also has other known biological functions. For example, it plays an important role in the brain’s response to neuronal injury.[1] EPO is also involved in the wound healing process.[2]

Exogenous erythropoietin is produced by recombinant DNA technology in cell culture. Several different pharmaceutical agents are available with a variety of glycosylation patterns, and are collectively called erythropoiesis-stimulating agents (ESA). The specific details for labelled use vary between the package inserts, but ESAs have been used in the treatment of anemia in chronic kidney disease, anemia in myelodysplasia, and in anemia from cancer chemotherapy. Boxed warnings include a risk of death, myocardial infarction, stroke, venous thromboembolism, and tumor recurrence.[3]

2.5.6  G-CSF (granulocyte-colony stimulating factor)

Granulocyte-colony stimulating factor (G-CSF or GCSF), also known as colony-stimulating factor 3 (CSF 3), is a glycoprotein that stimulates the bone marrow to produce granulocytes and stem cells and release them into the bloodstream.

There are different types, including

  • Lenograstim (Granocyte)
  • Filgrastim (Neupogen, Zarzio, Nivestim, Ratiograstim)
  • Long acting (pegylated) filgrastim (pegfilgrastim, Neulasta) and lipegfilgrastim (Lonquex)

Pegylated G-CSF stays in the body for longer so you have treatment less often than with the other types of G-CSF.

2.5.7  Plasma Exchange (plasmapheresis)

http://emedicine.medscape.com/article/1895577-overview

Plasmapheresis is a term used to refer to a broad range of procedures in which extracorporeal separation of blood components results in a filtered plasma product.[1, 2] The filtering of plasma from whole blood can be accomplished via centrifugation or semipermeable membranes.[3] Centrifugation takes advantage of the different specific gravities inherent to various blood products such as red cells, white cells, platelets, and plasma.[4] Membrane plasma separation uses differences in particle size to filter plasma from the cellular components of blood.[3]

Traditionally, in the United States, most plasmapheresis takes place using automated centrifuge-based technology.[5] In certain instances, in particular in patients already undergoing hemodialysis, plasmapheresis can be carried out using semipermeable membranes to filter plasma.[4]

In therapeutic plasma exchange, using an automated centrifuge, filtered plasma is discarded and red blood cells along with replacement colloid such as donor plasma or albumin is returned to the patient. In membrane plasma filtration, secondary membrane plasma fractionation can selectively remove undesired macromolecules, which then allows for return of the processed plasma to the patient instead of donor plasma or albumin. Examples of secondary membrane plasma fractionation include cascade filtration,[6] thermofiltration, cryofiltration,[7] and low-density lipoprotein pheresis.

The Apheresis Applications Committee of the American Society for Apheresis periodically evaluates potential indications for apheresis and categorizes them from I to IV based on the available medical literature. The following are some of the indications, and their categorization, from the society’s 2010 guidelines.[2]

  • The only Category I indication for hemopoietic malignancy is hyperviscosity in monoclonal gammopathies.

2.5.8  Platelet Transfusions

Indications for platelet transfusion in children with acute leukemia

Scott Murphy, Samuel Litwin, Leonard M. Herring, Penelope Koch, et al.
Am J Hematol Jun 1982; 12(4): 347–356
http://onlinelibrary.wiley.com/doi/10.1002/ajh.2830120406/abstract;jsessionid=A6001D9D865EA1EBC667EF98382EF20C.f03t01
http://dx.doi.org:/10.1002/ajh.2830120406

In an attempt to determine the indications for platelet transfusion in thrombocytopenic patients, we randomized 56 children with acute leukemia to one of two regimens of platelet transfusion. The prophylactic group received platelets when the platelet count fell below 20,000 per mm³ irrespective of clinical events. The therapeutic group was transfused only when significant bleeding occurred and not for thrombocytopenia alone. The time to first bleeding episode was significantly longer and the number of bleeding episodes was significantly reduced in the prophylactic group. The survival curves of the two groups could not be distinguished from each other. Prior to the last month of life, the total number of days on which bleeding was present was significantly reduced by prophylactic therapy. However, in the terminal phase (last month of life), the duration of bleeding episodes was significantly longer in the prophylactic group. This may have been due to a higher incidence of immunologic refractoriness to platelet transfusion. Because of this terminal bleeding, comparison of the two groups for total number of days on which bleeding was present did not show a significant difference over the entire study period.

Clinical and Laboratory Aspects of Platelet Transfusion Therapy
Yuan S, Goldfinger D
http://www.uptodate.com/contents/clinical-and-laboratory-aspects-of-platelet-transfusion-therapy

INTRODUCTION — Hemostasis depends on an adequate number of functional platelets, together with an intact coagulation (clotting factor) system. This topic covers the logistics of platelet use and the indications for platelet transfusion in adults. The approach to the bleeding patient, refractoriness to platelet transfusion, and platelet transfusion in neonates are discussed elsewhere.

Pooled Platelets – A single unit of platelets can be isolated from every unit of donated blood, by centrifuging the blood within the closed collection system to separate the platelets from the red blood cells (RBC). The number of platelets per unit varies according to the platelet count of the donor; a yield of 7 x 10¹⁰ platelets is typical [1]. Since this number is inadequate to raise the platelet count in an adult recipient, four to six units are pooled to allow transfusion of 3 to 4 x 10¹¹ platelets per transfusion [2]. These are called whole blood-derived or random donor pooled platelets.
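
The pooling arithmetic can be checked directly: multiplying the typical per-unit yield by the number of pooled units reproduces the quoted adult dose. A minimal Python sketch, using only the per-unit yield and the four-to-six-unit pool sizes mentioned above:

```python
TYPICAL_YIELD_PER_UNIT = 7e10  # platelets per whole blood-derived unit, as quoted above

def pooled_platelet_dose(n_units: int) -> float:
    """Total platelets obtained by pooling n whole blood-derived units."""
    return n_units * TYPICAL_YIELD_PER_UNIT

for n in (4, 5, 6):
    print(f"{n} units -> {pooled_platelet_dose(n):.1e} platelets")
# 4-6 pooled units give roughly 2.8-4.2 x 10^11 platelets, in line with the
# "3 to 4 x 10^11 platelets per transfusion" figure above.
```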

Advantages of pooled platelets include lower cost and ease of collection and processing (a separate donation procedure and pheresis equipment are not required). The major disadvantage is recipient exposure to multiple donors in a single transfusion and logistic issues related to bacterial testing.

Apheresis (single donor) Platelets – Platelets can also be collected from volunteer donors in the blood bank, in a one- to two-hour pheresis procedure. Platelets and some white blood cells are removed, and red blood cells and plasma are returned to the donor. A typical apheresis platelet unit provides the equivalent of six or more units of platelets from whole blood (ie, 3 to 6 x 10¹¹ platelets) [2]. In larger donors with high platelet counts, up to three units can be collected in one session. These are called apheresis or single donor platelets.

Advantages of single donor platelets are exposure of the recipient to a single donor rather than multiple donors, and the ability to match donor and recipient characteristics such as HLA type, cytomegalovirus (CMV) status, and blood type for certain recipients.

Both pooled and apheresis platelets contain some white blood cells (WBC) that were collected along with the platelets. These WBC can cause febrile non-hemolytic transfusion reactions (FNHTR), alloimmunization, and transfusion-associated graft-versus-host disease (ta-GVHD) in some patients.

Platelet products also contain plasma, which can be implicated in adverse reactions including transfusion-related acute lung injury (TRALI) and anaphylaxis. (See ‘Complications of platelet transfusion’.)


Hematological Cancer Classification

Author and Curator: Larry H. Bernstein, MD, FCAP

 

 

Introduction to leukemias and lymphomas

 

2.4.1 Ontogenesis of the blood elements: hematopoiesis

http://www.britannica.com/EBchecked/topic/69747/blood-cell-formation

Blood cells are divided into three groups: the red blood cells (erythrocytes), the white blood cells (leukocytes), and the blood platelets (thrombocytes). The white blood cells are subdivided into three broad groups: granulocytes, lymphocytes, and monocytes.

Blood cells do not originate in the bloodstream itself but in specific blood-forming organs, notably the marrow of certain bones. In the human adult, the bone marrow produces all of the red blood cells, 60–70 percent of the white cells (i.e., the granulocytes), and all of the platelets. The lymphatic tissues, particularly the thymus, the spleen, and the lymph nodes, produce the lymphocytes (comprising 20–30 percent of the white cells). The reticuloendothelial tissues of the spleen, liver, lymph nodes, and other organs produce the monocytes (4–8 percent of the white cells). The platelets, which are small cellular fragments rather than complete cells, are formed from bits of the cytoplasm of the giant cells (megakaryocytes) of the bone marrow.

In the human embryo, the first site of blood formation is the yolk sac. Later in embryonic life, the liver becomes the most important red blood cell-forming organ, but it is soon succeeded by the bone marrow, which in adult life is the only source of both red blood cells and the granulocytes. Both the red and white blood cells arise through a series of complex, gradual, and successive transformations from primitive stem cells, which have the ability to form any of the precursors of a blood cell. Precursor cells are stem cells that have developed to the stage where they are committed to forming a particular kind of new blood cell.

In a normal adult the red cells of about half a liter (almost one pint) of blood are produced by the bone marrow every week. Almost 1 percent of the body’s red cells are generated each day, and the balance between red cell production and the removal of aging red cells from the circulation is precisely maintained.

Cells-in-the-Bone-Marrow-1024x747

http://interactive-biology.com/wp-content/uploads/2012/07/Cells-in-the-Bone-Marrow-1024×747.png

Erythropoiesis

http://www.interactive-biology.com/3969/erythropoiesis-formation-of-red-blood-cells/

Erythropoiesis – Formation of Red Blood Cells

Because erythrocytes (red blood cells) cannot divide to replenish their own numbers, old, ruptured cells must be replaced by entirely new cells. Erythrocytes lack the specialized intracellular machinery that controls cell growth and repair in other cells, which limits their life span to about 120 days.
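
A 120-day life span implies the daily replacement rate: in steady state, roughly 1/120 of the circulating red cells must be replaced each day, consistent with the "almost 1 percent of the body's red cells are generated each day" figure given in the hematopoiesis overview above. A minimal sketch of that arithmetic:

```python
RBC_LIFESPAN_DAYS = 120  # erythrocyte life span noted above

daily_turnover = 1.0 / RBC_LIFESPAN_DAYS
print(f"{daily_turnover:.2%} of circulating red cells replaced per day")  # ~0.83% per day
```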

This short life span necessitates the process of erythropoiesis, the formation of red blood cells. All blood cells are formed in the bone marrow. This is the erythrocyte factory: soft, highly cellular tissue that fills the internal cavities of bones.

Erythrocyte differentiation takes place in 8 stages. It is the pathway through which an erythrocyte matures from a hemocytoblast into a full-blown erythrocyte. The first seven all take place within the bone marrow. After stage 7 the cell is then released into the bloodstream as a reticulocyte, where it then matures 1-2 days later into an erythrocyte. The stages are as follows:

  1. Hemocytoblast, which is a pluripotent hematopoietic stem cell
  2. Common myeloid progenitor, a multipotent stem cell
  3. Unipotent stem cell
  4. Pronormoblast
  5. Basophilic normoblast also called an erythroblast.
  6. Polychromatophilic normoblast
  7. Orthochromatic normoblast
  8. Reticulocyte

These characteristics can be seen during the course of erythrocyte maturation:

  • The size of the cell decreases
  • The cytoplasm volume increases
  • Initially there is a nucleus and as the cell matures the size of the nucleus decreases until it vanishes with the condensation of the chromatin material.

Low oxygen tension stimulates the kidneys to secrete the hormone erythropoietin into the blood, and this hormone stimulates the bone marrow to produce erythrocytes.

Rarely, a malignancy or cancer of erythropoiesis occurs. It is referred to as erythroleukemia. This most likely arises from a common myeloid precursor, and it may occur associated with a myelodysplastic syndrome.

Summary of erythrocyte maturation

White blood cell series: myelopoiesis

http://www.nlm.nih.gov/medlineplus/ency/presentations/100151_3.htm

http://www.nlm.nih.gov/medlineplus/ency/images/ency/fullsize/15220.jpg

There are various types of white blood cells (WBCs) that normally appear in the blood: neutrophils (polymorphonuclear leukocytes; PMNs), band cells (slightly immature neutrophils), T-type lymphocytes (T cells), B-type lymphocytes (B cells), monocytes, eosinophils, and basophils. T- and B-type lymphocytes are indistinguishable from each other in a normal slide preparation. Any infection or acute stress will result in an increased production of WBCs. This usually entails increased numbers of cells and an increase in the percentage of immature cells (mainly band cells) in the blood. This change is referred to as a “shift to the left.” People who have had a splenectomy have a persistent mild elevation of WBCs. Drugs that may increase WBC counts include epinephrine, allopurinol, aspirin, chloroform, heparin, quinine, corticosteroids, and triamterene. Drugs that may decrease WBC counts include antibiotics, anticonvulsants, antihistamines, antithyroid drugs, arsenicals, barbiturates, chemotherapeutic agents, diuretics, and sulfonamides. (Updated by: David C. Dugdale, III, MD)
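
As a toy illustration of how a "shift to the left" might be flagged from a differential count, the sketch below checks the band-cell percentage against a cutoff. The 10% threshold, the dictionary layout, and the sample values are assumptions made for the example; they are not taken from the text above and are not diagnostic criteria.

```python
def left_shift(differential_pct: dict, band_threshold: float = 10.0) -> bool:
    """Flag a possible 'shift to the left' when the band (immature neutrophil) percentage is high."""
    return differential_pct.get("bands", 0.0) > band_threshold

# Hypothetical differential (percentages of the total WBC count), for illustration only.
sample = {"neutrophils": 70.0, "bands": 14.0, "lymphocytes": 12.0,
          "monocytes": 3.0, "eosinophils": 1.0}
print(left_shift(sample))  # True: an increased fraction of immature forms, as described above
```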

https://www.med-ed.virginia.edu/courses/path/innes/nh/wcbmaturation.cfm

Note that the mature forms of the myeloid series (neutrophils, eosinophils, basophils), all have lobed (segmented) nuclei. The degree of lobation increases as the cells mature.

The earliest recognizable myeloid cell is the myeloblast (10-20 µm dia) with a large round to oval nucleus. There is fine diffuse immature chromatin (without clumping) and a prominent nucleolus.

The cytoplasm is basophilic without granules. Although one may see a small Golgi area adjacent to the nucleus, granules are not usually visible by light microscopy. One should not see blast cells in the peripheral blood.

myeloblast x100b

https://www.med-ed.virginia.edu/courses/path/innes/images/nhjpeg/nh%20myeloblast%20x100b.jpeg

The promyelocyte (10-20 µm) is slightly larger than a blast. Its nucleus, although similar to a myeloblast, shows slight chromatin condensation and less prominent nucleoli. The cytoplasm contains striking azurophilic granules or primary granules. These granules contain myeloperoxidase, acid phosphatase, and esterase enzymes. Normally no promyelocytes are seen in the peripheral blood.

At the point in development when secondary granules can be recognized, the cell becomes a myelocyte.

promyelocyte x100

https://www.med-ed.virginia.edu/courses/path/innes/images/nhjpeg/nh%20promyelocyte%20×100%20a.jpeg

Myelocytes (10-18 µm) are not normally found in the peripheral blood. Nucleoli may not be seen in the late myelocyte. Primary azurophilic granules are still present, but secondary granules predominate. Secondary granules (neut, eos, or baso) first appear adjacent to the nucleus. In neutrophils this is the “dawn” of neutrophilia.

Metamyelocytes (10-18 µm) have kidney shaped indented nuclei and dense chromatin along the nuclear membrane. The cytoplasm is faintly pink, and they have secondary granules (neutro, eos, or baso). Zero to one percent of the peripheral blood white cells may be metamyelocytes (juveniles).

metamyelocyte x100

https://www.med-ed.virginia.edu/courses/path/innes/images/nhjpeg/nh%20metamyelocyte%20×100.jpeg

Bands, slightly smaller than juveniles, are marked by a U-shaped or deeply indented nucleus.

band neutrophilx100a

https://www.med-ed.virginia.edu/courses/path/innes/images/nhjpeg/nh%20band%20x100a.jpeg

Segmented (segs) or polymorphonuclear (PMN) leukocytes (average 14 µm dia) are distinguished by definite lobation with thin thread-like filaments of chromatin joining the 2-5 lobes. 45-75% of the peripheral blood white cells are segmented neutrophils.

https://www.med-ed.virginia.edu/courses/path/innes/images/nhjpeg/nh%20neutrophil%20×100%20d.jpeg

Thrombocytogenesis

The incredible journey: From megakaryocyte development to platelet formation

Kellie R. Machlus1,2 and Joseph E. Italiano Jr
JCB 2013; 201(6): 785-796
http://dx.doi.org:/10.1083/jcb.201304054

Large progenitor cells in the bone marrow called megakaryocytes (MKs) are the source of platelets. MKs release platelets through a series of fascinating cell biological events. During maturation, they become polyploid and accumulate massive amounts of protein and membrane. Then, in a cytoskeletal-driven process, they extend long branching processes, designated proplatelets, into sinusoidal blood vessels where they undergo fission to release platelets.

megakaryocyte production of platelets

http://dm5migu4zj3pb.cloudfront.net/manuscripts/26000/26891/medium/JCI0526891.f4.jpg

platelets and the immune continuum nri2956-f3

http://www.nature.com/nri/journal/v11/n4/images/nri2956-f3.jpg

2.4.2 Classification of hematological malignancies
Practical Diagnosis of Hematologic Disorders. 4th edition. Vol 2.
Kjeldsberg CR, Ed.  ASCP Press.  2006. Chicago, IL.

2.4.2.1 Primary Classification

Acute leukemias

Myelodysplastic syndromes

Acute myeloid leukemia

Acute lymphoblastic leukemia

Myeloproliferative Disorders

Chronic myeloproliferative disorders

Chronic myelogenous leukemia and related disorders

Myelofibrosis, including chronic idiopathic

Polycythemia, including polycythemia rubra vera

Thrombocytosis, including essential thrombocythemia

Chronic lymphoid leukemia and other lymphoid leukemias

Lymphomas

Non-Hodgkin Lymphoma

Hodgkin lymphoma

Lymphoproliferative disorders associated with immunodeficiency

Plasma Cell dyscrasias

Mast cell disease and Histiocytic neoplasms

2.4.2.2 Secondary Classification

2.4.2.3 Nuance – PathologyOutlines
Nat Pernick, Ed.

Leukemia – Acute

Primary references: acute leukemia-general, AML general, AML classification, transient abnormal myelopoiesis

Recurrent genetic abnormalities: AML with t(6;9), AML with t(8;21), AML with 11q23 abnormalities, AML with inv(16) or t(16;16), AML with Down syndrome, AML with FLT3 mutations, AML with myelodysplastic related changes, AML therapy related, APL microgranular variant, APL with t(15;17), APL with t(V;17), APL therapy related

AML not otherwise categorized: minimally differentiated (M0), without maturation (M1), with maturation (M2), M3, myelomonocytic, monoblastic and monocytic, erythroid, megakaryoblastic, CD13/CD33 negative, basophilic, myeloid sarcoma, acute panmyelosis with myelofibrosis, with Philadelphia chromosome, with pseudo Chediak-Higashi anomaly, hypocellular

ALL: general, WHO classification, with eosinophilia

PreB ALL: general, t(9;22), t(v;11q23), t(1;19), t(5;14), t(12;21), hyperdiploidy, hypodiploidy, mature B ALL/Burkitt

Other ALL: T ALL, ambiguous lineage, mixed phenotype

AML and related malignancies

Acute myeloid leukemias with recurrent genetic abnormalities:

  • AML with t(8;21)(q22;q22); RUNX1-RUNX1T1
  • AML with inv(16)(p13.1;q22) or t(16;16)(p13.1;q22); CBFβ-MYH11
  • Acute promyelocytic leukemia with t(15;17)(q22;q12); PML/RARα and variants
  • AML with t(9;11)(p22;q23); MLLT3-MLL
  • AML with t(6;9)(p23;q34); DEK-NUP214
  • AML with inv(3)(q21q26.2) or t(3;3)(q21;q26.2); RPN1-EVI1
  • AML (megakaryoblastic) with t(1;22)(p13;q13); RBM15-MKL1
  • AML with mutated NPM1*
  • AML with mutated CEBPA*

* provisional

Acute myeloid leukemia with myelodysplasia related changes

Therapy related acute myeloid leukemia

  • Alkylating agent related
  • Topoisomerase II inhibitor related (some may be lymphoid)

Acute myeloid leukemia not otherwise categorized:

  • AML minimally differentiated (M0)
  • AML without maturation (M1)
  • AML with maturation (M2)
  • Acute myelomonocytic leukemia (M4)
  • Acute monoblastic and monocytic leukemia (M5a, M5b)
  • Acute erythroid leukemia (M6)
  • Acute megakaryoblastic leukemia (M7)
  • Acute basophilic leukemia
  • Acute panmyelosis with myelofibrosis

Myeloid Sarcoma

Myeloid proliferations related to Down syndrome:

  • Transient abnormal myelopoiesis
  • Myeloid leukemia associated with Down syndrome

Blastic plasmacytoid dendritic cell neoplasm:

Acute leukemia of ambiguous lineage:

  • Acute undifferentiated leukemia
  • Mixed phenotype acute leukemia with t(9;22)(q34;q11.2); BCR-ABL1
  • Mixed phenotype acute leukemia with t(v;11q23); MLL rearranged
  • Mixed phenotype acute leukemia, B/myeloid, NOS
  • Mixed phenotype acute leukemia, T/myeloid, NOS
  • Mixed phenotype acute leukemia, NOS, rare types
  • Other acute leukemia of ambiguous lineage
  • References: WHO Classification of Tumours of Haematopoietic and Lymphoid Tissue (IARC, 2008), Discovery Medicine 2010, eMedicine

Acute lymphocytic leukemia

General
=================================================================

  • WHO classification system includes former FAB classifications ALL-L1 and L2
    ● FAB L3 is now considered Burkitt lymphoma

WHO classification of acute lymphoblastic leukemia
=================================================================

Precursor B lymphoblastic leukemia / lymphoblastic lymphoma:
● ALL with t(9;22)(q34;q11.2); BCR-ABL (Philadelphia chromosome)
● ALL with t(v;11q23) (MLL rearranged)
● ALL with t(1;19)(q23;p13.3); TCF3-PBX1 (E2A-PBX1)
● ALL with t(12;21)(p13;q22); ETV6-RUNX1 (TEL-AML1)
● Hyperdiploid > 50
● Hypodiploid
● t(5;14)(q31;q32); IL3-IGH

Precursor T lymphoblastic leukemia / lymphoma

Additional references
=================================================================

Mixed phenotype acute leukemia (MPAL)

General
=================================================================

  • De novo acute leukemia containing separate populations of blasts of more than one lineage (bilineal or bilineage), or a single population of blasts co-expressing antigens of more than one lineage (biphenotypic)
  • Excludes:
    ● Acute myeloid leukemia (AML) with recurrent translocations t(8;21), t(15;17) or inv(16)
    ● Leukemias with FGFR1 mutations
    ● Chronic myelogenous leukemia (CML) in blast crisis
    ● Myelodysplastic syndrome (MDS)-related AML and therapy-related AML, even if they have MPAL immunophenotype
  • Criteria for biphenotypic leukemia:
    ● Score of 2 or more for each of two separate lineages (the European Group for the Immunological Classification of Leukemias (EGIL) scoring system; 2008 WHO classification of acute leukemias of ambiguous lineage)
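As a purely illustrative sketch of the scoring rule quoted above (a score of 2 or more for each of two separate lineages), the short Python fragment below sums caller-supplied EGIL-style marker points per lineage and flags the biphenotypic pattern; the marker names and point values in the example are placeholders and are not a reproduction of the EGIL table.

```python
# Illustrative sketch of the "score of 2 or more for each of two separate
# lineages" criterion quoted above. The per-marker point values are supplied
# by the caller (e.g. from the EGIL table); the example values are hypothetical.

def lineages_meeting_cutoff(points_by_lineage, cutoff=2.0):
    """Return the lineages whose summed marker points reach the cutoff."""
    return [lineage for lineage, points in points_by_lineage.items()
            if sum(points) >= cutoff]

def is_biphenotypic(points_by_lineage):
    """True if two or more separate lineages each score >= 2."""
    return len(lineages_meeting_cutoff(points_by_lineage)) >= 2

if __name__ == "__main__":
    # Hypothetical case with strong B-lineage and myeloid marker expression
    case = {
        "B": [2.0, 1.0],        # placeholder point values
        "T": [0.5],             # placeholder point value
        "myeloid": [2.0, 1.0],  # placeholder point values
    }
    print(is_biphenotypic(case))  # -> True
```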

Prognosis
=================================================================

  • Poor, overall survival of 18 months
    ● Young age, normal karyotype and ALL induction therapy are associated with favorable survival
    ● Ph+ is a predictor for poor prognosis
    ● Bone marrow transplantation should be considered in first remission

Major Categories

MPAL with t(9;22)(q34;q11.2); BCR-ABL1
=================================================================

  • 20% of all MPAL
    ● Blasts with t(9;22)(q34;q11.2) translocation or BCR-ABL1 rearrangement (Ph+) without history of CML
    ● Majority in adults
    ● High WBC counts
    ● Most of the cases B/myeloid phenotype
    ● Rare T/myeloid, B and T lineage, or trilineage leukemias
  • Morphology:
    ● Many cases show a dimorphic blast population, one resembling myeloblasts and the other lymphoblasts
  • Cytogenetic abnormalities:
    ● Conventional karyotyping for t(9;22), FISH or PCR for BCR-ABL1 translocation
    ● Additional complex karyotypes
    ● Ph+ is a poor prognostic factor for MPAL, with a reported median survival of 8 months
    ● Worse than patients of all other types of MPAL

MPAL with t(v;11q23); MLL rearranged
=================================================================

  • Meeting the diagnostic criteria for MPAL with blasts bearing a translocation involving the 11q23 breakpoint (MLL gene)
    ● MPAL with MLL rearranged rare
    ● More often seen in children and relatively common in infancy
    ● High WBC counts
    ● Poor prognosis
    ● Dimorphic blast population, with one resembling monoblasts and the other resembling lymphoblasts
    ● Lymphoblast population often shows a CD19+, CD10- B precursor immunophenotype, frequently CD15+
    ● Expression of other B markers usually weak
    ● Translocations involving MLL gene include t(4;11)(q21;q23), t(11;19)(q23;p13), and t(9;11)(p22;q23)
    ● Cases with chromosome 11q23 deletion should not be classified in this category

B cell acute lymphoblastic leukemia (ALL) / lymphoblastic lymphoma (LBL)

General

=================================================================

  • Current 2008 WHO classification: B lymphoblastic leukemia / lymphoma, NOS or B lymphoblastic leukemia / lymphoma with recurrent genetic abnormalities
  • See also lymphomas: B cell chapter
  • Also called B cell acute lymphoblastic leukemia / lymphoblastic lymphoma, pre B ALL / LBL
  • Usually children
  • B acute lymphoblastic leukemia presents with pancytopenia due to extensive marrow involvement, stormy onset of symptoms, bone pain due to marrow expansion, hepatosplenomegaly due to neoplastic infiltration, CNS symptoms due to meningeal spread and testicular involvement
  • B acute lymphoblastic lymphoma often presents with cutaneous nodules, bone or nodal involvement, < 25% lymphoblasts in bone marrow and peripheral blood; aleukemic cases are usually asymptomatic
  • Depending on specific leukemia, arises in either hematopoietic stem cell or B-cell progenitor
  • Tumors are derived from pre-germinal center naive B cells with unmutated VH region genes
  • Have multiple immunophenotyping aberrancies relative to normal B cell precursors (hematogones); at relapse, 73% show loss of 1+ aberrance and 60% show new aberrancies (Am J Clin Pathol 2007;127:39)

Prognostic features

=================================================================

  • Favorable prognosis: age 1-10 years, female, white; preB phenotype, hyperdiploidy >50, t(12;21), WBC count at presentation < 50 × 10⁹/L, non-traumatic tap with no blasts in CNS, rapid response to chemotherapy (< 5% blasts on morphology on day 15), remission status after induction < 5% blasts on morphology and < 0.01% blasts on flow or PCR, CD10+
  • Intermediate prognosis: hyperdiploidy 47-50, diploid, 6q- and rearrangements of 8q24
  • Unfavorable prognosis: under age 1 (usually have 11q23 translocations) or over age 10; t(9;22) (but not if age 59+ years, Am J Clin Pathol 2002;117:716); male, > 50 × 10⁹/L WBC at presentation, hypodiploidy, near tetraploidy, 17p- and MLL rearrangements t(v;11q23); CD10-; non-traumatic tap with > 5% blasts or traumatic tap (7%); also increased microvessel staining using CD105 in children (Leuk Res 2007;31:1741), MDR1 expression in children (Oncol Rep 2004;12:1201) and adults (Blood 2002;100:974), 25%+ blasts on morphology on day 15, remission status after induction ≥ 5% blasts on morphology and ≥ 0.1% blasts on flow or PCR
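To make a few of the thresholds above easier to scan, here is a small Python sketch (a simplification of my own, not part of the PathologyOutlines text) that checks age, presenting WBC and a handful of the listed cytogenetic findings; it deliberately covers only a subset of the criteria and is not a clinical tool.

```python
# Simplified sketch of a few of the B-ALL prognostic features listed above
# (age, presenting WBC, selected cytogenetics). Illustrative only.

FAVORABLE_CYTOGENETICS = {"t(12;21)", "hyperdiploidy>50"}
UNFAVORABLE_CYTOGENETICS = {"t(9;22)", "MLL rearrangement", "hypodiploidy", "near tetraploidy"}

def prognostic_group(age_years, wbc_per_l, cytogenetics):
    """Return 'favorable', 'unfavorable', or 'intermediate' from a few features."""
    if age_years < 1 or age_years > 10 or wbc_per_l >= 50e9 or cytogenetics & UNFAVORABLE_CYTOGENETICS:
        return "unfavorable"
    if wbc_per_l < 50e9 and cytogenetics & FAVORABLE_CYTOGENETICS:
        return "favorable"
    return "intermediate"

if __name__ == "__main__":
    print(prognostic_group(4, 20e9, {"t(12;21)"}))   # -> favorable
    print(prognostic_group(15, 80e9, set()))         # -> unfavorable
```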

Case reports

=================================================================

  • 12 month old girl and 13 month old boy with mature phenotype but no translocations (Arch Pathol Lab Med 2003;127:1340)
  • 56 year old man with ALL arising from follicular lymphoma (Arch Pathol Lab Med 2002;126:997)
  • 76 year old man with basal cell carcinoma (Diagn Pathol 2007;2:32)
  • With hemophagocytic lymphohistiocytosis (Pediatr Blood Cancer 2008;50:381)

Treatment

================================================================

  • Chemotherapy cures more children than adults; adolescents benefit from intensive regimens (Hematology Am Soc Hematol Educ Program 2005:123)

Micro description

=================================================================

  • Bone marrow smears: small to intermediate blast-like cells with scant, variably basophilic cytoplasm, round / oval or convoluted nuclei, fine chromatin and indistinct nucleoli; frequent mitotic figures; may have “starry sky” appearance similar to Burkitt lymphoma; may have large lymphoblasts with 1-4 prominent nucleoli resembling myeloblasts; usually no sclerosis
  • Bone marrow biopsy: usually markedly hypercellular with reduction of trilinear maturation; cells have minimal cytoplasm, medium sized nuclei that are often convoluted, moderately dense chromatin and indistinct nucleoli, brisk mitotic activity
  • Other tissues: may have “starry sky” appearance similar to Burkitt lymphoma; collagen dissection, periadipocyte growth pattern and single cell linear filing

Chronic Leukemia

Chronic Myeloid Neoplasms

Myelodysplastic syndromes (MDS): general, WHO classification, childhood, refractory anemia, refractory anemia with ringed sideroblasts, refractory cytopenia with multilineage dysplasia, refractory anemia with excess blasts, 5q-syndrome, therapy related, unclassified, arsenic toxicity

Myeloproliferative neoplasms (MPN): general, WHO classification, chronic eosinophilic leukemia, chronic myelogenous leukemia, chronic neutrophilic leukemia, essential thrombocythemia, hypereosinophilic syndrome, mast cell disease, polycythemia vera, primary myelofibrosis, unclassifiable

MDS/MPN: general, WHO classification, atypical CML, chronic myelomonocytic leukemia (CMML), chronic myelomonocytic leukemia with eosinophilia, juvenile myelomonocytic leukemia, unclassifiable

Myeloid neoplasms associated with eosinophilia and abnormalities of PDGFRA, PDGFRB, or FGFR1: PDGFRA, PDGFRB, FGFR1

Miscellaneous: transient myeloproliferative disorder of Down's syndrome

Lymphoma and plasma cell neoplasms

Lymph nodes: normal development-general, B cells, T cells, NK cells, normal histology, grossing lymph nodes, features to report

Molecular testing: theory, FISH, northern blot, PCR, southern blot

Non-Hodgkin lymphoma: general, cytogenetics, staging, staging-pediatric, morphologic clues, hemophagocytic syndrome, chemotherapeutic atypia

B cell disorders: general, post-rituximab, bone marrow biopsy, classification-historical, WHO classification

B cell lymphoma subtypes: age-related EBV-associated, ALK positive large cell, Burkitt, unclassifiable-intermediate between Burkitt and diffuse large B cell lymphoma, CLL
diffuse large B cell: diffuse-NOS, CD5+, T cell / histiocyte rich, primary cutaneous-general, primary cutaneous-leg, primary sites-other
follicular: general, childhood, cutaneous, GI
hairy cell leukemia, HCL variant, intravascular large B cell, lymphomatoid granulomatosis, lymphoplasmacytic, mantle cell-classic, mantle cell-blastoid, marginal zone-general, marginal zone-MALT, MALT-primary sites, marginal zone-nodal, mediastinal (thymic), plasmablastic, pre B lymphoblastic leukemia/lymphoma, primary effusion, prolymphocytic leukemia, pyothorax associated, SLL, splenic marginal zone, splenic lymphoma with villous lymphocytes

Plasma cell neoplasms: general, myeloma, plasmacytoma, heavy chain disease, primary amyloidosis, MGUS, osteosclerotic myeloma (POEMS), cryoglobulinemia

T/NK cell disorders: general, WHO classification, adult T cell, aggressive NK cell leukemia, anaplastic large cell ALK+, ALK-, angioimmunoblastic T cell, blastic plasmacytoid, chronic lymphoproliferative disorders of NK cells, cutaneous CD4+ small/medium sized T cell lymphoma, cutaneous CD30 positive T cell lymphoproliferative disorders, cutaneous gamma delta T cell lymphoma, enteropathy, epidermotropic CD8+ T cell lymphoma, hepatosplenic, indolent T cell proliferations, mycosis fungoides, NK/T cell lymphoma-nasal type, nodal CD8+ cytotoxic T cell, nonB nonT lymphoblastic, peripheral T cell lymphoma NOS, primary effusion lymphoma, Sezary syndrome, staging, subcutaneous panniculitis-like, T cell large granular lymphocytic leukemia, T cell lymphoblastic leukemia/lymphoma, T cell prolymphocytic leukemia

Hodgkin lymphoma: general/staging, classic, lymphocyte depleted, lymphocyte rich classical, mixed cellularity, nodular lymphocyte predominant, nodular sclerosis

Post-transplantation: general, WHO classification, plasmacytic hyperplasia/IM-like lesions, polymorphic B cell lymphoproliferative disorders, monomorphic B cell lymphoproliferative disorders, other, graft versus host disease

Other: AIDS associated-general, AIDS associated-examples, EBV+ T cell lymphoproliferative disorders of childhood, primary immune disorders related

Myeloproliferative neoplasms (MPN)

WHO 2008 – Myeloproliferative neoplasms (MPN) 

General
=================================================================

  • Chronic myelogenous leukemia
    ● Polycythemia vera
    ● Essential thrombocythemia
    ● Primary myelofibrosis
    ● Chronic neutrophilic leukemia
    ● Chronic eosinophilic leukemia, not otherwise categorized
    ● Mast cell disease
    ● MPNs, unclassifiable

WHO 2001 – Chronic myeloproliferative diseases 

Definition
=================================================================

  • Chronic myelogenous leukemia (Philadelphia chromosome, t(9;22)(q34;q11), BCR-ABL positive)
    ● Chronic neutrophilic leukemia
    ● Chronic eosinophilic leukemia (and the hypereosinophilic syndrome)
    ● Polycythemia vera
    ● Chronic idiopathic myelofibrosis (with extramedullary hematopoiesis)
    ● Essential thrombocythemia
    ● Chronic myeloproliferative disease, unclassifiable

Additional references
=================================================================

The World Health Organization (WHO) classification of the myeloid neoplasms  James W. Vardiman, Nancy Lee Harris, and Richard D. Brunning
Blood 2002; 100(7)  http://dx.doi.org/10.1182/blood-2002-04-1199

Lymphoma – Non B cell neoplasms

T/NK cell disorders/WHO classification (2008)

Principles of classification
=================================================================

  • Based on all available information (morphology, immunophenotype, genetics, clinical)
    ● No one antigenic marker is specific for any neoplasm (except ALK1)
    ● Immune profiling less helpful in subclassification of T cell lymphomas than B cell lymphomas
    ● Certain antigens commonly associated with specific disease entities but not entirely disease specific
    ● CD30: common in anaplastic large cell lymphoma but also classic Hodgkin lymphoma and other B and T cell lymphomas
    ● CD56: characteristic for nasal NK/T cell lymphoma, but also other T cell neoplasms and plasma cell disorders
    ● Variation of immunophenotype within a given disease (hepatosplenic T cell lymphoma: usually γδ but some are αβ)
    ● Recurrent genetic alterations have been identified for many B cell lymphomas but not for most T cell lymphomas
    ● No attempt to stratify lymphoid malignancies by grade
    ● Recognize the existence of grey zone lymphomas
    ● This multiparameter approach has been validated in international studies as highly reproducible

WHO 2008 classification of tumors of hematopoietic and lymphoid tissues (T/NK)
=================================================================

Precursor T-lymphoid neoplasms
● T lymphoblastic leukemia/lymphoma, 9837/3

Mature T cell and NK cell neoplasms
● T cell prolymphocytic leukemia, 9834/3
● T cell large granular lymphocytic leukemia, 9831/3
● Chronic lymphoproliferative disorder of NK cells, 9831/3
● Aggressive NK cell leukemia, 9948/3
● Systemic EBV-positive T cell lymphoproliferative disease of childhood, 9724/3
● Hydroa vacciniforme-like lymphoma, 9725/3
● Adult T cell leukemia/lymphoma, 9827/3
● Extranodal NK/T cell lymphoma, nasal type, 9719/3
● Enteropathy-associated T cell lymphoma, 9717/3
● Hepatosplenic T cell lymphoma, 9716/3
● Subcutaneous panniculitis-like T cell lymphoma, 9708/3
● Mycosis fungoides, 9700/3
● Sézary syndrome, 9701/3
● Primary cutaneous CD30-positive T cell lymphoproliferative disorders
● Lymphomatoid papulosis, 9718/1
● Primary cutaneous anaplastic large cell lymphoma, 9718/3
● Primary cutaneous gamma-delta T cell lymphoma, 9726/3
● Primary cutaneous CD8-positive aggressive epidermotropic cytotoxic T cell lymphoma, 9709/3
● Primary cutaneous CD4-positive small/medium T cell lymphoma, 9709/3
● Peripheral T cell lymphoma, NOS, 9702/3
● Angioimmunoblastic T cell lymphoma, 9705/3
● Anaplastic large cell lymphoma, ALK-positive, 9714/3
● Anaplastic large cell lymphoma, ALK-negative, 9702/3

Chronic Lymphocytic Leukemia

Chronic Lymphocytic Leukemia Staging
Author: Sandy D Kotiah, MD; Chief Editor: Jules E Harris, MD
Medscape Sep 6, 2013
http://emedicine.medscape.com/article/2006578-overview

General considerations in the staging of chronic lymphocytic leukemia (CLL) and the revised Rai (United States) and Binet (Europe) staging systems for CLL are provided below.[1, 2, 3]


General considerations

  • CLL and small lymphocytic lymphoma (SLL) are different manifestations of the same disease; SLL is diagnosed when the disease is mainly nodal, and CLL is diagnosed when the disease is seen in the blood and bone marrow
  • CLL is diagnosed by > 5000 monoclonal lymphocytes/mm3 for longer than 3 months; the bone marrow usually has more than 30% monoclonal lymphocytes and is either normocellular or hypercellular
  • Monoclonal B lymphocytosis is a precursor form of CLL that is defined by a monoclonal B cell lymphocytosis < 5000 monoclonal lymphocytes/mm3; all lymph nodes smaller than 1.5 cm; no anemia; and no thrombocytopenia

Revised Rai staging system (United States)

Low risk (formerly stage 0)[1] :

  • Lymphocytosis, lymphocytes in blood > 15000/mcL, and > 40% lymphocytes in the bone marrow

Intermediate risk (formerly stages I and II):

  • Lymphocytosis as in low risk with enlarged node(s) in any site, or splenomegaly or hepatomegaly or both

High risk (formerly stages III and IV):

  • Lymphocytosis as in low risk and intermediate risk with disease-related anemia (hemoglobin level < 11.0 g/dL or hematocrit < 33%) or platelets < 100,000/mcL

Binet staging system (Europe)

Stage A:

  • Hemoglobin ≥ 10 g/dL, platelets ≥ 100,000/mm3, and < 3 enlarged areas

Stage B:

  • Hemoglobin ≥ 10 g/dL, platelets ≥ 100,000/mm3, and ≥ 3 enlarged areas

Stage C:

  • Hemoglobin < 10 g/dL, platelets < 100,000/mm3, and any number of enlarged areas
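Because the staging criteria above are simple threshold checks, they translate directly into code. The following minimal Python sketch (my own illustration, not part of the cited Medscape article) encodes the revised Rai risk groups and the Binet stages; function and parameter names such as `hemoglobin_g_dl` are assumptions chosen for readability.

```python
# Minimal sketch of the revised Rai and Binet staging rules summarized above.
# Field names and the input structure are illustrative assumptions.

def rai_risk(lymphocytosis, organomegaly, hemoglobin_g_dl, platelets_per_mcl):
    """Return the revised Rai risk group (low / intermediate / high)."""
    if not lymphocytosis:
        return "not stageable (no lymphocytosis)"
    # High risk (formerly stages III and IV): disease-related anemia or thrombocytopenia
    if hemoglobin_g_dl < 11.0 or platelets_per_mcl < 100_000:
        return "high risk"
    # Intermediate risk (formerly stages I and II): enlarged nodes, spleen or liver
    if organomegaly:
        return "intermediate risk"
    return "low risk"

def binet_stage(hemoglobin_g_dl, platelets_per_mm3, enlarged_areas):
    """Return the Binet stage (A / B / C) from the criteria above."""
    if hemoglobin_g_dl < 10.0 or platelets_per_mm3 < 100_000:
        return "C"
    return "B" if enlarged_areas >= 3 else "A"

if __name__ == "__main__":
    # Hypothetical patient: lymphocytosis, splenomegaly, Hgb 12 g/dL, platelets 150,000/mcL
    print(rai_risk(True, True, 12.0, 150_000))   # -> intermediate risk
    print(binet_stage(12.0, 150_000, 2))         # -> A
```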

Read Full Post »

Nanotechnology Used for Prevention of Bone Infection

Larry H Bernstein, MD, FCAP, Reporter

http://pharmaceuticalintelligence.com/2013/06/03/lhbern/Nanotechnology_Used_for_Prevention_of_Bone_Infection

source:

Functionalised nanoscale coatings using layer-by-layer assembly for imparting antibacterial properties to polylactide-co-glycolide surfaces

Piergiorgio Gentile, Maria E. Frongia, Mar Cardellach, Cheryl A. Mille

In order to achieve high local biological activity and reduce the risk of side effects of antibiotics in the treatment of periodontal and bone infections, a localised and temporally controlled delivery system is desirable. The aim of this research was to develop a functionalised and resorbable surface to contact soft tissues to improve the antibacterial behaviour during the first week after its implantation in the treatment of periodontal and bone infections. Solvent-cast poly(d,l-lactide-co-glycolide acid) (PLGA) films were aminolysed and then modified by Layer-by-Layer technique to obtain a nano-layered coating using poly(sodium4-styrenesulfonate) (PSS) and poly(allylamine hydrochloride) (PAH) as polyelectrolytes. The water-soluble antibiotic, metronidazole (MET), was incorporated from the ninth layer. Infrared spectroscopy showed that the PSS and PAH absorption bands increased with the layer number. The contact angle values had a regular alternate behaviour from the ninth layer. X-ray Photoelectron Spectroscopy evidenced two distinct peaks, N1s and S2p, indicating PAH and PSS had been introduced. Atomic Force Microscopy showed the presence of polyelectrolytes on the surface with a measured roughness about 10 nm after 20 layers’ deposition. The drug release was monitored by Ultraviolet–visible spectroscopy showing 80% loaded-drug delivery in 14 days. Finally, the biocompatibility was evaluated in vitro with L929 mouse fibroblasts and the antibacterial properties were demonstrated successfully against the keystone periodontal bacteria Porphyromonas gingivalis, which has an influence on implant failure, without compromising in vitro biocompatibility. In this study, PLGA was successfully modified to obtain a localised and temporally controlled drug delivery system, demonstrating the potential value of LbL as a coating technology for the manufacture of medical devices with advanced functional properties.

functionalized coating at nanoscale dimension


Read Full Post »

Natural Products Chemistry

Writer and Curator: Larry H. Bernstein, MD, FCAP 


Natural products chemistry, or pharmacognosy, is the study of the physical, chemical, biochemical and biological properties of drugs, drug substances or potential drugs
of natural origin, as well as the search for new drugs from natural
sources; it is a tradition in medicine that reaches back thousands of years.
It has to some extent been supplanted by structural organic chemistry, metallo-organic chemistry, and the synthetic organic chemistry of families of drugs. In some
cases drug failures may be attributed to an inherent failure of a drug family, and in others
a drug compound has been substituted by another with equal or greater
potency and less toxicity. A serious confounder has been that a medication intended
for a specific effect either interacts unfavorably with another class of drugs,
or undergoes a metabolic reaction in an organ or pathway other than the one intended.
That has been a huge impediment to pharmaceutical development.

However, it is important to remember that many of the medications in common use
were originally plant or natural derivatives, e.g., digoxin, Warfarin.

Thymoquinone, an extract of Nigella sativa seed oil, blocked pancreatic cancer cell
growth and killed the cells by enhancing the process of programmed cell death
Steve Benowitz  steven.benowitz@jefferson.edu

Researchers at the Kimmel Cancer Center at Jefferson in Philadelphia have found that
thymoquinone, an extract of Nigella sativa seed oil, blocked pancreatic cancer cell
growth and killed the cells by enhancing the process of programmed cell death.
According to Hwyda Arafat, M.D., Ph.D., associate professor of Surgery at
Jefferson Medical College of Thomas Jefferson University, Nigella sativa helps treat
a broad array of diseases, including some immune and inflammatory disorders.
Previous studies also have shown anticancer activity in prostate and colon cancers,
as well as antioxidant and anti-inflammatory effects.

Using a human pancreatic cancer cell line, she and her team found that adding
thymoquinone killed approximately 80 percent of the cancer cells. They demonstrated
that thymoquinone triggered programmed cell death in the cells, and that a number of
important genes, including p53, Bax, bcl-2 and p21, were affected. The researchers
found that expression of p53, a tumor suppressor gene, and Bax, a gene that promotes
programmed cell death, was increased, while bcl-2, which blocks such cell death,
was decreased. The p21 gene, which is involved in the regulation of different phases
of the cell cycle, was substantially increased.

In addition, adding thymoquinone to pancreatic cancer cells reduced the production
and activity of enzymes called histone deacetylases (HDACs), which remove the
acetyl groups from the histone proteins, halting the gene transcription process.
Dr. Arafat notes that HDAC inhibitors are a “hot” new class of drugs that interfere
with the function of histone deacetylases, and are being studied as a treatment for
cancer and neurodegenerative diseases.

Extra Virgin Olive Oil Improves Learning and Memory in SAMP8 Mice
SA Farra, TO Price, LJ Dominguez, A Motisi, F Saianoe, et al.
Journal of Alzheimer’s Disease 28 (2012) 81–92
http://dx.doi.org/10.3233/JAD-2011-110662

Polyphenols are potent antioxidants found in extra virgin olive oil (EVOO);
antioxidants have been shown to reverse age- and disease-related learning and
memory deficits. We examined the effects of EVOO on learning and memory
in SAMP8 mice, an age-related learning/memory impairment model
associated with increased amyloid-β protein and brain oxidative damage.
We administered EVOO, coconut oil, or butter to 11 month old SAMP8
mice for 6 weeks. Mice were tested in T-maze foot shock avoidance
and one-trial novel object recognition with a 24 h delay. Mice which
received EVOO had improved acquisition in the T-maze and spent
more time with the novel object in one-trial novel object recognition
versus mice which received coconut oil or butter. Mice that received
EVOO had improved T-maze retention compared to the mice that received
butter. EVOO increased brain glutathione levels suggesting reduced
oxidative stress as a possible mechanism. These effects plus increased
glutathione reductase activity, superoxide dismutase activity, and
decreased tissue levels of 4-hydroxynonenal and 3-nitrotyrosine were
enhanced with enriched EVOO (3× and 5× polyphenols concentration).
Our findings suggest that EVOO has beneficial effects on learning
and memory deficits found in aging and diseases, such as those related
to the overproduction of amyloid-β protein, by reversing oxidative damage
in the brain, effects that are augmented with increasing concentrations
of polyphenols in EVOO.

Synthetic analogues of flavonoids with improved activity against platelet activation
and aggregation as novel prototypes of food supplements
S Del Turco, S Sartini, G Cigni, C Sentieri, S Sbrana, et al.
Food Chemistry 175 (2015) 494–499 http://dx.doi.org/10.1016/j.foodchem.2014.12.005

We investigated the ability of quercetin and apigenin to modulate platelet activation
and aggregation, and compared the observed efficacy with that displayed by their
synthetic analogues 2-phenyl-4H-pyrido[1,2-a]pyrimidin-4-ones, 1–4, and 2,3-
diphenyl-4H-pyrido[1,2-a]pyrimidin-4-ones, 5–7. Platelet aggregation was
explored through a spectrophotometric assay on platelet-rich plasma (PRP)
treated with the thromboxane A2 mimetic U46619, collagen and thrombin in
presence/absence of various bioisosteres of flavonoids (12.5–25–50–100 µM).
The platelet density (mean platelet component, MPC) was measured by the
Advia 120 Hematology System as a marker surrogate of platelet activation. The
induced P-selectin expression, which reflects platelet degranulation/activation,
was quantified by flow cytometry on PRP. Our synthetic compounds modulated
significantly both platelet activation and aggregation, thus turning out to be more
effective than the analogues quercetin and apigenin when tested at a
concentration fully consistent with their use in vivo. Accordingly, they might
be used as food supplements to increase the efficacy of natural flavonoids.

Polysaccharide Extracts From Sargassum Siliquosum J.G. Agardh Modulates
Production Of Pro-Inflammatory Cytokines In LPS-Induced PBMC And Delays
Coagulation Time In-Vitro
RD Vasquez, RSP Garcia-Meim and JDA Ramos
Jour. Harmo. Res. Pharm., 2014, 3(3), 101-112  www.johronline.com

Sulfated polysaccharides from brown seaweeds exhibit various biological activities,
structural diversity, and are potential reagents for the development of therapeutic
drugs. This study aimed to determine the effect of aqueous and fucoidan extracts from
Sargassum siliquosum J. G. Agardh on viability of peripheral blood mononuclear
cells, production of pro-inflammatory cytokines and plasma coagulation using in vitro assays.
Sulfate contents of the polysaccharides were quantified using Acid-Ashing Digestion Ion chromatography. Effect on viability of the extracts on
peripheral blood mononuclear cells was determined by MTT Assay. Estimation
of pro-inflammatory cytokines concentrations was done through Enzyme-Linked
Immunosorbent Assay, while anticoagulant activity was measured by Prothrombin
Time and Activated Partial Thromboplastin Time. Results revealed that both
extracts were non-cytotoxic to PBMCs, significantly reduced the production of
IL-1, IL-6 and TNF-α, exhibited normal anticoagulant activity in PT assays and
prolonged APTT remarkably in a dose-dependent manner. In conclusion, extracts
of Sargassum siliquosum J.G. Agardh are a potential alternative source for
producing anti-inflammatory and anticoagulant substances in the future.

Purple corn anthocyanins inhibit diabetes-associated glomerular monocyte
activation and macrophage infiltration
Min-Kyung Kang, J Li, Jung-Lye Kim, Ju-Hyun Gong, Su-Nam Kwak, JHY Park, et al.
Am J Physiol Renal Physiol 303: F1060–F1069
http://dx.doi.org:/10.1152/ajprenal.00106.2012

Purple corn anthocyanins inhibit diabetes-associated glomerular monocyte activation
and macrophage infiltration. Diabetic nephropathy  (DN) is one of the major diabetic
complications and the leading cause of end- stage renal disease. In early DN, renal
injury and macrophage accumulation take place in the pathological environment
of glomerular vessels adjacent to renal mesangial cells expressing proinflammatory
mediators. Purple corn utilized as a daily food is rich in anthocyanins exerting
disease-preventive activities as a functional food. This study elucidated whether
anthocyanin-rich purple corn extract (PCA) could suppress monocyte activation and
macrophage infiltration. In the in vitro study, human endothelial cells and THP-1 monocytes were cultured in conditioned media of human mesangial cells exposed
to 33 mM glucose (HG-HRMC). PCA decreased the HG-HRMC-conditioned, media-induced expression of endothelial vascular cell adhesion molecule-1, E-selectin,
and monocyte integrins-β1 and -β2 through blocking the mesangial Tyk2 pathway. In the
in vivo animal study, db/db mice were treated with 10 mg/kg PCA daily for 8 wk. PCA
attenuated CXCR2 induction and the activation of Tyk2 and STAT1/3 in db/db mice.
Periodic acid-Schiff staining showed that PCA alleviated mesangial expansion-elicited renal injury in diabetic kidneys. In glomeruli, PCA attenuated the induction
of intercellular adhesion molecule-1 and CD11b. PCA diminished monocyte
chemoattractant protein-1 expression and macrophage inflammatory protein 2
transcription in the diabetic kidney, inhibiting the induction of the macrophage
markers CD68 and F4/80. These results demonstrate that PCA antagonized
the infiltration and accumulation of macrophages in diabetic kidneys through
disturbing the mesangial IL-8-Tyk-STAT signaling pathway. Therefore, PCA may
be a potential renoprotective agent treating diabetes-associated glomerulosclerosis.

Proximate analysis, phytochemical screening, and total phenolic and flavonoid
content of Philippine bamboo Schizostachyum lumampao
JVV Tongco, RM Aguda and RA Razal.
Journal of Chemical and Pharmaceutical Research, 2014, 6(1):709-713
www.jocpr.com

In Asia, bamboo has been widely cultivated as a fast growing non-timber forest
species. Flavonoids and phenolics were shown to reduce inflammation, promote
overall cardiovascular health and circulation, and even protect against certain kinds
of cancer. These studies necessitate the chemical characterization (e.g., proximate
analysis) and qualitative identification of phenolics.

The chemical composition of the leaves of Schizostachyum lumampao, known as
“buho” in the Philippines, was determined for its potential use as herbal tea with
potential health benefits, such as antioxidant properties. Proximate analysis using
standard AOAC methods showed that the air-dried leaves contain 10 % moisture, 30.5 % ash, 22.1 % crude protein, 1.6 % crude
fat, 28.7 % crude fiber, and 7.2 % total sugar (by difference). Using a variety of
reagents for qualitative phytochemical screening, saponins, diterpenes, triterpenes,
phenols, tannins, and flavonoids were detected in both the ethanolic and aqueous
leaf extracts, while phytosterols were only detected in the ethanolic extract. Using
UV-Vis spectrophotometry, the total phenolic content (in GAE) was 76.7 and
13.5 gallic acid equivalents per 100 g air-dried sample for the ethanolic and
aqueous extracts, respectively. The total flavonoid content was 70.2 and 17.86 mg
quercetin equivalents per 100 g air-dried sample for the ethanolic and aqueous
extracts, respectively. This preliminary study showed the total amount of phenolics
and flavonoids present in buho, the phytochemicals present, and its proximate
analysis.
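Total phenolic and flavonoid contents of this kind are conventionally read off a standard calibration curve (gallic acid for phenolics, quercetin for flavonoids) measured on the same spectrophotometer. The Python sketch below shows that generic calculation; the standard concentrations, absorbances and sample values are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Generic calibration-curve calculation for "gallic acid equivalents" (GAE):
# fit absorbance vs. standard concentration, then convert a sample absorbance
# into an equivalent amount per 100 g of sample. All numbers are invented.

standard_conc_ug_ml = np.array([10, 25, 50, 100, 200])        # gallic acid standards
standard_absorbance = np.array([0.08, 0.19, 0.37, 0.74, 1.46])

slope, intercept = np.polyfit(standard_conc_ug_ml, standard_absorbance, 1)

def gae_per_100g(sample_absorbance, extract_volume_ml, sample_mass_g):
    """Convert a sample absorbance into mg gallic acid equivalents per 100 g sample."""
    conc_ug_ml = (sample_absorbance - intercept) / slope   # equivalent concentration
    total_ug = conc_ug_ml * extract_volume_ml               # total equivalents in extract
    return total_ug / sample_mass_g * 100 / 1000            # mg equivalents per 100 g

print(round(gae_per_100g(0.52, extract_volume_ml=25.0, sample_mass_g=2.0), 1))
```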

Ophiopogonin D: A new herbal agent against osteoporosis
Q Huang, B Gao, L Wang, Hong-Yang Zhang, Xiao-Jie Li, J Shi, Z Wang, et al.
Bone 74 (2015) 18–28
http://dx.doi.org/10.1016/j.bone.2015.01.002

Excessive reactive oxygen species (ROS) play an important role in the development
of osteoporosis. Ophiopogonin D (OP-D), isolated from the traditional Chinese
herbal agent Radix Ophiopogon japonicus, is a potent anti-oxidative agent. We
hypothesized that OP-D demonstrates anti-osteoporosis effects via decreasing
ROS generation in mouse pre-osteoblast cell line MC3T3-E1 subclone 4 cells
and a macrophage cell line RAW264.7 cells. We investigated OP-D on osteogenic
and osteoclastic differentiation under oxidative status. Hydrogen peroxide (H2O2)
was used to establish an oxidative damage model. In vivo, we established a murine
ovariectomized (OVX) osteoporosis model. Then, we investigated the molecular
mechanism of OP-D against osteoporosis. Our results revealed that OP-D
significantly promoted the proliferation of MC3T3-E1 cells and improved some
osteogenic markers. Moreover, OP-D reduced TRAP activity and the mRNA
expressions of osteoclastic genes in RAW264.7 cells. OP-D suppressed ROS
generation in both MC3T3-E1 and RAW264.7 cells. OP-D treatment reduced
the activity of serum bone degradation markers, including CTX-1 and TRAP.
Further research showed that OP-D displayed anti-osteoporosis effects via
reducing ROS through the FoxO3a-β-catenin signaling pathway. In summary,
our results indicated that the protective effects of OP-D against osteoporosis
are linked to a reduction in oxidative stress via the FoxO3a-β-catenin signaling
pathway, suggesting that OP-D may be a beneficial herbal agent in bone-related
disorders, such as osteoporosis.

Revealing the macromolecular targets of complex natural products
D Reker, AM Perna, T Rodrigues, P Schneider, M Reutlinger, et al.
Nature Chemistry Dec  2014; 6: 1072 – 1078
http://dx.doi.org:/10.1038/NCHEM.2095

Natural products have long been a source of useful biological activity for the
development of new drugs. Their macromolecular targets are, however, largely
unknown, which hampers rational drug design and optimization. Here we present
the development and experimental validation of a computational method for the
discovery of such targets. The technique does not require three-dimensional
target models and may be applied to structurally complex natural products. The
algorithm dissects the natural products into fragments and infers potential
pharmacological targets by comparing the fragments to synthetic reference drugs
with known targets. We demonstrate that this approach results in confident
predictions. In a prospective validation, we show that fragments of the potent
antitumour agent archazolid A, a macrolide from the myxobacterium Archangium
gephyra, contain relevant information regarding its polypharmacology.
Biochemical and biophysical evaluation confirmed the predictions. The results
obtained corroborate the practical applicability of the computational approach to
natural product ‘de-orphaning’.

In vitro activity of Inula helenium against clinical Staphylococcus aureus strains
including MRSA
O’Shea S, Lucey B, Cotter L.
Br J Biomed Sci. 2009;66(4):186-9.

The present study aims to investigate the bactericidal activity (specifically
antistaphylococcal) of Inula helenium. The antimicrobial activity of the extract is
tested against 200 clinically significant Irish Staphylococcus aureus isolates
consisting of methicillin-resistant (MRSA) and -sensitive (MSSA) S. aureus
using a drop test method and a microbroth dilution method. The antibacterial
effect is evaluated by measuring the area of the inhibition zone against the
isolates. Results proved I. helenium to be 100% effective against the 200
staphylococci tested, with 93% of isolates falling within the ++ and +++ groups.
The minimum bactericidal concentration of I. helenium was examined on a subset
of isolates and values ranged from 0.9 mg/mL to 9.0 mg/mL. The extract was
equally effective against antibiotic-resistant and -sensitive strains. This plant
therefore possesses compounds with potent antistaphylococcal properties, which
in the future could be used to complement infection control policies and prevent
staphylococcal infection and carriage. This research supports other studies
wherein herbal plants exhibiting medicinal properties are being examined to
overcome the problems of antibiotic resistance and to offer alternatives in the
treatment and control of infectious diseases.

Inhibition of Proliferation of Breast Cancer Cells MCF7 and MDA-MB-231 by Lipophilic Extracts of Papaya (Carica papaya L. var. Maradol) Fruit
LE Gayosso-García Sancho, EM Yahia, P García-Solís, GA González-Aguilar
Food and Nutrition Sciences, 2014, 5, 2097-2103
http://dx.doi.org/10.4236/fns.2014.521222

Several epidemiological studies have suggested that carotenoids have
antineoplasic activities. The objective of this study was to determine the
antiproliferative effect of rich carotenoid lipophilic extracts of papaya fruit
pulp (Carica papaya L., cv Maradol) in breast cancer cells, MCF-7 (estrogen
receptor positive) and MDA-MB-231 (estrogen receptor negative), and in
non-tumoral mammary epithelial cells MCF-12F. Antiproliferative effect
was evaluated using the methyl-thiazolydiphenyl-tetrazolium bromide
(MTT) assay and testing lipophilic extracts from different papaya fruit
ripening stages (RS1, RS2, RS3, RS4), at different times (24, 48 and
72 h). Papaya lipophilic extracts do not inhibit cell proliferation of MCF-12F
and MDA-MB-231 cells. However, MCF-7 cells showed a significant
reduction in proliferation at 72 h with the RS4 papaya extract. Results
suggested that lipophilic extracts had different action mechanisms on
each type of cells and therefore, more studies were required to elucidate
such mechanisms.

In vitro cytotoxic activity of silver nanoparticles biosynthesized from Colpomenia
sinuosa and Halymenia poryphyroides using DLA and EAC cell lines
Vishnu Kiran M and Murugesan S
World J Pharm Sci 2014; 2(9): 926-930.

This study was conducted to investigate the in vitro cytotoxic activity of silver
nanoparticles biosynthesized from Colpomenia sinuosa and Halymenia poryphyroides
against DLA and EAC cell lines by the trypan blue dye exclusion technique, and by
MTT assay using mouse L929 cell lines (lung fibroblasts). The results of the trypan
blue dye exclusion assay indicate that the silver nanoparticles biosynthesized from
Colpomenia sinuosa and Halymenia poryphyroides inhibit the growth of DLA
and EAC cell lines in a dose-dependent manner relative to the standard drug
curcumin: the silver nanoparticles biosynthesized from Colpomenia sinuosa
showed 61.57 % inhibition and those from Halymenia poryphyroides 89.36 % in the
DLA cell line; similarly, the silver nanoparticles from Colpomenia sinuosa showed
81.96 % and those from Halymenia poryphyroides 91.45 % in the EAC cell line.
The results of the MTT assay indicated that the silver nanoparticles biosynthesized
from Colpomenia sinuosa and Halymenia poryphyroides significantly inhibited the
proliferation of L929 cells in a dose-dependent manner, where the silver nanoparticles
from Colpomenia sinuosa showed 37.06 % and those from Halymenia poryphyroides
100 % against the standard drug curcumin.

Garlic compound fights source of food-borne illness better than antibiotics
Better than antibiotics: Garlic compound fights source of food-borne illness (http://www.wsunews.wsu.edu)

Researchers at Washington State University have found that a compound in garlic
is 100 times more effective than two popular antibiotics at fighting the Campylobacter
bacterium, one of the most common causes of intestinal illness. Their work was
recently published in the Journal of Antimicrobial Chemotherapy.  The discovery
opens the door to new treatments for raw and processed meats and food preparation
surfaces. Most infections stem from eating raw or undercooked poultry or foods
that have been cross-contaminated via surfaces or utensils used to prepare poultry.

Lu and his colleagues looked at the ability of the garlic-derived compound, diallyl
sulfide, to kill the bacterium when it is protected by a slimy biofilm that makes it
1,000 times more resistant to antibiotics than the free-floating bacterial cell. They
found the compound can easily penetrate the protective biofilm and kill bacterial
cells by combining with a sulfur-containing enzyme, subsequently changing
the enzyme’s function and effectively shutting down cell metabolism. The
researchers found the diallyl sulfide was as effective as 100 times as much
of the antibiotics erythromycin and ciprofloxacin and would often work in a
fraction of the time.

Two previous works published last year by Lu and WSU colleagues in Applied
and Environmental Microbiology and Analytical Chemistry found diallyl sulfide
and other organosulfur compounds effectively kill important foodborne pathogens,
such as Listeria monocytogenes and Escherichia coli O157:H7.

“Diallyl sulfide could make many foods safer to eat”, says Barbara Rasco, a
co-author on all three recent papers and Lu’s advisor for his doctorate in food
science. “It can be used to clean food preparation surfaces and as a preservative
in packaged foods like potato and pasta salads, coleslaw and deli meats”.

Effect of tree nuts on metabolic syndrome criteria: a systematic review and
meta-analysis of randomized controlled trials

SB Mejia, CWC Kendall, E Viguiliouk, LS Augustin, V Ha, AI Cozma, A Mirrahimi, et al.
BMJ Open 2014;4:e004660.  http://dx.doi.org:/10.1136/bmjopen-2013-004660

Objective: To provide a broader evidence summary to inform dietary guidelines of the
effect of tree nuts on criteria of the metabolic syndrome (MetS).
Design: We conducted a systematic review and meta-analysis of the effect of
tree nuts on criteria of the MetS.
Data sources: We searched MEDLINE, EMBASE, CINAHL and the Cochrane Library
(through 4 April 2014).
Eligibility criteria for selecting studies: We included relevant randomized controlled
trials (RCTs) of ≥3 weeks reporting at least one criterion of the MetS.
Data extraction: Two or more independent reviewers extracted all relevant data. Data
were pooled using the generic inverse variance method using random effects models
and expressed as mean differences (MD) with 95% CIs. Heterogeneity was assessed
by the Cochran Q statistic and quantified by the I2 statistic. Study quality and risk of
bias were assessed.
Results: Eligibility criteria were met by 49 RCTs including 2226 participants who
were otherwise healthy or had dyslipidemia, MetS or type 2 diabetes mellitus.
Tree nut interventions lowered triglycerides (MD=−0.06 mmol/L (95% CI −0.09
to −0.03 mmol/L)) and fasting blood glucose (MD=−0.08 mmol/L (95% CI −0.16
to −0.01 mmol/L)) compared with control diet interventions. There was no effect
on waist circumference, high-density lipoprotein cholesterol or blood pressure with
the direction of effect favoring tree nuts for waist circumference. There was
evidence of significant unexplained heterogeneity in all analyses (p<0.05).
Conclusions: Pooled analyses show a MetS benefit of tree nuts through modest
decreases in triglycerides and fasting blood glucose with no adverse effects
on other criteria across nut types. As our conclusions are limited by the short
duration and poor quality of the majority of trials, as well as significant
unexplained between-study heterogeneity, there remains a need for larger,
longer, high-quality trials.
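The pooling described in the methods above (generic inverse variance under a random-effects model, with Cochran's Q and I² for heterogeneity) can be illustrated with a few lines of code. The sketch below uses invented study-level mean differences and standard errors and a simple DerSimonian-Laird estimate of the between-study variance; it is a didactic illustration, not a re-analysis of the included trials.

```python
import numpy as np

# Illustrative inverse-variance random-effects pooling (DerSimonian-Laird)
# with Cochran's Q and the I^2 statistic. The mean differences (MD) and
# standard errors below are invented example data, not trial results.

md = np.array([-0.10, -0.05, -0.02, -0.12])   # per-study mean differences (mmol/L)
se = np.array([0.04, 0.03, 0.05, 0.06])       # per-study standard errors

w_fixed = 1.0 / se**2                          # fixed-effect (inverse-variance) weights
md_fixed = np.sum(w_fixed * md) / np.sum(w_fixed)

# Cochran's Q and I^2
q = np.sum(w_fixed * (md - md_fixed) ** 2)
df = len(md) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# DerSimonian-Laird between-study variance tau^2, then random-effects pooling
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)
w_random = 1.0 / (se**2 + tau2)
md_random = np.sum(w_random * md) / np.sum(w_random)
se_random = np.sqrt(1.0 / np.sum(w_random))
ci = (md_random - 1.96 * se_random, md_random + 1.96 * se_random)

print(f"pooled MD = {md_random:.3f} (95% CI {ci[0]:.3f} to {ci[1]:.3f}), I^2 = {i_squared:.0f}%")
```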

DPPH free radical scavenging activity of phenolics and flavonoids in some medicinal
plants of India
R Patel, Y Patel, P Kunjadia and A Kunjadia
Int.J.Curr.Microbiol.App.Sci (2015) 4(1): 773-780 http://www.ijcmas.com

Methanolic extracts of Gymnema sylvestre (leaf), Holarrhena antidysenterica (bark),
Vernonia anthelmintica (seeds), Enicostemma littorale (leaf), Momordica charantia
(fruit), Swertia chirata (leaf), Azadirachta indica (leaf), Caesalpinia bonducella (leaf)
used in Ayurvedic medicines for a number of ailments were evaluated for their
antioxidant activity. The free radical-scavenging activity of the extracts was measured
as decolorizing activity following trapping of the unpaired electron of the 1,1-
diphenyl-2-picrylhydrazyl radical (DPPH). The percentage decrease of DPPH
was greatest in A. indica, followed by M. charantia, C. bonducella,
E. littorale, V. anthelmintica, S. chirata, H. antidysenterica and G. sylvestre. The
antioxidant activity of the medicinal plants was on par with commercial antioxidants
such as L-ascorbic acid. Phytochemical analysis revealed the presence of major
phytocompounds like terpenoids, alkaloids, glycosides, phenolics and tannins.
Moreover, total flavonoid concentration equivalents to gallic acid was found in
the range of 326 μg to 1481 μg/g of plant extracts and that of total phenolic
concentration equivalents to phenol was found in the range of 23.50 μg to
89.82 μg/g of plant extracts. The findings indicated promising antioxidant
activity of the crude extracts of the above plants, which need further exploration
for their effective use in both modern and traditional systems of medicine.
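For readers unfamiliar with the assay, the DPPH result is conventionally expressed as the percentage decrease in DPPH absorbance (usually read at 517 nm) relative to a control; the one-line calculation below, with made-up absorbance values, shows how percentages of the kind quoted above are obtained.

```python
# Conventional DPPH radical-scavenging calculation: percentage decrease in
# DPPH absorbance after adding the extract. Absorbance values are made up.

def dpph_scavenging_percent(abs_control, abs_sample):
    return (abs_control - abs_sample) / abs_control * 100

print(round(dpph_scavenging_percent(abs_control=0.85, abs_sample=0.22), 1))  # -> 74.1
```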

Cyanobacterial natural products as antimicrobial agents
V.D. Pandey
Int.J.Curr.Microbiol.App.Sci (2015) 4(1): 310-317 http://www.ijcmas.com

Cyanobacteria (blue-green algae) constitute a morphologically diverse and
widely distributed group of Gram-negative photosynthetic prokaryotes. Possessing
tremendous adaptability to varying environmental conditions, effective protective
mechanisms against various abiotic stresses and metabolic versatility, they colonize
and grow in different types of terrestrial and aquatic habitats. In addition to
the potential applications of cyanobacteria in various fields, such as agriculture,
aquaculture, pollution control, bioenergy and nutraceuticals, they produce chemically
diverse and pharmacologically important novel bioactive compounds, including
antimicrobial compounds (antibacterial, antifungal and antiviral). The emergence
and spread of antibiotic resistance in pathogenic microbes against commonly used
antibiotics necessitated the search for new antimicrobial agents from sources other
than the traditional microbial sources (streptomycetes and fungi). Various features
of cyanobacteria, including their capability of producing antimicrobial compounds,
make them suitable candidates for their exploitation as a natural source
of antimicrobial agents.

Determination of nutritional value and antioxidant from bulbs of different onion
(Allium cepa) variety: A comparative study
Kandoliya, U.K.*, Bodar, N.P., Bajaniya, V.K., Bhadja N.V. and Golakiya, B.A.
Int.J.Curr.Microbiol.App.Sci (2015) 4(1): 635-641 http://www.ijcmas.com

Onion (Allium cepa) is one of the most economically important vegetable crops
consumed for their ability to enhance the added flavor and typical taste in other
foods. It is a good source of antioxidants as well as some phytonutrients.
So the experiment was conducted to study the nutritional quality along with
various parameters contributing to antioxidant activity in onions of different local red and
white varieties. The findings revealed that all the varieties studied
show 58.14 to 77.67 % DPPH value, comparable amounts of flavonoids
(0.422 to 1.232 mg.g-1) and anthocyanin content, along with total phenols
(8.96-18.23 mg.100 g-1), pyruvic acid (1.09 to 1.33 mg.g-1), ascorbic acid
(1.18 to 3.89 mg.100 g-1), protein (0.79 to 1.27%) and titratable acidity
(0.34-0.75%). These results reveal that JDRO-07-13 of the red variety and
GWO-1 of the white variety were nutritionally better due to their higher antioxidant
property, proteins, carbohydrates and reducing sugars, and should be included in diets to supplement the daily allowance needed by the body.

Curcumin: New Weapon against Cancer
Fayez Hamam
Food and Nutrition Sciences, 2014, 5, 2257-2264
http://dx.doi.org/10.4236/fns.2014.522239

All the evidence points to the fact that the incidence, mortality and number of
persons living with cancer are on the rise and, thus, this will impose a significant
burden on health care resources. The considerable number of deaths from cancer
necessitates the need to developing novel alternative cures that are efficient, safe,
cheap and easy to use. In the search for new therapies for tumors, naturally-derived compounds have been considered as a good source of novel anticancer
drugs. The challenge here is to find products that are pharmacologically active
against tumor cells with suitable toxicity profile and least damage to normal cells.
Curcumin is a spice widely used in many countries especially in South Asia and
it has gained importance for its anticancer function and low toxicity toward normal
tissues in a range of biological systems. In spite of significant research works, many
difficulties hinder its oral use in the therapy of different kinds of tumors, such as
extremely low solubility in water, and quick breakdown and excretion after being absorbed
in the human body. Low bioavailability due to enhanced metabolism and rapid
systemic elimination is another problem that hinders oral use of curcumin as
anticancer agent. Therefore, the previously mentioned poor pharmacokinetics
characteristics inhibit curcumin from reaching its site of action and, thus,
lessen its effectiveness against tumors. This article reviews the latest global
cancer statistics with special attention to be directed toward ovarian cancer.
It sheds light on many research works that investigated the protective and
therapeutic functions of different curcumin preparations against different
sites of cancer using animal models. It also summarizes recent
research works concerning the antitumor effects of curcumin alone and/or
loaded into a range of delivery devices in many types of ovarian cancer cell lines.

Cinnamon is lethal weapon against E. coli O157:H7

When cinnamon is in, Escherichia coli O157:H7 is out.  That’s what researchers
at Kansas State University discovered in laboratory tests with cinnamon and
apple juice heavily tainted with the bacteria.  Presented at the Institute of Food
Technologists’ 1999 Annual Meeting in Chicago on July 27, the study findings
revealed that cinnamon is a lethal weapon against  E. coli O157:H7 and may be
able to help control it in unpasteurized juices.

Lead researcher Erdogan Ceylan, M.S., reported that in apple juice samples
inoculated with about one million E. coli O157:H7 bacteria, about one teaspoon
(0.3 percent) of cinnamon killed 99.5 percent of the bacteria in three days at room
temperature (25 °C).  When the same amount of cinnamon was combined with
either 0.1 percent sodium benzoate or potassium sorbate, preservatives approved
by the Food and Drug Administration, the E. coli were knocked out to an
undetectable level.  The number of bacteria added to the test samples was
100 times the number typically found in contaminated food.
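
For readers who prefer the microbiological convention of log reductions, the figures quoted above (roughly one million cells, 99.5 percent killed) translate into about a 2.3-log10 reduction. Here is a minimal Python sketch of that arithmetic, using only the numbers given in the text:

import math

initial_cfu = 1_000_000            # ~one million E. coli O157:H7 inoculated
percent_killed = 99.5              # reported kill after three days at 25 °C
survivors = initial_cfu * (1 - percent_killed / 100)   # 5,000 CFU remain
log_reduction = math.log10(initial_cfu / survivors)

print(f"Survivors: {survivors:,.0f} CFU")         # 5,000 CFU
print(f"Log10 reduction: {log_reduction:.2f}")    # about 2.30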

“If cinnamon can knock out E. coli O157:H7, one of the most virulent foodborne
microorganisms that exists today, it will certainly have antimicrobial effects on other
common foodborne bacteria, such as Salmonella and Campylobacter,” noted Daniel
Y.C. Fung, Ph.D., professor of Food Science in the Department of Animal Sciences
and Industry at K-State, who oversaw the research.

Last year, Fung and Ceylan researched the antimicrobial effects of various spices
on  E. coli O157:H7 in raw ground beef and sausage and found that cinnamon,
clove, and garlic were the most powerful.  This research led to their recent studies
on cinnamon in apple juice, which proved to be a more effective medium than meat
for the spice to kill the bacteria.

“In liquid, the E. coli have nowhere to hide,” Fung noted, “whereas in a solid structure,
such as ground meat, the bacteria can get trapped in the fat or other cells and
avoid contact with the cinnamon.  But this cannot happen in a free-moving environment.”

For a copy of the study presented at IFT’s Annual Meeting, contact Angela Dansby at
312-82-8424 x127 or via e-mail at aldansby@ift.org

Anti-inflammatory, anti-proliferative and anti-atherosclerotic effects of quercetin in
human in vitro and in vivo models
R Kleemann, Lars Verschuren, M Morrison, S Zadelaar, MJ van Erk, PY Wielinga, & T  Kooistra
Atherosclerosis 218 (2011) 44– 52
http://dx.doi.org:/10.1016/j.atherosclerosis.2011.04.023

Objective: Polyphenols such as quercetin may exert several beneficial effects,
including those resulting from anti-inflammatory activities, but their impact on
cardiovascular health is debated. We investigated the effect of quercetin on
cardiovascular risk markers including human C-reactive protein (CRP) and on
atherosclerosis using transgenic humanized models of cardiovascular disease.
Methods: After evaluating its anti-oxidative and anti-inflammatory effects in
cultured human cells, quercetin (0.1%, w/w in diet) was given to human CRP
transgenic mice, a humanized inflammation model, and ApoE*3Leiden transgenic
mice, a humanized atherosclerosis model. Sodium salicylate was used as an
anti-inflammatory reference. Results: In cultured human endothelial cells,
quercetin protected against H2O2-induced lipid peroxidation and reduced the
cytokine-induced cell-surface expression of VCAM-1 and E-selectin. Quercetin
also reduced the transcriptional activity of NF-κB in human hepatocytes. In human
CRP transgenic mice (quercetin plasma concentration: 12.9 ± 1.3 μM), quercetin
quenched IL-1β-induced CRP expression, as did sodium salicylate. In ApoE*3Leiden mice, quercetin (plasma concentration: 19.3 ± 8.3 μM) significantly attenuated
atherosclerosis by 40% (sodium salicylate by 86%). Quercetin did not affect
atherogenic plasma lipids or lipoproteins but it significantly lowered the circulating
inflammatory risk factors SAA and fibrinogen. Combined histological and microarray
analysis of aortas revealed that quercetin affected vascular cell proliferation thereby
reducing atherosclerotic lesion growth. Quercetin also reduced the gene expression
of specific factors implicated in local vascular inflammation including IL-1R, Ccl8, IKK,
and STAT3.
Conclusion: Quercetin reduces the expression of human CRP and cardiovascular risk
factors (SAA, fibrinogen) in mice in vivo. These systemic effects together with local
anti-proliferative and anti-inflammatory effects in the aorta may contribute to the
attenuation of atherosclerosis.
Natural products to drugs: natural product derived compounds in clinical trials
Mark S. Butler
Nat Prod Rep 2005; 22: 162–195. http://dx.doi.org:/10.1039/b402985m

Natural product and natural product-derived compounds that are being
evaluated in clinical trials or in registration (current 31 December 2004)
have been reviewed. Natural product derived drugs launched in the
United States of America, Europe and Japan since 1998 and new
natural product templates discovered since 1990 are discussed.

Natural Products (NPs) traditionally have played an important role in drug discovery
and were the basis of most early medicines. Over the last 10 to 15 years advances
in X-ray crystallography and NMR, and alternative drug discovery methods such as
rational drug design and combinatorial chemistry have placed great pressure upon
NP drug discovery programs and during this period most major pharmaceutical
companies have terminated or considerably scaled down their NP operations.
However, despite the promise of these alternative drug discovery methods, there is
still a shortage of lead compounds progressing into clinical trials. This is especially
the case in therapeutic areas such as oncology, immunosuppression and metabolic
diseases where NPs have played a central role in lead discovery. In a recent review,
Newman, Cragg and Snader analysed the number of NP-derived drugs present in
the total drug launches from 1981 to 2002 and found that NPs were a significant
source of these new drugs, especially in the oncological and antihypertensive
therapeutic areas. In addition to providing many new drug leads, NPs and NP-derived drugs were well represented in the top 35 worldwide selling ethical drugs
in 2000, 2001 and 2002.

Antibacterial activity of green tea (Camellia sinensis) Extract against dental
caries and other pathogens
P. Lavanya and M. Sri priya
Int.J.Adv. Res.Biol.Sci.2014; 1(5):58-70

The present study has revealed that the herbal plant Camellia sinensis (green tea) possesses antimicrobial properties. The isolated strains were confirmed by staining and biochemical techniques. Aqueous extracts of green tea were used to study the inhibitory effect against dental caries pathogens and
other pathogens. Zones of inhibition were measured by the agar well diffusion technique, and different concentrations of green tea extract were assessed for
antibacterial activity. The overall results showed that the microorganisms
were susceptible to the different concentrations of aqueous Camellia
sinensis extract, reflecting its antimicrobial properties. The effectiveness of the active principle was studied and compared with previous findings. The nature
of the chemicals constituting the active principle of the extract was examined by
paper chromatography and thin-layer chromatography. The chemicals involved in
antimicrobial activity commonly belong to groups such as flavonoids, alkaloids, saponins and polyphenols. It can be concluded
that flavonoids are a potential natural antimicrobial agent against dental
caries and other pathogens.

Antibacterial activity of Mangrove Medicinal Plants against Gram positive
Bacterial pathogens
K. A. Selvam* and K. Kolanjinathan
Int. J. Adv. Res. Biol.Sci. 1(8): (2014): 234–241

Ten mangrove medicinal plants, viz. Avicennia marina, Rhizophora mucronata, Rhizophora mangle, Asparagus officinalis, Ceriops decandra, Aegiceras
corniculatum, Acanthus ilicifolius, Bruguiera cylindrica, Rhizophora apiculata and Xylocarpus granatum, were collected from the mangrove forest of Pichavaram, Tamil
Nadu, India. The antibacterial activity of the mangrove plant extracts (150 mg/ml and
300 mg/ml) was determined by the disc diffusion method. The zones of inhibition were larger at 300 mg/ml of extract than at 150 mg/ml. The
antibacterial activity of the selected mangrove plant leaf extracts was determined
against pathogenic bacterial isolates. The methanol extract of Ceriops decandra showed the maximum zone of inhibition against all the bacterial isolates, followed
by Avicennia marina, Rhizophora mucronata, Aegiceras corniculatum, Rhizophora apiculata, Rhizophora mangle, Acanthus ilicifolius, Asparagus officinalis, Xylocarpus granatum and Bruguiera cylindrica at 300 mg/ml. The hexane extracts of the mangrove plants showed the smallest inhibition zones against the bacterial pathogens
compared with the other solvent extracts. DMSO was used as a blank
control and the antibiotic ampicillin (300 mg/ml) as a positive control. The minimum inhibitory concentration (MIC) of the mangrove plant extracts against the bacterial isolates was tested in Mueller Hinton broth by the broth macrodilution
method; MICs against the bacterial pathogens ranged
from 20 mg/ml to 640 mg/ml.
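
For context, the broth macrodilution read-out behind an MIC range such as 20 to 640 mg/ml is simply the lowest concentration in a two-fold dilution series showing no visible growth. The Python sketch below illustrates this; the growth observations are invented purely for illustration and are not data from the study.

# Two-fold dilution series spanning the 20-640 mg/ml range mentioned above.
concentrations = [640, 320, 160, 80, 40, 20]           # mg/ml
visible_growth = {640: False, 320: False, 160: False,   # hypothetical tube
                  80: True, 40: True, 20: True}         # readings after incubation

def mic(concs, growth):
    """MIC = lowest concentration with no visible growth."""
    clear_tubes = [c for c in concs if not growth[c]]
    return min(clear_tubes) if clear_tubes else None

print(f"MIC = {mic(concentrations, visible_growth)} mg/ml")   # -> 160 mg/ml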

Antioxidant and antibacterial activity of Berberis tinctoria root
Karthikkumar Va, Sharanya R , Allegendiran R, Sasikumar J.M
Int. J. Adv. Res. Biol.Sci. 1(9): (2014): 292–297
Herbs have always been the principal form of medicine in developing nations,
and they are now becoming popular throughout the developed world as
people strive to stay healthy in the face of chronic stress and to treat illness with medicines that work in concert with the body's own defences. The aim of the present study was to evaluate the antioxidant and antibacterial potential of Berberis
tinctoria root. Plant material was collected and extracted with various solvents, and different concentrations of the extracts were used to evaluate their potential. Berberis tinctoria
root at a concentration of 1000 μg/ml showed high antioxidant activity, and nearly
all extracts possessed strong to moderate antibacterial activity. In addition, phytochemical screening detected saponins and sterols in the root when extracted
with organic solvents. Thus, the root extract of Berberis tinctoria might be a good
candidate for the synthesis of antibacterial drugs in the future.

Biological Activities of Soybean Galactomannan Oligosaccharides and
Their Sulfated Derivatives
MMI Helal, SA Ismail, MOI Ghobashy, SS Elgazar, et al.
Int.J.Adv. Res.Biol.Sci.2014; 1(6):113-121

Galactomanno-oligosaccharides (GMO) and their sulfated derivatives
(SGMO) were prepared from soybean hulls and evaluated for their biological
activities as anticoagulant, antimicrobial, antitumor, fibrinolytic and prebiotic agents.
The results indicated that the sulfation process has a positive effect on the
anticoagulant and fibrinolytic activities of the galactomanno-oligosaccharides.
The SGMO prolonged clotting time to more than 24 h at a concentration resembling that of standard heparin. The SGMO were also found to have fibrinolytic
activity equal to that of the standard hemoclar and three times higher than that of the native GMO. The prepared oligosaccharides also showed antitumor
activity against a human colon carcinoma cell line, with the percentage of dead cells increasing from 28 % to 72 % as the oligosaccharide concentration was increased from 0.005 to 0.02 mg/ml. The tested galactomanno-oligosaccharides also act as a good prebiotic source, supporting growth of beneficial bacteria
4 to 8 times greater than that of pathogenic bacteria. To our knowledge, this is the first
report of anticoagulant, fibrinolytic and direct antitumor activities for galactomanno-oligosaccharides, let alone soybean galactomanno-oligosaccharides.

Biotechnological Application of Production β-Lactamase Inhibitory Protein
(BLIP) By Actinomycetes Isolates from Al-Khurmah Governorate
HM Atta;  RA Bayoumi and  MH El-Sehrawi
Int. J. Adv. Res. Biol.Sci. 1(7): (2014): 144–154

Many pathogenic bacteria secrete β-lactamase enzymes as a mechanism of
defense against β-lactam antibiotics. Sixty-nine non-duplicate actinomycete
isolates were obtained from different localities in Al-Khurmah governorate, Kingdom of Saudi Arabia. The actinomycete isolates were screened for β-lactamase inhibitory activity against amoxicillin-resistant bacteria. Eleven isolates (15.94 %) showed β-lactamase inhibitory protein (BLIP) activity against amoxicillin-resistant Staphylococcus aureus, Pseudomonas aeruginosa and Klebsiella
pneumoniae. Isolate KH-3201-144 was considered the most potent; it
was identified by biochemical, chemotaxonomic, morphological and physiological properties consistent with classification in the genus Streptomyces, the
nearest species being Streptomyces rimosus. Furthermore, phylogenetic
analysis of the 16S rDNA gene sequence and the Ribosomal Database Project,
consistent with conventional taxonomy, confirmed that strain KH-3201-144
was most similar to Streptomyces rimosus (96 %). The highest amount of
β-lactamase inhibitory protein was precipitated at 40 % saturated ammonium sulphate. Purification was carried out using diethylaminoethyl (DEAE)-cellulose and Sephadex G-25 and G-200 column chromatography.
The β-lactamase inhibitory protein migrated at 40 kDa. The minimum
inhibitory concentrations (MICs) of the purified β-lactamase inhibitory protein
(BLIP) against amoxicillin-resistant Staphylococcus aureus, Pseudomonas aeruginosa and Klebsiella pneumoniae were also determined.
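
As a practical note on the 40 % ammonium sulphate precipitation step, bench protocols usually compute how much solid salt to add from an empirical formula rather than from the abstract itself. The sketch below uses one commonly quoted approximation for solutions near 20 °C; the constants are a general rule of thumb from protein-purification handbooks, not figures taken from this study.

def ammonium_sulfate_g_per_litre(s1, s2):
    """Approximate grams of solid (NH4)2SO4 to add per litre to raise
    saturation from s1 % to s2 %, using a common ~20 °C rule of thumb."""
    return 533.0 * (s2 - s1) / (100.0 - 0.3 * s2)

grams = ammonium_sulfate_g_per_litre(0, 40)
print(f"0 -> 40 % saturation: add about {grams:.0f} g per litre")   # ~242 g/L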

Bioactive compounds from marine Microbes
P.Sudhasupriya and M.Rajalakshmi
Int.J.Adv. Res.Biol.Sci.2014; 1(6):232-236

Natural compounds isolated from marine organisms have been found to be
a very rich source of bioactive molecules. Reported biological effects of these compounds include anti-tumor, anti-inflammatory and anti-viral activities as
well as immunomodulatory and analgesic properties. The pharmaceutical market is growing rapidly and continuously, yet the demand for new drug discovery
continues. The motivation lies in the growing number of drug-resistant infectious
diseases and the steadily increasing range of emerging disorders.

The Discovery and Properties of Avemar – Fermented Wheat Germ
Extract: Carcinogenesis Suppressor
Larry H Bernstein, MD, FCAP, Contributor
http://pharmaceuticalintelligence.com/2014/06/07/the-discovery-and-properties-of-avemar-fermented-wheat-germ-extract-carcinogenesis-suppressor/

Read Full Post »

The Vibrant Philly Biotech Scene: Focus on Vaccines and Philimmune, LLC

Curator: Stephen J. Williams, Ph.D

Article ID #163: The Vibrant Philly Biotech Scene: Focus on Vaccines and Philimmune, LLC. Published on 12/10/2014

WordCloud Image Produced by Adam Tubman

I am intending to do a series of posts highlighting interviews with Philadelphia area biotech startup CEOs and showing how a vibrant biotech startup scene is evolving in the city as well as the Delaware Valley area. Philadelphia has been home to some of the nation's oldest biotechs, including Cephalon and Centocor, hundreds of spinouts from a multitude of universities, as well as the first cloned animal (a frog), the first transgenic mouse, and Nobel laureates in the field of molecular biology and genetics. Although some recent disheartening news about the fall in rankings of Philadelphia as a biotech hub, and recent remarks by CEOs of former area companies, has dominated the news, biotech incubators like the University City Science Center and the Bucks County Biotechnology Center, as well as a reinvigorated investment community (like PCCI and MABA), are bringing Philadelphia back. And although much work is needed to bring the Philadelphia area back to its former glory days (including political will at the state level), there are many bright spots, such as the innovative young companies outlined in these posts.

First up I got to talk with Florian Schodel, M.D., Ph.D., CEO of Philimmune, which provides expertise in medicine, clinical and regulatory development, and analytical sciences to support successful development and registration of vaccines and biologics. Before founding Philimmune, Dr. Schodel was VP of Vaccines Clinical Research at Merck Research Laboratories and led EU vaccine clinical trials and the clinical development of rotavirus, measles, mumps, hepatitis B, and rubella vaccines. In addition, Dr. Schodel and Philimmune consult on vaccine development efforts at numerous biotech companies.

 

His specialties and services include: vaccines and biologics development strategy, clinical development, clinical operations, strategic planning and alliances, international collaborations, analytical and assay development, project and portfolio integration and leadership.

Successful development of vaccines and biologics poses some unique challenges, including sterile manufacturing and substantial early capital investment before clinical trials are initiated, assay development for clinical trial support, and unique trial design. Vaccine and biologics development is therefore a highly collaborative process across several disciplines.

The Philadelphia area has a rich history in vaccine development, including the discovery and development of rubella, cytomegalovirus, rabies, and oral polio vaccines at the Wistar Institute. Dr. Schodel answered a few questions on the state of vaccine development and current efforts in the Philadelphia area, including the efforts by companies such as GSK and Inovio to develop an Ebola vaccine.

In Dr. Schodel's opinion, the biggest hurdle in vaccine development is a societal issue, not a preclinical development issue. Great advances have been made to speed the discovery process and enhance quality assurance of manufacturing capabilities; however, there has not been a strong history of, or support for, developing vaccines for the plethora of infectious diseases seen in the developing world. As Dr. Schodel pointed out, there are relatively few players in the field, and it is tough to get those few players excited about investing in new targets.

 

However, some companies are rapidly expanding their vaccine portfolios.

Why haven’t 3rd world countries developed their own vaccine programs?

 

  1. Hard to find partners willing to invest and support development
  2. Developing nations don’t have the money or infrastructure to support health programs
  3. Doctors in these countries need to be educated on how to conduct trials and run vaccine programs, as the Gates Foundation does. For more information, see the Nature paper on obstacles to vaccine introduction in third-world countries.

 

Lastly, Dr. Schodel touched on a growing area: cancer vaccine development. Recent advances in bladder and cervical cancer vaccines, along with promising results in an early-phase metastatic breast cancer vaccine trial and a phase I oral cancer vaccine trial, have reinvigorated the field of cancer vaccinology.

 

Historic Timeline of Vaccine Development

Graphic from http://en.pedaily.cn/Item.aspx?id=194125

 

Other posts on this site related to Biotech Startups in Philadelphia and some additional posts on infectious disease include:

 

RAbD Biotech Presents at 1st Pitch Life Sciences-Philadelphia

LytPhage Presents at 1st Pitch Life Sciences-Philadelphia

Hastke Inc. Presents at 1st Pitch Life Sciences-Philadelphia

1st Pitch Life Science- Philadelphia- What VCs Really Think of your Pitch

The History of Infectious Diseases and Epidemiology in the late 19th and 20th Century


Read Full Post »

The History of Infectious Diseases and Epidemiology in the late 19th and 20th Century

Curator: Larry H Bernstein, MD, FCAP

 

Infectious diseases are a part of the history of the English, French, and Spanish colonization of the Americas, and of the slave trade. The many plagues in the New and Old Worlds that have affected the course of history from ancient to modern times were known to the Egyptians, Greeks, Chinese, crusaders, and explorers, and to Napoleon, and were bound up with war, pestilence, and epidemic. Our coverage is mainly concerned with the scientific and public health consequences of these events, which preceded WWI and extended to the Vietnam War, and is highlighted by the invention of a worldwide public health system.

The Armed Forces Institute of Pathology (AFIP) closed its doors on September 15, 2011. It was founded as the Army Medical Museum on May 21, 1862, to collect pathological specimens along with their case histories.

The information from the case files of the pathological specimens from the Civil War was compared with Army pensions records and compiled into the six-volume Medical and Surgical History of the War of the Rebellion, an early study of wartime medicine.

In 1900, museum curator Walter Reed led the commission which proved that a mosquito was the vector for Yellow Fever, beginning the mosquito eradication campaigns throughout most of the twentieth century.

Walter Reed

Another museum curator, Frederick Russell, conducted clinical trials on the typhoid vaccine in 1907, making the U.S. Army the first army to be vaccinated against typhoid.

Increased emphasis on pathology during the twentieth century turned the museum, renamed the Armed Forces Institute of Pathology in 1949, into an international resource for pathology and the study of disease. AFIP's pathological collections have been used, for example, in the characterization of the 1918 influenza virus in 1997.

Prior to moving to the Walter Reed Army Medical Center, the AFIP was located at the Army Medical Museum and Library on the Mall (1887-1969), and earlier as Army Medical Museum in Ford’s Theatre (1867-1886).

Army Medical Museum and Library on the Mall

This institution, originally the Library of the Surgeon General’s Office (U.S. Army), gained its present name and was transferred from the Army to the Public Health Service in 1956. In 1962, it moved to its own Bethesda site after sharing space for nearly 100 years with other Army units, first at the former Ford’s Theatre building and then at the Army Medical Museum and Library on the Mall. Rare books and other holdings that had been sent to Cleveland for safekeeping during World War II were also reunited with the main collection at that time.

The National Museum of Health and Medicine, established in 1862, inspires interest in and promotes the understanding of medicine — past, present, and future — with a special emphasis on tri-service American military medicine. As a National Historic Landmark recognized for its ongoing value to the health of the military and to the nation, the Museum identifies, collects, and preserves important and unique resources to support a broad agenda of innovative exhibits, educational programs, and scientific, historical, and medical research. NMHM is a headquarters element of the U.S. Army Medical Research and Materiel Command. NMHM’s newest exhibit installations showcase the institution’s 25-million object collection, focusing on topics as diverse as innovations in military medicine, traumatic brain injury, anatomy and pathology, military medicine during the Civil War, the assassination of Abraham Lincoln (including the bullet that killed him), human identification and a special exhibition on the Museum’s own major milestone—the 150th anniversary of the founding of the Army Medical Museum. Objects on display will include familiar artifacts and specimens: the bullet that killed Lincoln and a leg showing the effects of elephantiasis, as well as recent finds in the collection—all designed to astound visitors to the new Museum.

Today, the National Library of Medicine houses the largest collection of print and non-print materials in the history of the health sciences in the United States, and maintains an active program of exhibits and public lectures. Most of the archival and manuscript material dates from the 17th century; however, the Library owns about 200 pre-1601 Western and Islamic manuscripts. Holdings include pre-1914 books, pre-1871 journals, archives and modern manuscripts, medieval and Islamic manuscripts, a collection of printed books, manuscripts, and visual material in Japanese, Chinese, and Korean; historical prints, photographs, films, and videos; pamphlets, dissertations, theses, college catalogs, and government documents.

The oldest item in the Library is an Arabic manuscript on gastrointestinal diseases from al-Razi’s The Comprehensive Book on Medicine (Kitab al-Hawi fi al-tibb) dated 1094. Significant modern collections include the papers of U.S. Surgeons General, including C. Everett Koop, and the papers of Nobel Prize-winning scientists, particularly those connected with NIH.

As part of its Profiles in Science project, the National Library of Medicine has collaborated with the Churchill Archives Centre to digitize and make available over the World Wide Web a selection of the Rosalind Franklin Papers for use by educators and researchers. This site provides access to the portions of the Rosalind Franklin Papers, which range from 1920 to 1975. The collection contains photographs, correspondence, diaries, published articles, lectures, laboratory notebooks, and research notes.

Rosalind Franklin

“Science and everyday life cannot and should not be separated. Science, for me, gives a partial explanation of life. In so far as it goes, it is based on fact, experience, and experiment. . . . I agree that faith is essential to success in life, but I do not accept your definition of faith, i.e., belief in life after death. In my view, all that is necessary for faith is the belief that by doing our best we shall come nearer to success and that success in our aims (the improvement of the lot of mankind, present and future) is worth attaining.”

–Rosalind Franklin in a letter to Ellis Franklin, ca. summer 1940

Smallpox

Although some disliked mandatory smallpox vaccination measures, coordinated efforts against smallpox went on in the United States after 1867, and the disease continued to diminish in the wealthy countries. By 1897, smallpox had largely been eliminated from the United States. In Northern Europe a number of countries had eliminated smallpox by 1900, and by 1914, the incidence in most industrialized countries had decreased to comparatively low levels. Vaccination continued in industrialized countries, until the mid to late 1970s as protection against reintroduction. Australia and New Zealand are two notable exceptions; neither experienced endemic smallpox and never vaccinated widely, relying instead on protection by distance and strict quarantines.

In 1966 an international team, the Smallpox Eradication Unit, was formed under the leadership of an American, Donald Henderson. In 1967, the World Health Organization intensified the global smallpox eradication effort by contributing $2.4 million annually and adopted the new disease surveillance method promoted by Czech epidemiologist Karel Raška. Two-year-old Rahima Banu of Bangladesh was the last person infected with naturally occurring Variola major, in 1975.

The global eradication of smallpox was certified, based on intense verification activities in countries, by a commission of eminent scientists on 9 December 1979 and subsequently endorsed by the World Health Assembly on 8 May 1980. The first two sentences of the resolution read:

Having considered the development and results of the global program on smallpox eradication initiated by WHO in 1958 and intensified since 1967 … Declares solemnly that the world and its peoples have won freedom from smallpox, which was a most devastating disease sweeping in epidemic form through many countries since earliest time, leaving death, blindness and disfigurement in its wake and which only a decade ago was rampant in Africa, Asia and South America.

—World Health Organization, Resolution WHA33.3

Anthrax

Anthrax is an acute disease caused by the bacterium Bacillus anthracis. Most forms of the disease are lethal, and it affects both humans and other animals. Effective vaccines against anthrax are now available, and some forms of the disease respond well to antibiotic treatment.

Like many other members of the genus Bacillus, B. anthracis can form dormant endospores (often referred to as “spores” for short, but not to be confused with fungal spores) that are able to survive in harsh conditions for decades or even centuries. Such spores can be found on all continents, even Antarctica. When spores are inhaled, ingested, or come into contact with a skin lesion on a host, they may become reactivated and multiply rapidly.

Anthrax commonly infects wild and domesticated herbivorous mammals that ingest or inhale the spores while grazing. Ingestion is thought to be the most common route by which herbivores contract anthrax. Carnivores living in the same environment may become infected by consuming infected animals. Diseased animals can spread anthrax to humans, either by direct contact (e.g., inoculation of infected blood to broken skin) or by consumption of a diseased animal’s flesh.

Anthrax does not spread directly from one infected animal or person to another; it is spread by spores. These spores can be transported by clothing or shoes. The body of an animal that had active anthrax at the time of death can also be a source of anthrax spores. Owing to the hardiness of anthrax spores, and their ease of production in vitro, they are extraordinarily well suited to use (in powdered and aerosol form) as biological weapons.

Bacillus anthracis is a rod-shaped, Gram-positive, aerobic bacterium about 1 by 9 μm in size. It was shown to cause disease by Robert Koch in 1876 when he took a blood sample from an infected cow, isolated the bacteria and put them into a mouse. The bacterium normally rests in endospore form in the soil, and can survive for decades in this state. Once ingested or placed in an open wound, the bacterium begins multiplying inside the animal or human and typically kills the host within a few days or weeks. The endospores germinate at the site of entry into the tissues and then spread by the circulation to the lymphatics, where the bacteria multiply.

Robert Koch

Robert Koch

Veterinarians can often tell a possible anthrax-induced death by its sudden occurrence, and by the dark, nonclotting blood that oozes from the body orifices. Bacteria that escape the body via oozing blood or through the opening of the carcass may form hardy spores. One spore forms per one vegetative bacterium. Once formed, these spores are very hard to eradicate.

The lethality of the anthrax disease is due to the bacterium’s two principal virulence factors: the poly-D-glutamic acid capsule, which protects the bacterium from phagocytosis by host neutrophils, and the tripartite protein toxin, called anthrax toxin. Anthrax toxin is a mixture of three protein components: protective antigen (PA), edema factor (EF), and lethal factor (LF). PA plus LF produces lethal toxin, and PA plus EF produces edema toxin. These toxins cause death and tissue swelling (edema), respectively.

To enter the cells, the edema and lethal factors use another protein produced by B. anthracis called protective antigen, which binds to two surface receptors on the host cell. A cell protease then cleaves PA into two fragments: PA20 and PA63. PA20 dissociates into the extracellular medium, playing no further role in the toxic cycle. PA63 then oligomerizes with six other PA63 fragments forming a heptameric ring-shaped structure named a prepore.

Once in this shape, the complex can competitively bind up to three EFs or LFs, forming a resistant complex. Receptor-mediated endocytosis occurs next, providing the newly formed toxic complex access to the interior of the host cell. The acidified environment within the endosome triggers the heptamer to release the LF and/or EF into the cytosol.

Edema factor is a calmodulin-dependent adenylate cyclase. Adenylate cyclase catalyzes the conversion of ATP into cyclic AMP (cAMP) and pyrophosphate. The complexation of adenylate cyclase with calmodulin removes calmodulin from stimulating calcium-triggered signaling. LF inactivates neutrophils so they cannot phagocytose bacteria. Anthrax causes vascular leakage of fluid and cells, and ultimately hypovolemic shock and septic shock.

Occupational exposure to infected animals or their products (such as skin, wool, and meat) is the usual pathway of exposure for humans. Workers who are exposed to dead animals and animal products are at the highest risk, especially in countries where anthrax is more common. Anthrax in livestock grazing on open range where they mix with wild animals still occasionally occurs in the United States and elsewhere. Many workers who deal with wool and animal hides are routinely exposed to low levels of anthrax spores, but most exposure levels are not sufficient to develop anthrax infections. The body’s natural defenses presumably can destroy low levels of exposure. These people usually contract cutaneous anthrax if they catch anything.

Throughout history, the most dangerous form of inhalational anthrax was called woolsorters’ disease because it was an occupational hazard for people who sorted wool. Today, this form of infection is extremely rare, as almost no infected animals remain. The last fatal case of natural inhalational anthrax in the United States occurred in California in 1976, when a home weaver died after working with infected wool imported from Pakistan. Gastrointestinal anthrax is exceedingly rare in the United States, with only one case on record, reported in 1942, according to the Centers for Disease Control and Prevention.

Various techniques are used for the direct identification of B. anthracis in clinical material. Firstly, specimens may be Gram stained. Bacillus spp. are quite large in size (3 to 4 μm long), they grow in long chains, and they stain Gram-positive. To confirm the organism is B. anthracis, rapid diagnostic techniques such as polymerase chain reaction-based assays and immunofluorescence microscopy may be used.

All Bacillus species grow well on 5% sheep blood agar and other routine culture media. Polymyxin-lysozyme-EDTA-thallous acetate can be used to isolate B. anthracis from contaminated specimens, and bicarbonate agar is used as an identification method to induce capsule formation. Bacillus spp. usually grow within 24 hours of incubation at 35 °C, in ambient air (room temperature) or in 5% CO2. If bicarbonate agar is used for identification, then the medium must be incubated in 5% CO2.

B. anthracis colonies are medium-large, gray, flat, and irregular with swirling projections, often referred to as having a "medusa head" appearance, and are not hemolytic on 5% sheep blood agar. The bacteria are not motile, are susceptible to penicillin, and produce a wide zone of lecithinase on egg yolk agar. Confirmatory testing to identify B. anthracis includes gamma bacteriophage testing, indirect hemagglutination, and enzyme-linked immunosorbent assay to detect antibodies. The best confirmatory precipitation test for anthrax is the Ascoli test.

Vaccines against anthrax for use in livestock and humans have had a prominent place in the history of medicine, from Pasteur’s pioneering 19th-century work with cattle (the second effective vaccine ever) to the controversial 20th century use of a modern product (BioThrax) to protect American troops against the use of anthrax in biological warfare. Human anthrax vaccines were developed by the Soviet Union in the late 1930s and in the US and UK in the 1950s. The current FDA-approved US vaccine was formulated in the 1960s.

If a person is suspected as having died from anthrax, every precaution should be taken to avoid skin contact with the potentially contaminated body and fluids exuded through natural body openings. The body should be put in strict quarantine and then incinerated. A blood sample should then be collected and sealed in a container and analyzed in an approved laboratory to ascertain if anthrax is the cause of death. Microscopic visualization of the encapsulated bacilli, usually in very large numbers, in a blood smear stained with polychrome methylene blue (McFadyean stain) is fully diagnostic, though culture of the organism is still the gold standard for diagnosis.

Full isolation of the body is important to prevent possible contamination of others. Protective, impermeable clothing and equipment such as rubber gloves, rubber apron, and rubber boots with no perforations should be used when handling the body. Disposable personal protective equipment and filters should be autoclaved, and/or burned and buried.

Anyone working with anthrax in a suspected or confirmed victim should wear respiratory equipment capable of filtering this size of particle or smaller. The US National Institute for Occupational Safety and Health – and Mine Safety and Health Administration-approved high-efficiency respirator, such as a half-face disposable respirator with a high-efficiency particulate air filter, is recommended.

All possibly contaminated bedding or clothing should be isolated in double plastic bags and treated as possible biohazard waste. The victim should be sealed in an airtight body bag. Dead victims who are opened and not burned provide an ideal source of anthrax spores. Cremating victims is the preferred way of handling body disposal.

Until the 20th century, anthrax infections killed hundreds of thousands of animals and people worldwide each year. French scientist Louis Pasteur developed the first effective vaccine for anthrax in 1881.

Louis Pasteur

As a result of over a century of animal vaccination programs, sterilization of raw animal waste materials, and anthrax eradication programs in United States, Canada, Russia, Eastern Europe, Oceania, and parts of Africa and Asia, anthrax infection is now relatively rare in domestic animals. Anthrax is especially rare in dogs and cats, as is evidenced by a single reported case in the United States in 2001.

Anthrax outbreaks occur in some wild animal populations with some regularity. The disease is more common in countries without widespread veterinary or human public health programs. In the 21st century, anthrax is still a problem in less developed countries.

B. anthracis bacterial spores are soil-borne. Because of their long lifespan, spores are present globally and remain at the burial sites of animals killed by anthrax for many decades. Disturbed grave sites of infected animals have caused reinfection over 70 years after the animal's interment.

Cholera

Cholera is an acute diarrheal infection that can kill within a matter of hours if untreated. The mainstay of treatment is oral rehydration therapy — drinking water mixed with salts and sugar. Researchers at EPFL — the Swiss Federal Institute of Technology in Lausanne — say that using rice starch instead of sugar with the rehydration salts could reduce bacterial toxicity by almost 75 percent. That would make the microbe less likely to infect a patient's family and friends if they are exposed to any body fluids.
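
To make "water mixed with salts and sugar" a bit more concrete, the short Python sketch below estimates the osmolarity of the reduced-osmolarity ORS formulation widely cited by WHO. The gram-per-litre figures are quoted from memory as an assumption for illustration; they are not taken from the EPFL work or from this article.

# Each solute contributes (g/L divided by molar mass) times the number of
# osmotically active particles per formula unit.
ors = {
    # name: (g per litre, molar mass g/mol, particles per formula unit)
    "sodium chloride":             (2.6, 58.44, 2),
    "glucose (anhydrous)":         (13.5, 180.16, 1),
    "potassium chloride":          (1.5, 74.55, 2),
    "trisodium citrate dihydrate": (2.9, 294.10, 4),
}

total_mosm = sum(g / mm * n * 1000 for g, mm, n in ors.values())
print(f"Approximate osmolarity: {total_mosm:.0f} mOsm/L")
# ~244 mOsm/L, close to the nominal 245 mOsm/L of the reduced-osmolarity formulation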

The World Health Organization says cholera, a water-borne bacterium, infects three to five million people every year, and the severe dehydration it causes leads to as many as 120,000 deaths.

Cholera is an acute diarrheal disease caused by the waterborne bacterium Vibrio cholerae O1 or O139 (V. cholerae). Infection occurs mainly through ingestion of contaminated water or food. V. cholerae passes through the stomach, colonizes the upper part of the small intestine, penetrates the mucus layer, and secretes cholera toxin, which acts on the small intestine.

Clinically, the majority of cholera episodes are characterized by a sudden onset of massive diarrhea and vomiting accompanied by the loss of profuse amounts of protein-free fluid with electrolytes. The resulting dehydration produces tachycardia, hypotension, and vascular collapse, which can lead to sudden death. The diagnosis of cholera is commonly established by isolating the causative organism from the stools of infected individuals.

There are an estimated 3–5 million cholera cases and 100 000–120 000 deaths due to cholera every year.

Up to 80% of cases can be successfully treated with oral rehydration salts.

Effective control measures rely on prevention, preparedness and response.

Provision of safe water and sanitation is critical in reducing the impact of cholera and other waterborne diseases.

Oral cholera vaccines are considered an additional means to control cholera, but should not replace conventional control measures.

During the 19th century, cholera spread across the world from its original reservoir in the Ganges delta in India. Six subsequent pandemics killed millions of people across all continents. The current (seventh) pandemic started in South Asia in 1961, and reached Africa in 1971 and the Americas in 1991. Cholera is now endemic in many countries.

INDIA-ENVIRONMENT-POLLUTION

In its extreme manifestation, cholera is one of the most rapidly fatal infectious illnesses known. Within 3–4 hours of onset of symptoms, a previously healthy person may become severely dehydrated and if not treated may die within 24 hours (WHO, 2010). The disease is one of the most researched in the world today; nevertheless, it is still an important public health problem despite more than a century of study, especially in developing tropical countries. Cholera is currently listed as one of three internationally quarantinable diseases by the World Health Organization (WHO), along with plague and yellow fever (WHO, 2000a).

Two serogroups of V. cholerae – O1 and O139 – cause outbreaks. V. cholerae O1 causes the majority of outbreaks, while O139 – first identified in Bangladesh in 1992 – is confined to South-East Asia.

Non-O1 and non-O139 V. cholerae can cause mild diarrhoea but do not generate epidemics.

The main reservoirs of V. cholerae are people and aquatic sources such as brackish water and estuaries, often associated with algal blooms. Recent studies indicate that global warming creates a favorable environment for the bacteria.

Socioeconomic and demographic factors enhance the vulnerability of a population to infection and contribute to epidemic spread. Such factors also determine the extent to which the disease will reach epidemic proportions and modulate the size of the epidemic. Known population-level (local-level) risk factors for cholera include poverty, lack of development, high population density, low education, and lack of previous exposure. Cholera diffuses rapidly in environments that lack basic infrastructure with regard to access to safe water and proper sanitation. The cholera vibrios can survive and multiply outside the human body and can spread rapidly in environments where living conditions are overcrowded and where there is no safe disposal of solid waste, liquid waste, and human feces.

Mapping the locations of cholera victims, John Snow was able to trace the cause of the disease to a contaminated water source. Surprisingly, this was done 20 years before Koch and Pasteur established the beginnings of microbiology (Koch, 1884).

John Snow's map

Yellow Fever

Yellow fever virus was probably introduced into the New World via ships carrying slaves from West Africa. Throughout the 18th and 19th centuries, regular and devastating epidemics of yellow fever occurred across the Caribbean, Central and South America, the southern United States and Europe. The Yellow Fever Commission, founded as a consequence of excessive disease mortality during the Spanish– American War (1898), concluded that the best way to control the disease was to control the mosquito. William Gorgas successfully eradicated yellow fever from Havana by destroying larval breeding sites and this strategy of source reduction was then successfully used to reduce disease problems and thus finally permit the construction of the Panama Canal in 1904. Success was due largely to a top-down, military approach involving strict supervision and discipline (Gorgas, 1915). In 1946, an intensive Aedes aegypti eradication campaign was initiated in the Americas, which succeeded in reducing vector populations to undetectable levels throughout most of its range.

The production of an effective vaccine in the 1930s led to a change of emphasis from vector control to vaccination for the control of yellow fever. Vaccination campaigns almost eliminated urban yellow fever but incomplete coverage, as with incomplete anti-vectorial measures previously, meant the disease persisted, and outbreaks occurred in remote forest areas.

It was acknowledged by the Health Organization of the League of Nations (the forerunner to the World Health Organization (WHO)) that yellow fever was a severe burden on endemic countries. The work of Soper and the Brazilian Cooperative Yellow Fever Service (Soper, 1934, 1935a, b) began to determine the geographical extent of the disease, specifically in Brazil. Regional maps of disease outbreaks were published by Sawyer (1934), but it was not until after the formation of the WHO that a global map of yellow fever endemicity was first constructed (van Rooyen and Rhodes, 1948). This map was based on expert opinion (United Nations Relief and Rehabilitation Administration/Expert Commission on Quarantine) and serological surveys. The present-day distribution map for yellow fever is still essentially a modified version of this map.

Global yellow fever risk map

Yellow fever is conspicuously absent from Asia. Although there is some evidence that other flaviviruses may offer cross-protection against yellow fever (Gordon-Smith et al., 1962), why yellow fever does not occur in Asia is still unexplained.

It has been estimated that the currently circulating strains of YFV arose in Africa within the last 1,500 years and emerged in the Americas following the slave trade approximately 300–400 years ago. These viruses then spread westwards across the continent and persist there to this day in the jungles of South America.

The 17D live-attenuated vaccine still in use today was developed in 1936, and a single dose confers immunity for at least ten years in 95% of the cases. In a bid to contain the spread of the disease, travellers to countries within endemic areas or those thought to be ‘at risk’ require a certificate of vaccination. The yellow fever certificate is the only internationally regulated certification supported by the WHO. The effectiveness of the vaccine reduces the need for anti-vectorial campaigns directed specifically against yellow fever. As the same major vector is involved, control of Aedes aegypti for dengue reduction will also reduce yellow fever transmission where both diseases co-occur, especially within urban settings.

Dengue

Probable epidemics of dengue fever have been recorded from Africa, Asia, Europe and the Americas since the early 19th century (Armstrong, 1923). Although it is rarely fatal, up to 90% of the population of an infected area can be incapacitated during the course of an epidemic (Armstrong, 1923; Siler et al., 1926). Widespread movements of troops and refugees during and after World War II introduced vectors and viruses into many new areas. Dengue fever has unsurprisingly been mistaken for yellow fever as well as other diseases including influenza, measles, typhoid and malaria. It is rarely fatal and survivors appear to have lifelong immunity to the homologous serotype.

Far more serious is dengue haemorrhagic fever (DHF), where additional symptoms develop, including haemorrhaging and shock. The mortality from DHF can exceed 30% if appropriate care is unavailable. The most significant risk factor for DHF is when secondary infection with a different serotype occurs in people who have already had, and recovered from, a primary dengue infection.

Dengue has adapted to changes in human demography very effectively. The main vector of dengue is the anthropophilic Aedes aegypti, which is found in close association with human settlements throughout the tropics, breeding mainly in containers in and around dwellings and feeding almost exclusively on humans. As a result, dengue is essentially a disease of tropical urban areas. Before 1970, only nine countries had experienced DHF epidemics, but by 1995 this number had increased fourfold (WHO, 2001). Dengue case numbers have increased considerably since the 1960s; by the end of the 20th century an estimated 50 million cases of dengue fever and 500 000 cases of DHF were occurring every year (WHO, 2001).

The appearance of DHF stimulated large amounts of dengue research, which established the existence of the four serotypes and the range of competent vectors, and led to the adoption of Aedes aegypti control programs in some areas (particularly South-East Asia) (Kilpatrick et al., 1970).

There have been several attempts to estimate the economic impact of dengue: the 1977 epidemic in Puerto Rico was thought to have cost between $6.1 and $15.6 million ($26–$31 per clinical case) (Von Allmen et al., 1979), while the 1981 Cuban epidemic (with a total of 344 203 reported cases) cost about $103 million (around $299 per case) (Kouri et al., 1989).
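
As a quick sanity check of the per-case figure quoted for the 1981 Cuban epidemic, the arithmetic below uses only the numbers in the paragraph above:

total_cost_usd = 103_000_000      # reported cost of the 1981 Cuban epidemic
reported_cases = 344_203          # reported case count
print(f"Cost per reported case: ${total_cost_usd / reported_cases:.0f}")   # about $299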

There is no cure for dengue fever or for DHF. Currently, the only treatment is symptomatic, but this can reduce mortality from DHF to less than 1% (WHO, 2002). Unfortunately, the extent of dengue epidemics means that local public health services are often overwhelmed by the demands for treatment.

Malaria

Malaria is a serious and sometimes fatal disease caused by a parasite transmitted by mosquitoes that feed on humans. People who get malaria are typically very sick with high fevers, shaking chills, and flu-like illness. About 1,500 cases of malaria are diagnosed in the United States each year. The vast majority of cases in the United States are in travelers and immigrants returning from countries where malaria transmission occurs, many from sub-Saharan Africa and South Asia. Malaria has been noted for more than 4,000 years. It became widely recognized in Greece by the 4th century BCE, and it was responsible for the decline of many of the city-state populations. Hippocrates noted the principal symptoms. In the Susruta, a Sanskrit medical treatise, the symptoms of malarial fever were described and attributed to the bites of certain insects. A number of Roman writers attributed malarial diseases to the swamps.

Following their arrival in the New World, Spanish Jesuit missionaries learned from indigenous Indian tribes of a medicinal bark used for the treatment of fevers. With this bark, the Countess of Chinchón, the wife of the Viceroy of Peru, was cured of her fever. The bark from the tree was then called Peruvian bark and the tree was named Cinchona after the countess. The medicine from the bark is now known as the antimalarial, quinine. Along with artemisinins, quinine is one of the most effective antimalarial drugs available today.

Cinchona calisaya (quinquina)

Cinchona officinalis is a medicinal plant, one of several Cinchona species used for the production of quinine, which is an anti-fever agent. It is especially useful in the prevention and treatment of malaria. Cinchona calisaya is the tree most cultivated for quinine production.

There are a number of other alkaloids that are extracted from this tree. They include cinchonine, cinchonidine and quinidine (Wikipedia).

Charles Louis Alphonse Laveran, a French army surgeon stationed in Constantine, Algeria, was the first to notice parasites in the blood of a patient suffering from malaria in 1880. Laveran was awarded the Nobel Prize in 1907.

Alphonse Laveran

Camillo Golgi, an Italian neurophysiologist, established that there were at least two forms of the disease, one with tertian periodicity (fever every other day) and one with quartan periodicity (fever every third day). He also observed that the forms produced differing numbers of merozoites (new parasites) upon maturity and that fever coincided with the rupture and release of merozoites into the blood stream. He was awarded a Nobel Prize in Medicine for his discoveries in neurophysiology in 1906.

Malaria life cycle

Ookinete, sporozoite, merozoite

The Italian investigators Giovanni Battista Grassi and Raimondo Filetti first introduced the names Plasmodium vivax and P. malariae for two of the malaria parasites that affect humans in 1890. Laveran had believed that there was only one species, Oscillaria malariae. William H. Welch reviewed the subject and, in 1897, named the malignant tertian malaria parasite P. falciparum. In 1922, John William Watson Stephens described the fourth human malaria parasite, P. ovale. P. knowlesi was first described by Robert Knowles and Biraj Mohan Das Gupta in 1931 in a long-tailed macaque, but the first documented human infection with P. knowlesi was in 1965.

Anopheles mosquito

Ronald Ross, a British officer in the Indian Medical Service, was the first to demonstrate, in 1897, that malaria parasites could be transmitted from infected patients to mosquitoes. In further work with bird malaria, Ross showed that mosquitoes could transmit malaria parasites from bird to bird. This necessitated a sporogonic cycle (the time interval during which the parasite developed in the mosquito). Ross was awarded the Nobel Prize in 1902.

Ronald Ross, 1899

A team of Italian investigators led by Giovanni Battista Grassi collected Anopheles claviger mosquitoes and fed them on malarial patients. The complete sporogonic cycles of Plasmodium falciparum, P. vivax, and P. malariae were demonstrated. Mosquitoes infected by feeding on a patient in Rome were sent to London in 1900, where they fed on two volunteers, both of whom developed malaria.

The construction of the Panama Canal was made possible only after yellow fever and malaria were controlled in the area. These two diseases were a major cause of death and disease among workers in the area. In 1906, there were over 26,000 employees working on the Canal. Of these, over 21,000 were hospitalized for malaria at some time during their work. By 1912, there were over 50,000 employees, and the number of hospitalized workers had decreased to approximately 5,600. Through the leadership and efforts of William Crawford Gorgas, Joseph Augustin LePrince, and Samuel Taylor Darling, yellow fever was eliminated and malaria incidence markedly reduced through an integrated program of insect and malaria control.

William Crawford Gorgas, MD

During the U.S. military occupation of Cuba and the construction of the Panama Canal at the turn of the 20th century, U.S. officials made great strides in the control of malaria and yellow fever. In 1914 Henry Rose Carter and Rudolph H. von Ezdorf of the USPHS requested and received funds from the U.S. Congress to control malaria in the United States. Various activities to investigate and combat malaria in the United States followed from this initial request and reduced the number of malaria cases in the United States. USPHS established malaria control activities around military bases in the malarious regions of the southern United States to allow soldiers to train year round.

U.S. President Franklin D. Roosevelt signed a bill that created the Tennessee Valley Authority (TVA) on May 18, 1933. The law gave the federal government a centralized body to control the Tennessee River’s potential for hydroelectric power and improve the land and waterways for development of the region. An organized and effective malaria control program stemmed from this new authority in the Tennessee River valley. Malaria affected 30 percent of the population in the region when the TVA was incorporated in 1933. The Public Health Service played a vital role in the research and control operations and by 1947, the disease was essentially eliminated. Mosquito breeding sites were reduced by controlling water levels and insecticide applications.

Chloroquine was discovered by a German, Hans Andersag, in 1934 at the Bayer I.G. Farbenindustrie A.G. laboratories in Elberfeld, Germany. He named his compound resochin. Through a series of lapses and confusion brought about during the war, chloroquine was finally recognized and established as an effective and safe antimalarial in 1946 by British and U.S. scientists.

Felix Hoffmann, Gerhard Domagk, Hermann Schnell_BAYER

A German chemistry student, Othmer Zeidler, synthesized DDT in 1874, for his thesis. The insecticidal property of DDT was not discovered until 1939 by Paul Müller in Switzerland. Various militaries in WWII utilized the new insecticide initially for control of louse-borne typhus. DDT was used for malaria control at the end of WWII after it had proven effective against malaria-carrying mosquitoes by British, Italian, and American scientists. Müller won the Nobel Prize for Medicine in 1948.

Paul Muller

Malaria Control in War Areas (MCWA) was established to control malaria around military training bases in the southern United States and its territories, where malaria was still problematic. Many of the bases were established in areas where mosquitoes were abundant. MCWA aimed to prevent reintroduction of malaria into the civilian population by mosquitoes that would have fed on malaria-infected soldiers, in training or returning from endemic areas. During these activities, MCWA also trained state and local health department officials in malaria control techniques and strategies.

The National Malaria Eradication Program, a cooperative undertaking by state and local health agencies of 13 Southeastern states and the CDC, originally proposed by Louis Laval Williams, commenced operations on July 1, 1947. By the end of 1949, over 4,650,000 house spray applications had been made. In 1947, 15,000 malaria cases were reported. By 1950, only 2,000 cases were reported. By 1951, malaria was considered eliminated from the United States.

With the success of DDT, the advent of less toxic, more effective synthetic antimalarials, and the enthusiastic and urgent belief that time and money were of the essence, the World Health Organization (WHO) submitted at the World Health Assembly in 1955 an ambitious proposal for the eradication of malaria worldwide. Eradication efforts began and focused on house spraying with residual insecticides, antimalarial drug treatment, and surveillance, and would be carried out in 4 successive steps: preparation, attack, consolidation, and maintenance. Successes included elimination in nations with temperate climates and seasonal malaria transmission.

Some countries such as India and Sri Lanka had sharp reductions in the number of cases, followed by increases to substantial levels after efforts ceased, while other nations had negligible progress (such as Indonesia, Afghanistan, Haiti, and Nicaragua), and still others were excluded completely from the eradication campaign (sub-Saharan Africa). The emergence of drug resistance, widespread resistance to available insecticides, wars and massive population movements, difficulties in obtaining sustained funding from donor countries, and lack of community participation made the long-term maintenance of the effort untenable.

The goal of most current National Malaria Prevention and Control Programs and most malaria activities conducted in endemic countries is to reduce the number of malaria-related cases and deaths. To reduce malaria transmission to a level where it is no longer a public health problem is the goal of what is called malaria “control.”

The natural ecology of malaria involves malaria parasites infecting successively two types of hosts: humans and female Anopheles mosquitoes. In humans, the parasites grow and multiply first in the liver cells and then in the red cells of the blood. In the blood, successive broods of parasites grow inside the red cells and destroy them, releasing daughter parasites (“merozoites”) that continue the cycle by invading other red cells.

Anopheles mosquito

The blood stage parasites are those that cause the symptoms of malaria. When certain forms of blood stage parasites (“gametocytes”) are picked up by a female Anopheles mosquito during a blood meal, they start another, different cycle of growth and multiplication in the mosquito.

After 10-18 days, the parasites are found (as “sporozoites”) in the mosquito’s salivary glands. When the Anopheles mosquito takes a blood meal on another human, the sporozoites are injected with the mosquito’s saliva and start another human infection when they parasitize the liver cells.
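
As a plain illustration of the two-host cycle just described (not a quantitative model), the short Python sketch below simply lists the successive stages named in the text in order, labeled by where they occur.

# Successive stages of the Plasmodium life cycle as described above.
# Purely illustrative: stage names and the 10-18 day figure come from the text.
LIFE_CYCLE = [
    ("human liver",              "sporozoites infect liver cells and multiply"),
    ("human blood",              "merozoites invade red cells, multiply, and destroy them"),
    ("human blood",              "some parasites differentiate into gametocytes"),
    ("mosquito (blood meal)",    "gametocytes are taken up and begin a new cycle of growth"),
    ("mosquito salivary glands", "sporozoites appear after roughly 10-18 days"),
    ("human liver",              "sporozoites injected with saliva start a new infection"),
]

for site, event in LIFE_CYCLE:
    print(f"{site:26s} {event}")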

Malaria. Wikipedia

A Plasmodium from the saliva of a female mosquito moving across a mosquito cell

Thus the mosquito carries the disease from one human to another (acting as a “vector”). Differently from the human host, the mosquito vector does not suffer from the presence of the parasites.

All the clinical symptoms associated with malaria are caused by the asexual erythrocytic or blood stage parasites. When the parasite develops in the erythrocyte, numerous known and unknown waste substances such as hemozoin pigment and other toxic factors accumulate in the infected red blood cell. These are dumped into the bloodstream when the infected cells lyse and release invasive merozoites. The hemozoin and other toxic factors such as glycosylphosphatidylinositol (GPI) stimulate macrophages and other cells to produce cytokines and other soluble factors which act to produce the fever and rigors associated with malaria.

Ookinete,_sporozoite,_merozoite

Plasmodium falciparum-infected erythrocytes, particularly those with mature trophozoites, adhere to the vascular endothelium of venular blood vessel walls; when they become sequestered in the vessels of the brain, this is a factor in causing the severe disease syndrome known as cerebral malaria, which is associated with high mortality.

Following the infective bite by the Anopheles mosquito, a period of time (the “incubation period”) goes by before the first symptoms appear. The incubation period in most cases varies from 7 to 30 days. The shorter periods are observed most frequently with P. falciparum and the longer ones with P. malariae.

malaria_lifecycle.

Antimalarial drugs taken for prophylaxis by travelers can delay the appearance of malaria symptoms by weeks or months, long after the traveler has left the malaria-endemic area. (This can happen particularly with P. vivax and P. ovale, both of which can produce dormant liver stage parasites; the liver stages may reactivate and cause disease months after the infective mosquito bite.)

The Influenza Pandemic of 1918

The Nation’s Health

If you had lived in the early twentieth century, your life expectancy would
have been much shorter than it is today. Today, life expectancy for men is 75 years;
for women, it is 80 years. In 1918, life expectancy for men was only 53 years.

Women’s life expectancy at 54 was only marginally better.

Why was life expectancy so much shorter?

During the early twentieth century, communicable diseases—that is, diseases
which can spread from person to person—were widespread. Influenza and
pneumonia, along with tuberculosis and gastrointestinal infections such
as diarrhea, killed Americans at an alarming rate, but
non-communicable diseases such as cancer and heart disease also
exacted a heavy toll. Accidents, especially in the nation’s unregulated factories
and workshops, were also responsible for maiming and killing many workers.

High infant mortality further shortened life expectancy. In 1918, one in
five American children did not live beyond their fifth birthday. In some
cities, the situation was even worse, with thirty percent of all infants dying
before their first birthday. Childhood diseases such as diphtheria, measles,
scarlet fever and whooping cough contributed significantly to these high
death rates.

osler_at_a_bedside

By 1900, an increasing number of physicians were receiving clinical
training. This training provided doctors with new insights into disease
and specific types of diseases. [Credit: National Library of Medicine]

scarlet_fever

Quarantine signs such as this one warned visitors away from homes
with scarlet fever and other infectious diseases. [Credit: National
Library of Medicine]

Rat Proofing

Cities often sponsored Clean-Up Days. Here, Public Health Service
employees clean up San Francisco’s streets in a campaign to
eradicate bubonic plague. [Credit: Office of the Public Health
Service Historian]

cleanup days

nurse_helps_with_baby_formula

A public health nurse teaches a young mother how to sterilize
a bottle. [Credit: National Library of Medicine]

Seeking Medical Care

Feeling Sick in 1918?

If you became sick in nineteenth-century America, you might consult
a doctor, a druggist, a midwife, a folk healer, a nurse or even
your neighbor. Most of these practitioners would visit you in your home.

By 1918, these attitudes toward health care were beginning to
change. Some physicians had begun to set up offices where patients
could receive medical care and hospitals, which emphasized sterilization
and isolation, were also becoming popular.

However, these changes were not yet universal and many Americans
still lived their entire lives without visiting a doctor.

How Did Ordinary People View Disease?

Folk Medicine:

In 1918, folk healers could be found all over America. Some of these
healers believed that diseases had a physical cause such as cold
weather but others believed it had a supernatural cause such as a curse.

Treatments advocated by these healers ran the gamut. Herbal remedies
were especially popular. Other popular remedies included cupping,
which entailed attaching a heated cup to the surface of the skin,
and acupuncture. Many people also wore magical objects which they
believed protected the wearer from illness.

During the influenza pandemic of 1918 when scientific medicine
failed to provide Americans with a cure or preventative, many people
turned to folk remedies and treatments.

Scientific Medicine

In the 1880s, building on developments which had been in the
making since the 1830s, a growing number of scientists and
physicians came to believe that disease was spread by
minute pathogenic organisms or germs.

Often called the bacteriological revolution, this new theory
radically transformed the practice of medicine. But while this was a
major step forward in understanding disease, doctors and scientists
continued to have only a rudimentary understanding of the differences
between different types of microbes. Many practicing physicians
did not understand the differences between bacteria and viruses
and this sharply limited their ability to understand disease
causation and disease prevention.

Drugs and Druggists:

Although the early twentieth century witnessed growing attempts
to regulate the practice of medicine, many druggists assumed
duties we associate today with physicians. Some druggists, for
example, diagnosed and prescribed treatments which they
then sold to the patient. Some of these treatments included opiates;
few actually cured diseases.

Desperate times called for desperate remedies and during the
influenza pandemic, many patients turned to these and other drugs
in the hopes that they would provide a cure.

Nurses:

Between 1890 and 1920, nursing schools multiplied and trained
nurses began to replace practical nurses. Isolation practices,
sterility, and strict routines, practices associated with professionally
trained nurses, increasingly became standard during this period. In 1918, nurses served as the physician’s hands, assisting doctors as
they made the rounds. During the pandemic, many nurses acted
independently of doctors, treating and prescribing for patients.

Physicians:

Throughout the eighteenth and much of the nineteenth centuries,
almost anyone had the right to call themselves a physician. By the
late nineteenth century, growing calls for reform had begun to
transform the profession.

In 1900, every state in the Union had some type of medical registration
law with about half of all states requiring physicians to possess a
medical diploma and pass an exam before they received a license
to practice. However, grandfather clauses which exempted many older
physicians meant that many physicians who practiced in 1918
had been poorly trained.

quack_doctor

Poor training and loose regulations meant that some doctors were
little more than quacks. [Credit: National Library of Medicine]

drug_ad

Drug advertisers routinely promised quick and painless cures.
[Credit: National Library of Medicine]

While access to the profession was tightening, women and minorities,
including African-Americans, entered the profession in growing
numbers during the early twentieth century.

What Did Doctors Really Know?

Growing understanding of bacteriology enabled early twentieth-
century physicians to diagnose diseases more effectively than their
predecessors but diagnosis continued to be difficult. Influenza was
especially tricky to diagnose and many physicians may have incorrectly
diagnosed their patients, especially in the early stages of the pandemic.

Bacteriology did not revolutionize the treatment of disease. In the
pre-antibiotic era of 1918, physicians continued to rely heavily
on traditional therapeutics. During the pandemic, many physicians
used traditional treatments such as sweating which had their
roots in humoral medicine.

Reflecting the uneven structure of medical education, the level and
quality of care which physicians provided varied wildly.

The Public Health Service

Founded in 1798, the Marine Hospital Service originally provided
health care for sick and disabled seamen. By the late nineteenth
century, the growth of trade, travel and immigration networks
had led the Service to expand its mission to include protecting
the health of all Americans.

In a nation where federal and state authorities had consistently
battled for supremacy, the powers of the Public Health Service
were limited. Viewed with suspicion by many state and local
authorities, PHS officers often found themselves fighting state
and local authorities as well as epidemics—even when they had
been called in by these authorities.

chelsea marine hospital in 1918

A network of hospitals in the nation’s ports provided seamen with
access to healthcare. [Credit: Office of the Public Health Service Historian]

In 1918, there were fewer than 700 commissioned officers in the PHS.
Charged with the daunting task of protecting the health of some
106 million Americans, PHS officers were stationed not only in
the United States but also abroad.

Because few diseases could be cured, the prevention of disease
was central to the PHS mission. Under the leadership of Surgeon
General Rupert Blue, the PHS advocated the use of scientific
research, domestic and foreign quarantine, marine hospitals
and statistics to accomplish this mission. When an epidemic emerged,
the Public Health Service’s epidemiologists tracked the disease,
house by house. The 1918 influenza pandemic occurred too
rapidly for the PHS to develop a detailed study of the pandemic.

typhoid_map

This map was used to trace a smaller typhoid epidemic which erupted in
Washington, DC in 1906. [Credit: Office of the Public Health Service Historian]

The spread of disease within the US was a serious concern. However,
PHS officers were most concerned about the importation of disease into
the United States. To prevent this, ships could be, and often were,
quarantined by the PHS.

fever-quaranteen-station-1880

Travelers and immigrants to the United States were also required
to undergo a medical exam when entering the country. In 1918 alone,
700,000 immigrants underwent a medical exam at the hands of PHS
officers. Within the United States, PHS officers worked directly with
state and local departments of health to track, prevent and arrest
epidemics as they emerged. During 1918, PHS officers found themselves
battling not only influenza but also polio, typhus, typhoid, smallpox
and a range of other diseases. In 1918, the PHS operated research
laboratories stretching from Hamilton, Montana to Washington DC.
Scientific researchers at these laboratories ultimately discovered
both the causes and cures of diseases ranging from Rocky Mountain
Spotted Fever to pellagra.

Sewers and Sanitation:

In the nineteenth century, most physicians and public health experts
believed that disease was caused not by microorganisms but rather by dirt itself.

Sanitarians, as these people were called, argued that cleaning dirt-
infested cities and building better sewage systems would both prevent
and end many epidemics. At their urging, cities and towns across the United
States built better sewage systems and provided citizens with access to
clean water. By 1918, these improved water and sewage systems had greatly
contributed to a decline in gastrointestinal infections and a significant
reduction in mortality rates among infants, children and young adults.

But because diseases are caused by microorganisms, not dirt, these
tactics were not completely effective in ending all epidemics.

Sanitation: Controlling problems at source

Box 1: Sharing toilets in Uganda

A recent survey by the Ministry of Health in Uganda suggested that there is only one toilet for every 700 Ugandan pupils, compared to one for every 328 pupils in 1995. Of the 8000 schools surveyed, only 33% have separate latrines for girls. The deterioration in sanitary conditions was attributed to increased enrolment in schools. UNICEF surveyed 90 primary schools in crisis-affected districts of north and west Uganda: only 2% had adequate latrine facilities (IRIN, 1999).

Box 2: Sanitation and diarrhoeal disease

Gwatkin and Guillot (1999) have claimed that diarrhoea accounts for 11% of all deaths in the poorest 20% of all countries. This toll could be reduced by key measures: better sanitation to reduce the causes of water-linked diarrhoea, and more widespread use of oral rehydration therapy (ORT) to treat its effects. Improving water supplies, sanitation facilities and hygiene practices reduces diarrhoea incidence by 26%. Even more impressive, deaths due to diarrhoea are reduced by 65% with these same improvements (Esrey et al., 1991). Of the 2.2 million people that die from diarrhoea each year, many of those deaths are caused by one bacterium, Shigella. Simple hand washing with soap and water reduces Shigella and other diarrhoea transmission by 35% (Kotloff et al., 1999; Khan, 1982). ORT is effective in reducing deaths due to diarrhoea but does not prevent it.
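
A rough back-of-the-envelope calculation, sketched below in Python, shows how the figures quoted in Box 2 combine; it is illustrative arithmetic only, not an epidemiological model, and uses only the numbers cited above.

# Figures quoted in Box 2 above
annual_diarrhoea_deaths = 2_200_000   # deaths per year worldwide
death_reduction = 0.65                # reduction in deaths with better water, sanitation and hygiene (Esrey et al., 1991)
handwashing_transmission_cut = 0.35   # reduction in Shigella and other diarrhoea transmission (Kotloff et al., 1999; Khan, 1982)

deaths_potentially_averted = annual_diarrhoea_deaths * death_reduction
print(f"Deaths potentially averted each year by water, sanitation and hygiene improvements: "
      f"{deaths_potentially_averted:,.0f}")   # roughly 1.4 million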

http://www.who.int/water_sanitation_health/sanitproblems/en/index1.html

Garbage-A-polluted-creek

Influenza Strikes

Throughout history, influenza viruses have mutated and caused
pandemics or global epidemics. In 1890, an especially virulent influenza
pandemic struck, killing many Americans. Those who survived that
pandemic and lived to experience the 1918 pandemic tended to be
less susceptible to the disease.

The sections that follow trace the pandemic from Kansas to Europe and back again: wave after wave, the unfolding of the pandemic, mobilizing to fight influenza, the pandemic hits, protecting yourself, communication, and the fading of the pandemic.

Influenza ward

When it came to treating influenza patients, doctors, nurses and
druggists were at a loss. [Credit: Office of the Public Health Service Historian]

The influenza pandemic of 1918-1919 killed more people than the
Great War, known today as World War I (WWI): somewhere
between 20 and 40 million people. It has been cited as the most
devastating epidemic in recorded world history. More people died of
influenza in a single year than in four years of the Black Death bubonic
plague from 1347 to 1351. Known as “Spanish Flu” or “La Grippe,”
the influenza of 1918-1919 was a global disaster.

Grim Reaper

The Grim Reaper by Louis Raemaekers

In the fall of 1918 the Great War in Europe was winding down and
peace was on the horizon. The Americans had joined in the fight,
bringing the Allies closer to victory against the Germans. Deep within
the trenches these men lived through some of the most brutal conditions
of life, which it seemed could not be any worse. Then, in pockets
across the globe, something erupted that seemed as benign as the
common cold. The influenza of that season, however, was far more
than a cold. In the two years that this scourge ravaged the earth,
a fifth of the world’s population was infected. The flu was most deadly
for people ages 20 to 40. This pattern of morbidity was unusual for
influenza which is usually a killer of the elderly and young children.
It infected 28% of all Americans (Tice). An estimated 675,000
Americans died of influenza during the pandemic, ten times as
many as in the world war. Of the U.S. soldiers who died in Europe,
half of them fell to the influenza virus and not to the enemy (Deseret
News). An estimated 43,000 servicemen mobilized for WWI died
of influenza (Crosby). 1918 would go down as an unforgettable year
of suffering and death and yet of peace. As noted in the Journal
of the American Medical Association’s final edition of 1918: “The year 1918
has gone: a year momentous as the termination of the most cruel war
in the annals of the human race; a year which marked, the end at
least for a time, of man’s destruction of man; unfortunately a year in
which developed a most fatal infectious disease causing the death
of hundreds of thousands of human beings. Medical science for
four and one-half years devoted itself to putting men on the firing
line and keeping them there. Now it must turn with its whole might to
combating the greatest enemy of all–infectious disease,” (12/28/1918).

From Kansas to Europe and Back Again:

scourge ravaged the earth

Where did the 1918 influenza come from? And why was it so lethal?

In 1918, the Public Health Service had just begun to require state
and local health departments to provide them with reports about
diseases in their communities. The problem? Influenza wasn’t
a reportable disease.

But in early March of 1918, officials in Haskell County in Kansas
sent a worrisome report to the Public Health Service. Although
these officials knew that influenza was not a reportable disease,
they wanted the federal government to know that “18 cases
of influenza of a severe type” had been reported there.

By May, reports of severe influenza trickled in from Europe. Young
soldiers, men in the prime of life, were becoming ill in large
numbers. Most of these men recovered quickly but some developed
a secondary pneumonia of “a most virulent and deadly type.”

Within two months, influenza had spread from the military to the
civilian population in Europe. From there, the disease spread outward—to Asia, Africa, South America and, back again, to North America.

Wave After Wave:

In late August, the influenza virus probably mutated again and
epidemics now erupted in three port cities: Freetown, Sierra
Leone; Brest, France; and Boston, Massachusetts. In Boston,
dockworkers at Commonwealth Pier reported sick in massive
numbers during the last week in August. Suffering from fevers
as high as 105 degrees, these workers had severe muscle and
joint pains. For most of these men, recovery quickly followed. But
5 to 10% of these patients developed severe and massive
pneumonia. Death often followed.

Public health experts had little time to register their shock at the
severity of this outbreak. Within days, the disease had spread
outward to the city of Boston itself. By mid-September, the epidemic
had spread even further with states as far away as California, North
Dakota, Florida and Texas reporting severe epidemics.

The Unfolding of the Pandemic:

The pandemic of 1918-1919 occurred in three waves. The first
wave had occurred when mild influenza erupted in the late
spring and summer of 1918. The second wave occurred with an
outbreak of severe influenza in the fall of 1918 and the final wave
occurred in the spring of 1919.

In its wake, the pandemic would leave about twenty million dead
across the world. In America alone, about 675,000 people in
a population of 105 million would die from the disease.


Mobilizing to Fight Influenza:

Although taken unaware by the pandemic, federal, state and local
authorities quickly mobilized to fight the disease.

On September 27th, influenza became a reportable disease. However,
influenza had become so widespread by that time that most states
were unable to keep accurate records. Many simply failed to
report to the Public Health Service during the pandemic, leaving
epidemiologists to guess at the impact the disease may have
had in different areas.

World War I had left many communities with a shortage of trained
medical personnel. As influenza spread, local officials urgently
requested the Public Health Service to send nurses and doctors.
With less than 700 officers on duty, the Public Health Service was
unable to meet most of these requests. On the rare occasions when
the PHS was able to send physicians and nurses, they often became
ill en route. Those who did reach their destination safely often found
themselves both unprepared and unable to provide real assistance.

In October, Congress appropriated a million dollars for the Public
Health Service. The money enabled the PHS to recruit and pay
for additional doctors and nurses. The existing shortage of doctors
and nurses, caused by the war, made it difficult for the PHS to locate and hire qualified practitioners. The virulence of the disease also meant that many nurses and doctors contracted influenza
within days of being hired.

Confronted with a shortage of hospital beds, many local officials
ordered that community centers and local schools be transformed
into emergency hospitals. In some areas, the lack of doctors meant
that nursing and medical students were drafted to staff these
makeshift hospitals.

The Pandemic Hits:

Entire families became ill. In Philadelphia, a city especially hard hit,
so many children were orphaned that the Bureau of Child Hygiene
found itself overwhelmed and unable to care for them.

As the disease spread, schools and businesses emptied. Telegraph
and telephone services collapsed as operators took to their
beds. Garbage went uncollected as garbage men reported sick.
The mail piled up as postal carriers failed to come to work.

State and local departments of health also suffered from high
absentee rates. No one was left to record the pandemic’s spread
and the Public Health Service’s requests for information went
unanswered.

As the bodies accumulated, funeral parlors ran out of caskets
and bodies went uncollected in morgues.

Protecting Yourself From Influenza:

In the absence of a sure cure, fighting influenza seemed an
impossible task.

In many communities, quarantines were imposed to prevent
the spread of the disease. Schools, theaters, saloons, pool
halls and even churches were all closed. As the bodies
mounted, even funerals were held outdoors to protect mourners
against the spread of the disease.

An Emergency Hospital for Influenza Patients

The effect of the influenza epidemic was so severe that the
average life span in the US was depressed by 10 years.
The influenza virus had a profound virulence, with a mortality
rate of 2.5%, compared to previous influenza epidemics in which
it was less than 0.1%. The death rate from influenza and
pneumonia among 15 to 34-year-olds was 20 times higher in 1918 than in
previous years (Taubenberger). People were struck
with illness on the street and died rapid deaths.
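
The figures quoted in this account can be roughly cross-checked against one another; the short Python sketch below is back-of-the-envelope arithmetic only, using the attack rate, population and death toll cited elsewhere in this piece (28% of Americans infected, a population of about 105 million, and roughly 675,000 deaths).

# Approximate figures quoted in this account
us_population_1918 = 105_000_000
attack_rate = 0.28        # share of Americans infected (Tice)
us_deaths = 675_000       # estimated American deaths

estimated_infected = us_population_1918 * attack_rate
implied_case_fatality = us_deaths / estimated_infected
print(f"Estimated infected Americans: {estimated_infected:,.0f}")
print(f"Implied case fatality: {implied_case_fatality:.1%}")  # about 2.3%, close to the ~2.5% mortality rate cited above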

One anecdote shared of 1918 was of four women playing bridge
together late into the night. Overnight, three of the women died
from influenza (Hoagg). Others told stories of people on their way
to work suddenly developing the flu and dying within hours
(Henig). One physician writes that patients with seemingly
ordinary influenza would rapidly “develop the most viscous
type of pneumonia that has ever been seen” and later when
cyanosis appeared in the patients, “it is simply a struggle for air
until they suffocate,” (Grist, 1979). Another physician recalls
that the influenza patients “died struggling to clear their airways
of a blood-tinged froth that sometimes gushed from their nose
and mouth,” (Starr, 1976). The physicians of the time were
helpless against this powerful agent of influenza. In 1918 children
would skip rope to the rhyme (Crawford):

I had a little bird,

Its name was Enza.

I opened the window,

And in-flu-enza.

schools inspected

The influenza pandemic circled the globe. Most of humanity felt the
effects of this strain of the influenza virus. It spread following
the path of its human carriers, along trade routes and shipping lines.
Outbreaks swept through North America, Europe, Asia, Africa, Brazil
and the South Pacific (Taubenberger). In India the mortality rate was
extremely high at around 50 deaths from influenza per 1,000
people (Brown). The Great War, with its mass movements of men
in armies and aboard ships, probably aided in its rapid diffusion
and attack. The origins of the deadly flu disease were unknown but
widely speculated upon. Some of the allies thought of the epidemic as a
biological warfare tool of the Germans. Many thought it was a result of
the trench warfare, the use of mustard gases and the generated “smoke
and fumes” of the war. A national campaign began using the ready
rhetoric of war to fight the new enemy of microscopic proportions. A
study attempted to reason why the disease had been so devastating
in certain localized regions, looking at the climate, the weather and
the racial composition of cities. They found humidity to be linked with
more severe epidemics as it “fosters the dissemination of the bacteria,”
(Committee on Atmosphere and Man, 1923). Meanwhile the new
sciences of the infectious agents and immunology were
racing to come up with a vaccine or therapy to stop the epidemics.

The experiences of people in military camps encountering the
influenza pandemic are recorded in an excerpt from the memoirs of a
survivor of the pandemic at Camp Funston, in a letter to a fellow physician
describing conditions during the influenza epidemic at Camp Devens,
and in a collection of letters of a soldier stationed at Camp Funston.

The origins of this influenza variant are not precisely known. It is thought
to have originated in China in a rare genetic shift of the influenza virus.
The recombination of its surface proteins created a virus novel to
almost everyone and a loss of herd immunity. Recently the virus
has been reconstructed from the tissue of a dead soldier and is
now being genetically characterized.

The name of Spanish Flu came from the early affliction and large
mortalities in Spain (BMJ, 10/19/1918), where it allegedly killed 8
million in May (BMJ, 7/13/1918). However, a first wave of influenza
appeared early in the spring of 1918 in Kansas and in military
camps throughout the US. Few noticed the epidemic in the midst of
the war. Wilson had just given his Fourteen Points address. There was
virtually no response or acknowledgment to the epidemics in March
and April in the military camps. It was unfortunate that no steps were
taken to prepare for the usual recrudescence of the virulent influenza
strain in the winter. The lack of action was later criticized when the
epidemic could not be ignored in the winter of 1918 (BMJ, 1918).
These first epidemics at training camps were a sign of what was
coming in greater magnitude in the fall and winter of 1918 to the
entire world.

The war brought the virus back into the US for the second wave
of the epidemic. It first arrived in Boston in September of 1918
through the port busy with war shipments of machinery and supplies.
The war also enabled the virus to spread and diffuse. Men across
the nation were mobilizing to join the military and the cause. As they
came together, they brought the virus with them and to those they
contacted. The virus killed almost 200,000 in October of 1918
alone. On November 11, 1918, the end of the war enabled a resurgence.
As people celebrated Armistice Day with parades and large parties, a
complete disaster from the public health standpoint, a rebirth of
the epidemic occurred in some cities. The flu that winter was beyond
imagination as millions were infected and thousands died. Just as
the war had affected the course of influenza, influenza affected
the war. Entire fleets were ill with the disease and men on the front
were too sick to fight. The flu was devastating to both sides, killing
more men than their own weapons could.

With the military patients coming home from the war with battle wounds
and mustard gas burns, hospital facilities and staff were taxed
to the limit. This created a shortage of physicians, especially in the
civilian sector as many had been lost for service with the military.
Since the medical practitioners were away with the troops, only
the medical students were left to care for the sick. Third and fourth
year classes were closed and the students assigned jobs as
interns or nurses (Starr, 1976). One article noted that “depletion has
been carried to such an extent that the practitioners are brought
very near the breaking point,” (BMJ, 11/2/1918). The shortage was
further compounded by the added loss of physicians to the epidemic.
In the U.S., the Red Cross had to recruit more volunteers to contribute
to the new cause at home of fighting the influenza epidemic. To respond
with the fullest utilization of nurses, volunteers and medical supplies, the
Red Cross created a National Committee on Influenza. It was involved
in both military and civilian sectors to mobilize all forces to fight Spanish
influenza (Crosby, 1989). In some areas of the US, the nursing shortage
was so acute that the Red Cross had to ask local businesses to
allow workers to have the day off if they volunteered in the hospitals
at night (Deseret News). Emergency hospitals were created to
take in the patients from the US and those arriving sick from overseas.

chelsea marine hospital in 1918

red_cross_public_health_nurse

The pandemic affected everyone. With one-quarter of the US and
one-fifth of the world infected with the influenza, it was impossible
to escape from the illness. Even President Woodrow Wilson suffered
from the flu in early 1919 while negotiating the crucial Treaty of
Versailles to end the World War (Tice). Those who were
lucky enough to avoid infection had to deal with the public health
ordinances to restrain the spread of the disease.

The public health departments distributed gauze masks to be worn
in public. Stores could not hold sales; funerals were limited
to 15 minutes. Some towns required a signed certificate to
enter and railroads would not accept passengers without
them. Those who ignored the flu ordinances had to pay steep
fines enforced by extra officers (Deseret News). Bodies piled up
as the massive deaths of the epidemic ensued. Besides the
lack of health care workers and medical supplies, there was a shortage
of coffins, morticians and gravediggers (Knox). The conditions in 1918
were not so far removed from the Black Death in the era of the
bubonic plague of the Middle Ages.

iowa_flu

In 1918-19 this deadly influenza pandemic erupted during the final
stages of World War I. Nations were already attempting to deal with
the  effects and costs of the war. Propaganda campaigns and war
restrictions and rations had been implemented by governments.
Nationalism pervaded as people accepted government authority.
This allowed the public health departments to easily step in and
implement their restrictive measures. The war also gave science
greater importance as governments relied on scientists, now armed
with the new germ theory and the development of antiseptic surgery,
to design vaccines and reduce mortalities of disease and battle
wounds. Their new technologies could preserve the men on
the front and ultimately save the world. These conditions
created by World War I, together with the current social attitudes
and ideas, led to the relatively calm response of the public and
application of scientific ideas. People allowed for strict measures
and loss of freedom during the war as they submitted to the
needs of the nation ahead of their personal needs. They had
accepted the limitations placed with rationing and drafting.
The responses of the public health officials reflected the new
allegiance to science and the wartime society. The medical
and scientific communities had developed new theories and
applied them to prevention, diagnostics and treatment of the
influenza patients.

The Medical and Scientific Conceptions of Influenza

Scientific ideas about influenza, the disease and its origins,
shaped the public health and medical responses. In 1918
infectious diseases were beginning to be unraveled. Pasteur
and Koch had solidified the germ theory of disease through
clear experiments and clever science. The bacilli responsible
for infections such as tuberculosis and anthrax had
been visualized, isolated and identified. Koch’s postulates
had been developed to clearly link a disease to a specific
microbial agent.

Robert Koch

The petri dish was widely used to grow sterile cultures of bacteria
and investigate bacterial flora. Vaccines had been created for
bacterial infections and even the unseen rabies virus by
serial passage techniques. The immune system was explained by
Paul Ehrlich and his side-chain theory. Tests of antibodies such as the
Wassermann and coagulation experiments were becoming commonplace.
Science and medicine were on their way to their complete entanglement
and fusion as scientific principles and methodologies made their way
into clinical practice, diagnostics and therapy.

The Clinical Descriptions of Influenza

Patients with the influenza disease of the epidemic were generally
characterized by common complaints associated with the flu. They had
body aches, muscle and joint pain, headache, a sore throat and an
unproductive cough with occasional harsh breathing (JAMA, 1/25/1919).

The most common sign of infection was the fever, which ranged from
100 to 104 F and lasted for a few days. The onset of the epidemic influenza
was peculiarly sudden, as people were struck down with dizziness, weakness
and pain while on duty or in the street (BMJ, 7/13/1918). After  the
disease was established the mucous membranes became reddened
with sneezing. In some cases there was a hemorrhage of the
mucous membranes of the nose and bloody noses were commonly
seen. Vomiting occurred on occasion, and also sometimes diarrhea
but more commonly there was constipation (JAMA, 10/3/1918).

The danger of an influenza infection was its tendency to progress into
the often fatal secondary bacterial infection of pneumonia. In the
patients that did not rapidly recover after three or four days of fever, there
was an “irregular pyrexia” due to bronchitis or bronchopneumonia (BMJ,
7/13/1918). The pneumonia would often appear after a period of
normal temperature with a sharp spike and expectoration of bright
red blood. The lobes of the lung became speckled with “pneumonic
consolidations.” The fatal cases developed toxemia and vasomotor
depression (JAMA, 10/3/1918). It was this tendency for secondary
complications that made this influenza infection so deadly.

pneumonia

hospital ward in 1918

A military hospital ward in 1918

In the medical literature characterizing the influenza disease, new
diagnostic techniques were frequently used to describe the clinical
appearance. The most basic clinical guideline was the temperature,
a record of which was kept in a table over time. Also closely
monitored was the pulse rate. One clinical account said that
“the pulse was remarkably slow,” (JAMA, 4/12/1919) while others
noted that the pulse rate did not increase as expected. With the
pulse, the respiration rate was measured and reported to provide
clues of the clinical progression.
Patients were also occasionally “roentgenographed,” or chest x-rayed
(JAMA, 1/25/1919). The discussion of clinical influenza also often
included analysis of the blood. The number of white blood cells was
counted for many patients. Leukopenia was commonly associated
with influenza. The albumin was also measured, since it was noted that
transient albuminuria was frequent in influenza patients. This was
done by urine analysis. The Wassermann reaction was another
added new test of the blood for antibodies (JAMA, 10/3/1918).
These new measurements enabled physicians to have an
image of action and knowledge using scientific instruments. They
could record precisely the progress of the influenza infection and perhaps
were able to forecast its outcome.

The most novel of these tests were the blood and sputum cultures.
Building on the germ theory of disease, the physicians and their
associated research scientists attempted to find the culprit for this
deadly infection. Physicians would commonly order both blood and sputum
cultures of their influenza and pneumonia patients mostly for research
and investigative purposes. At the military training camp
Camp Lewis during an influenza epidemic, “in all cases of pneumonia,
a sputum study, white blood and differential count, blood culture
and urine examinations were made as routine,” (JAMA, 1/25/1919).

The bacterial flora of the nasopharynx of some patients was also cultured
since the disease was disseminated by droplet infection. The
collected swabs and specimens were inoculated onto blood agar in
petri dishes. The bacterial colonies that grew were closely studied to
find the causal organism. Commonly found were pneumococcus,
streptococcus, staphylococcus and Bacillus influenzae (JAMA, 4/12/1919).

pneumonia

These new laboratory tests used in the clinical setting brought in a solid
scientific, biological link to the practice of medicine. Medicine had
become fully scientific and technologic in its understanding and
characterization of the influenza epidemic.

Treatment and Therapy

The therapeutic remedies for influenza patients varied from the
newly developed drugs to oils and herbs. The therapy was much less
scientific than the diagnostics, as the drugs had no clear explanatory
theory of action. The treatment was largely symptomatic, aiming to
reduce fever or pain. Aspirin, or acetylsalicylic acid, was a common remedy.
For secondary pneumonia, doses of epinephrine were given. To
combat the cyanosis, physicians gave oxygen by mask or some
injected it under the skin (JAMA, 10/3/1918). Others used salicin, which
reduced pain, discomfort and fever and was claimed to reduce the infectivity
of the patient. Another popular remedy was cinnamon in powder or oil form
with milk to reduce temperature (BMJ, 10/19/1918). Finally, salts of quinine
were suggested as a treatment. Most physicians agreed that the patient should
be kept in bed (BMJ, 7/13/1918). With that came the advice of plenty of
fluids and nourishment. The application of cold to the head, with
warm packs or warm drinks was also advised. Warm baths were used
as a hydrotherapeutic method in hospitals but were discarded for
lack of success (JAMA, 10/3/1918). These treatments, like the
suggested prophylactic measures of the public health officials, seemed to
originate in the common social practices and not in the growing field of
scientific medicine. It seems that as science was entering the medical
field, it served only for explanatory, diagnostic and preventative
measures such as vaccines and technical tests. This science had
little use once a person was ill.

However, a few proposed treatments did incorporate scientific ideas
of germ theory and the immune system. O’Malley and Hartman
suggested treating influenza patients with the serum of convalescent
patients. They utilized the theorized antibodies to boost the immune
system of sick patients. Other treatments were “digitalis,” the
administration of isotonic glucose and sodium bicarbonate intravenously
which was done in military camps (JAMA, 1/4/1919). Ross and
Hund too utilized ideas about the immune system and properties of the
blood to neutralize toxins and circulate white blood cells. They believed
that the best treatment for influenza should aim to: “…neutralize or render
the intoxicant inert…and prevent the blood destruction with its destructive
leukopenia and lessened coagulability,” (JAMA, 3/1/1919). They tried
to create a therapeutic immune serum to fight infection. These therapies
built on current scientific ideas and represented the highest
biomedical, technological treatment like the antitoxin to diphtheria.

influenza

In July, an American soldier said that while influenza caused a heavy
fever, it “usually only confines the patient to bed for a few days.” The
mutation of the virus changed all that. [Credit: National Library of Medicine]

recovering_from_influenza

An old cliché maintained that influenza was a wonderful disease as
it killed no one but provided doctors with lots of patients. The 1918
pandemic turned this saying on its head. [Credit: The Etiology of
Influenza in 1918]

During the 1890 influenza epidemic, Pfeiffer found what he
determined to be the microbial agent to cause influenza.
In the sputum and respiratory tract of influenza patients in 1892,
he isolated the bacterium Bacillus influenzae, which was
accepted as the true “virus” though it was not found in localized
outbreaks (BMJ, 11/2/1918). However, in studies of the 1907-8
epidemic in the US, Lord had found the bacillus in only 3 of 20 cases.
He also found the bacillus in 30% of cultures of sputum from TB patients.
Rosenthal further refuted the finding when he found the bacillus in 1 of 6
healthy people in 1900 (JAMA, 1/18/1919). The bacillus was also
found to be present in all cases of whooping cough and many cases
of measles, chronic bronchitis and scarlet fever (JAMA, 10/5/1918).
The influenza pandemic provided scientists the opportunity to confirm
or refute this contested microbe as the cause of influenza. The sputum
studies from the Camp Lewis epidemic found only a few influenza cases
harboring the influenza bacilli and mostly type IV pneumococcus. They
concluded that “the recent epidemic at Camp Lewis was an acute
respiratory infection and not an epidemic due to Bacillus influenzae,”
(JAMA, 1/25/1919). This finding along with others suggested to most
scientists that the Pfeiffer’s Bacillus was not the cause of influenza.

In the 1918-19 influenza pandemic, there was a great drive to find the
etiological agent responsible for the deadly scourge. Scientists in their
labs were working hard, using the cultures obtained from physician clinics,
to isolate the etiological agent for influenza. As a report early in the
epidemic said, “the ‘influence’ of influenza is still veiled in mystery,”
(JAMA, 10/5/1918). The nominated Bacillus influenzae
seemed to be incorrect and scientists scrambled to isolate the true cause.
In the journals, many authors speculated on the type of agent: was
it a new microbe, was it a bacterium, was it a virus? One journal offered
that “the severity of the present pandemic, the suddenness of onset…
led to the suggestion that the disease cannot be influenza but some other
and more lethal infection,” (BMJ, 11/2/1918). However, most accepted that
the epidemic disease was influenza based on the familiar symptoms
and known pattern of disease. The respiratory disease of influenza was
understood to give warning in the late spring of its potential effects
upon its recrudescence once the weather turned cold in the winter
(BMJ, 10/19/1918). One article with foresight stated that “there can
be no question that the virus of influenza is a living organism…

flu virus EM

it is possibly beyond the range of microscopic vision,” (BMJ, 11/16/1918). Another
article confirmed the idea of an “undiscovered virus” and noted that pneumococci
and streptococci were responsible for “the gravity of the secondary pulmonary
complications,” (BMJ, 11/2/1918). The article went on to offer the idea of a
symbiosis of virus and secondary bacterial infection combining to make it
such a severe disease.

As they attempted to find the agent responsible for the influenza
pandemic, the investigators were developing ideas of infectious microbes and the concept of the
virus. The idea of the virus as an infectious agent had been around for years.
The articles of the period refer to the “virus” in their discussion but do not
consistently use it to mean an infectious microbe distinct from bacteria. The
term virus had the same usage and application as bacillus. In 1918, a virus
was defined scientifically to be a submicroscopic infectious entity which could
be filtered but not grown in vitro. In the 1880s Pasteur developed an attenuated
vaccine for the rabies virus by serial passage, well ahead of his time. Ivanovski’s
work on the tobacco mosaic virus in the 1890s led to the discovery of the virus.
He found an infectious agent that acted as a micro-organism as it multiplied
yet which passed through the sterilizing filter as a nonmicrobe. By the 1910s
several viruses, defined as filterable infectious microbes, had been identified
as causing infectious disease (Hughes). However, the scientists were still
conceptually behind in defining a virus; they distinguished it only by size
from bacteria and not as an obligate parasite with a distinct life cycle
dependent on infecting a host cell.

The influenza epidemic afforded the opportunity to research the etiological
agent and develop the idea of the virus. Experiments by Nicolle and Le Bailly in
Paris were the earliest suggestions that influenza was caused by a “filter-passing
virus,” (BMJ, 11/2/1918). They filtered out the bacteria from bronchial expectoration
of an influenza patient and injected the filtrate into the eyes and nose of two monkeys.
The monkeys developed a fever and a marked depression. The filtrate was later
administered subcutaneously to a volunteer, who developed typical signs of influenza.
They reasoned that the inoculated person developed influenza from the filtrate since
no one else in their quarters developed influenza (JAMA, 12/28/1918). These scientists
followed Koch’s postulates as they isolated the causal agent from patients with the
illness and used it to reproduce the same illness in animals. Through these studies,
the scientists proved that influenza was due to a submicroscopic infectious agent
and not a bacterium, refuting the claims of Pfeiffer and advancing virology. They were
on their way to discerning the virus and characterizing the orthomyxoviruses that
lead to the disease of influenza.

These scientific experiments, which unraveled the cause of influenza, had immediate
preventive applications. They would assist in the effort to create an effective
vaccine to prevent influenza. This was the ultimate goal of most studies, since
vaccines were thought to be the best preventative solution in the early 20th century.
Several experiments attempted to produce vaccines, each with a different
understanding of the etiology of fatal influenza infection. A Dr. Rosenow invented
a vaccine to target the multiple bacterial agents involved from the serum of patients.
He aimed to raise immunity against the bacteria, the “common causes of death,”
and not against the cause of the initial symptoms, by inoculating with the proportions found
in the lungs and sputum (JAMA, 1/4/1919). The vaccines made for the British forces
took a similar approach and were “mixed vaccines” of pneumococcus and
lethal streptococcus. The vaccine development therefore focused on the culture
results of what could be isolated from the sickest patients and lagged behind the
scientific progress.

Fading of the Pandemic:

In November, two months after the pandemic had erupted, the Public Health Service
began reporting that influenza cases were declining.

Communities slowly lifted their quarantines. Masks were discarded. Schools were
re-opened and citizens flocked to celebrate the end of World War I.

The disease, however, continued to be a threat throughout the spring of 1919.

By the time the pandemic had ended, in the summer of 1919, nearly 675,000
Americans were dead from influenza. Hundreds of thousands more were orphaned
and widowed.

The Legacy of the Pandemic

No one knows exactly how many people died during the 1918-1919 influenza
pandemic. During the 1920s, researchers estimated that 21.5 million people died
as a result of the 1918-1919 pandemic. More recent estimates place
global mortality from the 1918-1919 pandemic at anywhere between 30 and 50
million. An estimated 675,000 Americans were among the dead.


The sections that follow cover research, the forgetting of the pandemic of 1918-1919, scientific milestones, and the twentieth-century influenza pandemics or global epidemics.
The Influenza Pandemic occurred in three waves in the United States throughout
1918 and 1919.

More Americans died from influenza than died in World War I. [Credit: National Library of Medicine]

All of these deaths caused a severe disruption in the economy. Claims against life
insurance policies skyrocketed, with one insurance company reporting a 745 percent
rise in the number of claims made. Small businesses, many of which had been unable to operate during the pandemic, went bankrupt.

Joseph goldberger

Joseph Goldberger, one of the leading researchers in the PHS, studied influenza
during the pandemic. But Goldberger had multiple interests and influenza research
became less important to him in the years following 1918. [Credit: Office of the Public
Health Service Historian]

In the summer and fall of 1919, Americans called for the government to research
both the causes and impact of the pandemic. In response, both the federal government
and private companies, such as Metropolitan Life Insurance, dedicated money
specifically for flu research.

In an attempt to determine the effect influenza had on different communities, the Public
Health Service conducted several small epidemiological studies. These studies,
however, were conducted after the pandemic and most PHS officers
admitted that the data which was collected was probably inaccurate.

PHS scientists continued to search for the causative agent of influenza in their
laboratories as did their fellow scientists in and outside the United States.

But while there was a burst of enthusiasm for funding flu research in
1918-1919, the funds allocated for this research were actually fairly meager.
As time passed, Americans became less interested in the pandemic and its
causes. And even when funding for medical research dramatically increased
after World War II, funding for research on the 1918-1919 pandemic remained
limited.

Forgetting the 1918-1919 Pandemic:

In the years following 1919, Americans seemed eager to forget the pandemic.
Given the devastating impact of the pandemic, the reasons for this forgetfulness
are puzzling.

It is possible, however, that the pandemic’s close association with World War I
may have caused this amnesia. While more people died from the pandemic than
from World War I, the war had lasted longer than the pandemic and caused
greater and more immediate changes in American society.

Influenza also hit communities quickly. Often it disappeared within a few weeks of
its arrival. As one historian put it, “the disease moved too fast, arrived, flourished
and was gone before…many people had time to fully realize just how great
was the danger.” Small wonder, then, that many Americans forgot about the
pandemic in the years which followed.

Scientific Milestones in Understanding and Preventing Influenza:

In the early stages of the pandemic, many scientists believed that the agent
responsible for influenza was Pfeiffer’s bacillus. Autopsies and research conducted
during the pandemic ultimately led many scientists to discard this theory.

In late October of 1918, some researchers began to argue that influenza was
caused by a virus. Although scientists had understood that viruses could cause
diseases for more than two decades, virology was still very much in its infancy at
this time.

It was not until 1933 that the influenza A virus, which causes almost every type
of endemic and pandemic influenza, was isolated. Seven years later, in 1940,
the influenza B virus was isolated. The influenza C virus was finally isolated in 1950.

Influenza vaccine was first introduced as a licensed product in the United States in
1944. Because of the rapid rate of mutation of the influenza virus, the
effectiveness of a given vaccine usually lasts for only a year or two.

By the 1950s, vaccine makers were able to prepare and routinely release vaccines
which could be used in the prevention or control of future pandemics. During the
1960s, increased understanding of the virus enabled scientists to develop both
more potent and purer vaccines.

Mass production of influenza vaccines continued, however, to require several
months’ lead time.

Twentieth-Century Influenza Pandemics or Global Epidemics:

The pandemic which occurred in 1918-1919 was not the only influenza pandemic
of the twentieth century. Influenza returned in a pandemic form in 1957-1958
and, again, in 1968-1969.

These two later pandemics were much less severe than the 1918-1919 pandemic.
Estimated deaths within the United States for these two later pandemics
were 70,000 excess deaths (1957-1958) and 33,000 excess deaths (1968-1969).

Tuberculosis

Mycobacterium tuberculosis was first identified in 1882 by Robert Koch and is one of almost 200 mycobacterial species that have been detected by molecular techniques. The genus Mycobacterium (given its own family, the Mycobacteriaceae, within the Actinobacteria) includes pathogens known to cause serious diseases in mammals, including tuberculosis (caused by the M. tuberculosis complex, MTBC) and leprosy (M. leprae). Mycobacteria are grouped as neither Gram-positive nor Gram-negative bacteria. MTBC consists of M. tuberculosis, M. bovis, M. bovis BCG (bacillus Calmette-Guérin), M. africanum, M. caprae, M. microti, M. canettii and M. pinnipedii, all of which are closely related genetically, with sequence divergence of only ∼0.01 to 0.03%, although phenotypic differences are present. Cells in the genus have a typical rod or slightly curved shape, with dimensions of 0.2 to 0.6 μm by 1 to 10 μm.

Mycobacterium tuberculosis has a waxy mycolic acid lipid complex coating on its cell surface. The cells are impervious to Gram staining, so a common staining procedure used is Ziehl-Neelsen (ZN) staining. The outer compartment of the cell wall contains lipid-linked polysaccharides, is water-soluble, and interacts with the immune system. The inner wall is impermeable. Mycobacteria have some unique qualities that are divergent from members of the Gram-positive group, such as the presence of mycolic acids in the cell wall.

MTBC and M. leprae replicate in the tissues of warm-blooded human hosts. This airborne pathogen is transmitted from a patient with active pulmonary tuberculosis by coughing. Droplet nuclei, approximately 1 to 5 μm in size, “meander” in the air and are transmitted to susceptible individuals by inhalation. Mycobacteria are incapable of replicating in or on inanimate objects. The risk of infection depends on the inhaled bacillary load, the level of infectiousness of the source, the proximity of contact and the immune competency of potential hosts. Because the inhaled droplets are so small, the infection penetrates the defense systems of the bronchi and reaches the terminal alveoli. Invading bacteria are then engulfed by alveolar macrophages and dendritic cells.

The cell-mediated immune response limits the multiplication of M. tuberculosis and halts infection. Infected individuals with strong immune systems are generally able to contain the infection within 2 to 8 weeks post-infection, when the active cell-mediated immune response stops further multiplication of M. tuberculosis. Tuberculosis infection shows several significant clinical manifestations in pulmonary and extra-pulmonary sites. Prolonged coughing, severe weight loss, night sweats, low-grade fever, dyspnoea and chest pain are the clinical symptoms of pulmonary infection.

Fort Bayard, N.M., T.B. service assignment

Fort Bayard, NM Post Hospital circa 1890

U.S. Army, General Hospital, Fort Bayard, New Mexico, General View

Tuberculosis, Private (Pvt.) Richard Johnson said, was “regarded as a much dreaded disease that was easily contracted by association.” In fact, so many hospital corpsmen requested transfers out that the Surgeon General established a policy that no such requests would be considered until after two years of service. Consequently, Johnson noted, “During my time there we had a high percentage of desertions.” For example, all four of the men who arrived with Johnson deserted within a year—“two of them,” he dryly observed, “owing me money.”

Four years later another young man arrived at Fort Bayard. He, too, remarked on the long journey by rail through the “desert waste of New Mexico,” and then the wagon ride over “dry desolate foothills,” to the post. But his reaction was different from Johnson’s. Capt. Earl Bruns moved from being a patient to a physician at the hospital. For Bruns Fort Bayard was “a veritable oasis in the desert, studded with shade trees, green lawns, shrubbery, and flowers.” He credited the hospital commander, Colonel (Col.) George E. Bushnell, writing that, “[i]n this one spot one man had made the desert bloom like a rose.”

Johnson’s and Bruns’ different views from 1904 and 1908, respectively, may reflect the fact that Johnson was healthy and assigned grudgingly to work at the tuberculosis hospital, whereas Bruns had few other options and came in hopes of regaining his health—or it may reflect the improvements Bushnell made during his first years in command. But every week for the more than twenty years that Fort Bayard was an Army tuberculosis hospital, workers and patients arrived with dread and foreboding, or joy and relief—or a mix of them all.

The approach Fort Bayard and George Bushnell took to tuberculosis was similar to how physicians manage the disease today in that it involved isolating the patient, treating the disease, and educating the patient and his family on how to maintain their health. The hospital offered patients sanctuary from the demands, fears, and prejudices regarding tuberculosis in the outside world. Fort Bayard treated tuberculosis patients with prolonged bed rest, fresh air, and a healthy diet, but undertaking this “rest treatment”—confining oneself to bed for months—proved difficult if not impossible for many patients. Fort Bayard also guided patients’ adaptation to new lifestyles as people with tuberculosis. Finally, Fort Bayard managed patients’ transition back to the outside world.

One of the most striking aspects of Fort Bayard was that many of the medical staff had tuberculosis themselves, including George Bushnell. Tuberculosis weakened Bushnell’s lungs and shaped his life in numerous ways. He tired easily, had to carefully monitor his health, and as Earl Bruns observed, “was never a well man.” Bushnell had active tuberculosis five times in his life: the fourth time in 1919 with a breakdown from the strain of wartime work, and the fifth and final illness in 1924 that led to his death at age 70. In 1911 he advised his superiors that “I did not consider myself strong enough to carry on the work of commanding this Hospital and keeping myself in condition for active duty.” The War Department generally required officers in poor physical condition to retire, but the Surgeon General secured a waiver for Bushnell, because “the interests of the service would suffer by his retirement.” After a leave of absence in 1909–10, Bushnell’s annual reports on the competency of his officers included his own name on the list of those competent for hospital duty, but “unfit for active field service.”

“What would our sanatorium movement and our anti-tuberculosis crusade amount to,” wrote tuberculosis expert Adolphus Knopf, “were it not for the labors of tuberculous physicians, or one-time tuberculous physicians, who, because of their infirmity, had become interested in tuberculosis?” Well-known leaders in the antituberculosis movement such as Edward Trudeau and Lawrence Flick established their sanatoriums after they recovered from tuberculosis in order to offer others the treatment. Twenty-one of the first thirty recipients of the Trudeau Medal, established in 1926 for outstanding work in tuberculosis, had the disease. James Waring, a tuberculosis physician who arrived at a Colorado Springs sanatorium on a stretcher in 1908, later wrote, “It has been my good fortune to serve three separate and extended ‘hitches’ as a ‘bed patient,’ the time so spent numbering in all about nine years.” He, like many physicians, saw his personal experience as an asset in his practice. The three key figures in the Army tuberculosis program during World War I were Bushnell, Bruns, and Gerald Webb of Colorado Springs who started a tuberculosis sanatorium after his wife died of the disease.

Bushnell turned tuberculosis into an asset for the Army Medical Department, making Fort Bayard a center of national expertise on the disease. His personal experience with chronic pulmonary tuberculosis gave him good rapport and credibility with many of his patients. Medical officer Earl Bruns wrote that “[H]e went among the patients and talked to them individually” and thereby provided “a living example of a cure due to rational treatment.” Bruns described how Bushnell spent his days attending to patients, carrying out administrative duties, and devoting hours to supervising the work in the gardens and grounds of Fort Bayard.

(Who’s Who in America, 1924-25; E. H. Bruns in American Review of Tuberculosis, June 1925; G. B. Webb in Outdoor Life, Sept. 1924; Lancet, Lond., 1924; Jour. Am. Med. Ass’n., 1924, p. 374.)

General George M. Sternberg

In addition to being an Army surgeon, Sternberg was also a noted bacteriologist who, in 1880, had translated Antoine Magnin’s The Bacteria, which presented the latest research in germ theory. Sternberg’s work helped prepare American physicians to understand Robert Koch’s 1882 announcement of the existence of the tubercle bacillus (Ott 1996:55). Over the next two decades Koch’s analysis gained converts, leading to the universally accepted belief that tuberculosis was a bacterial infection that could be diagnosed and then monitored by microscopic inspection of a patient’s sputum.

Sternberg was no doubt aware of the efforts of Edward Livingston Trudeau. Beginning in the 1870s, when he undertook his own recovery from consumption by withdrawing to the Adirondack Mountains, Trudeau had become an advocate of extended bed rest in remote, healthful environments. Quickly accepting Koch’s research, Trudeau argued that those afflicted by the tubercle bacillus could best be healed when removed from cities and placed under the care of physicians who carefully monitored their weight and sputum and who prescribed constant bed rest with exposure to fresh air. Preferring the term “sanatorium,” derived from the Latin word “to heal,” to “sanitarium,” derived from the Latin term for health, Trudeau founded his Adirondack Cottage Sanatorium at Saranac, New York, in 1885. This spawned the opening of hundreds of similar institutions throughout the country (Caldwell 1988:70).

In 1899, Fort Bayard remained within the Army under the auspices of the Army Medical Department. The Army’s decision to retain the fort, even after it had outlived its military usefulness, grew from the strong interest that General George M. Sternberg, Surgeon General of the Army, had in pulmonary tuberculosis and its treatment.
Sternberg was also aware of the relatively good health that the Army’s soldiers had enjoyed serving in the higher elevations of the American West. Members of Zebulon Pike’s expedition of 1810 and of Fremont’s exploratory parties of the 1840s had witnessed their health improve while in the Rocky Mountains.

………………………………………………………………………………………………………………………………………………..

Upon assuming command in 1904, Bushnell, who had studied botany for years, immediately began to plant flowers, shrubs, and trees. When President Theodore Roosevelt created the Gila Forest Reserve in 1905, Bushnell ensured that Fort Bayard, which adjoined the Reserve, was part of a government reforestation project. The first year alone the Forest Service gave the hospital 250 seedlings of Himalayan cedar and yellow pine. Bushnell also got approval to fence in land for pasturing dairy cattle and arranged to recultivate long-neglected garden plots. The first year he predicted that the garden would generate “about 1300 dollars worth of produce.” After the quartermaster located an underground water source, Bushnell redoubled his cultivation efforts, planting trees, flowers, and grass to mitigate the wind and dust, and “to beautify the Post.” In later years Bushnell successfully grew beans from ancient cave dwellers (Anasazi beans), and made a less successful effort to grow Giant Sequoia from California.28 By 1910 Fort Bayard had four acres of vegetable gardens, a greenhouse, an orchard of 200 fruit trees, and alfalfa fields and hay fields for the dairy herd of 115 Holsteins, which the Silver City Enterprise proclaimed “one of the finest in the west.” The hospital also raised all of the beef consumed at the hospital (thereby avoiding Daniel Appel’s purchasing problems) and consumed pork at small expense by feeding the pigs the waste food. The hospital laboratory raised its own Belgian hares and guinea pigs for experiments.

Bushnell oversaw years of construction at Fort Bayard. In the wake of Florence Nightingale’s writings, nineteenth-century sanitation practices stressed cleanliness and ventilation, giving rise to pavilion style hospitals, narrow one- or two-story buildings lined with windows to provide patients with ample ventilation. In March 1904, Bushnell sent the Surgeon General plans for an “open court building” in modified pavilion style (Figure 2-1).

Plan for tuberculosis patient ward, as designed by George E. Bushnell, providing fresh air porches for each patient, United States Army Tuberculosis Hospital in New Mexico.

The building consisted of a quadrangle of long, narrow dressing rooms around an open court with porches along both the exterior and interior of the building. The rooms could be used for sleeping in inclement weather and the porches allowed patients to seek sun or shade as they wished. Wide doors enabled the easy movement of beds between the rooms and the porches. “The object of this style of building is to facilitate sleeping out of doors, which is now considered so important in modern sanatoria for the treatment of tuberculosis,” Bushnell explained.

The United States escaped the cauldron of WWI until April 1917. But after years of trying to maintain neutrality, President Woodrow Wilson’s administration mobilized the nation to fight in the most deadly enterprise the world had ever seen. Modern industrialized warfare would kill millions of soldiers, sailors, and civilians and unleash disease and famine across the globe. Typhus flourished in Eastern Europe and a lethal strain of influenza exploded out of the Western Front in 1918, producing one of the worst pandemics in history. Although eclipsed by such fierce epidemics, tuberculosis also fed on the war.

Bushnell was ordered to the office of The Surgeon General on June 2, 1917, and placed in charge of the Division of Internal Medicine, and on June 13 there appeared S. G. O. Circular No. 20, Examinations for pulmonary tuberculosis in the military service, establishing a standard method of examination of the lungs for tuberculosis. Through his efforts a reexamination of all personnel already in the service was made by tuberculosis examiners, and about 24,000 were rejected on that score. He had charge of the location, construction, and administration of all Army tuberculosis hospitals, of which eight were built with a capacity of 8,000 patients.

With his relief from service in 1919 he took up his residence on a small farm at Bedford, Mass., where he prepared his Study of the Epidemiology of Tuberculosis (1920) and later Diseases of the Chest (1925) in collaboration with Dr. Joseph H. Pratt of Boston. As chief delegate of the National Tuberculosis Association he attended the first meeting of the International Union Against Tuberculosis in London in 1921. During the winter of 1922-23 he delivered a series of lectures on military medicine at Harvard University. In the summer of 1923 he moved to California and took up his residence at Pasadena.

………………………………………………………………………………………………………………………………………………..

In eighteen months the Selective Service registered twenty-five million men for the draft, examined ten million for military service, and enlisted more than four million soldiers, sailors, and Marines. To the dismay of many people, medical screening boards across the nation soon discovered that American men were not as strong and healthy as they had assumed. Of those eligible for military service, 30 percent were physically unfit; a number of them deemed ineligible to serve had tuberculosis. Therefore, in 1917 Surgeon General William Gorgas called George Bushnell to Washington, DC, to establish the Office of Tuberculosis in the Division of Internal Medicine, leaving Bushnell’s protégé, Earl Bruns, in charge of Fort Bayard. Given the Medical Department’s mission to maintain a strong and healthy fighting force, Bushnell’s new job was to minimize the incidence of tuberculosis among active-duty soldiers and avoid the high cost of disability pensions for men who incurred the disease during military service. It was a tall order.

Wartime tuberculosis had already received attention in 1916, when reports circulated that the French army had sent home 86,000 men with the disease, raising the specter that life in the trenches would generate hundreds of thousands of cases. One investigator found that tuberculosis rates in the British army were double those in peacetime, reversing the prewar downward trend. The head of the New York City Public Health Department, Hermann Biggs, declared that “tuberculosis
offers a problem of stupendous magnitude in France.” Subsequent studies revealed that only 20 percent or less of the French soldiers sent home with tuberculosis actually had the disease; others were either misdiagnosed or had had tuberculosis prior to entering the military and therefore had not contracted it in the trenches. The reports nevertheless galvanized public health officials to address the tuberculosis problem. The Rockefeller Foundation, for example, in cooperation with the American Red Cross, established a Commission for the Prevention of Tuberculosis in France to help the French and protect any Americans from contracting tuberculosis “over there.”

Bushnell established four “tuberculosis screens” by (1) examining all volunteers and draftees before enlistment, (2) checking recruits again in the training camps, (3) examining soldiers already in the Army for tuberculosis, and (4) screening military personnel at discharge to ensure they returned to civil life in sound condition. To implement these activities, Bushnell developed a protocol under which physicians could quickly examine men for tuberculosis as part of the larger physical examination process. He standardized the procedures for examinations throughout the Army, and crafted a narrow definition of what constituted a tuberculosis diagnosis to enable the Army to enlist as many young men as possible. Despite these efforts, soldiers developed active cases of tuberculosis throughout the war. Bushnell’s office also created eight more tuberculosis hospitals in the United States and designated three hospitals with the American Expeditionary Forces (AEF) in France to care for soldiers who developed active tuberculosis in the camps and trenches. Short of resources and knowledge, however, the Army Medical Department at times struggled just to provide beds for tuberculosis patients, let alone deliver the individual care Bushnell and his staff had provided at Fort Bayard before the war.

Overburdened medical personnel worked long hours, in often poor conditions. Thousands of tuberculosis patients resented the diagnosis and protested the conditions in which at times they were virtually warehoused. The draft, which brought millions of young men into government control and responsibility, also exposed the Army Medical Department to public scrutiny. Congress launched an investigation in 1919. World War I, which so dramatically changed the world, profoundly altered the Army’s tuberculosis program as well. It also challenged George Bushnell’s expertise. The Army’s tuberculosis expert had founded his policies on assumptions that, although widely held at the time, proved to be inaccurate and costly in lives and treasure. Wartime tuberculosis, therefore, shows the power of disease to overwhelm both knowledge and institutions.

Bushnell and his contemporaries were familiar with the concept of immunity and the power of vaccination, and the Army Medical Department vaccinated soldiers for smallpox and typhoid. Extending this concept of immunity to tuberculosis, medical officers differentiated between primary infection in childhood and secondary infection later in life. Observing that tuberculosis was often fatal for infants and young children, they reasoned that for survivors, an early infection of tuberculosis bacilli immunized a person against the disease later in life.
A “primary infection,” wrote Bushnell, gave a person some immunity, which “while not sufficient in many cases to prevent extension of disease [within the body]…is sufficient to counteract new infections from without.”8 In an article on “The Tuberculous Soldier,” the revered physician William Osler agreed. For years autopsies had uncovered healed tuberculosis lesions in people who had died in accidents or of other diseases. Although it was not known how many men between the ages of eighteen and forty harbored the tubercle bacillus, Osler wrote, “We do know that it is exceptional not to find a few [lesions] in the bodies of men between these ages dead of other diseases.” Thus, he argued, “In a majority of cases the germ enlists with the soldier. A few, very few, catch the disease in infected billets or barracks.”9 Bushnell reasoned if adults developed tuberculosis, “they do it on account of failure of their resistance.”

At one point Bushnell told the chief surgeon of the AEF, “Personally I have no fear of the contagion of tuberculosis between adults and see no reason why patients of this kind should not be treated in the ordinary hospital.” He asserted that the “really cruel persecution of the consumptive…through the fear that he will infect others, is based on what I must characterize as highly exaggerated notions of the danger of such infection.” This, too, was the prevailing view. Boston bacteriologist Edward O. Otis, who served as a medical officer during the war, wrote that “Undue fear of the communicability of pulmonary tuberculosis from one adult to another is unwarranted in the present state of our knowledge.”
Bushnell reasoned that if men infected with tuberculosis could indeed easily spread it to others, there would be much more tuberculosis in the Army than there was. British physician Leslie Murry reasoned that although the crowded and damp conditions of trench warfare would have unfavorable effects on soldiers’ health, living outside with plenty of fresh air and good food and hygienic practices would improve their resistance to tuberculosis. Public health specialist George Thomas Palmer countered that although reactivation may not have been higher in the military than in civil life, the United States had enough men without tuberculosis to bar anyone suspected of it from the military and thereby avoid an “added financial burden to the nation.” The challenge was to keep tuberculosis out of the Army and tuberculars off the disability rolls, but not to exclude so many men as to impair the nation’s ability to amass an army.

Bushnell’s views of tuberculosis immunity, contagion, interaction with military life, and the risk of overdiagnosis shaped the Army Medical Department programs for screening recruits. He knew he could not guarantee that all tuberculosis could be eliminated from the Army, but asserted that, “a sufficiently rigid selection of promising material in itself practically excludes tuberculosis.” In addition to enlisting the strongest men, Bushnell believed that a massive screening program would pay for itself by eliminating those who would later cost the government in medical services and disability benefits.

But the nation at war did not have the time or resources for the meticulous one-hour examination practiced at Fort Bayard, so Bushnell developed a protocol for civilian and military physicians to examine volunteers, draftees, trainees, and soldiers for tuberculosis in a matter of minutes. Circular No. 20 detailed how physicians should examine recruits, and became the single most important Army tuberculosis document during the war. The circular explained that the apices, or the tops of lungs, were the most common location for tuberculosis lesions, and that “the only trustworthy sign of activity in apical tuberculosis is the presence of persistent moist rales.” Circular No. 20 directed that “the presence of tubercle bacilli in the sputum is a cause for rejection,” and that “no examination for tuberculosis is complete without auscultation following a cough.” It recommended that a sputum sample “be coughed up in [the examiner’s] presence,” to ensure that it was actually from the examinee.

The last one-third of the document detailed X-ray examinations, summarizing eight different kinds of conditions that might appear and specifying which of them would be grounds for rejection and which would not. By 1915, a Fort Bayard medical officer stated that X-ray technology “has become one of the most valued procedures in the diagnosis of pulmonary tuberculosis.” Medical officers F. E. Diemer and R. D. MacRae at Camp Lewis, Washington, argued in the pages of JAMA that X-rays should be the primary diagnostic tool, not an “adjunct.” World War I ultimately did encourage X-ray technology, however, by revealing its power to thousands of physicians, stimulating the search for technical advances, and demonstrating the importance of specialization in reading X-rays. By the end of the war, the Army Medical Department had shipped hundreds of X-ray machines to France for use in Army hospitals and at the bedside, and had developed various modes of X-ray equipment, including X-ray ambulances.

Calculating that it would require 600 examiners for the screening process, the Medical Department turned to training general practitioners from civil life who knew little about tuberculosis. Bushnell’s office established a six-week tuberculosis course to prepare physicians. The first course at the Army Medical School in Washington, DC, was so popular that instructors offered it at several other training camps in the country. General Hospital No. 16, operating in conjunction with Yale Medical School, also offered a course on hospital administration to train medical officers to run tuberculosis hospitals.

Public health officials and the National Tuberculosis Association asked to be informed of any tuberculous individuals being sent to their communities, including the name and address of the “party assuming responsibility for such continued treatment and care.” The journal American Medicine published an article by British tuberculosis specialist Halliday Sutherland, who expressed concern that if men declined treatment and returned home they could spread tuberculosis to their families. He suggested that the U.S. Army retain men diagnosed with tuberculosis so that the government could provide treatment and discipline them if they resisted. Members of Congress opposed simply discharging men with tuberculosis. Representative Carl Hayden of Arizona argued that such men had given up their civilian lives upon induction into the Army, only to discover “that they were afflicted with a dread disease which prevents them from earning a livelihood.” He suggested that “some provision should be made for the care of such men until they are able to provide for themselves.”

While Bushnell’s policies succeeded in suppressing tuberculosis rates in the Army, the narrow definition of a tuberculosis diagnosis explicitly allowed men with healed lesions in their lungs to serve, and the rapid screening system caused some examiners to miss cases of active disease. Bushnell recognized that “a standard, though imperfect, is believed to be an indispensable adjunct in Army tuberculosis work not only to support the examiner but also to secure the necessary uniformity of practice in the matter of discharge for tuberculosis.” Nationwide, local draft boards and training camps rejected more than 88,000 men for tuberculosis, about 2.3 percent of the 3.8 million men examined. Postwar assessments calculated that of the more than two million soldiers who went to France to serve in the AEF, only 8,717 were evacuated with a diagnosis of tuberculosis, an incidence of only 0.4 percent.
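
To make the quoted percentages easy to verify, here is a minimal back-of-the-envelope check in Python. It is only a sketch using the approximate figures stated above (3.8 million men examined, “more than two million” soldiers in the AEF), so the results are rounded rather than exact.

# Rough check of the World War I screening figures quoted above.
rejected_for_tb = 88_000       # men rejected by draft boards and training camps
examined = 3_800_000           # approximate number of men examined
evacuated_with_tb = 8_717      # AEF soldiers evacuated with a tuberculosis diagnosis
served_in_aef = 2_000_000      # "more than two million" soldiers sent to France

print(f"Rejection rate: {rejected_for_tb / examined:.1%}")                   # about 2.3%
print(f"AEF evacuation incidence: {evacuated_with_tb / served_in_aef:.1%}")  # about 0.4%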

In early 1918 a strep infection in the training camps in the United States caused medical officers to send hundreds of trainees to Army hospitals misdiagnosed with tuberculosis, crowding hospitals and generating paperwork and confusion. For a time, therefore, the Office of The Surgeon General ordered that no one should be discharged for tuberculosis from the training camps unless he had bacilli in his sputum—meaning the very severe cases. More than 50 percent of the patients being sent back to the United States from France with a diagnosis of tuberculosis did not actually have the disease. Bushnell viewed such overdiagnoses as “evil,” because it took men out of the AEF and overburdened tuberculosis hospitals and naval transports, which had to segregate suspected tuberculosis cases in isolation rooms or on open decks.

Faced with what he called “leaking” of soldiers from the AEF due to erroneous tuberculosis diagnoses, Bushnell turned to a specialist for assistance, Gerald B. Webb (Figure 4-3), from Colorado Springs.61 An Englishman by birth, Webb had married an American, and when she developed tuberculosis the couple traveled to Colorado Springs, Colorado, for treatment. His wife struggled with the disease for ten years until her death in 1903, and afterward Webb stayed on in Colorado Springs, remarrying and building a medical practice specializing in tuberculosis. In addition to his medical practice, Webb pioneered research into the body’s immune function, searched for a tuberculosis vaccine, and was a founder of the American Association of Immunologists (1913). Still somewhat bored in Colorado Springs, Webb volunteered for the Medical Corps soon after the United States declared war and helped organize and run tuberculosis screening boards at Camp Russell, Wyoming, and Camp Bowie, Texas. Bushnell
appointed him senior tuberculosis consultant for the AEF. After meeting with Bushnell in Washington and attending the Army War Course for senior officers at Columbia University, Webb sailed to France in March 1918.

Gerald B. Webb, World War I. Gerald B. Webb Papers; photograph courtesy of Special Collections, Tutt Library, Colorado College, Colorado Springs, Colorado.

Immunity in tuberculosis: Further experiments (1914).

Webb instituted a screening process similar to that in the United States, distributing Circular No. 20 and preparing an illustrated version for medical officers in the field. He established a policy directing that only patients with sputum positive for tuberculosis should be sent back to the United States. Others would be tagged “tuberculosis observation” and sent to one of three hospitals designated as tuberculosis observation centers. There, specialists—Bushnell’s “good tuberculosis men”—would distinguish tuberculosis from other lung problems such as bronchitis and pneumonia, determine whether a man was in fact free of the disease, and send back to the homeland only those patients who were indeed positive for tuberculosis.

Webb traveled to field and base hospitals throughout France. He would typically spend three days at a hospital, examining patients, leading conferences, giving lectures, and, according to his biographer, Helen Clapesattle, “preaching his gospel of fresh air and absolute rest.” He recruited a radiologist to teach the proper reading of X-ray plates, and advocated the early detection of tuberculosis, explaining, “Just as the wounded do better if they are got to the surgeons quickly, so the tuberculosis-wounded are more likely to recover if they are spotted and sent to the doctors early.”

In the 1930s, as Webb had concluded in 1919, scientists came to recognize that early tuberculosis infections did not provide protection and that adults could be reinfected with tuberculosis and develop active disease. In the meantime, with his AEF work done, in January 1919 Webb returned to his family and medical practice in Colorado Springs. The National Tuberculosis Association recognized Webb’s war work by electing him president in 1920, and Webb set the Association on a course of tuberculosis research on the immunity question and the standardization of X-ray diagnostics. He did not return to military service, but was a mentor for young physicians Esmond Long and James Waring, who would be leaders in the Army Medical Department’s tuberculosis program during the next war.

In May 1941, as the United States stood on the brink of another world war, Benjamin Goldberg, president of the American College of Chest Physicians, recited some stunning figures at the association’s annual meeting in Cleveland, Ohio. He calculated that from 1919 to 1940 the Veterans Administration had admitted 293,761 tuberculosis patients to its hospitals. These patients had received government care and benefits for a total of 1,085,245 patient-years, at a cost of $1,185,914,489.56. Goldberg’s remarks reveal that although tuberculosis rates in the United States were declining 3 to 4 percent annually during the interwar years, the government’s burden to care for tuberculosis patients remained heavy. The Army was only three-quarters the size it had been before World War I (131,000 versus 175,000 strength) and experienced no major epidemics, so that suicide and automobile accidents became the leading causes of death in the peacetime Army. Although hospital admissions of active duty personnel for tuberculosis declined during the decade, tuberculosis admissions at Fitzsimons Hospital in Denver remained constant due to a steady stream of patients who were veterans of the war. Tuberculosis, in fact, became a leading cause of disability discharges from the Army and, with nervous and mental disorders, generated the greatest amount of veterans’ benefits between the wars.

The story of tuberculosis in the Army after World War I, then, is one of increasing demand and decreasing resources, a dynamic that left Fitzsimons financially strapped even before the country entered the Great Depression. An examination of Fitzsimons’ postwar environment—the modern hospital and technology, the ever-changing landscape of veterans’ benefits, and new, invasive treatments for tuberculosis—illuminates these stresses.

President Franklin Delano Roosevelt proclaimed a “limited national emergency” on 8 September 1939, a week after Germany invaded Poland. But due to underfunding during the interwar period, one observer wrote that “to prepare for war the Medical Department had to start almost from scratch.” Given the lean years of the 1920s and 1930s and the Army Medical Department’s policy of discharging officers with tuberculosis from duty, Surgeon General James C. Magee had to turn to the civilian sector for a tuberculosis expert. He recruited Esmond R. Long, M.D., Ph.D., director of the Henry Phipps Institute for the Study, Prevention and Treatment of Tuberculosis in Philadelphia. He could not have made a better choice. Long was also professor of pathology at the University of Pennsylvania, director of medical research for the National Tuberculosis Association, and, at age forty-two (in 1932), the youngest person to be awarded the Trudeau Medal for his tuberculosis research. He would now become the Army’s point man on the disease and stand at the front lines of the Medical Department’s struggle with tuberculosis from before Pearl Harbor to well after V-J (Victory-Japan) Day.

His mission to reduce the effect of tuberculosis on the Army differed from that of Colonel (Col.) George Bushnell in the previous war because disease was less of a threat. In fact, World War II would be the first war in which more American personnel died of battle wounds than of disease. Of 405,399 recorded fatalities, battle deaths outnumbered those from disease and nonbattle injuries more than two to one: 291,557 to 113,842. Malaria, sexually transmitted diseases, and respiratory infections did sicken millions of soldiers, sailors, Marines, and airmen, but most survived. Thanks in part to sulfa drugs and, beginning in 1943, penicillin to treat bacterial infections, the Army Medical Department had only 14,904 deaths of 14,998,369 disease admissions worldwide, a 0.1 percent death rate. Tuberculosis declined, too, representing only 1 percent of Army hospital admissions for diseases—1.2 per 1,000 cases per year—a rate much lower than the 12 per 1,000 cases per year during World War I. The Medical Department concluded that “tuberculosis was not a major cause of non-effectiveness during the war.”
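
As a quick arithmetic check of the figures cited above (a minimal sketch using only the numbers quoted in the text), the battle and non-battle totals do sum to the recorded fatalities, and the ratio and death rate work out as stated:

# Check of the World War II mortality figures quoted above.
battle_deaths = 291_557
disease_and_nonbattle_deaths = 113_842
total_fatalities = 405_399                 # recorded U.S. fatalities

disease_deaths = 14_904
disease_admissions = 14_998_369            # worldwide Army disease admissions

print(battle_deaths + disease_and_nonbattle_deaths == total_fatalities)   # True
print(f"{battle_deaths / disease_and_nonbattle_deaths:.2f}")              # about 2.56, i.e. more than two to one
print(f"{disease_deaths / disease_admissions:.2%}")                       # about 0.10%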

But Sir Arthur S. McNalty, chief medical officer of the British Ministry of Health (1935–40), called tuberculosis “one of the camp followers of war.” War abetted tuberculosis, he explained, because of the “lowering of bodily resistance and increased physical or mental strain or both.”6 It also found fertile ground in crowded barracks and camps, and ran rampant in the World War II prison camps and Nazi concentration camps. And just one active case of tuberculosis per thousand in the Army meant thousands of tuberculosis sufferers among the 11 million Americans in uniform, each of whom consumed Medical Department resources: the average hospital stay per case during the war was 113 days.7

But if tuberculosis was a camp follower, Esmond Long (Figure 8-1) was a tuberculosis follower.8 He tracked it down, studied it, and tried to prevent its spread at every stage of American involvement in the war. With war looming in 1940, the National Research Council asked Long to chair the Division of Medical Sciences, Subcommittee for Tuberculosis, to advise the government on preventing and controlling tuberculosis in both civilian and military populations during war mobilization. Once the United States entered the war, Long received a commission as a colonel in the Medical Corps and moved his family from Philadelphia to Washington, DC. Working out of the Office of The Surgeon General, Long set up a screening process with the Selective Service to keep tuberculosis out of the Army and then traveled to more than ninety induction camps to ensure adherence to the procedures. He also oversaw the expansion of tuberculosis treatment facilities in the United States, inspected Fitzsimons and other Army tuberculosis hospitals, advised medical officers on treating patients, kept abreast of research developments in the labs, monitored outbreaks of tuberculosis in the theaters of war, and wrote articles for medical and lay periodicals to publicize the Army’s antituberculosis program.

In 1945 Long traveled to the European theater to inspect hospitals caring for tubercular refugees and liberated prisoners of war (POWs). There he saw the horrors of the concentration camps at Buchenwald and Dachau where Army medical personnel cared for thousands of former prisoners sick and dying of typhus, starvation, and tuberculosis. After the war Long organized the tuberculosis control program for the Allied occupation of Germany, and returned annually in the 1950s to assess its progress. He split his time between the Army Medical Department and the Veterans Administration (VA) to supervise the transition of the federal tuberculosis treatment program from the War Department to the VA. He also helped organize and evaluate the antibiotic trials, which ultimately led to an effective cure for tuberculosis. After returning to civilian life Long continued to study tuberculosis in the Army, and he wrote the key tuberculosis chapters for the Army Medical Department’s official history of the war.

With Long as a guide, this chapter shows how war once again served as handmaiden to disease around the globe. This time the Army Medical Department assumed not only national but international responsibilities for the control of tuberculosis in military and civilian populations, among friend and foe. Long and the Army Medical Department did succeed in demoting tuberculosis from the leading cause of disability discharge for American World War I personnel (13.5 percent of discharges), to thirteenth position during the years 1942–45 (1.9 percent of all discharges), behind conditions such as psychoneuroses, ulcers, respiratory diseases, arthritis, and other diseases.9 But this achievement required continued vigilance, an Army-wide surveillance program, and dedicated personnel and resources. The first step was to keep tuberculosis out of the Army.

After war broke out in Europe, Congress passed the National Defense Act of 1940, which established the first peacetime military draft in U.S. history, increasing Army strength eightfold from 210,000 in September 1939 to almost 1.7 million (1,686,403) by December 1941. This resulted in a 75 percent rise in the number of patients in military hospitals, straining the Medical Department, which had only seven general hospitals and 119 station hospitals in 1939.

Figure. Esmond R. Long, who directed the Army tuberculosis program during World War II. Photograph courtesy of the National Library of Medicine, Image #B017302.

“Good Tuberculosis Men”

Soon appropriating freely, pledging “all of the resources of the country“ to meet the crisis, the War Department was constantly readjusting to meet the escalating emergency.

The National Research Council Committee on Medicine, Subcommittee on Tuberculosis, chaired by Long, met for the first time on 24 July 1940 and prioritized its responsibilities: first, develop recommendations on how to screen draft registrants for tuberculosis; second, screen civilians in federal service and wartime industries; third, figure out how to care for people rejected by the draft for the disease; and finally, help civilian and military agencies prepare for tuberculosis in war refugee populations. In its first nine-hour meeting, the subcommittee decided on centralized tuberculosis screening centers at 200 recruiting stations and generated a list of tuberculosis specialists nationwide to evaluate recruits and interpret X-rays at those centers. Subcommittee members stressed the importance of maintaining good records for processing any subsequent benefits claims and, most importantly, called for X-ray screening of all inductees—not just those who looked like they might have tuberculosis.

The War Department leadership initially rejected such comprehensive screening of inductees as expensive and time-consuming. The fact that tuberculosis death rates in the country had fallen two-thirds from 140 per 100,000 people in 1917 to 45 per 100,000 people in 1941, and in the Army from 4.6 per 1,000 in 1922 to 1.4 per 1,000 in 1940, may have led to complacency. But Long, his colleagues, and the national tuberculosis community, mindful of the cost to the nation in sickness, death, and disability benefits in the previous war, persisted. The American College of Chest Physicians asked in July 1940, “Shall We Spread or Eliminate Tuberculosis in the Army?” and its president, Benjamin Goldberg, reported that the VA had spent almost $1.2 billion on tuberculosis patients through 1940. One medical officer calculated that 31 percent of all veterans who died as a result of World War I service and whose dependents received benefits had died of tuberculosis. Even the lay press chimed in with a TIME magazine article, “TB Warning,” that stressed the importance of chest X-rays. Advocates pointed out that X-ray technology was more available and less expensive than in the previous war, and radiologists were more plentiful and skillful. They were also confident that new technology, such as the development of a lens that allowed the direct and rapid photography of a fluoroscopic image and new 4 x 5 inch films, which made storage and transport easier than that of the 11 x 14 inch films, rendered screening more practical than in 1917–18.

The Army Medical Department agreed with the National Research Council subcommittee. Since 1934 it had required X-rays for all Army personnel assigned overseas, but it had not yet convinced the War Department on universal screening. In June 1941, Brigadier General (Brig. Gen.) Charles Hillman, Chief, Office of The Surgeon General Professional Service Division, told the National Tuberculosis chairman, C. M. Hendricks, that “the desirability of routine X-rays had long been recognized by the Surgeon General’s Office,” but “considerations other than medical entered the picture and the character of induction examinations had to be adapted to the limitations of time, place, and available equipment.” When Fitzsimons informed Hillman later that new recruits were arriving at the hospital with tuberculosis, he responded almost plaintively. “I am working with the Adjutant General to devise some method by which every volunteer for enlistment in the Regular Army will have a chest X-ray and serological test before acceptance.” He asked for all available evidence of sick recruits, explaining that “data on Regular Army men of short service now in Fitzsimons with tuberculosis will help me get the thing across.” As the data and advice accumulated, in January 1942, the Adjutant General required that all voluntary applicants and reenlisting men be given chest X-rays. Finally, on 15 March 1942, mobilization regulations made chest X-rays mandatory in all induction physicals.

With universal screening in place, Long, as chief of the tuberculosis branch in the Office of The Surgeon General, oversaw the screening process and faced a task similar to that of George Bushnell in 1917–18: striking the fine balance of excluding as much tuberculosis as possible from the Army while rejecting neither too few nor too many men. Conscious of his predecessor’s miscalculations, Long was careful not to criticize Bushnell’s tuberculosis program, at one point noting that World War I medical officers were “not to be reproached for not having knowledge that came into existence only later, any more than the chief of the Army air service in 1917 is to be reproached because more efficient airplanes are available now than then.”

The wartime emergency produced a public health campaign regarding tuberculosis and other disease threats. A War Department pamphlet, What Every Citizen Should Know about Wartime Medicine, presented the issue as one of maintaining troop health and limiting public costs. “The strenuous activity of soldiering is likely to cause extension of an incipient (early) tuberculous invasion of the lungs, or to precipitate the breakdown and reactivation of arrested cases,” it explained. Such illness could result in disability “and the necessity of providing long care of these patients in military hospitals where they must remain isolated from nontuberculous patients.” The Public Health Service also created a tuberculosis office to handle the expected increase in tuberculosis, and, as the National Research Council Subcommittee recommended, gave war industry workers chest examinations.

As military and civilian screening boards found thousands of people with active tuberculosis and sent many of them to tuberculosis sanatoriums and hospitals, they generated what a public health nurse referred to as “potentially the greatest case finding program that workers in tuberculosis control have ever known.” At the same time, however, war mobilization drew civilian medical personnel into the military, reducing staffing in home front institutions. Army medical personnel ultimately numbered more than 688,000, including 48,000 physicians in the Medical Corps, 14,000 dentists in the Dental Corps, and 56,000 nurses in the Army Nurse Corps—a large portion of the nation’s medical professionals.27 To maintain his nursing staff, VA Director Frank Hynes even asked the Army Nurse Corps in May 1942 not to hire VA nurses away from his hospitals.

Army tuberculosis rates during World War II, while lower than during World War I, did show a similar “U” curve, with high rates at the beginning of the war as the Selective Service built up the military forces and cases that had eluded screening became active during training or combat (Figure 8-2). Tuberculosis rates fell as radiologists became more proficient at identifying tuberculosis infections, and then rose sharply again at the end of the war as discharge examinations found people who had developed active tuberculosis during their service. Postwar studies also revealed a seemingly paradoxical phenomenon: during the war, military personnel serving overseas had lower tuberculosis rates than those serving in the United States, yet higher rates when they returned home.

Chart comparing the incidence curves of tuberculosis in the Army during World War I and World War II. From Esmond R. Long, “Tuberculosis,” in John Boyd Coates, Robert S. Anderson, and W. Paul Havens, eds., Internal Medicine in World War II, Medical Department, U.S. Army in World War II, vol. 2, Infectious Diseases (Washington, DC: Office of The Surgeon General, Department of the Army, 1961), chart 17, p. 335. Available at http://history.amedd.army.mil/booksdocs/wwii/infectiousdisvolii/chapter11chart17.pdf.

The Medical Department of the United States Army in the World War. Communicable and Other Diseases. Washington: U.S. Government Printing Office, 1928, vol. IX, pp. 171-202.
Letter, The Adjutant General, to Commanding Generals of all Corps Areas and Departments, 25 Oct. 1940, subject: Chest X-rays on Induction Examinations.
M.R. No. 1-9, Standards of Physical Examination During Mobilization, 31 Aug. 1940 and 15 Mar. 1942.
Long, E. R.: Exclusion of Tuberculosis. Physical Standards for Induction and Appointment. [Official record.]
Long, E. R., and Stearns, W. H.: Physical Examination at Induction; Standards With Respect to Tuberculosis Induction and Their Application as Illustrated by a Review of 53,400 X-ray Films of Men in the Army of the United States. Radiology 41: 144-150, August 1943.
Long, Esmond R., and Jablon, Seymour: Tuberculosis in the Army of the United States in World War II. An Epidemiological Study with an Evaluation of X-ray Screening. Washington: U.S. Government Printing Office, 1955.

It is estimated that, before roentgen examination became mandatory (MR No. 1-9, 15 March 1942), one million men had been accepted without this form of examination. Where roentgen examination was practiced, it resulted in a rejection rate of about 1 percent for tuberculosis. Applying this figure, it can be estimated that some 10,000 men were accepted who would have been rejected if they had been subjected to chest roentgen-ray study. Various studies have shown that approximately one-half of these would have been cases of active tuberculosis.

http://history.amedd.army.mil/booksdocs/wwii/PM4/CH14.Tuberculosis.htm
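
The estimate above is a straightforward multiplication. A minimal sketch of the arithmetic, using only the assumptions stated in the passage (roughly one million men accepted before chest X-rays became mandatory, a rejection rate of about 1 percent where X-rays were used, and about one-half of the missed cases being active), is:

# Sketch of the estimate of tuberculous men accepted before X-rays became mandatory.
accepted_without_xray = 1_000_000   # men accepted before MR No. 1-9 (15 March 1942)
xray_rejection_rate = 0.01          # ~1 percent rejected for tuberculosis where X-rays were used
fraction_active = 0.5               # ~one-half of missed cases estimated to be active disease

missed_cases = accepted_without_xray * xray_rejection_rate
active_cases = missed_cases * fraction_active

print(int(missed_cases))   # ~10,000 men accepted who would otherwise have been rejected
print(int(active_cases))   # ~5,000 of them with active tuberculosis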

Troops who developed tuberculosis were not discovered until their separation examinations, conducted when they were once again in the United States.

In the end, the screening process rejected 171,300 men for tuberculosis as the primary cause (thousands more had tuberculosis in addition to the disqualifying condition), and Long calculated that this saved the government millions of dollars in hospitalization costs. After the war, however, Long identified two factors that allowed tuberculous men into the Army: the failure to screen all inductees until March 1942, and the 4 x 5 inch stereoscopic (fluorographic) films, which were used in the interest of speed but which Long believed caused examiners to miss about 10 percent of minimal tuberculosis lesions in recruits. To better understand the latter problem he had two radiologists read the same X-rays and found substantial disagreement between their findings. Long therefore concluded that “if the induction films had each been read by two different radiologists, undoubtedly many more of the men who had tuberculosis at entry could have been excluded from service.” The Army ultimately discharged 15,387 enlisted men for tuberculosis during the war, which earned it thirteenth position as a cause of disability discharge.

American military forces fought in nine theaters of war—five in the Pacific and Asia, the other four in North Africa, the Mediterranean, Europe, and the Middle East. The Allies gave priority to defeating Germany and Italy in Europe beginning with operations in North Africa and the Mediterranean. After fighting in Tunisia in 1942–43, the Allies invaded Sicily on 10 July 1943, and moved up the Italian peninsula. By April 1944—in preparation for the D-Day invasion on 6 June 1944—the United States had more than 3 million soldiers in Europe, supported by 258,000 medical personnel managing a total of 318 hospitals with 252,050 beds. The war against Japan got off to a slower start as U.S. military forces developed the means to execute an island war across vast expanses of ocean. After fighting began in the Southwest Pacific, military forces grew from 62,500 troops in March 1942 to 670,000 in the summer of 1944 with 60,140 medical personnel. Even though military personnel developed tuberculosis in all of the nine theaters, the numbers were not high and tuberculosis was not a major military problem. In the Southwest Pacific theater, for example, only sixty-four of more than 40,000 hospital admissions were for the disease.

Tuberculosis was of the greatest consequence in the North Africa and Mediterranean theaters, in part due to poor screening early in the war, but also because, according to historian Charles Wiltse, it was the theater “in which the lessons of ground combat were learned by the Medical Department as much as by the line troops.” In general, medical personnel learned the importance of treating battle casualties as promptly as possible and keeping hospitals and clearing stations mobile and far forward to shorten evacuation and turnaround times. With regard to tuberculosis, the Medical Department had to relearn the World War I lesson of the importance of having skilled practitioners—or “good tuberculosis men”—in theater. They also ascertained which treatments were appropriate close to the battle lines and which were not, and when and how best to evacuate tubercular patients to the United States.

When soldiers with tuberculosis began to appear at Army medical stations in North Africa in late 1942, Major General (Maj. Gen.) Paul R. Hawley, chief of medical services for the European theater of operations, called for a tuberculosis specialist. On Long’s recommendation, Hawley appointed Col. Theodore Badger (Figure 8-3) as a senior consultant in tuberculosis on 2 January 1943. A professor of medicine at the Harvard School of Medicine, Badger had served in the Navy during World War I, and then attended Yale and Harvard where he earned his medical degree. Chief of medical service of the 5th General Hospital (GH), organized out of Harvard, Badger would play a role similar to that played by Gerald Webb during World War I—medical specialist, teacher, and troubleshooter.

Assessing the tuberculosis situation in the Mediterranean theater, Badger identified five hazards: (1) the development of active disease in American troops who had not been X-rayed upon induction; (2) association with British troops and civilians who had not been screened for tuberculosis; (3) drinking of nonpasteurized and possibly infected milk that could transmit tuberculosis; (4) battlefield conditions that could activate soldiers’ latent infections; and (5) the undetermined effects of other respiratory infections. Badger soon got the Army to use pasteurized milk and to establish X-ray centers with the proper equipment and trained staff, but he was not able to examine the thousands of American soldiers in the war zone. To gauge the extent of the tuberculosis problem he therefore arranged for a mobile X-ray unit to conduct spot surveys of troops in the field. Three examinations of some 3,000 troops each found only about 1 percent with signs of tuberculosis. To avoid losing manpower, Badger reported in mid-1943 that “up to the present time no individual has been removed from duty because of X-ray findings, and follow-up study has, so far, not indicated the necessity for it.” Badger planned to recheck those with suspicious films every few months to see if the signs had advanced. Badger recommended that patients with pleural effusion, the accumulation of fluid between the layers of the membranes that line the lungs and chest cavity that often indicates tuberculosis, be evacuated back to the United States. He also ended the practice of transporting some tuberculosis patients sitting up.

As the first true air war, World War II saw the introduction of air evacuation when Army aeromedical squadrons deployed in early 1943. After successful trials in the Pacific and North Africa, air evacuation increased so that during the Battle of the Bulge (1944–45), some patients arrived in U.S. hospitals within three days of being wounded. Some medical officers were concerned about the effects of transporting tuberculosis patients by air where they would be exposed to high speeds, jolting, and reduced air pressure. Tuberculosis specialists in New Mexico and Colorado therefore studied 143 white, male military patients, twenty-two to twenty-eight years old, with active tuberculosis flown to Army hospitals in nonpressurized air ambulances for any signs of trouble. Fearing the worst, they instead found that “severe discomfort, pulmonary hemorrhage, and spontaneous pneumothorax did not occur in the series either during or following the flight,” and concluded that air transport up to 10,000 feet was safe and preferable to time-consuming travel by water. By the end of the war the consensus was that rapid air evacuation to the United States also reduced the need to give a tuberculosis patient a pneumothorax in the field.

From the roof of Fitzsimons’ new building in April 1943, Rocky Mountain News reporter John Stephenson could see the Rocky Mountain Arsenal, the Denver Ordnance Plant, and Lowry Field, “places where the Army studies how to kill people.” But, he wrote, “The Army is merciful. It lets the right hand of justice know what the left hand of mercy is doing at Fitzsimons General Hospital.” The largest Army hospital in the world, Fitzsimons had 322 buildings on 600 acres, paved streets with traffic lights, a post office, barbershop, pharmacy school, dental school, print shop, bakery, fire department, and chapel. It was, wrote Stephenson, “a city of 10,000.” No longer a liability, Fitzsimons was the pride of the Army Medical Department. One Army inspector reported that “it is apparent that no expense has been spared in this extraordinary building or in the general equipment and maintenance of the whole hospital plant.” As Congressman Lawrence Lewis had hoped, Fitzsimons’ mission now extended beyond caring for tuberculosis patients to meeting the general medical and surgical needs of the wider military community in the Denver region.

During the war the hospital maintained about 3,500 beds, reaching its highest daily patient population after the war—3,719 on 3 February 1946. Annual occupancy, measured in patient days, increased from 603,683 in 1942 to a high of 1,097,760 in 1945, about 85 percent of capacity.
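
As a rough check, and assuming the nominal 3,500 beds were available year-round (an assumption not stated in the source), the 1945 figure is consistent with the quoted utilization:

\[ 3{,}500 \text{ beds} \times 365 \text{ days} \approx 1{,}277{,}500 \text{ patient-days}, \qquad \frac{1{,}097{,}760}{1{,}277{,}500} \approx 0.86 \]

that is, roughly 85 percent occupancy.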

With the reduction of tuberculosis in the Army over the years, the share of tuberculosis patients among all those at Fitzsimons had declined from 80–90 percent in the 1920s to 40–50 percent in the late 1930s; as the Army grew, it rose again. During the war Fitzsimons admitted more than 8,100 patients with tuberculosis. In fact, in 1943, only eighteen patients had battle injuries; the rest were in the hospital for illness and noncombat injuries. Unlike during the previous war, however, the Medical Department now had a network of more than fifty veterans’ hospitals to which it could transfer patients too disabled by tuberculosis or other disease or injury to return to duty. Instead of allowing patients to stay in the service and receive the benefit of hospitalization in hopes that they would recover and return to duty, the Medical Department discharged patients to VA hospitals as soon as they were determined to be unfit for military service, thereby reserving capacity for active-duty personnel. Maj. D. P. Greenlee had returned from a training course in penicillin therapy at Bushnell General Hospital in Utah to supervise administration of the new drug for a variety of infections; he soon reported a cure rate of 93 percent. There were fewer victories in tuberculosis treatment.

During the war about one-quarter of all tuberculosis patients were treated with pneumothorax. Fitzsimons surgeon Col. John B. Grow and other surgeons also tried lung resection to treat tuberculosis, with few patient deaths. In 1946, however, when Grow’s staff followed up with thirty patients who had undergone such surgery, they found that half were doing well, but three had died, seven were seriously ill, and the rest were still under treatment. “It was felt that pulmonary resection in the presence of positive sputum was extremely hazardous and the indications were consequently narrowed down.”

Outside the operating rooms, the “City of 10,000” had a rich social life with people arriving at the post from all corners of the country. With Congressman Lewis’s acquisition of the School for Medical Technicians, Fitzsimons assumed the role of medical trainer, offering six- to twelve-week courses in technical training for dental, laboratory, X-ray, surgical, clinical, and pharmacy assistants. By 1946 the School had graduated more than 28,000 such technicians to serve around the world. The Women’s Army Corps arrived at Fitzsimons in February 1944 when 165 women attended the medical technicians school as part of the first coeducational class. Members of the Women’s Army Corps, rehabilitation aides, Education Department staff, dietitians, as well as nurses increased the female presence at Fitzsimons, as did activities of welfare organizations such as the Gold Star Mothers, the Red Cross, and the Junior League. Fitzsimons’ patients and staff also enjoyed visits from celebrities, including Jack Benny, Miss America, Gary Cooper, Dorothy Lamour, and other entertainers such as the big band leader Fred Waring and his Pennsylvanians, the Denver Symphony Orchestra, and an African American Methodist Church children’s choir from Denver. Like communities across the country, the hospital participated in war bond campaigns and had a huge war garden that produced thousands of ears of sweet corn and bushels of other vegetables.

Despite national mobilization and generous congressional funding, the Army could not escape the strain on its hospitals. By July 1944, Fitzsimons had reached capacity so the Medical Department designated two more hospitals as specialty centers for tuberculosis. Earl Bruns’ widow Caroline, who lived in Denver at the time, was no doubt pleased when the department named Bruns General Hospital in Santa Fe, New Mexico, in honor of her husband. Bruns along with Moore General Hospital in Swannanoa, North Carolina, cared for enlisted patients with minimal or suspected tuberculosis.

As Allied troops liberated France in 1944 and crossed into Germany they encountered thousands of refugees or “displaced persons”—escaped prisoners from Nazi concentration camps, exhausted and terrified Jews, slave laborers, political prisoners, Allied POWs, and other victims. The Nazi camps that held these people served as incubators for diseases such as tuberculosis and typhus, and the frightened, sick, and starved refugees inundated Army hospitals in late 1944 and early 1945. Theodore Badger reported one of the first waves that arrived on 18 December 1944 when 304 men, most of them Russians, came to the 50th GH in Commercy, France. They had been in the Nazi labor camps for the mines and heavy industries, where thousands died and survivors were malnourished and sick. All of the 304 had tuberculosis, 90 percent with moderate or advanced disease. Four were dead on arrival, eight more died in the first week, and one-third of the patients would die by May. Alarmed, Gen. Hawley, Chief Surgeon of the European Theater of Operations, ordered that all displaced civilians and recovered military personnel be examined for signs of tuberculosis “to establish the gravity of the situation.” The situation was dire. At one time the 46th GH had more than 1,000 tuberculosis patients, all recovered Allied POWs, causing Esmond Long to remark that the hospital “had the largest number of tuberculosis patients of any Army hospital in the world.”

The 46th GH from Portland, Oregon, which had cared for tuberculosis patients in the Mediterranean theater, also stood on the front lines of the tuberculosis problem in Europe. Serving at Besancon, France, the hospital would receive the Meritorious Service Unit Plaque and Col. J. G. Strohm, the commanding officer, the Bronze Star Medal for service during the liberation of France. During the spring of 1945, the 46th GH admitted 2,472 Russians, forty-one Poles, and 128 Yugoslav POWs and former slave laborers freed by American forces. The influx began on 12 March and within four days the 46th GH had admitted 1,200 such patients.

“The hospital staff was agast [sic] at the terrible physical condition of these people,” reported the hospital commander. When Badger visited the 46th GH in March 1945 he said the patients “constitute one of the most seriously affected groups with tuberculosis and malnutrition that I have ever seen,” explaining that most of them suffered “acute fulminating, rapidly fatal disease, mixed with chronic, slowly progressive, fibrotic tuberculosis.” Medical personnel (Figure 8-4) cared for these patients as best they could, comforting many of them as they died. They began the rest treatment with some men but, as Badger reported, convincing Allied POWs to submit to absolute bed rest after months of confinement was “practically impossible.” Badger was able to report that after a month “those men who did not die of acute tuberculosis showed marked improvement.”

Figure 8-4. 46th General Hospital nurses who cared for former prisoners of war. Photograph courtesy of Oregon Health Sciences University, Historical Collections and Archives, Portland, Oregon.

26th Gen Hospital WWII, North Africa

In late 1944 Hawley requested 100,000 additional hospital beds for the displaced persons and POWs he expected to encounter after the German surrender, but Gen. George Marshall and Secretary of War Henry L. Stimson denied the request, believing they could not spare resources of that magnitude. The European Theater, they decided, must use German medical personnel and hospitals to care for the prisoners. Only after the war did American hospital units transfer their equipment and supplies to German civilians and Allies for their use.

The liberation of Europe also freed American POWs, who, not surprisingly, had higher rates of tuberculosis than other American military personnel. Captured British medical officer Capt. A. L. Cochrane cared for some of them in the prison where he was confined and noted sardonically that imprisonment was “an excellent place to study tuberculosis; [and] to learn the vast importance of food in human health and happiness.” German prison guards gave POWs only 1,000 to 1,500 calories per day, so Red Cross food parcels, which provided an additional 1,500 daily calories per person, were critical to preventing malnutrition and physical breakdown. Cochrane observed that the American and British POWs received the most parcels and had the lowest tuberculosis rates in the camp, while the Russians received nothing at all and had the highest rates. During the eighteen months that French POWs received the Red Cross parcels, he noted, just two men of 1,200 developed tuberculosis but when parcels for the French ceased to arrive in 1945, their tuberculosis rate rose to equal that of the Russians. The situation, he concluded, showed the “vast importance of nutrition in the incidence of tuberculosis.” Not all Americans got their parcels, though. William H. Balzer, with an American artillery unit, was captured in February 1943, and remembered how German guards stole the Americans’ packages.
Balzer survived imprisonment but never recovered from the ordeal. Severely disabled (70 percent), he died in 1960 on his forty-sixth birthday.

Exact tuberculosis rates among American POWs are not known because the rush of events surrounding the liberation of prisoners from German and Japanese control prevented a systematic X-ray survey. Rates did appear to be higher, though, for prisoners of the Japanese than for prisoners of the Germans. Long reported that about 0.6 percent of recovered troops from European POW camps had tuberculosis, whereas data from the Pacific theater suggested that 1 percent of recovered prisoners had tuberculosis. Moreover, an analysis of the chest X-rays done at West Coast debarkation hospitals revealed that 101 (or 2.7 percent) of 3,742 former POWs of the Japanese showed evidence of active tuberculosis. John R. Bumgarner was a tuberculosis ward officer at Sternberg General Hospital in Manila, the Philippines, before the war. A POW for forty-two months after the Japanese invasion, he described his experience in Parade of the Dead. Bumgarner did what he could to care for many of the 13,000 prisoners in the camp, but knew that “my patients were poorly diagnosed and poorly treated.” The narrow cots were so close together, he wrote, “the crowding and the breathing of air loaded with this bacillary miasma from coughing ensured that those mistakenly segregated would be infected.”

Bumgarner was able to stay relatively healthy throughout his imprisonment. His luck ended, however, because “on my way home across the Pacific I had the first symptoms of tuberculosis.” Severe chest pain and subsequent X-rays at Letterman Hospital in San Francisco revealed active disease. “I had gone through more than four years of hell—now this!” Discharged on disability for tuberculosis in September 1946 he began to work at the Medical College of Virginia but soon had a lung hemorrhage. This time it took eight years of rest, with surgery and new antibiotic treatment for him to recover. By 1956, however, Bumgarner had married his sweetheart, Evelyn, and begun a medical career in cardiology that lasted for thirty years.

Tuberculosis continued to take its toll on POWs for years after the war. The VA followed POWs as a special group because, explained Long, of “the hardships that many of these men endured, and the notorious tendency for tuberculosis to make its appearance years after the acquisition of infection.” A follow-up study published in 1954 reported that for American POWs during the six years after liberation tuberculosis was the second highest cause of death, after accidents.

If the challenges Army medical personnel faced in caring for sick and starving POWs and refugees were unprecedented, the scale of disease and suffering they encountered in the Nazi concentration camps was almost unimaginable. Allied troops had heard about secret and deadly camps but were not prepared for what they found. As the Allies converged on Berlin from the East and the West, the Nazis evacuated thousands of prisoners—most of them Jews seized from across Europe, as well as POWs—to interior camps to hide their crimes and prevent the inmates from falling into Allied hands. These evacuations became death marches as SS (abbreviation of Schutzstaffel, or “protection squadron”) guards beat and murdered people, and failed to feed them for days on end. Survivors were crowded into camps such as Buchenwald and Dachau making them even more chaotic and deadly. Americans, therefore, liberated camps that were rife with disease, especially typhus and tuberculosis, and with malnutrition.

The Allies liberated Buchenwald on 11 April 1945. The following day the world learned that Franklin Roosevelt had died. Americans then liberated Dachau on 29 April, the day Italian partisans executed Mussolini in Milan, and the next day Hitler killed himself in his bunker. Dachau (Figure 8-5) had been the first of hundreds of concentration camps in the German Reich to which the Nazis sent political enemies, the disabled, people accused of socially deviant behavior, and, increasingly after the Kristallnacht pogroms of 1938, Jewish men, women, and children. In January 1945 Dachau held 67,000 prisoners, but with troops of the Seventh U.S. Army approaching, the SS began evacuating and killing prisoners. Capt. Marcus J. Smith, a medical officer in his thirties, arrived at Dachau on 30 April 1945, the day after liberation, part of a small team trained to treat persons displaced by the war. Horror greeted him outside the camp in a train of forty boxcars loaded with more than two thousand corpses. Smith called the frost that had formed on the bodies in the intense cold, “Nature’s shroud.” Inside Dachau he encountered more grotesque piles of naked, skeletal bodies of prisoners and scattered, mutilated bodies of German guards.

Figure 8-5. Dachau survivors gather by the moat to greet American liberators, 29 April 1945. Photograph courtesy of the United States Holocaust Memorial Museum, Washington, DC.
Smith found more than 30,000 prisoners, mostly Jews of forty nationalities, and all men except for about 300 women the SS had kept in a brothel. They were in desperate condition. Typhus and dysentery raged, at least half of the prisoners were starving, and hundreds had advanced tuberculosis. “The well, the sick, the dying, and the dead lie next to each other in these poorly ventilated, unheated, dark, stinking buildings,” Smith told his wife. The men were “malnourished and emaciated, their diseases in all stages of development: early, late, and terminal.” He wondered, “What am I going to write in my notebook?” and then started a list of needed supplies: clothes, shoes, socks, towels, bedding, beds, soap, toilet paper, more latrines, and new quarters. He almost despaired. “What are we going to do with the starving patients? How will we care for them without sterile bandages, gloves, bedpans, urinals, thermometers, and all the basic material? How do we manage without an organization? No interns, no nursing staff, no ambulances, no bathtubs, no laboratories, no charts, and no orderlies, no administrator, and no doctors.… I feel helpless and empty. I cannot think of anything like this in modern medical history.”

American efforts did prevent a deadly typhus epidemic from sweeping postwar Europe and helped contain tuberculosis rates in Germany, but the Nazis had created a human catastrophe so immense that even the most dedicated efforts would at times fall short.

Faced with horror on such a scale, Smith and other Army Medical Department personnel assigned to the concentration camps threw themselves into the work of cleansing, comforting, treating, and nurturing their patients. American commanders called in at least six Army evacuation hospitals (EH) to care for the sick and dying in the liberated camps. EH No. 116 and EH No. 127 began arriving at Dachau on 2 May with some forty medical officers, forty nurses, and 220 enlisted men. Consulting with Smith and his team, the units set up in the former SS guard barracks. They tore out partitions to create larger wards, scrubbed the walls and floors with Cresol solution, sprayed them with dichloro-diphenyl-trichloroethane (DDT), and then set up cots to create two hospitals of 1,200 beds each. Medical staff also discovered physician-prisoners who had cared for the sick and injured as well as they could, and could now advise and assist, and in some cases translate for the medical staff. In two days the hospitals were ready to admit patients by triage, segregating them by disease and prognosis. Laurence Ball, the EH No. 116 commander, noted that more than 900 patients had “two or more diseases, such as malnutrition, typhus, diarrhea, and tuberculosis.” Staff bathed and deloused them, gave them clean pajamas, and put them to bed.

Death by overeating (starved prisoners could die when refed too quickly) was but one of the dangers that the prisoners faced. During May 1945, American hospitals at Dachau had more than 4,000 typhus patients and lost 2,226 to typhus and other diseases. Typhus, a rickettsial disease transmitted by body lice, had a mortality rate as high as 40 percent. With no medical cure, treatment consisted of supportive care—keeping patients clean and nourished—to mitigate the effects of prolonged fever, such as the breakdown of tissue into gangrene. The Americans knew that typhus had taken three million lives in Eastern Europe after World War I, but now they had a means of prevention and better weapons—a typhus vaccine and DDT. On 2 May, the day the evacuation hospitals arrived, the commander of the Seventh Army imposed quarantines for typhus and tuberculosis, and summoned the U.S. Typhus Commission, which had controlled a typhus outbreak in Naples, Italy. A typhus team arrived the next day and began to immunize American personnel and dust them with DDT. On 7 May staff began to vaccinate inmates but kept typhus patients isolated for at least twenty-one days from the onset of illness to prevent transmission to others. This meant that the Americans did not immediately enter the inner camp barracks—the worst, most typhus-infested part of the camp—nor did they quickly relieve crowding there for fear of spreading typhus-bearing lice. It took over a week for personnel to prepare more spacious and clean quarters.

Smith wrote his lists, reported to his wife, and kept track of the daily death toll, finding comfort as the number of people who died daily fell from 200 during the first week to twenty by the end of May. Another medical officer performed autopsies. He chose ten of the dead bodies, five from the death train and five from the camp yard, to see what had caused their deaths. All had typhus and extreme malnutrition, eight had advanced tuberculosis, and some bodies had signs of fractures and head injuries.

Survivors in Dachau, 1 May 1945

By the end of May, conditions at Dachau had improved. Typhus was abating and American officials began to release groups of inmates by nationality. Beyond Dachau, the U.S. Typhus Commission tracked down new cases of typhus in civilian and military populations, deloused one million people, sprayed fifteen tons of DDT, and created a cordon sanitaire on the Rhine requiring all who crossed from Germany to be vaccinated and dusted to prevent the spread of disease. Thus the Army averted a broader typhus epidemic. The tuberculosis situation was more complicated and presented the Americans with a conundrum. What to do with thousands of people suffering from a long-term, infectious, and deadly disease?

As with the American POWs, tuberculosis continued to follow Dachau survivors into their new lives. Thousands of Jewish survivors emigrated to what would become the state of Israel. Fifteen years after liberation, the Israeli Minister of Health reported that although concentration camp survivors comprised only 25 percent of the population, they accounted for 65 percent of the tuberculosis cases in the country. Tuberculosis continued to thrive in Europe as well.

Historian Albert Cowdrey has credited the American actions with preventing a number of postwar scourges: “No one can prove that a great typhus epidemic, mass deaths of prisoners of war, or widespread outbreaks of disease among the German population would have taken place without the efforts of Army doctors of the field forces and the military government.” But, he continued, “conditions were ripe for such tragedies to occur, and Army medics brought both professional knowledge and military discipline to forestalling what might have been the last calamities of the war in Europe.” Thus, as usual, in public health the good news is no news at all.

Thousands of men survived the Vietnam War because of the quality of their hospital care. US hospitals in Vietnam were the best that could be deployed, incorporating several improvements from previous field hospitals. Army doctors were better trained, and they had good facilities at the semi-permanent base camps. As a result, more advanced surgical procedures were possible: more laparotomies, thoracotomies, vascular repairs (including even aortic and carotid repairs), advanced neurosurgery for head wounds, and other medical procedures. Blood transfusions were performed, with massive quantities of blood available for seriously wounded patients; some patients received as many as 50 units of blood. Advances in equipment resulted in the development of intensive care units with mechanical ventilators. There were far more medications available for particular diseases than in earlier conflicts.

With about 30 physicians assigned, the 12th Evacuation Hospital could keep four or five operating tables going all day, and two or three all night. A common practice was delayed primary closure for wounds with a high likelihood of infection. Instead of stitching the wound closed immediately, dirt and contaminants were flushed out, bleeding was controlled, dead flesh was removed (debrided), the wound was packed with sterile gauze, and antibiotics were administered. For a few days the patient healed, while nurses changed the bandages and made sure the wound did not get worse. Then doctors removed any remaining contaminants or dead flesh and stitched up the wound. This procedure reduced the incidence of infection compared to immediate wound closure, at the cost of a larger scar.

In any given year in Vietnam, about one soldier in three was hospitalized for disease. The main causes for hospitalization were malaria, psychiatric problems, and ordinary fevers. Although many men fell sick, competent care was available and most recovered quickly and returned to duty.

The war spurred advances in surgery and medical trauma research. New surgical techniques allowed limbs that previously would have been amputated to remain functional. Nurse anesthetist Rosemary Sue Smith recalled the development of new blood-handling procedures:

We started separating blood into its components, because we were getting a lot of aggregates that were causing a lot of disseminated intravascular coagulopathy in patients, and causing a lot of blood clots, and pulmonary thrombosis, and a lot of ARDS, Adult Respiratory Distress Syndrome, which started in Da Nang and was called Da Nang Lung initially. It has developed into today being called Adult Respiratory Distress Syndrome, and they did a lot of research on this, and they were having us separate our blood into its components, into fresh frozen plasma and into platelets, and then we started doing blood tests to see which the patients would need. If their platelets were low, or if their blood clotting factors were low, we would just give them the particular products. We actually started breaking these products down and administering them in the Vietnam War, and it’s carried over into civilian life now. They’re used today in acute trauma to prevent disseminated intravascular coagulopathy and prevent Adult Respiratory Distress Syndrome on massive traumas that have to be naturally resuscitated with blood and blood products.

In the 1960s, intensive care was still quite new and the 12th had only one (later two) intensive care wards fully equipped and staffed. A key piece of equipment was the ventilator, then called “respirator.” Ventilators worked on pure oxygen until 1969, when research revealed physiological problems from prolonged breathing of pure oxygen. Early ventilators required considerable maintenance; valves needed frequent cleaning or the machines broke down.

Antibiotics were important because of the wide variety of bacteria and large number of penetrating wounds; in the face of a possible systemic infection (the development of sepsis), antibiotics were delivered through an IV. Nurse Rosie Parmeter recalled having to prepare antibiotics to be delivered through an IV several times a day for each patient, a necessary but time-consuming task.

About two-thirds of patients cared for by the 12th were US military; the other third were mainly Vietnamese but also included nonmilitary Americans and Free World Military Assistance Forces personnel. Staff regularly dealt with the Vietnamese, both military and civilian, enemy and friendly. There were wards set aside for enemy prisoners (who were stabilized, then transferred to hospitals at POW camps) and civilians. Wounded South Vietnamese Army soldiers were also stabilized and transferred to hospitals run by the Army of the Republic of Vietnam (ARVN). Civilian patients often stayed longer because the war swamped the available hospitals for Vietnamese civilians.

Through the years of the Vietnam War, US forces sustained 313,616 wounded in action; at peak strength, there were 26 American hospitals. The 12th Evacuation Hospital was at Cu Chi for 4 years and treated just over 37,000 patients. Records for the 12th are incomplete, but the average died-of-wounds rate in Vietnam was about 2.8% of patients who reached a hospital alive. Applied to the 12th, that rate amounted to about 1,036 patients, including prisoners and Vietnamese as well as Americans. But over 36,000 people survived and could return home because of the treatment they received at the 12th Evac.
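
The figure quoted above follows directly from the stated rate; assuming the roughly 37,000 patients treated at the 12th and the theater-wide 2.8 percent died-of-wounds rate both apply (the source itself notes the hospital’s records are incomplete):

\[ 0.028 \times 37{,}000 \approx 1{,}036 \text{ deaths}, \qquad 37{,}000 - 1{,}036 \approx 36{,}000 \text{ survivors} \]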

Sources:

Fort Bayard,  by David Kammer, Establishment of Fort Bayard Army Post
http://newmexicohistory.org/places/fort-bayard
George Ensign Bushnell, Colonel, Medical Corps, U. S. Army
THE ARMY MEDICAL BULLETIN, NUMBER 50 (OCTOBER 1939)
http://history.amedd.army.mil/biographies/bushnell
Chapter One, The Early Years: Fort Bayard, New Mexico
http://www.cs.amedd.army.mil/borden/FileDownloadpublic.aspx?docid=332041d7-dbd2-4edf-823f-29a66c0b65ef
Dachau concentration camp (Wikipedia)
http://en.wikipedia.org/wiki/Dachau_concentration_camp
Office of Medical History – United States Army
Esmond R. Long, M. D., TUBERCULOSIS IN WORLD WAR I
Chapter 14 – Tuberculosis
http://history.amedd.army.mil/booksdocs/wwii/PM4/CH14.Tuberculosis.htm

Chapter Four, Tuberculosis in World War I
Chapter Five, “A Gigantic Task”: Treating and Paying for Tuberculosis in the Interwar Period
Chapter Six, “Good Tuberculosis Women”: Tuberculosis Nursing during the Interwar Period
Chapter Seven, Surviving the Great Depression: Fitzsimons and the New Deal
Chapter Eight, Camp Follower: Tuberculosis in World War II
http://www.cs.amedd.army.mil/FileDownload aspx?
“Good Tuberculosis Men”: The Army Medical Department’s Struggle with Tuberculosis, by Carol R. Byerly
http://www.cs.amedd.army.mil/borden/FileDownloadpublic.aspx?docid=986faf8a-b833-46a8-a251-00f72c91da2f

The Global Distribution of Yellow Fever and Dengue
D.J. Rogers, A.J. Wilson, S.I. Hay, and A.J. Graham
Adv Parasitol. 2006 ; 62: 181–220. http://dx.doi.org:/10.1016/S0065-308X(05)62006-4.

http://www.historyofvaccines.org/content/timelines/yellow-fever

History of yellow fever
http://en.wikipedia.org/wiki/History_of_yellow_fever

Additional Reading:

Open Wound: The Tragic Obsession of William Beaumont.
Jason Karlawish.
Univ Mich Press. 2011.

The Great Influenza.
John M. Barry.
Penguin. 2004.

Flu. The story of the great influenza pandemic of 1918 and
the search for the virus that caused it.
Gina Kolata.
Touchstone. 1999

Pestilence: A Medieval Tale of Plague.
Jeani Rector.
The Horror Zine. 2012.

Knife Man: The extraordinary life of John Hunter, Father of Modern Surgery
Wendy Moore.
Broadway Books. 2005

Hospital.
Julie Salamon.
Penguin Press. 2008.

Overdosed America.

John Abramson.
Harper. 2004.

Sick.
Jonathan Cohn.
HarperCollins. 2007.

Read Full Post »

Selected Contributions to Chemistry from 1880 to 1980

Curator: Larry H. Bernstein, MD, FCAP

 

FUNDAMENTALS OF CHEMISTRY – Vol. I  The Contribution of Nobel Laureates to Chemistry

– Ferruccio Trifiro

http://www.eolss.net/sample-chapters/c06/e6-11-01-04.pdf

This chapter deals with the contribution to the development of chemistry of all the Nobel Prize winners in chemistry up to the end of the twentieth century, together with some in physics and medicine or physiology that have had particular relevance for the advances achieved in chemistry. The contributions of the various Nobel laureates cited are briefly summarized. The Nobel laureates in physics dealt with in this chapter are those who made important contributions toward the understanding of the properties of atoms, the development of theoretical tools to treat the chemical bond, or the development of new analytical instrumentation. The Nobel laureates in medicine or physiology cited here are those whose contributions have been in the area of using chemistry to understand natural processes, such as the physiological aspects of living organisms through electron and ion exchange processes, enzymatic catalysis, and DNA-based chemistry. Eight thematic areas were chosen into which the contributions of the Nobel laureates to chemistry can be subdivided.

4. The Properties of Molecules

4.1. The Discovery of Coordination and Metallorganic Compounds

4.2. The Discovery of New Organic Molecules

4.3. The Emergence of Quantum Chemistry

6. The Dynamics of Chemical Reactions

6.1. Kinetics of Heterogeneous and Homogeneous Processes

6.2. The Identification of the Activated State

8. The Understanding of Natural Processes

8.1. From Ferments to Enzymes

8.2. Understanding the Mechanism of Action of Enzymes

8.3. Mechanisms of Important Natural Processes

8.4. Characterization of Biologically Important Molecules

9. The Identification of Chemical Entities

9.1. Analytical Methods

9.2. New Separation Techniques

9.3. The Development of New Instrumentation for Structure Analysis

The Nobel Prize in Chemistry: The Development of Modern Chemistry

by Bo G. Malmström and Bertil Andersson

http://www.nobelprize.org/nobel_prizes/themes/chemistry/malmstrom/

1. Introduction

1.1 Chemistry at the Borders to Physics and Biology

The turn of the century in 1900 was also a turning point in the history of chemistry. A survey of the Nobel Prizes in Chemistry during this century reveals important trends in the development of chemistry, which stands at the center of the sciences, bordering on physics, which provides its theoretical foundation, on one side, and on biology on the other. The fact that chemistry flourished during the beginning of the 20th century is intimately connected with fundamental developments in physics.

In 1897 Sir Joseph John Thomson of Cambridge announced his discovery of the electron, for which he was awarded the Nobel Prize for Physics in 1906. It took a number of years before its relevance to chemistry was seen. In 1911 Ernest Rutherford, who had worked in Thomson’s laboratory in the 1890s, formulated an atomic model, which depicted a cloud of electrons circling around the nucleus. Rutherford had received the Nobel Prize for Chemistry in 1908 for his work on radioactivity.

In Rutherford’s atomic model the stability of atoms was at variance with the laws of classical physics. Niels Bohr of Copenhagen resolved this dilemma by turning to the distinct lines observed in the spectra of atoms, the regularities of which had been discovered in 1890 by the physics professor Johannes (Janne) Rydberg at Lund University. These were the basis for Bohr’s formulation in 1913 of an alternative atomic model, in which only certain circular orbits of the electrons are allowed and light is emitted (or absorbed) when an electron makes a transition from one orbit to another. For this, Bohr received the Nobel Prize for Physics in 1922.

Gilbert Newton Lewis next suggested in 1916 that strong (covalent) bonds between atoms involve a sharing of two electrons between these atoms (electron-pair bond). Lewis also contributed fundamental work in chemical thermodynamics, and his brilliant textbook, Thermodynamics (1923), written together with Merle Randall, is counted as one of the masterworks in the chemical literature. Lewis never received a Nobel Prize.

Important work published in the 1890s was, however, considered by the first Nobel Committee for Chemistry (see Section 2). Three of the Laureates during the first decade, Jacobus Henricus van’t Hoff, Svante Arrhenius and Wilhelm Ostwald, are generally regarded as the founders of a new branch of chemistry, physical chemistry. Fundamental work was also recognized in organic chemistry and in the chemistry of natural products, which is clearly reflected in the early prizes. Further, the Nobel Committee recognized the border towards biology in 1907, with the prize to Eduard Buchner “for his biochemical researches and his discovery of cell-free fermentation”.

2. The First Decade of Nobel Prizes for Chemistry

So much fundamental work in chemistry had been carried out during the last two decades of the 19th century that a decision for the first several prizes was not easy. In 1901 the Academy had to consider 20 nominations, but no less than 11 of these named van’t Hoff, who was selected. van’t Hoff had already established the tetrahedral arrangement of the four valences of the carbon atom in his PhD thesis in Utrecht in 1874, foundational work for modern organic chemistry. But the Nobel Prize was awarded for his later work on chemical kinetics and equilibria and on the osmotic pressure in solution, published in 1884 and 1886.

In his 1886 work van’t Hoff showed that most dissolved chemical compounds give an osmotic pressure equal to the gas pressure they would have exerted in the absence of the solvent. An apparent exception was aqueous solutions of electrolytes (acids, bases and their salts), but in the following year Arrhenius showed that this anomaly could be explained, if it is assumed that electrolytes in water dissociate into ions. Arrhenius had already presented the rudiments of his dissociation theory in his doctoral thesis, which was defended in Uppsala in 1884 and was not entirely well received by the faculty. It was, however, strongly supported by Ostwald in Riga, who, in fact, travelled to Uppsala to initiate a collaboration with Arrhenius. In 1886-1890 Arrhenius did work with Ostwald, first in Riga and then in Leipzig, and also with van’t Hoff in Berlin. Arrhenius was awarded the Nobel Prize for Chemistry in 1903, and he was also nominated for the Prize for Physics (see Section 1).
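
The quantitative statement behind this paragraph is van’t Hoff’s osmotic-pressure law, with Arrhenius’s dissociation entering as a correction factor; a minimal form in standard textbook notation (not quoted from the article itself):

\[ \Pi = i\,c\,R\,T \]

Here Π is the osmotic pressure, c the molar concentration of solute, R the gas constant, T the absolute temperature, and i the van’t Hoff factor, equal to 1 for most non-electrolytes and greater than 1 for electrolytes that dissociate into ions, which is precisely the “anomaly” Arrhenius explained.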

The award of the Nobel Prize for Chemistry in 1909 to Ostwald was chiefly in recognition of his work on catalysis and the rates of chemical reactions. Ostwald had in his investigations, following up observations in his thesis in 1878, shown that the rate of acid-catalyzed reactions is proportional to the square of the strength of the acid, as measured by titration with base. His work offered support not only to Arrhenius’ theory of dissociation but also to van’t Hoff’s theory for osmotic pressure. Ostwald was founder and editor of Zeitschrift für Physikalische Chemie, the publication of which is generally regarded as the birth of this new branch of chemistry.

Three of the Nobel Prizes for Chemistry during the first decade were awarded for pioneering work in organic chemistry. In 1902 Emil Fischer, then in Berlin, was given the prize for “his work on sugar and purine syntheses”. Fischer’s work is an example of the growing interest in biologically important substances, and was a foundation for the development of biochemistry. Another major influence from organic chemistry was the development of chemical industry, and a chief contributor here was Fischer’s teacher, Adolf von Baeyer in Munich, who was awarded the prize in 1905 “in recognition of his services in the advancement of organic chemistry and the chemical industry, … ” His contributions include, in particular, the structure determination and synthesis of organic dyes, notably indigo, and work on hydroaromatic compounds.

Ernest Rutherford [Lord Rutherford since 1931], professor of physics in Manchester, was awarded the Nobel Prize for Chemistry in 1908. In his studies of uranium disintegration he found two types of radiation, named α- and β-rays, and by their deviation in electric and magnetic fields he could show that α-rays consist of positively charged particles. He had received many nominations for the Nobel Prize for Physics (see Section 1).

In 1897 Eduard Buchner, at the time professor in Tübingen, published results demonstrating that the fermentation of sugar to alcohol and carbon dioxide can take place in the absence of yeast cells. Louis Pasteur had earlier maintained that alcoholic fermentation can only occur in the presence of living yeast cells. Buchner’s experiments showed unequivocally that fermentation is a catalytic process caused by the action of enzymes, as had been suggested by Berzelius for all life processes. Because of Buchner’s experiment, 1897 is generally regarded as the birth date for biochemistry proper. Buchner was awarded the Nobel Prize for Chemistry in 1907, when he was professor at the agricultural college in Berlin. This confirmed the prediction of his former teacher, Adolf von Baeyer: “This will make him famous, in spite of the fact that he lacks talent as a chemist.”

3. The Nobel Prizes for Chemistry 1911-2000

3.1 General and Physical Chemistry

The Nobel Prize for Chemistry in 1914 was awarded to Theodore William Richards of Harvard University for “his accurate determinations of the atomic weight of a large number of chemical elements”. In 1913 Richards had discovered that the atomic weight of natural lead and of that formed in radioactive decay of uranium minerals differ. This pointed to the existence of isotopes, i.e. atoms of the same element with different atomic weights, which was accurately demonstrated by Francis William Aston at Cambridge University, with the aid of an instrument developed by him, the mass spectrograph. For his achievements Aston received the Nobel Prize for Chemistry in 1922.

One branch of physical chemistry deals with chemical events at the interface of two phases, for example, solid and liquid, and phenomena at such interfaces have important applications all the way from technical to physiological processes. Detailed studies of adsorption on surfaces were carried out by Irving Langmuir at the research laboratory of General Electric Company, who was awarded the Nobel Prize for Chemistry in 1932, the first industrial scientist to receive this distinction.
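
Langmuir’s surface work survives in everyday use chiefly through the adsorption isotherm that bears his name; as a point of reference (the standard form, supplied here for illustration rather than taken from the article), the fraction θ of identical surface sites occupied by an adsorbed gas at pressure P is

\[ \theta = \frac{K P}{1 + K P} \]

where K is the equilibrium constant for adsorption onto the fixed set of surface sites.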

Two of the Prizes for Chemistry in more recent decades have been given for fundamental work in the application of spectroscopic methods (Prizes for Physics in 1952, 1955 and 1961) to chemical problems. Gerhard Herzberg, a physicist at the University of Saskatchewan, received the Nobel Prize for Chemistry in 1971 for his molecular spectroscopy studies “of the electronic structure and geometry of molecules, particularly free radicals”. The most used spectroscopic method in chemistry is undoubtedly NMR (nuclear magnetic resonance), and Richard R. Ernst at ETH in Zürich was given the Nobel Prize for Chemistry in 1991 for “the development of the methodology of high resolution nuclear magnetic resonance (NMR) spectroscopy”. Ernst’s methodology has now made it possible to determine the structure in solution (in contrast to crystals; cf. Section 3.5) of large molecules, such as proteins.

3.2 Chemical Thermodynamics

The Nobel Prize for Chemistry to van’t Hoff was in part for work in chemical thermodynamics, and many later contributions in this area have also been recognized with Nobel Prizes. Walther Hermann Nernst of Berlin received this award in 1920 for work in thermochemistry, despite a 16-year opposition to this recognition from Arrhenius. Nernst had shown that it is possible to determine the equilibrium constant for a chemical reaction from thermal data, and in so doing he formulated what he himself called the third law of thermodynamics. This states that the entropy, a thermodynamic quantity, which is a measure of the disorder in the system, approaches zero as the temperature goes towards absolute zero. van’t Hoff had derived the mass action equation in 1886 with the aid of the second law, which says that the entropy increases in all spontaneous processes [this had already been done in 1876 by J. Willard Gibbs at Yale, who certainly had deserved a Nobel Prize]. Nernst showed in 1906 that it is possible, with the aid of the third law, to derive the necessary parameters from the temperature dependence of thermochemical quantities. Nernst carried out thermochemical measurements at very low temperatures to prove his heat theorem. G.N. Lewis (see Section 1.1) in Berkeley extended these studies in the 1920s and his new formulation of the third law was confirmed by his student, William Francis Giauque, who extended the temperature range experimentally accessible by introducing the method of adiabatic demagnetization in 1933. He managed to reach temperatures a few thousandths of a degree above absolute zero and could thereby provide extremely accurate entropy estimates. He also showed that it is possible to determine entropies from spectroscopic data. Giauque was awarded the Nobel Prize for Chemistry in 1949 for his contributions to chemical thermodynamics.
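
In compact form, the logical chain the paragraph describes can be sketched with the standard relations (supplied here for orientation, not quoted from the prize citations):

\[ \lim_{T \to 0} \Delta S = 0, \qquad \Delta G = \Delta H - T\,\Delta S, \qquad \Delta G^{\circ} = -RT \ln K \]

Because the heat theorem fixes the entropy change at absolute zero, ΔS and hence ΔG for a reaction can be built up from purely calorimetric data, and the equilibrium constant K then follows, which is the step Nernst supplied.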

The next Nobel Prize given for work in thermodynamics went to Lars Onsager of Yale University in 1968 for contributions to the thermodynamics of irreversible processes. Classical thermodynamics deals with systems at equilibrium, in which the chemical reactions are said to be reversible, but many chemical systems, for example, the most complex of all, living organisms, are far from equilibrium and their reactions are said to be irreversible. Onsager developed his so-called reciprocal relations in 1931, describing the flow of matter and energy in such systems, but the importance of his work was not recognized until the end of the 1940s. A further step forward in the development of non-equilibrium thermodynamics was taken by Ilya Prigogine in Bruxelles, whose theory of dissipative structures was awarded the Nobel Prize for Chemistry in 1977.

3.3 Chemical Change

The chief method to get information about the mechanism of chemical reactions is chemical kinetics, i.e. measurements of the rate of the reaction as a function of reactant concentrations as well as its dependence on temperature, pressure and reaction medium. Important work in this area had been done already in the 1880s by two of the early Laureates, van’t Hoff and Arrhenius, who showed that it is not enough for molecules to collide for a reaction to take place. Only molecules with sufficient kinetic energy in the collision do, in fact, react, and Arrhenius derived an equation in 1889 allowing the calculation of this activation energy from the temperature dependence of the reaction rate. With the advent of quantum mechanics in the 1920s (see Section 3.4), Eyring developed his transition-state theory in 1935 which showed that the activation entropy is also important. Strangely, Eyring never received a Nobel Prize (see Section 1.2).
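
The relation Arrhenius derived is still written in essentially his form; for reference (standard notation, not quoted from the article):

\[ k = A\,e^{-E_a/RT}, \qquad \ln k = \ln A - \frac{E_a}{RT}, \qquad k_{\mathrm{Eyring}} = \frac{k_B T}{h}\,e^{-\Delta G^{\ddagger}/RT} \]

so the activation energy Ea follows from the slope of ln k plotted against 1/T; in Eyring’s 1935 transition-state expression the activation free energy, and hence the activation entropy, enters the exponent.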

In 1956 Sir Cyril Norman Hinshelwood of Oxford and Nikolay Nikolaevich Semenov from Moscow shared the Nobel Prize for Chemistry “for their researches into the mechanism of chemical reactions”.  A limit in investigating reaction rates is set by the speed with which the reaction can be initiated. If this is done by rapid mixing of the reactants, the time limit is about one thousandth of a second (millisecond). In the 1950s Manfred Eigen from Göttingen developed chemical relaxation methods that allow measurements in times as short as a thousandth or a millionth of a millisecond (microseconds or nanoseconds). The methods involve disturbing an equilibrium by rapid changes in temperature or pressure and then following the passage to a new equilibrium. Another way to initiate some reactions rapidly is flash photolysis, i.e. by short light flashes, a method developed by Ronald G.W. Norrish at Cambridge and George Porter (Lord Porter since 1990) in London. Eigen received one-half and Norrish and Porter shared the other half of the Nobel Prize for Chemistry in 1967. The milli- to picosecond time scales gave important information on chemical reactions. However, it was not until it was possible to generate femtosecond laser pulses (10⁻¹⁵ s) that it became possible to reveal when chemical bonds are broken and formed. Ahmed Zewail (born 1946 in Egypt) at California Institute of Technology received the Nobel Prize for Chemistry in 1999 for his development of “femtochemistry” and in particular for being the first to experimentally demonstrate a transition state during a chemical reaction. His experiments relate back to 1889 when Arrhenius (Nobel Prize, 1903) made the important prediction that there must exist intermediates (transition states) in the transformation from reactants to products.

Henry Taube of Stanford University was awarded the Nobel Prize for Chemistry in 1983 “for his work on the mechanism of electron transfer reactions, especially in metal complexes”. Even if Taube’s work was on inorganic reactions, electron transfer is important in many catalytic processes used in industry and also in biological systems, for example, in respiration and photosynthesis.

3.4 Theoretical Chemistry and Chemical Bonding

Quantum mechanics, developed in the 1920s, offered a tool towards a more basic understanding of chemical bonds. In 1927 Walter Heitler and Fritz London showed that quantum mechanics could account for the covalent bond in the hydrogen molecule, in which two hydrogen nuclei share a pair of electrons, and could be used to calculate the attractive force between the atoms. A pioneer in developing such methods was Linus Pauling at California Institute of Technology, who was awarded the Nobel Prize for Chemistry in 1954 “for his research into the nature of the chemical bond …” Pauling’s valence-bond (VB) method is rigorously described in his 1935 book Introduction to Quantum Mechanics (written together with E. Bright Wilson, Jr., at Harvard). A few years later (1939) he published an extensive non-mathematical treatment in The Nature of the Chemical Bond, a book which is one of the most read and influential in the entire history of chemistry. Pauling was not only a theoretician, but he also carried out extensive investigations of chemical structure by X-ray diffraction (see Section 3.5). On the basis of results with small peptides, which are building blocks of proteins, he suggested the α-helix as an important structural element. Pauling was awarded the Nobel Peace Prize for 1962, and he is the only person to date to have won two unshared Nobel Prizes.

Pauling’s α-helix: α-carbon atoms are black, other carbon atoms grey, nitrogen atoms blue, oxygen atoms red and hydrogen atoms white; R designates amino-acid side chains. The dotted red lines are hydrogen bonds between amide and carbonyl groups in the peptide bonds.

Pauling’s VB method cannot give an adequate description of chemical bonding in many complicated molecules, and a more comprehensive treatment, the molecular-orbital (MO) method, was introduced already in 1927 by Robert S. Mulliken from Chicago and later developed further. MO theory considers, in quantum-mechanical terms, the interaction between all atomic nuclei and electrons in a molecule. Mulliken also showed that a combination of MO calculations with experimental (spectroscopic) results provides a powerful tool for describing bonding in large molecules. Mulliken received the Nobel Prize for Chemistry in 1966.

Theoretical chemistry has also contributed significantly to our understanding of chemical reaction mechanisms. In 1981 the Nobel Prize for Chemistry was shared between Kenichi Fukui in Kyoto and Roald Hoffmann of Cornell University “for their theories, developed independently, concerning the course of chemical reactions”. Fukui introduced in 1952 the frontier-orbital theory, according to which the occupied MO with the highest energy and the unoccupied one with the lowest energy have a dominant influence on the reactivity of a molecule. Hoffmann formulated in 1965, together with Robert B. Woodward (see Section 3.8), rules based on the conservation of orbital symmetry, for the reactivity and stereochemistry in chemical reactions.

3.5 Chemical Structure

The most commonly used method to determine the structure of molecules in three dimensions is X-ray crystallography. The diffraction of X-rays was discovered by Max von Laue in 1912, and this gave him the Nobel Prize for Physics in 1914. Its use for the determination of crystal structure was developed by Sir William Bragg and his son, Sir Lawrence Bragg, and they shared the Nobel Prize for Physics in 1915. The first Nobel Prize for Chemistry for the use of X-ray diffraction went to Petrus (Peter) Debye, then of Berlin, in 1936. Debye did not study crystals, however, but gases, which give less distinct diffraction patterns.

Many Nobel Prizes have been awarded for the determination of the structure of biological macromolecules (proteins and nucleic acids). Proteins are long chains of amino-acids, as shown by Emil Fischer (see Section 2), and the first step in the determination of their structure is to determine the order (sequence) of these building blocks. An ingenious method for this tedious task was developed by Frederick Sanger of Cambridge, and he reported the amino-acid sequence for a protein, insulin, in 1955. For this achievement he was awarded the Nobel Prize for Chemistry in 1958. Sanger later received part of a second Nobel Prize for Chemistry for a method to determine the nucleotide sequence in nucleic acids (see Section 3.12), and he is the only scientist so far who has won two Nobel Prizes for Chemistry.

The first protein crystal structures were reported by Max Perutz and Sir John Kendrew in 1960, and these two investigators shared the Nobel Prize for Chemistry in 1962. Perutz had started studying the oxygen-carrying blood pigment, hemoglobin, with Sir Lawrence Bragg in Cambridge already in 1937, and ten years later he was joined by Kendrew, who looked at crystals of the related muscle pigment, myoglobin. These proteins are both rich in Pauling’s α-helix (see Section 3.4), and this made it possible to discern the main features of the structures at the relatively low resolution first used. The same year that Perutz and Kendrew won their prize, the Nobel Prize for Physiology or Medicine went to Francis Crick, James Watson and Maurice Wilkins “for their discoveries concerning the molecular structure of nucleic acids … .” Two years later (1964) Dorothy Crowfoot Hodgkin received the Nobel Prize for Chemistry for determining the crystal structures of penicillin and vitamin B12.

Crystallographic electron microscopy was developed by Sir Aaron Klug in Cambridge, who was awarded the Nobel Prize for Chemistry in 1982. Attempts to prepare crystals of membrane proteins for structural studies had long been unsuccessful, but in 1982 Hartmut Michel managed to crystallize a photosynthetic reaction center after a painstaking series of experiments. He then proceeded to determine the three-dimensional structure of this protein complex in collaboration with Johann Deisenhofer and Robert Huber, and this was published in 1985. Deisenhofer, Huber and Michel shared the Nobel Prize for Chemistry in 1988. Michel later also crystallized and determined the structure of the terminal enzyme in respiration, and his two structures have allowed detailed studies of electron transfer (cf. Sections 3.3 and 3.4) and its coupling to proton pumping, key features of the chemiosmotic mechanism for which Peter Mitchell had already received the Nobel Prize for Chemistry in 1978 (see Section 3.12). Functional and structural studies on the enzyme ATP synthase, connected to this proton pumping mechanism, were recognized with one-half of the Nobel Prize for Chemistry in 1997, shared between Paul D. Boyer and John Walker (see Section 3.12).

3.6 Inorganic and Nuclear Chemistry

Much of the progress in inorganic chemistry during the 20th century has been associated with investigations of coordination compounds, i.e., compounds with a central metal ion surrounded by a number of coordinating groups, called ligands. In 1893 Alfred Werner in Zürich presented his coordination theory, and in 1905 he summarized his investigations in this new field in a book (Neuere Anschauungen auf dem Gebiete der anorganischen Chemie), which appeared in no fewer than five editions between 1905 and 1923. Werner showed that in compounds in which a metal ion binds several other molecules (ligands), all the ligand molecules are bound directly to the central metal ion. Werner was awarded the Nobel Prize for Chemistry in 1913. Taube’s investigations of electron transfer, awarded in 1983 (see Section 3.3), were mainly carried out with coordination compounds, and vitamin B12 as well as the proteins hemoglobin and myoglobin, investigated by the Laureates Hodgkin, Perutz and Kendrew (see Section 3.5), also belong to this category.

Much inorganic chemistry in the early 1900s was a consequence of the discovery of radioactivity in 1896, for which Henri Becquerel from Paris was awarded the Nobel Prize for Physics in 1903, together with Pierre and Marie Curie. In 1911 Marie Curie received the Nobel Prize for Chemistry for her discovery of the elements radium and polonium and for the isolation of radium and studies of its compounds, and this made her the first investigator to be awarded two Nobel Prizes. The prize in 1921 went to Frederick Soddy of Oxford for his work on the chemistry of radioactive substances and on the origin of isotopes. In 1934 Frédéric Joliot and his wife Irène Joliot-Curie, the daughter of the Curies, discovered artificial radioactivity, i.e., new radioactive elements produced by the bombardment of non-radioactive elements with α-particles or neutrons. They were awarded the Nobel Prize for Chemistry in 1935 for “their synthesis of new radioactive elements”.

Many elements are mixtures of non-radioactive isotopes (see Section 3.1), and in 1934 Harold Urey of Columbia University had been given the Nobel Prize for Chemistry for his isolation of heavy hydrogen (deuterium). Urey had also separated uranium isotopes, and his work was an important basis for the investigations of Otto Hahn in Berlin. In attempts to make transuranium elements, i.e., elements with a higher atomic number than 92 (uranium), by irradiating uranium atoms with neutrons, Hahn discovered that one of the products was barium, a lighter element. Lise Meitner, at the time a refugee from Nazism living in Sweden, who had earlier worked with Hahn and had taken the initiative for the uranium bombardment experiments, provided the explanation: the uranium atom had been cleaved, and barium was one of the products. Hahn was awarded the Nobel Prize for Chemistry in 1944 “for his discovery of the fission of heavy nuclei”, and one may well ask why Meitner was not included. Hahn’s original intention with his experiments was later achieved by Edwin M. McMillan and Glenn T. Seaborg of Berkeley, who were given the Nobel Prize for Chemistry in 1951 for “discoveries in the chemistry of transuranium elements”.

The use of stable as well as radioactive isotopes has important applications, not only in chemistry, but also in fields as far apart as biology, geology and archeology. In 1943 George de Hevesy from Stockholm received the Nobel Prize for Chemistry for his work on the use of isotopes as tracers, involving studies in inorganic chemistry and geochemistry as well as of the metabolism of living organisms. The prize in 1960 was given to Willard F. Libby of the University of California, Los Angeles (UCLA), for his method to determine the age of various objects (of geological or archeological origin) by measurements of the radioactive isotope carbon-14.
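
As a rough illustration of the arithmetic behind Libby's dating method (the figures are standard textbook values rather than taken from the essay): carbon-14 decays exponentially, so the age t of a sample follows from the fraction of its original carbon-14 that remains,

t = (t_{1/2}/\ln 2) \cdot \ln(N_0/N)

where t_{1/2} ≈ 5,700 years is the half-life of carbon-14. A sample retaining half of its original carbon-14 activity is thus about 5,700 years old, and one retaining a quarter is about 11,400 years old.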

3.7 General Organic Chemistry

Contributions in organic chemistry have led to more Nobel Prizes for Chemistry than work in any other of the traditional branches of chemistry. Like the first prize in this area, that to Emil Fischer in 1902 (see Section 2), most of them have, however, been awarded for advances in the chemistry of natural products and will be treated separately (Section 3.9). Another large group, preparative organic chemistry, has also been given its own section (Section 3.8), and here only the prizes for more general contributions to organic chemistry will be discussed.

In 1969 the Nobel Prize for Chemistry went to Sir Derek H. R. Barton from London and Odd Hassel from Oslo for developing the concept of conformation, i.e. the different spatial arrangements that a molecule can adopt through rotation of chemical groups around single bonds. This stereochemical concept rests on van’t Hoff’s original suggestion of the tetrahedral arrangement of the four valences of the carbon atom (see Section 2), and most organic molecules exist in two or more stable conformations.

The Nobel Prize for Chemistry in 1975 to Sir John Warcup Cornforth of the University of Sussex and Vladimir Prelog of ETH in Zürich was also based on research in stereochemistry. Not only can a compound have more than one geometric form, but chemical reactions can also have specificity in their stereochemistry, thereby forming a product with a particular three-dimensional arrangement of the atoms. This is especially true of reactions in living organisms, and Cornforth has mainly studied enzyme-catalyzed reactions, so his work borders onto biochemistry (Section 3.12). One of Prelog’s main contributions concerns chiral molecules, i.e. molecules that have two forms differing from one another as the right hand does from the left. Stereochemically specific reactions have great practical importance, as many drugs, for example, are active only in one particular geometric form.

Organometallic compounds constitute a group of organic molecules containing one or more carbon-metal bonds, and they are thus the organic counterpart of Werner’s inorganic coordination compounds. In 1952 Ernst Otto Fischer and Sir Geoffrey Wilkinson independently described a completely new type of organometallic molecule, the so-called sandwich compounds, in which a metal ion is bound not to a single carbon atom but is “sandwiched” between two aromatic organic molecules. Fischer and Wilkinson shared the Nobel Prize for Chemistry in 1973.

3.8 Preparative Organic Chemistry

One of the chief goals of the organic chemist is to be able to synthesize increasingly complex compounds of carbon in combination with various other elements. The first Nobel Prize for Chemistry recognizing pioneering work in preparative organic chemistry was that to Victor Grignard from Nancy and Paul Sabatier from Toulouse in 1912. Grignard had discovered that organic halides can form compounds with magnesium, reagents that have since found wide use in organic synthesis. Sabatier was given the prize for developing a method to hydrogenate organic compounds in the presence of metallic catalysts. The prize in 1950 was presented to Otto Diels from Kiel and Kurt Alder from Cologne “for their discovery and development of the diene synthesis”, first described in 1928, by which organic compounds containing two double bonds (“dienes”) can be used to synthesize many cyclic organic substances.

The German organic chemist Hans Fischer from Munich had already done significant work on the structure of hemin, the organic pigment in hemoglobin, when he synthesized it from simpler organic molecules in 1928. He also contributed much to the elucidation of the structure of chlorophyll, and for these important achievements he was awarded the Nobel Prize for Chemistry in 1930 (cf. Section 3.5). He finished his determination of the structure of chlorophyll in 1935, and by the time of his death he had almost completed its synthesis as well.

Robert Burns Woodward from Harvard is rightly considered the founder of the most advanced, modern art of organic synthesis. He designed methods for the total synthesis of a large number of complicated natural products, for example, cholesterol, chlorophyll and vitamin B12. He received the Nobel Prize for Chemistry in 1965, and he would probably have received a second chemistry prize in 1981 for his part in the formulation of the Woodward-Hoffmann rules (see Section 3.4), had it not been for his early death.

The Nobel Prize for Chemistry in 1984 was given to Robert Bruce Merrifield of Rockefeller University “for his development of methodology for chemical synthesis on a solid matrix”, a methodology applied in particular to the synthesis of large peptides and small proteins.

3.9 Chemistry of Natural Products

The synthesis of complex organic molecules must be based on detailed knowledge of their structure. Early work on plant pigments was carried out by Richard Willstätter, a student of Adolf von Baeyer from Munich (see Section 2). Willstätter showed a structural relatedness between chlorophyll and hemin, and he demonstrated that chlorophyll contains magnesium as an integral component. He also carried out pioneering investigations on other plant pigments, such as the carotenoids, and he was awarded the Nobel Prize for Chemistry in 1915 for these achievements. Willstätter’s work laid the ground for the synthetic accomplishments of Hans Fischer (see Section 3.8). In addition, Willstätter contributed to the understanding of enzyme reactions.

The prizes for 1927 and 1928, awarded to Heinrich Otto Wieland from Munich and Adolf Windaus from Göttingen respectively, were both presented at the Nobel ceremony in 1928. These two chemists had done closely related work on the structure of steroids. The award to Wieland was primarily for his investigations of bile acids, whereas Windaus was recognized mainly for his work on cholesterol and his demonstration of the steroid nature of vitamin D. As early as 1912, before his prize-winning work, Wieland had formulated a theory for biological oxidation, according to which removal of hydrogen (dehydrogenation) rather than reaction with oxygen is the dominating process.

Investigations on vitamins were recognized in 1937 and 1938 with the prizes to Sir Norman Haworth from Birmingham and Paul Karrer from Zürich and to Richard Kuhn from Heidelberg. Haworth did outstanding work in carbohydrate chemistry, establishing the ring structure of glucose. He was the first chemist to synthesize vitamin C, and this is the basis for the present large-scale production of this nutrient. Haworth shared the prize with Karrer, who determined the structure of carotene and of vitamin A. Kuhn also worked on carotenoids, and he published the structure of vitamin B2 at the same time as Karrer. He also isolated vitamin B6. In 1939 the Nobel Prize for Chemistry was shared between Adolf Butenandt from Berlin and Leopold Ruzicka (1887-1976) of ETH, Zurich. Butenandt was recognized “for his work on sex hormones”, having isolated estrone, progesterone and androsterone. Ruzicka synthesized androsterone and also testosterone.

The awards for outstanding work in natural-product chemistry continued after World War II. In 1947 Sir Robert Robinson from Oxford received the prize for his studies on plant substances, particularly alkaloids, such as morphine. Robinson also synthesized steroid hormones, and he elucidated the structure of penicillin. Many hormones are of a polypeptide nature, and in 1955 Vincent du Vigneaud of Cornell University was given the prize for his synthesis of two such hormones, vasopressin and oxytocin. Finally, in this area, Alexander R. Todd (Lord Todd since 1962) was recognized in 1957 “for his work on nucleotides and nucleotide co-enzymes”. Todd had synthesized ATP (adenosine triphosphate) and ADP (adenosine diphosphate), the main energy carriers in living cells, and he determined the structure of vitamin B12 (cf. Section 3.5) and of FAD (flavin-adenine dinucleotide).

3.10 Analytical Chemistry and Separation Science

A prize in analytical chemistry was given to Jaroslav Heyrovsky from Prague in 1959 for his development of polarographic methods of analysis. In these a dropping mercury electrode is employed to determine current-voltage curves for electrolytes. A given ion reacts at a specific voltage, and the current is a measure of the concentration of this ion.
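
Stated a little more quantitatively (a sketch in generic notation, not drawn from the essay): the limiting, diffusion-controlled current i_d of a polarographic wave is to a good approximation proportional to the bulk concentration c of the electroactive ion, i_d = k \cdot c, so once the proportionality constant has been calibrated with standard solutions the height of the wave gives the concentration, while the potential at which the wave appears identifies the ion.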

The analysis of macromolecular constituents in living organisms requires specialized methods of separation. Ultracentrifugation was developed by The Svedberg from Uppsala a few years before he was awarded the Nobel Prize for Chemistry in 1926 “for his work on disperse systems” (see Section 3.11). Svedberg’s student, Arne Tiselius, studied the migration of protein molecules in an electric field, and with this method, named electrophoresis, he demonstrated the complex nature of blood proteins. Tiselius also refined adsorption analysis, a method first used by the Russian botanist Michail Tswett for the separation of plant pigments and named chromatography by him. In 1948 Tiselius was given the prize for these achievements. A few years later (1952) Archer J.P. Martin from London and Richard L.M. Synge from Bucksburn (Scotland) shared the prize “for their invention of partition chromatography”, and this method was a major tool in many biochemical investigations later awarded with Nobel Prizes (see Section 3.12).

3.11 Polymers and Colloids

The Svedberg, who received the Nobel Prize for Chemistry in 1926, also investigated gold sols. He used Zsigmondy’s ultramicroscope to study the Brownian movement of colloidal particles, so named after the Scottish botanist Robert Brown, and confirmed a theory developed by Albert Einstein in 1905 and, independently, by M. Smoluchowski. His greatest achievement was, however, the construction of the ultracentrifuge, with which he studied not only the particle size distribution in gold sols but also determined the molecular weight of proteins, for example, hemoglobin. In the same year that Svedberg received his prize, the Nobel Prize for Physics was awarded to Jean Baptiste Perrin of the Sorbonne for developing equilibrium sedimentation in colloidal solutions, a method which Svedberg later perfected in his ultracentrifuge. Svedberg’s investigations with the ultracentrifuge and Tiselius’s electrophoresis studies (see Section 3.10) were instrumental in establishing that protein molecules have a unique size and structure, and this was a prerequisite for Sanger’s determination of their amino-acid sequence and the crystallographic work of Kendrew and Perutz (see Section 3.5).
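
How a spinning centrifuge yields a molecular weight deserves a word of explanation (the relation below is the standard Svedberg equation, added for orientation rather than taken from the essay): combining the sedimentation coefficient s measured in the ultracentrifuge with the diffusion coefficient D gives

M = sRT / [D(1 - \bar{v}\rho)]

where R is the gas constant, T the absolute temperature, \bar{v} the partial specific volume of the protein and \rho the density of the solvent. It was through relations of this kind that Svedberg could assign a definite molecular weight to a protein such as hemoglobin.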

3.12 Biochemistry

The second Nobel Prize for discoveries in biochemistry came in 1929, when Sir Arthur Harden from London and Hans von Euler-Chelpin from Stockholm shared the prize for investigations of sugar fermentation, which formed a direct continuation of Buchner’s work awarded in 1907. With his young co-worker, William John Young, Harden had shown in 1906 that fermentation requires a dialysable substance, called co-zymase, which is not destroyed by heat. Harden and Young also demonstrated that the process stops before all sugar (glucose) has been used up but starts again on addition of inorganic phosphate, and they suggested that hexose phosphates are formed in the early steps of fermentation. Von Euler had done important work on the structure of co-zymase, shown to be nicotinamide adenine dinucleotide (NAD, earlier called DPN). Since the number of Laureates for a prize can be as many as three, it may be asked why Young was not included in the award; von Euler’s work on co-zymase had, however, been published together with Karl Myrbäck, so including Young would in fairness have required four Laureates, and the number is limited to three.

The next biochemical Nobel Prize was given in 1946 for work in the protein field. James B. Sumner of Cornell University received half the prize “for his discovery that enzymes can be crystallized” and John H. Northrop together with Wendell M. Stanley, both of the Rockefeller Institute, shared the other half “for their preparation of enzymes and virus proteins in a pure form”. Sumner had in 1926 crystallized an enzyme, urease, from jack beans and suggested that the crystals were the pure protein. His claim was, however, greeted with great scepticism, and the crystals were suggested to be inorganic salts with the enzyme adsorbed or occluded. Just a few years after Sumner’s discovery, however, Northrop managed to crystallize three digestive enzymes, pepsin, trypsin and chymotrypsin, and by painstaking experiments showed them to be pure proteins. Stanley started his attempts to purify virus proteins in the 1930s, but not until 1945 did he obtain virus crystals, and this then made it possible to show that viruses are complexes of protein and nucleic acid. The pioneering studies of these three investigators form the basis for the enormous number of new crystal structures of biological macromolecules published in the second half of the 20th century (cf. Section 3.5).

Several Nobel Prizes for Chemistry have been awarded for work on photosynthesis and respiration, the two main processes in the energy metabolism of living organisms (cf. Section 3.5). In 1961 Melvin Calvin of Berkeley received the prize for elucidating carbon dioxide assimilation in plants. With the aid of carbon-14 (cf. Section 3.6) Calvin had shown that carbon dioxide is fixed in a cyclic process involving several enzymes. Peter Mitchell of the Glynn Research Laboratories in England was awarded in 1978 for his formulation of the chemiosmotic theory. According to this theory, electron transfer (cf. Sections 3.3 and 3.4) in the membrane-bound enzyme complexes of both respiration and photosynthesis is coupled to proton translocation across the membranes, and the electrochemical gradient thus created is used to drive the synthesis of ATP (adenosine triphosphate), the energy storage molecule in all living cells. Paul D. Boyer of UCLA and John E. Walker of the MRC Laboratory in Cambridge shared one-half of the 1997 prize for their elucidation of the mechanism of ATP synthesis; the other half of the prize went to Jens C. Skou in Aarhus for the first discovery of an ion-transporting enzyme. Walker had determined the crystal structure of ATP synthase, and this structure confirmed a mechanism earlier proposed by Boyer, mainly on the basis of isotopic studies.

Luis F. Leloir from Buenos Aires was awarded in 1970 “for the discovery of sugar nucleotides and their role in the biosynthesis of carbohydrates”. In particular, Leloir had elucidated the biosynthesis of glycogen, the chief sugar reserve in animals and many microorganisms. Two years later one half of the prize went to Christian B. Anfinsen of the NIH, and the other half was shared by Stanford Moore and William H. Stein, both from Rockefeller University, for fundamental work in protein chemistry. Anfinsen had shown, with the enzyme ribonuclease, that the information needed for a protein to assume a specific three-dimensional structure is inherent in its amino-acid sequence, and this discovery was the starting point for studies of the mechanism of protein folding, one of the major areas of present-day biochemical research. Moore and Stein had determined the amino-acid sequence of ribonuclease, but they received the prize for discovering anomalous properties of functional groups in the enzyme’s active site, which are a result of the protein fold.

Naturally a number of Nobel Prizes for Chemistry have been given for work in the nucleic acid field. In 1980 Paul Berg of Stanford received one half of the prize for studies of recombinant DNA, i.e. a molecule containing parts of DNA from different species, and the other half was shared by Walter Gilbert from Harvard and Frederick Sanger (see Section 3.5) for developing methods for the determination of the base sequences of nucleic acids. Berg’s work provides the basis of genetic engineering, which has led to the large biotechnology industry. Base sequence determinations are essential steps in recombinant-DNA technology, which is the rationale for Gilbert and Sanger sharing the prize with Berg.

Sidney Altman of Yale and Thomas R. Cech of the University of Colorado shared the prize in 1989 “for their discovery of the catalytic properties of RNA”. The central dogma of molecular biology is: DNA → RNA → protein (enzyme). The discovery that not only enzymes but also RNA possesses catalytic properties has led to new ideas about the origin of life. The 1993 prize was shared by Kary B. Mullis from La Jolla and Michael Smith from Vancouver, both of whom made important contributions to DNA technology. Mullis developed the PCR (“polymerase chain reaction”) technique, which makes it possible to replicate a specific DNA segment millions of times, even when it is embedded in a complicated genetic material. Smith’s work forms the basis for site-directed mutagenesis, a technique by which a specific amino acid in a protein can be changed, thereby illuminating its functional role.
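
A back-of-the-envelope calculation shows why PCR reaches "millions of times" (ideal doubling in every cycle is assumed; the numbers are illustrative, not from the essay): each cycle copies every strand present, so after n cycles a single target segment has grown to roughly

N \approx 2^{n}

copies. Twenty cycles therefore yield about 10^6 copies and thirty cycles about 10^9, which is what makes one specific DNA segment detectable against a complicated genetic background.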

4. Concluding Remarks

The first eighty years of Nobel Prizes for Chemistry outline the development of modern chemistry. The prizes cover a broad spectrum of the basic chemical sciences, from theoretical chemistry to biochemistry, as well as a number of contributions to applied chemistry. Organic chemistry dominates with no fewer than 25 awards. This is not surprising, since the special valence properties of carbon result in an almost infinite variation in the structure of organic compounds. A large number of the prizes in organic chemistry were given for investigations of the chemistry of natural products of increasing complexity, and these have led to pharmaceutical development.

As many as 11 prizes have been awarded for biochemical discoveries. The first biochemical prize was given as early as 1907 (Buchner), but only three awards in this area came in the first half of the century, illustrating the explosive growth of biochemistry in recent decades (8 prizes in 1970-1997). At the other end of the chemical spectrum, physical chemistry, including chemical thermodynamics and kinetics, accounts for 14 prizes, and there have also been 6 prizes in theoretical chemistry. Chemical structure is a large area with 8 prizes, including awards for methodological developments as well as for the determination of the structure of large biological molecules or molecular complexes. Industrial chemistry was first recognized in 1931 (Bergius, Bosch), but many more recent prizes for basic contributions lie close to industrial applications.
