
From High-Throughput Assay to Systems Biology: New Tools for Drug Discovery

Curator: Stephen J. Williams, PhD

Marc W. Kirschner*

Department of Systems Biology
Harvard Medical School

Boston, Massachusetts 02115

With the new excitement about systems biology, there is understandable interest in a definition. This has proven somewhat difficult. Scientific fields, like species, arise by descent with modification, so in their earliest forms even the founders of great dynasties are only marginally different than their sister fields and species. It is only in retrospect that we can recognize the significant founding events. Before embarking on a definition of systems biology, it may be worth remembering that confusion and controversy surrounded the introduction of the term “molecular biology,” with claims that it hardly differed from biochemistry. Yet in retrospect molecular biology was new and different. It introduced both new subject matter and new technological approaches, in addition to a new style.

As a point of departure for systems biology, consider the quintessential experiment in the founding of molecular biology, the one gene one enzyme hypothesis of Beadle and Tatum. This experiment first connected the genotype directly to the phenotype on a molecular level, although efforts in that direction can certainly be found in the work of Archibald Garrod, Sewall Wright, and others. Here a protein (in this case an enzyme) is seen to be a product of a single gene, and a single function; the completion of a specific step in amino acid biosynthesis is the direct result. It took the next 30 years to fill in the gaps in this process. Yet the one gene one enzyme hypothesis looks very different to us today. What is the function of tubulin, of PI-3 kinase or of rac? Could we accurately predict the phenotype of a nonlethal mutation in these genes in a multicellular organism? Although we can connect structure to the gene, we can no longer infer its larger purpose in the cell or in the organism. There are too many purposes; what the protein does is defined by context. The context also includes a history, either developmental or physiological. Thus the behavior of the Wnt signaling pathway depends on the previous lineage, the “where and when” questions of embryonic development. Similarly the behavior of the immune system depends on previous experience in a variable environment. All of these features stress how inadequate an explanation for function we can achieve solely by trying to identify genes (by annotating them!) and characterizing their transcriptional control circuits.

That we are at a crossroads in how to explore biology is not at all clear to many. Biology is hardly in its dotage; the process of discovery seems to have been perfected, accelerated, and made universally applicable to all fields of biology. With the completion of the human genome and the genomes of other species, we have a glimpse of many more genes than we ever had before to study. We are like naturalists discovering a new continent, enthralled with the diversity itself. But we have also at the same time glimpsed the finiteness of this list of genes, a disturbingly small list. We have seen that the diversity of genes cannot approximate the diversity of functions within an organism. In response, we have argued that combinatorial use of small numbers of components can generate all the diversity that is needed. This has had its recent incarnation in the simplistic view that the rules of cis-regulatory control on DNA can directly lead to an understanding of organisms and their evolution. Yet this assumes that the gene products can be linked together in arbitrary combinations, something that is not assured in chemistry. It also downplays the significant regulatory features that involve interactions between gene products, their localization, binding, posttranslational modification, degradation, etc. The big question to understand in biology is not regulatory linkage but the nature of biological systems that allows them to be linked together in many nonlethal and even useful combinations. More and more we come to realize that understanding the conserved genes and their conserved circuits will require an understanding of their special properties that allow them to function together to generate different phenotypes in different tissues of metazoan organisms. These circuits may have certain robustness, but more important they have adaptability and versatility. The ease of putting conserved processes under regulatory control is an inherent design feature of the processes themselves. Among other things it loads the deck in evolutionary variation and makes it more feasible to generate useful phenotypes upon which selection can act.

Systems biology offers an opportunity to study how the phenotype is generated from the genotype and with it a glimpse of how evolution has crafted the phenotype. One aspect of systems biology is the development of techniques to examine broadly the level of protein, RNA, and DNA on a gene by gene basis and even the posttranslational modification and localization of proteins. In a very short time we have witnessed the development of high-throughput biology, forcing us to consider cellular processes in toto. Even though much of the data is noisy and today partially inconsistent and incomplete, this has been a radical shift in the way we tear apart problems one interaction at a time. When coupled with gene deletions by RNAi and classical methods, and with the use of chemical tools tailored to proteins and protein domains, these high-throughput techniques become still more powerful.

High-throughput biology has opened up another important area of systems biology: it has brought us out into the field again or at least made us aware that there is a world outside our laboratories. Our model systems have been chosen intentionally to be of limited genetic diversity and examined in a highly controlled and reproducible environment. The real world of ecology, evolution, and human disease is a very different place. When genetics separated from the rest of biology in the early part of the 20th century, most geneticists sought to understand heredity and chose to study traits in the organism that could be easily scored and could be used to reveal genetic mechanisms. This was later extended to powerful effect to use genetics to study cell biological and developmental mechanisms. Some geneticists, including a large school in Russia in the early 20th century, continued to study the genetics of natural populations, focusing on traits important for survival. That branch of genetics is coming back strongly with the power of phenotypic assays on the RNA and protein level. As human beings we are most concerned not with using our genetic misfortunes to unravel biology’s complexity (important as that is) but with the role of our genetics in our individual survival. The context for understanding this is still not available, even though the data are now coming in torrents, for many of the genes that will contribute to our survival will have small quantitative effects, partially masked or accentuated by other genetic and environmental conditions. To understand the genetic basis of disease will require not just mapping these genes but an understanding of how the phenotype is created in the first place and the messy interactions between genetic variation and environmental variation.

Extracts and explants are relatively accessible to synthetic manipulation. Next there is the explicit reconstruction of circuits within cells or the deliberate modification of those circuits. This has occurred for a while in biology, but the difference is that now we wish to construct or intervene with the explicit purpose of describing the dynamical features of these synthetic or partially synthetic systems. There are more and more tools to intervene and more and more tools to measure. Although these fall short of total descriptions of cells and organisms, the detailed information will give us a sense of the special life-like processes of circuits, proteins, cells in tissues, and whole organisms in their environment. This meso-scale systems biology will help establish the correspondence between molecules and large-scale physiology.

You are probably running out of patience for some definition of systems biology. In any case, I do not think the explicit definition of systems biology should come from me but should await the words of the first great modern systems biologist. She or he is probably among us now. However, if forced to provide some kind of label for systems biology, I would simply say that systems biology is the study of the behavior of complex biological organization and processes in terms of the molecular constituents. It is built on molecular biology in its special concern for information transfer, on physiology for its special concern with adaptive states of the cell and organism, on developmental biology for the importance of defining a succession of physiological states in that process, and on evolutionary biology and ecology for the appreciation that all aspects of the organism are products of selection, a selection we rarely understand on a molecular level. Systems biology attempts all of this through quantitative measurement, modeling, reconstruction, and theory. Systems biology is not a branch of physics but differs from physics in that the primary task is to understand how biology generates variation. No such imperative to create variation exists in the physical world. It is a new principle that Darwin understood and upon which all of life hinges. That sounds different enough for me to justify a new field and a new name. Furthermore, the success of systems biology is essential if we are to understand life; its success is far from assured: a good field for those seeking risk and adventure.

Source: Kirschner, M.W. “The Meaning of Systems Biology.” Cell, Vol. 121, 503–504, May 20, 2005, DOI 10.1016/j.cell.2005.05.005

Old High-throughput Screening, Once the Gold Standard in Drug Development, Gets a Systems Biology Facelift

From Phenotypic Hit to Chemical Probe: Chemical Biology Approaches to Elucidate Small Molecule Action in Complex Biological Systems

Quentin T. L. Pasquer, Ioannis A. Tsakoumagkos and Sascha Hoogendoorn 

Molecules 2020, 25(23), 5702; https://doi.org/10.3390/molecules25235702

Abstract

Biologically active small molecules have a central role in drug development, and as chemical probes and tool compounds to perturb and elucidate biological processes. Small molecules can be rationally designed for a given target, or a library of molecules can be screened against a target or phenotype of interest. Especially in the case of phenotypic screening approaches, a major challenge is to translate the compound-induced phenotype into a well-defined cellular target and mode of action of the hit compound. There is no “one size fits all” approach, and recent years have seen an increase in available target deconvolution strategies, rooted in organic chemistry, proteomics, and genetics. This review provides an overview of advances in target identification and mechanism of action studies, describes the strengths and weaknesses of the different approaches, and illustrates the need for chemical biologists to integrate and expand the existing tools to increase the probability of evolving screen hits to robust chemical probes.

5.1.5. Large-Scale Proteomics

While FITExP is based on protein expression regulation during apoptosis, a study by Ruprecht et al. showed that proteomic changes are induced by both cytotoxic and non-cytotoxic compounds and can be detected by mass spectrometry to give information on a compound’s mechanism of action. They developed a large-scale proteome-wide mass spectrometry analysis platform for MOA studies, profiling five lung cancer cell lines with over 50 drugs. Aggregation analysis over the different cell lines and the different compounds showed that one-quarter of the drugs changed the abundance of their protein target. This approach allowed target confirmation of molecular degraders such as PROTACs or molecular glues. Finally, this method yielded unexpected off-target mechanisms for the MAP2K1/2 inhibitor PD184352 and the ALK inhibitor ceritinib [97]. While such a mapping approach clearly provides a wealth of information, it might not be easily attainable for groups that are not equipped for high-throughput endeavors.

All-in-all, mass spectrometry methods have gained a lot of traction in recent years and have been successfully applied for target deconvolution and MOA studies of small molecules. As with all high-throughput methods, challenges lie in the accessibility of the instruments (both from a time and cost perspective) and data analysis of complex and extensive data sets.

5.2. Genetic Approaches

Both label-based and mass spectrometry proteomic approaches are based on the physical interaction between a small molecule and a protein target, and focus on the proteome for target deconvolution. It has long been realized that genetics provides an alternative avenue to understand a compound’s action, either through precise modification of protein levels, or by inducing protein mutations. First realized in yeast as a genetically tractable organism over 20 years ago, recent advances in genetic manipulation of mammalian cells have opened up important opportunities for target identification and MOA studies through genetic screening in relevant cell types [98]. Genetic approaches can be roughly divided into two main areas, with the first centering on the identification of mutations that confer compound resistance (Figure 3a), and the second on genome-wide perturbation of gene function and the concomitant changes in sensitivity to the compound (Figure 3b). While both methods can be used to identify or confirm drug targets, the latter category often provides many additional insights into the compound’s mode of action.

Figure 3. Genetic methods for target identification and mode of action studies. Schematic representations of (a) resistance cloning, and (b) chemogenetic interaction screens.

5.2.1. Resistance Cloning

The “gold standard” in drug target confirmation is to identify mutations in the presumed target protein that render it insensitive to drug treatment. Conversely, different groups have sought to use this principle as a target identification method based on the concept that cells grown in the presence of a cytotoxic drug will either die or develop mutations that will make them resistant to the compound. With recent advances in deep sequencing it is now possible to scan the transcriptome [99] or genome [100] of the cells for resistance-inducing mutations. Genes that are mutated are then hypothesized to encode the protein target. For this approach to be successful, there are two initial requirements: (1) the compound needs to be cytotoxic for resistant clones to arise, and (2) the cell line needs to be genetically unstable for mutations to occur in a reasonable timeframe.

In 2012, the Kapoor group demonstrated in a proof-of-concept study that resistance cloning in mammalian cells, coupled to transcriptome sequencing (RNA-seq), yields the known polo-like kinase 1 (PLK1) target of the small molecule BI 2536. For this, they used the cancer cell line HCT-116, which is deficient in mismatch repair and consequently prone to mutations. They generated and sequenced multiple resistant clones, and clustered the clones based on similarity. PLK1 was the only gene that was mutated in multiple groups. Of note, one of the groups did not contain PLK1 mutations, but rather developed resistance through upregulation of ABCB1, a drug efflux transporter, which is a general and non-specific resistance mechanism [101]. In a follow-up study, they optimized their pipeline “DrugTargetSeqR” by counter-screening for these types of multidrug resistance mechanisms so that such clones were excluded from further analysis (Figure 3a). Furthermore, they used CRISPR/Cas9-mediated gene editing to determine which mutations were sufficient to confer drug resistance, and as independent validation of the biochemical relevance of the obtained hits [102].
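
To make the clone-clustering logic concrete, the following is a minimal sketch of how recurrently mutated genes might be nominated from independent resistant clone clusters while counter-screening generic multidrug resistance genes. The gene names, cluster assignments, and threshold are hypothetical illustrations, not the DrugTargetSeqR pipeline itself:

```python
from collections import defaultdict

# Hypothetical variant calls: each cluster of resistant clones -> set of mutated genes.
# In a real experiment these sets would come from RNA-seq or exome variant calling.
resistant_clusters = {
    "cluster_A": {"PLK1", "TP53", "ABCB1"},
    "cluster_B": {"PLK1", "KRAS"},
    "cluster_C": {"ABCB1"},          # efflux-pump-driven, non-specific resistance
    "cluster_D": {"PLK1", "MYC"},
}

# Genes linked to generic multidrug resistance can be counter-screened out.
multidrug_resistance_genes = {"ABCB1"}

def nominate_targets(clusters, blacklist, min_clusters=2):
    """Count in how many independent clone clusters each gene is mutated and
    return genes hit recurrently, excluding known non-specific mechanisms."""
    counts = defaultdict(int)
    for genes in clusters.values():
        for gene in genes - blacklist:
            counts[gene] += 1
    return sorted((g for g, n in counts.items() if n >= min_clusters),
                  key=lambda g: -counts[g])

print(nominate_targets(resistant_clusters, multidrug_resistance_genes))
# ['PLK1'] -> mutated in several independent clusters, candidate direct target
```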

While HCT-116 cells are a useful model cell line for resistance cloning because of their genomic instability, they may not always be the cell line of choice, depending on the compound and process that is studied. Povedano et al. used CRISPR/Cas9 to engineer mismatch repair deficiencies in Ewing sarcoma cells and small cell lung cancer cells. They found that deletion of MSH2 results in hypermutations in these normally mutationally silent cells, resulting in the formation of resistant clones in the presence of bortezomib, MLN4924, and CD437, which are all cytotoxic compounds [103]. Recently, Neggers et al. reasoned that CRISPR/Cas9-induced non-homologous end-joining repair could be a viable strategy to create a wide variety of functional mutants of essential genes through in-frame mutations. Using a tiled sgRNA library targeting 75 target genes of investigational antineoplastic drugs in HAP1 and K562 cells, they generated several KPT-9274 (an anticancer agent with unknown target)-resistant clones, and subsequent deep sequencing showed that the resistant clones were enriched in NAMPT sgRNAs. Direct target engagement was confirmed by co-crystallizing the compound with NAMPT [104]. In addition to these genetic mutation strategies, an alternative method is to grow the cells in the presence of a mutagenic chemical to induce higher mutagenesis rates [105,106].

When there is already a hypothesis on the pathway involved in compound action, the resistance cloning methodology can be extended to non-cytotoxic compounds. Sekine et al. developed a fluorescent reporter model for the integrated stress response, and used this cell line for target deconvolution of a small molecule inhibitor of this pathway (ISRIB). Reporter cells were chemically mutagenized, and ISRIB-resistant clones were isolated by flow cytometry, yielding clones with various mutations in the delta subunit of the guanine nucleotide exchange factor eIF2B [107].

While there are certainly successful examples of resistance cloning yielding a compound’s direct target as discussed above, resistance could also be caused by mutations or copy number alterations in downstream components of a signaling pathway. This is illustrated by clinical examples of acquired resistance to small molecules, nature’s way of “resistance cloning”. For example, resistance mechanisms in Hedgehog pathway-driven cancers towards the Smoothened inhibitor vismodegib include compound-resistant mutations in Smoothened, but also copy number changes in downstream activators SUFU and GLI2 [108]. It is, therefore, essential to conduct follow-up studies to confirm a direct interaction between a compound and the hit protein, as well as a lack of interaction with the mutated protein.

5.2.3. “Chemogenomics”: Examples of Gene-Drug Interaction Screens

When genetic perturbations are combined with small molecule drugs in a chemogenetic interaction screen, the effect of a gene’s perturbation on compound action is studied. Gene perturbation can render the cells resistant to the compound (suppressor interaction), or conversely, result in hypersensitivity and enhanced compound potency (synergistic interaction) [5,117,121]. Typically, cells are treated with the compound at a sublethal dose, to ensure that both types of interactions can be found in the final dataset, and often it is necessary to use a variety of compound doses (e.g., LD20, LD30, LD50) and timepoints to obtain reliable insights (Figure 3b).
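
To illustrate how such a screen might be scored, the sketch below computes gene-level log2 fold changes of sgRNA abundance between a drug-treated and a vehicle-treated population and calls suppressor versus synergistic interactions with an arbitrary cutoff. All counts, gene names, and thresholds are hypothetical, and real analyses would use dedicated screen-analysis statistics (e.g., MAGeCK-style tests) rather than a simple median fold change:

```python
import numpy as np
import pandas as pd

# Hypothetical sgRNA read counts after growth under DMSO or a sublethal drug dose.
counts = pd.DataFrame(
    {"dmso": [200, 180, 500, 520, 400, 450],
     "drug": [900, 800, 480, 510, 40, 50]},
    index=["GENE_A_sg1", "GENE_A_sg2", "GENE_B_sg1",
           "GENE_B_sg2", "GENE_C_sg1", "GENE_C_sg2"],
)

# Normalize to library size and compute per-sgRNA log2 fold changes (drug vs. DMSO).
cpm = counts / counts.sum() * 1e6
lfc = np.log2((cpm["drug"] + 1) / (cpm["dmso"] + 1))

# Aggregate sgRNAs to gene level (median) and call the interaction type.
gene_lfc = lfc.groupby(lambda name: name.rsplit("_", 1)[0]).median()
calls = pd.cut(gene_lfc, bins=[-np.inf, -1, 1, np.inf],
               labels=["synergistic (sensitizing)", "neutral", "suppressor (protective)"])
print(pd.DataFrame({"log2FC": gene_lfc.round(2), "interaction": calls}))
# GENE_A sgRNAs are enriched under drug (suppressor); GENE_C sgRNAs are depleted (sensitizing).
```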

An early example of successful coupling of a phenotypic screen and downstream genetic screening for target identification is the study of Matheny et al. They identified STF-118804 as a compound with antileukemic properties. Treatment of MV411 cells, stably transduced with a high complexity, genome-wide shRNA library, with STF-118804 (4 rounds of increasing concentration) or DMSO control resulted in a marked depletion of cells containing shRNAs against nicotinamide phosphoribosyl transferase (NAMPT) [122].

The Bassik lab subsequently directly compared the performance of shRNA-mediated knockdown versus CRISPR/Cas9-knockout screens for the target elucidation of the antiviral drug GSK983. The data coming out of both screens were complementary, with the shRNA screen resulting in hits leading to the direct compound target and the CRISPR screen giving information on cellular mechanisms of action of the compound. A reason for this is likely the level of protein depletion that is reached by these methods: shRNAs lead to decreased protein levels, which is advantageous when studying essential genes. However, knockdown may not result in a phenotype for non-essential genes, in which case a full CRISPR-mediated knockout is necessary to observe effects [123].

Another NAMPT inhibitor was identified in a CRISPR/Cas9 “haplo-insufficiency (HIP)”-like approach [124]. Haploinsufficiency profiling is a well-established system in yeast which is performed in a ~50% protein background by heterozygous deletions [125]. As there is no control over CRISPR-mediated loss of alleles, compound treatment was performed at several timepoints after addition of the sgRNA library to HCT116 cells stably expressing Cas9, in the hope that editing would be incomplete at early timepoints, resulting in residual protein levels. Indeed, NAMPT was found to be the target of phenotypic hit LB-60-OF61, especially at earlier timepoints, confirming the hypothesis that some level of protein needs to be present to identify a compound’s direct target [124]. This approach was confirmed in another study, thereby showing that direct target identification through CRISPR-knockout screens is indeed possible [126].

An alternative strategy was employed by the Weissman lab, where they combined genome-wide CRISPR-interference and -activation screens to identify the target of the phase 3 drug rigosertib. They focused on hits that had opposite action in the two screens, i.e., sensitizing in one but protective in the other, which were related to microtubule stability. In a next step, they created chemical-genetic profiles of a variety of microtubule destabilizing agents, rationalizing that compounds with the same target will have similar drug-gene interactions. For this, they made a focused library of sgRNAs, based on the highest-ranking hits in the rigosertib genome-wide CRISPRi screen, and compared the focused screen results of the different compounds. The profile for rigosertib clustered well with that of ABT-751, and rigorous target validation studies confirmed rigosertib binding to the colchicine binding site of tubulin, the same site occupied by ABT-751 [127].
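
The profile-matching step of such studies can be illustrated with a small sketch: gene-level interaction scores for each compound are compared by correlation, and compounds whose profiles cluster together are inferred to share a target. The data below are randomly generated to mimic three compounds with a shared signature plus one unrelated compound; this is only a schematic of the idea, not the published analysis:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Hypothetical gene-level interaction scores (rows: compounds, columns: sgRNA target genes)
# from a focused screen; correlated profiles suggest a shared target or mechanism.
shared_signature = rng.normal(size=50)
compounds = {
    "rigosertib":    shared_signature + rng.normal(scale=0.3, size=50),
    "ABT_751":       shared_signature + rng.normal(scale=0.3, size=50),
    "nocodazole":    shared_signature + rng.normal(scale=0.4, size=50),
    "unrelated_cpd": rng.normal(size=50),
}
names = list(compounds)
profiles = np.array([compounds[n] for n in names])

# Hierarchical clustering on the correlation distance between compound profiles.
dist = pdist(profiles, metric="correlation")
clusters = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
for name, cluster_id in zip(names, clusters):
    print(f"{name}: cluster {cluster_id}")
# The three shared-signature profiles fall into one cluster, the unrelated compound into another.
```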

From the above examples, it is clear that genetic screens hold a lot of promise for target identification and MOA studies for small molecules. The CRISPR screening field is rapidly evolving, sgRNA libraries are continuously improving and increasingly commercially available, and new tools for data analysis are being developed [128]. The challenge lies in applying these screens to study compounds that are not cytotoxic, where finding the right dosage regimen will not be trivial.

SYSTEMS BIOLOGY AND CANCER RESEARCH & DRUG DISCOVERY

Integrative Analysis of Next-Generation Sequencing for Next-Generation Cancer Research toward Artificial Intelligence

Youngjun Park, Dominik Heider and Anne-Christin Hauschild. Cancers 2021, 13(13), 3148; https://doi.org/10.3390/cancers13133148

Abstract

The rapid improvement of next-generation sequencing (NGS) technologies and their application in large-scale cohorts in cancer research led to common challenges of big data. It opened a new research area incorporating systems biology and machine learning. As large-scale NGS data accumulated, sophisticated data analysis methods became indispensable. In addition, NGS data have been integrated with systems biology to build better predictive models to determine the characteristics of tumors and tumor subtypes. Therefore, various machine learning algorithms were introduced to identify underlying biological mechanisms. In this work, we review novel technologies developed for NGS data analysis, and we describe how these computational methodologies integrate systems biology and omics data. Subsequently, we discuss how deep neural networks outperform other approaches, the potential of graph neural networks (GNN) in systems biology, and the limitations in NGS biomedical research. To reflect on the various challenges and corresponding computational solutions, we will discuss the following three topics: (i) molecular characteristics, (ii) tumor heterogeneity, and (iii) drug discovery. We conclude that machine learning and network-based approaches can add valuable insights and build highly accurate models. However, a well-informed choice of learning algorithm and biological network information is crucial for the success of each specific research question.

1. Introduction

The development and widespread use of high-throughput technologies founded the era of big data in biology and medicine. In particular, it led to an accumulation of large-scale data sets that opened a vast amount of possible applications for data-driven methodologies. In cancer, these applications range from fundamental research to clinical applications: molecular characteristics of tumors, tumor heterogeneity, drug discovery and potential treatment strategies. Therefore, data-driven bioinformatics research areas have tailored data mining technologies such as systems biology, machine learning, and deep learning, elaborated in this review paper (see Figure 1 and Figure 2). For example, in systems biology, data-driven approaches are applied to identify vital signaling pathways [1]. This pathway-centric analysis is particularly crucial in cancer research to understand the characteristics and heterogeneity of the tumor and tumor subtypes. Consequently, this high-throughput data-based analysis enables us to explore characteristics of cancers from a systems biology and a systems medicine point of view [2]. Combining high-throughput techniques, especially next-generation sequencing (NGS), with appropriate analytical tools has allowed researchers to gain a deeper systematic understanding of cancer at various biological levels, most importantly genomics, transcriptomics, and epigenetics [3,4]. Furthermore, more sophisticated analysis tools based on computational modeling have been introduced to decipher underlying molecular mechanisms in various cancer types. The increasing size and complexity of the data required the adaptation of bioinformatics processing pipelines for higher efficiency and sophisticated data mining methodologies, particularly for large-scale NGS datasets [5]. Nowadays, more and more NGS studies integrate a systems biology approach and combine sequencing data with other types of information, for instance, protein family information, pathways, or protein–protein interaction (PPI) networks, in an integrative analysis. Experimentally validated knowledge in systems biology may enhance analysis models and guide them to uncover novel findings. Such integrated analyses have been useful to extract essential information from high-dimensional NGS data [6,7]. In order to deal with the increasing size and complexity, the application of machine learning, and specifically deep learning methodologies, has become state-of-the-art in NGS data analysis.

Figure 1. Next-generation sequencing data can originate from various experimental and technological conditions. Depending on the purpose of the experiment, one or more of the depicted omics types (Genomics, Transcriptomics, Epigenomics, or Single-Cell Omics) are analyzed. These approaches led to an accumulation of large-scale NGS datasets to solve various challenges of cancer research, molecular characterization, tumor heterogeneity, and drug target discovery. For instance, The Cancer Genome Atlas (TCGA) dataset contains multi-omics data from more than ten thousand patients and has facilitated a wide variety of cancer studies for over a decade. Additionally, there are also independent tumor datasets, and they are frequently analyzed and compared with the TCGA dataset. As large-scale omics data accumulated, various machine learning techniques have been applied, e.g., graph algorithms and deep neural networks, for dimensionality reduction, clustering, or classification. (Created with BioRender.com.)

Figure 2. (a) A multitude of different types of data is produced by next-generation sequencing, for instance, in the fields of genomics, transcriptomics, and epigenomics. (b) Biological networks for biomarker validation: The in vivo or in vitro experiment results are considered ground truth. Statistical analysis on next-generation sequencing data produces candidate genes. Biological networks can validate these candidate genes and highlight the underlying biological mechanisms (Section 2.1). (c) De novo construction of Biological Networks: Machine learning models that aim to reconstruct biological networks can incorporate prior knowledge from different omics data. Subsequently, the model will predict new unknown interactions based on new omics information (Section 2.2). (d) Network-based machine learning: Machine learning models integrating biological networks as prior knowledge to improve predictive performance when applied to different NGS data (Section 2.3). (Created with BioRender.com).

Therefore, a large number of studies integrate NGS data with machine learning and propose a novel data-driven methodology in systems biology [8]. In particular, many network-based machine learning models have been developed to analyze cancer data and help to understand novel mechanisms in cancer development [9,10]. Moreover, deep neural networks (DNN) applied for large-scale data analysis improved the accuracy of computational models for mutation prediction [11,12], molecular subtyping [13,14], and drug repurposing [15,16]. 

2. Systems Biology in Cancer Research

Genes and their functions have been classified into gene sets based on experimental data. Our understanding of cancer has been condensed into cancer hallmarks that define the characteristics of a tumor. This collective knowledge is used for the functional analysis of unseen data. Furthermore, the regulatory relationships among genes have been investigated, and, based on these relationships, pathways can be composed. In this manner, the accumulation of public high-throughput sequencing data raised many big-data challenges and opened new opportunities and areas of application for computer science. Two of the most vibrantly evolving areas are systems biology and machine learning, which tackle different tasks such as understanding the cancer pathways [9], finding crucial genes in pathways [22,53], or predicting functions of unidentified or understudied genes [54]. Essentially, those models include prior knowledge to develop an analysis and enhance interpretability for high-dimensional data [2]. In addition to understanding cancer pathways with in silico analysis, pathway activity analysis, which incorporates two different types of data, pathways and omics data, has been developed to understand the heterogeneous characteristics of tumors and for cancer molecular subtyping. Due to their advantage in interpretability, various pathway-oriented methods have been introduced and have become useful tools to understand complex diseases such as cancer [55,56,57].

In this section, we will discuss how two related research fields, namely, systems biology and machine learning, can be integrated with three different approaches (see Figure 2), namely, biological network analysis for biomarker validation, the use of machine learning with systems biology, and network-based models.

2.1. Biological Network Analysis for Biomarker Validation

The detection of potential biomarkers indicative of specific cancer types or subtypes is a frequent goal of NGS data analysis in cancer research. For instance, a variety of bioinformatics tools and machine learning models aim at identifying lists of genes that are significantly altered on a genomic, transcriptomic, or epigenomic level in cancer cells. Typically, statistical and machine learning methods are employed to find an optimal set of biomarkers, such as single nucleotide polymorphisms (SNPs), mutations, or differentially expressed genes crucial in cancer progression. Traditionally, resource-intensive in vitro analysis was required to discover or validate those markers. Therefore, systems biology offers in silico solutions to validate such findings using biological pathways or gene ontology information (Figure 2b) [58]. Subsequently, gene set enrichment analysis (GSEA) [50] or gene set analysis (GSA) [59] can be used to evaluate whether these lists of genes are significantly associated with cancer types and their specific characteristics. GSA, for instance, is available via web services like DAVID [60] and g:Profiler [61]. Moreover, other applications use gene ontology directly [62,63]. In addition to gene-set-based analysis, there are other methods that focus on the topology of biological networks. These approaches evaluate various network structure parameters and analyze the connectivity of two genes or the size and interconnection of their neighbors [64,65]. The underlying idea is that a mutated gene will show dysfunction and can affect its neighboring genes. Thus, the goal is to find abnormalities in a specific set of genes linked with an edge in a biological network. For instance, KeyPathwayMiner can extract informative network modules in various omics data [66]. In summary, these approaches aim at predicting the effect of dysfunctional genes among neighbors according to their connectivity or distances from specific genes such as hubs [67,68]. During the past few decades, the focus of cancer systems biology extended towards the analysis of cancer-related pathways, since those pathways tend to carry more information than a gene set. Such analysis is called Pathway Enrichment Analysis (PEA) [69,70]. PEA incorporates the topology of biological networks. At the same time, however, the limited coverage of pathway data needs to be considered. Because pathway data do not yet cover all known genes, an integrative analysis of omics data can lose a significant number of genes when restricted to pathways. Genes that cannot be mapped to any pathway are called ‘pathway orphans.’ Rahmati et al. introduced a possible solution to overcome this ‘pathway orphan’ issue [71]. Ultimately, regardless of whether researchers consider gene-set- or pathway-based enrichment analysis, the performance and accuracy of both methods are highly dependent on the quality of the external gene-set and pathway data [72].
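
The statistic at the core of many of these gene-set tools can be reduced to a simple over-representation test: given a measured background, how surprising is the overlap between a candidate gene list and a pathway? A minimal sketch using the hypergeometric distribution is shown below; all gene counts are hypothetical, and this is the basic over-representation form rather than the full ranked GSEA statistic:

```python
from scipy.stats import hypergeom

# Hypothetical numbers: N genes in the measured background, K of them belong to a pathway,
# n genes were called significantly altered, and k of those candidates fall in the pathway.
N = 20000   # background genes
K = 150     # pathway genes
n = 400     # candidate (e.g., differentially expressed) genes
k = 12      # overlap between candidates and the pathway

# P(X >= k) under the hypergeometric null: is the overlap larger than expected by chance?
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"expected overlap: {n * K / N:.1f}, observed: {k}, p = {p_value:.2e}")
```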

2.2. De Novo Construction of Biological Networks

While the known fraction of existing biological networks barely scratches the surface of the whole system of mechanisms occurring in each organism, machine learning models can improve on known network structures and can guide potential new findings [73,74]. This area of research is called de novo network construction (Figure 2c), and its predictive models can accelerate experimental validation by lowering time costs [75,76]. This interplay between in silico network construction and network mining contributes to expanding our knowledge of a biological system. For instance, a gene co-expression network helps discover gene modules having similar functions [77]. Because gene co-expression networks are based on expressional changes under specific conditions, inferring a co-expression network commonly requires many samples. The WGCNA package implements a representative model using weighted correlation for network construction, which led the development of the network biology field [78]. Due to NGS developments, the analysis of gene co-expression networks subsequently moved from microarray-based to RNA-seq-based experimental data [79]. However, integration of these two types of data remains tricky. Ballouz et al. compared microarray- and NGS-based co-expression networks and found the existence of a bias originating from batch effects between the two technologies [80]. Nevertheless, such approaches are suited to find disease-specific co-expressed gene modules. Thus, various studies based on the TCGA cancer co-expression network discovered characteristics of prognostic genes in the network [81]. Accordingly, a gene co-expression network is a condition-specific network rather than a general network for an organism. Gene regulatory networks can be inferred from the gene co-expression network when various data from different conditions in the same organism are available. Additionally, with various NGS applications, we can obtain multi-modal datasets about regulatory elements and their effects, such as epigenomic mechanisms on transcription and chromatin structure. Consequently, a gene regulatory network can consist solely of protein-coding genes or of different regulatory node types such as transcription factors, inhibitors, promoter interactions, DNA methylations, and histone modifications affecting the gene expression system [82,83]. More recently, researchers were able to build networks based on a particular experimental setup. For instance, functional genomics or CRISPR technology enables the construction of high-resolution regulatory networks in an organism [84]. Beyond gene co-expression and regulatory networks, drug-target and drug-repurposing studies are active research areas focusing on the de novo construction of drug-to-target networks to allow the potential repurposing of drugs [76,85].
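
The weighted-correlation idea behind tools such as WGCNA can be sketched in a few lines: absolute pairwise gene correlations are raised to a soft-threshold power to form a weighted adjacency matrix, and genes are then clustered on the resulting dissimilarity to recover co-expression modules. The example below uses simulated expression data and is not the WGCNA implementation; the power beta and the module count are illustrative choices:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)

# Simulated expression matrix: 100 samples x 30 genes with two co-expressed blocks.
base1, base2 = rng.normal(size=(100, 1)), rng.normal(size=(100, 1))
expr = np.hstack([
    base1 + 0.5 * rng.normal(size=(100, 15)),   # module 1 (genes 0-14)
    base2 + 0.5 * rng.normal(size=(100, 15)),   # module 2 (genes 15-29)
])

# Weighted adjacency: |Pearson correlation| raised to a soft-threshold power beta.
beta = 6
adjacency = np.abs(np.corrcoef(expr, rowvar=False)) ** beta

# Cluster genes on the dissimilarity (1 - adjacency) to recover modules.
dissimilarity = 1 - adjacency
np.fill_diagonal(dissimilarity, 0)
modules = fcluster(linkage(squareform(dissimilarity, checks=False), method="average"),
                   t=2, criterion="maxclust")
print(modules)   # the two simulated gene blocks should end up in separate modules
```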

2.3. Network-Based Machine Learning

A network-based machine learning model directly integrates the insights of biological networks within the algorithm (Figure 2d) to ultimately improve predictive performance concerning cancer subtyping or susceptibility to therapy. Following the establishment of high-quality biological networks based on NGS technologies, these biological networks were suited to be integrated into advanced predictive models. In this manner, Zhang et al. categorized network-based machine learning approaches according to their usage into three groups: (i) model-based integration, (ii) pre-processing integration, and (iii) post-analysis integration [7]. Network-based models map the omics data onto a biological network, and appropriate algorithms traverse the network while considering the values of nodes and edges as well as the network topology. In pre-processing integration, pathway or other network information is commonly processed based on its topological importance. In post-analysis integration, by contrast, omics data are processed on their own before being merged with a network and interpreted. The network-based model has advantages in multi-omics integrative analysis. Due to the different sensitivity and coverage of various omics data types, a multi-omics integrative analysis is challenging. However, focusing on gene-level or protein-level information enables a straightforward integration [86,87]. Consequently, when machine learning approaches try to integrate two or more different data types to find novel biological insights, one solution is to reduce the search space to the gene or protein level and integrate the heterogeneous data types there [25,88].

In summary, using network information opens new possibilities for interpretation. However, as mentioned earlier, several challenges remain, such as the coverage issue. Current databases for biological networks do not cover the entire set of genes, transcripts, and interactions. Therefore, the use of networks can lead to loss of information for gene or transcript orphans. The following section will focus on network-based machine learning models and their application in cancer genomics. We will put network-based machine learning into the perspective of the three main areas of application, namely, molecular characterization, tumor heterogeneity analysis, and cancer drug discovery.

3. Network-Based Learning in Cancer Research

As introduced previously, the integration of machine learning with the insights of biological networks (Figure 2d) ultimately aims at improving predictive performance and interpretability concerning cancer subtyping or treatment susceptibility.

3.1. Molecular Characterization with Network Information

Various network-based algorithms are used in genomics and focus on quantifying the impact of genomic alterations. By employing prior knowledge in biological network algorithms, performance can be improved compared to non-network models. A prominent example is HotNet. The algorithm uses a heat-diffusion model on a biological network and identifies driver genes, or prognostic genes, in pan-cancer data [89]. Another study introduced a network-based stratification method to integrate somatic alterations and expression signatures with network information [90]. These approaches use network topology and network-propagation-like algorithms. Network propagation presumes that genomic alterations can affect the function of neighboring genes. Two genes will show a mutually exclusive alteration pattern if they complement each other and the function carried by the two genes is essential to the organism [91]. This exclusive pattern among genomic alterations is further investigated in cancer-related pathways. Recently, Ku et al. developed network-centric approaches and tackled robustness issues while studying synthetic lethality [92]. Although synthetic lethality was initially discovered in genetic model organisms, it helps us to understand cancer-specific mutations and their functions in tumor characteristics [91].
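
The intuition behind these propagation-based methods can be captured by a random walk with restart on an adjacency matrix: a score placed on altered genes is repeatedly spread to network neighbors so that connected regions accumulate signal. The toy example below is a generic sketch of this idea rather than the actual HotNet algorithm; the network, seed gene, and restart probability are arbitrary:

```python
import numpy as np

def propagate(adjacency, seed_scores, restart=0.5, tol=1e-8):
    """Random walk with restart: iteratively diffuse seed scores over the network."""
    transition = adjacency / adjacency.sum(axis=0, keepdims=True)   # column-normalize
    scores = seed_scores.copy()
    while True:
        updated = (1 - restart) * transition @ scores + restart * seed_scores
        if np.abs(updated - scores).max() < tol:
            return updated
        scores = updated

# Toy undirected network over five genes (indices 0-4).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

# "Heat" placed on mutated gene 0 spreads most strongly to its direct neighbors.
print(propagate(A, seed_scores=np.array([1.0, 0, 0, 0, 0])).round(3))
```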

Furthermore, in transcriptome research, network information is used to measure pathway activity and its application in cancer subtyping. For instance, when comparing the data of two or more conditions such as cancer types, GSEA, as introduced in Section 2, is a useful approach to get an overview of systematic changes [50]. It is typically used at the beginning of a data evaluation [93]. An experimentally validated gene set can provide information about how different conditions affect molecular systems in an organism. In addition to the gene sets, different approaches integrate complex interaction information into GSEA and build network-based models [70]. In contrast to GSEA, pathway activity analysis considers transcriptome and other omics data together with the structural information of a biological network. For example, PARADIGM uses pathway topology and integrates various omics in the analysis to infer a patient-specific status of pathways [94]. A benchmark study with pan-cancer data recently revealed that using network structure can improve performance [57]. In conclusion, although some data are lost due to the incompleteness of biological networks, their integration has improved performance and increased interpretability in many cases.

3.2. Tumor Heterogeneity Study with Network Information

Tumor heterogeneity can originate from two sources: clonal heterogeneity and tumor impurity. Clonal heterogeneity covers genomic alterations within the tumor [95]. As de novo mutations accumulate, the tumor acquires genomic alterations with an exclusive pattern. When these genomic alterations are projected onto pathways, it is possible to observe exclusive relationships among disease-related genes. For instance, the CoMEt and MEMo algorithms examine mutual exclusivity on protein–protein interaction networks [96,97]. Moreover, the relationship between genes can be essential for an organism. Therefore, models analyzing such alterations integrate network-based analysis [98].
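
In its simplest form, mutual exclusivity between two altered genes can be checked with a one-sided Fisher's exact test on a binary alteration matrix, asking whether co-alterations occur less often than expected by chance. The sketch below uses fabricated alteration calls purely for illustration; the published CoMEt and MEMo statistics are considerably more elaborate and network-aware:

```python
import numpy as np
from scipy.stats import fisher_exact

# Hypothetical binary alteration calls for two genes across 200 tumors, altered in a
# largely non-overlapping (mutually exclusive) fashion.
gene_a = np.zeros(200, dtype=int)
gene_b = np.zeros(200, dtype=int)
gene_a[:60] = 1          # tumors 0-59 altered in gene A
gene_b[60:110] = 1       # tumors 60-109 altered in gene B (no overlap with gene A)

# 2x2 contingency table: co-altered, A only, B only, neither.
both = int(np.sum((gene_a == 1) & (gene_b == 1)))
a_only = int(np.sum((gene_a == 1) & (gene_b == 0)))
b_only = int(np.sum((gene_a == 0) & (gene_b == 1)))
neither = int(np.sum((gene_a == 0) & (gene_b == 0)))

# One-sided test: fewer co-alterations than expected by chance suggests mutual exclusivity.
odds_ratio, p_value = fisher_exact([[both, a_only], [b_only, neither]], alternative="less")
print(f"co-altered = {both}, odds ratio = {odds_ratio:.3f}, p = {p_value:.2e}")
```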

In contrast, tumor purity depends on the tumor microenvironment, including immune-cell infiltration and stromal cells [99]. In tumor microenvironment studies, network-based models are applied, for instance, to find immune-related gene modules. Although the importance of the interaction between tumors and immune cells is well known, the detailed mechanisms are still unclear. Thus, many recent NGS studies employ network-based models to investigate the underlying mechanisms of tumor and immune reactions. For example, McGrail et al. identified a relationship between DNA damage response proteins and immune cell infiltration in cancer. The analysis is based on curated interaction pairs in a protein–protein interaction network [100]. Most recently, Darzi et al. discovered a prognostic gene module related to immune cell infiltration by using network-centric approaches [101]. Tu et al. presented a network-centric model for mining gene subnetworks beyond immune cell infiltration by taking tumor purity into account [102].

3.3. Drug Target Identification with Network Information

In drug target studies, network biology is integrated into pharmacology [103]. For instance, Yamanishi et al. developed novel computational methods to investigate the pharmacological space by integrating a drug-target protein network with genomic and chemical information. The proposed approaches investigated such drug-target network information to identify potential novel drug targets [104]. Since then, the field has continued to develop methods to study drug targets and drug responses by integrating networks with chemical and multi-omics datasets. In a recent survey study, Chen et al. compared 13 computational methods for drug response prediction; it turned out that gene expression profiles are crucial information for drug response prediction [105].

Moreover, drug-target studies are often extended to drug-repurposing studies. In cancer research, drug-repurposing studies aim to find novel interactions between non-cancer drugs and molecular features in cancer. Drug-repurposing (or repositioning) studies apply computational approaches and pathway-based models and aim at discovering potential new cancer drugs with a higher success probability than de novo drug design [16,106]. Specifically, drug-repurposing studies can consider various areas of cancer research, such as tumor heterogeneity and synthetic lethality. As an example, Lee et al. found clinically relevant synthetic lethality interactions by integrating multiple screening NGS datasets [107]. Such synthetic lethality and related drug datasets can be integrated to combine anticancer therapeutic strategies with non-cancer drug repurposing.

4. Deep Learning in Cancer Research

DNN models have developed rapidly and become more sophisticated, and they are now frequently used in all areas of biomedical research. Initially, their development was facilitated by large-scale imaging and video data. While most data sets in the biomedical field would not typically be considered big data, the rapid data accumulation enabled by NGS made the field suitable for the application of DNN models, which require a large amount of training data [108]. For instance, in 2019, Samiei et al. used TCGA-based large-scale cancer data as benchmark datasets for bioinformatics machine learning research, analogous to ImageNet in the computer vision field [109]. Subsequently, large-scale public cancer data sets such as TCGA encouraged the wide usage of DNNs in the cancer domain [110]. Over the last decade, these state-of-the-art machine learning methods have been applied to many different biological questions [111].

In addition to public cancer databases such as TCGA, the genetic information of normal tissues is stored in well-curated databases such as GTEx [112] and 1000 Genomes [113]. These databases are frequently used as control or baseline training data for deep learning [114]. Moreover, other non-curated large-scale data sources such as GEO (https://www.ncbi.nlm.nih.gov/geo/, accessed on 20 May 2021) can be leveraged to tackle critical aspects of cancer research. They store large-scale biological data produced under various experimental setups (Figure 1). Therefore, an integration of GEO data with other data requires careful preprocessing. Overall, the increasing number of datasets facilitates the development of current deep learning in bioinformatics research [115].

4.1. Challenges for Deep Learning in Cancer Research

Many studies in biology and medicine used NGS and produced large amounts of data during the past few decades, moving the field into the big data era. Nevertheless, researchers still face a lack of data, in particular when investigating rare diseases or disease states. Researchers have developed a manifold of potential solutions to overcome this lack-of-data challenge, such as imputation, augmentation, and transfer learning (Figure 3b). Data imputation aims at handling data sets with missing values [116]. It has been studied on various NGS omics data types to recover missing information [117]. It is known that gene expression levels can be altered by different regulatory elements, such as DNA-binding proteins, epigenomic modifications, and post-transcriptional modifications. Therefore, various models integrating such regulatory schemes have been introduced to impute missing omics data [118,119]. Some DNN-based models aim to predict gene expression changes based on genomic or epigenomic alterations. For instance, TDimpute aims at generating missing RNA-seq data by training a DNN on methylation data. The authors used TCGA and TARGET (https://ocg.cancer.gov/programs/target/data-matrix, accessed on 20 May 2021) data as proof of concept of the applicability of DNNs for data imputation in a multi-omics integration study [120]. Because this integrative model can exploit information at different levels of regulatory mechanisms, it can build a more detailed model and achieve better performance than a model built on a single-omics dataset [117,121]. The generative adversarial network (GAN) is a DNN architecture for generating simulated data that are different from the original data but show the same characteristics [122]. GANs can impute missing omics data from other multi-omics sources. Recently, GAN algorithms have been gaining attention in single-cell transcriptomics because they have been recognized as a complementary technique to overcome the limitations of scRNA-seq [123]. In contrast to data imputation and generation, other machine learning approaches aim to cope with a limited dataset in different ways. Transfer learning or few-shot learning, for instance, aims to reduce the search space with similar but unrelated datasets and guide the model to solve a specific set of problems [124]. These approaches pre-train models on data that are similar in characteristics and type but distinct from the problem set. After pre-training, the model can be fine-tuned with the dataset of interest [125,126]. Thus, researchers are trying to introduce few-shot learning models and meta-learning approaches to omics and translational medicine. For example, Select-ProtoNet applied the Prototypical Network [127] model to TCGA transcriptome data and classified patients into two groups according to their clinical status [128]. AffinityNet predicts kidney and uterus cancer subtypes with gene expression profiles [129].
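
As a concrete illustration of the pre-train-then-fine-tune idea mentioned above, the sketch below trains a small fully connected classifier on a large synthetic "source" expression dataset, freezes the feature-extracting layers, and fine-tunes only the classification head on a much smaller "target" cohort. The data, architecture, and hyperparameters are illustrative assumptions and are not taken from any of the cited studies:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_data(n_samples, n_genes=500, shift=0.0):
    """Synthetic expression matrices with a weak class signal in the first 10 genes."""
    x = torch.randn(n_samples, n_genes) + shift
    y = (x[:, :10].mean(dim=1) > shift).long()
    return x, y

model = nn.Sequential(
    nn.Linear(500, 64), nn.ReLU(),   # feature extractor (frozen during fine-tuning)
    nn.Linear(64, 2),                # classification head (fine-tuned on the target cohort)
)
loss_fn = nn.CrossEntropyLoss()

def train(x, y, params, epochs=50, lr=1e-3):
    optimizer = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()

# 1) Pre-train on a large synthetic "source" dataset (stand-in for a public cohort).
x_src, y_src = make_data(2000)
train(x_src, y_src, model.parameters())

# 2) Freeze the feature extractor and fine-tune only the head on a small "target" cohort.
for p in model[0].parameters():
    p.requires_grad = False
x_tgt, y_tgt = make_data(60, shift=0.5)
train(x_tgt, y_tgt, model[2].parameters(), epochs=100)

with torch.no_grad():
    accuracy = (model(x_tgt).argmax(dim=1) == y_tgt).float().mean()
print(f"accuracy on the small target cohort after fine-tuning: {accuracy:.2f}")
```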

Figure 3. (a) In various studies, NGS data are transformed into different forms. The 2-D transformed form is used for the convolution layer. Omics data are transformed into pathway-level scores, GO enrichment scores, or functional spectra. (b) DNN approaches to handle the lack of data: imputation for missing data in multi-omics datasets; GANs for data imputation and in silico data simulation; transfer learning, which pre-trains the model on other datasets and fine-tunes it. (c) Various types of information in biology. (d) Graph neural network examples. A GCN is applied to aggregate neighbor information. (Created with BioRender.com).

4.2. Molecular Characterization with Network and DNN Model

DNNs have been applied in multiple areas of cancer research. For instance, a DNN model trained on TCGA cancer data can aid molecular characterization by identifying cancer driver genes. At a very early stage, Yuan et al. built DeepGene, a cancer-type classifier. They implemented data sparsity reduction methods and trained the DNN model with somatic point mutations [130]. Lyu et al. [131] and DeepGx [132] embedded a 1-D gene expression profile into a 2-D array by chromosome order to implement the convolution layer (Figure 3a). Other algorithms, such as deepDriver, use k-nearest neighbors for the convolution layer. A predefined number of neighboring gene mutation profiles was the input for the convolution layer. It employed this convolution layer in a DNN by aggregating mutation information of the k-nearest neighboring genes [11]. Instead of embedding to a 2-D image, DeepCC transformed gene expression data into functional spectra. The resulting model was able to capture molecular characteristics by training on cancer subtypes [14].
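
The chromosome-order embedding trick can be sketched as follows: a 1-D expression profile, assumed to be pre-sorted by genomic position, is padded and reshaped into a square image that a standard 2-D convolutional network can process. The layer sizes and the two-class output are illustrative assumptions and do not reproduce the cited architectures:

```python
import math
import torch
import torch.nn as nn

n_genes = 10_000                        # expression values, assumed already sorted by
expression = torch.rand(n_genes)        # chromosome and position along the genome

# Pad the 1-D profile and reshape it into a square 2-D "image" with one channel.
side = math.ceil(math.sqrt(n_genes))    # 100 x 100 grid for 10,000 genes
padded = torch.zeros(side * side)
padded[:n_genes] = expression
image = padded.reshape(1, 1, side, side)    # (batch, channels, height, width)

# A small convolutional classifier over the gene-expression "image".
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * (side // 2) * (side // 2), 2),   # e.g., two hypothetical subtypes
)
print(model(image).shape)   # torch.Size([1, 2])
```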

Another DNN model was trained to infer the tissue of origin from the single-nucleotide variant (SNV) information of metastatic tumors. The authors built a model using the TCGA/ICGC data and analyzed SNV patterns and corresponding pathways to predict the origin of cancer. They discovered that metastatic tumors retained their original cancer’s signature mutation pattern. In this context, their DNN model obtained even better accuracy than a random forest model [133] and, even more importantly, better accuracy than human pathologists [12].

4.3. Tumor Heterogeneity with Network and DNN Model

As described in Section 4.1, there are several issues arising from cancer heterogeneity, e.g., the tumor microenvironment. Thus, there are only a few applications of DNNs in intratumoral heterogeneity research. For instance, Menden et al. developed ‘Scaden’, a DNN model for the investigation of intratumor heterogeneity that deconvolves cell types in bulk-cell sequencing data. To overcome the lack of training datasets, the authors generated in silico simulated bulk-cell sequencing data based on single-cell sequencing data [134]. It is presumed that deconvolving cell types can be achieved by knowing all possible expression profiles of the cells [36]. However, this information is typically not available. Recently, to tackle this problem, single-cell sequencing-based studies were conducted. Because of technical limitations, a lot of missing data, noise, and batch effects have to be handled in single-cell sequencing data [135]. Thus, various machine learning methods were developed to process single-cell sequencing data. They aim at mapping single-cell data onto a latent space. For example, scDeepCluster implemented an autoencoder and trained it on gene-expression levels from single-cell sequencing. During the training phase, the encoder and decoder work as a denoiser. At the same time, they embed high-dimensional gene-expression profiles into lower-dimensional vectors [136]. This autoencoder-based method can produce biologically meaningful feature vectors in various contexts, from tissue cell types [137] to different cancer types [138,139].
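
A minimal autoencoder of the kind used by these embedding methods is sketched below: the encoder compresses each cell's expression profile into a low-dimensional vector that can later be clustered, while the decoder reconstructs the input, which provides the denoising behavior described above. The data are random stand-ins and the layer sizes are arbitrary; this is the generic autoencoder idea, not the published scDeepCluster model:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

n_cells, n_genes, latent_dim = 500, 2000, 16
expression = torch.rand(n_cells, n_genes)      # stand-in for normalized scRNA-seq profiles

encoder = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU(), nn.Linear(256, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, n_genes))

optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(20):
    optimizer.zero_grad()
    embeddings = encoder(expression)            # low-dimensional cell embeddings
    reconstruction = decoder(embeddings)
    loss = loss_fn(reconstruction, expression)  # reconstruction error drives the denoising
    loss.backward()
    optimizer.step()

# The 16-dimensional embeddings can be clustered (e.g., with k-means) to define cell types.
print(encoder(expression).shape)    # torch.Size([500, 16])
```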

4.4. Drug Target Identification with Networks and DNN Models

In addition to NGS datasets, large-scale anticancer drug assays have enabled the training of DNNs. Moreover, non-cancer drug response assay datasets can also be incorporated with cancer genomic data. In cancer research, a multidisciplinary approach has been widely applied to repurposing non-oncology drugs for cancer treatment. Drug repurposing is faster than de novo drug discovery. Furthermore, combination therapy with a non-oncology drug can be beneficial to overcome the heterogeneous properties of tumors [85]. The deepDR algorithm integrated ten drug-related networks and trained deep autoencoders. It used a random-walk-based algorithm to represent graph information as feature vectors. This approach integrated network analysis with a DNN model and was validated with an independent drug-disease dataset [15].

The authors of CDRscan performed an integrative analysis of cell-line-based assay datasets and other drug and genomics datasets, showing that DNN models can enhance computational models for improved drug sensitivity predictions [140]. Additionally, similar to previous network-based models, multi-omics applications of drug-targeted DNN studies can show higher prediction accuracy than single-omics methods. MOLI integrated genomic data and transcriptomic data to predict the drug responses of TCGA patients [141].

4.5. Graph Neural Network Model

In general, the advantage of using a biological network is that it can produce more comprehensive and interpretable results from high-dimensional omics data. Furthermore, in integrative multi-omics analyses, network-based integration can improve interpretability over traditional approaches. Instead of integrating a network before or after the learning step, recently developed graph neural networks (GNNs) use the biological network itself as the structure of the learning model. For instance, pathway or interactome information can be built into the architecture of a DNN and used to aggregate heterogeneous information. In a GNN, the convolution is carried out on the provided network structure of the data, which allows the model to focus on the relationships among neighboring genes. In the graph convolution layer, the convolution integrates the information of neighboring genes and learns the topology of the network (Figure 3d). By stacking such layers, the model can also aggregate information from more distant neighbors, and thus can outperform other machine learning models [142].
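
The aggregation step of a graph convolution layer is compact enough to show directly. The sketch below implements the widely used normalized form H' = ReLU(D^-1/2 (A + I) D^-1/2 H W) on a toy gene network; the adjacency matrix, features, and weights are random placeholders, so this illustrates the mechanism rather than any specific published model.

```python
# Minimal graph-convolution layer on a toy gene network.
# Illustrative only: adjacency, features, and weights are made up.
import numpy as np

rng = np.random.default_rng(0)
n_genes, in_dim, out_dim = 5, 8, 4

A = np.array([[0, 1, 1, 0, 0],      # toy gene-gene interaction network (symmetric)
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(n_genes, in_dim))     # per-gene input features (e.g., expression)
W = rng.normal(size=(in_dim, out_dim))     # learnable layer weights

A_hat = A + np.eye(n_genes)                # add self-loops so each gene keeps its own signal
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetric normalization

H_next = np.maximum(0, A_norm @ H @ W)     # aggregate neighbors, transform, apply ReLU
print(H_next.shape)                        # (5, 4): a new feature vector per gene
```

Stacking several such layers lets information propagate beyond immediate neighbors, which is the mechanism behind the multi-hop aggregation described above.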

In the context of gene expression inference, the main question is whether a gene's expression level can be explained by aggregating information from its neighboring genes. A single-gene inference study by Dutil et al. showed that a GNN model outperformed other DNN models [143]. Moreover, in cancer research, such GNN models can identify cancer-related genes with better performance than other network-based models, such as HotNet2 and MutSigCV [144]. A recent GNN study with a multi-omics integrative analysis identified 165 new cancer genes as interaction partners of known cancer genes [145]. Additionally, in the synthetic-lethality area, a dual-dropout GNN outperformed previous bioinformatics tools for predicting synthetic lethality in tumors [146]. GNNs have also been able to classify cancer subtypes based on pathway activity measures derived from RNA-seq data: Lee et al. implemented a GNN for cancer subtyping and tested it on five cancer types, selecting informative pathways and using them for subtype classification [147]. Furthermore, GNNs are receiving increasing attention in drug-repositioning studies. As described in Section 3.3, drug discovery requires integrating various networks in both chemical and genomic spaces (Figure 3d). Chemical structures, protein structures, pathways, and other multi-omics data have been used in drug-target identification and repurposing studies (Figure 3c), with each of the proposed applications specializing in a different drug-related task. Sun et al. summarized GNN-based drug discovery studies and categorized them into four classes: molecular property and activity prediction, interaction prediction, synthesis prediction, and de novo drug design. The authors also point out four challenges in GNN-mediated drug discovery. First, as described above, there is a lack of drug-related datasets. Second, current GNN models cannot fully represent the 3D structures of chemical molecules and proteins. The third challenge is integrating heterogeneous network information: drug discovery usually requires a multi-modal integrative analysis over various networks, and GNNs can improve such integrative analyses. Lastly, although GNNs operate on graphs, their stacked layers still make the models hard to interpret [148].

4.6. Shortcomings in AI and Revisiting Validity of Biological Networks as Prior Knowledge

The previous sections reviewed a variety of DNN-based approaches that perform well on numerous applications. However, deep learning is hardly a panacea for every research question. In the following, we discuss potential limitations of DNN models. In general, DNN models applied to NGS data face two significant issues: (i) data requirements and (ii) interpretability. Deep learning usually needs a large amount of training data to achieve reasonable performance, which is more difficult to obtain for biomedical omics data than, for instance, image data. Today, there are not many NGS datasets that are well curated and annotated for deep learning, which may explain why most DNN studies are concentrated in cancer research [110,149]. Moreover, deep learning models are hard to interpret and are typically treated as black boxes: the highly stacked layers obscure the model's decision-making rationale. Although methodologies for understanding and interpreting deep learning models have improved, the remaining ambiguity in DNN decision-making hinders the transition of deep learning models into translational medicine [149,150].

As described before, biological networks are employed in various computational analyses in cancer research, and the studies applying DNNs demonstrate many different ways to use such prior knowledge for systematic analyses. Before discussing GNN applications, however, the validity of biological networks within a DNN model needs to be established. The LINCS program analyzed data of 'The Connectivity Map (CMap) project' to understand the regulatory mechanisms of gene expression by inferring whole gene expression profiles from a small set of genes (https://lincsproject.org/, accessed on 20 May 2021) [151,152]. The LINCS program found that genome-wide expression levels can be inferred from only about 1,000 genes, which they called 'landmark genes'. Subsequently, Chen et al. started with these 978 landmark genes and predicted the expression levels of the remaining genes with DNN models; trained on integrated public large-scale NGS data, the DNN showed better performance than a linear regression model. The authors conclude that the performance advantage originates from the DNN's ability to model non-linear relationships between genes [153].
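
As a toy version of this inference task, one can regress a target gene's expression on 978 "landmark" inputs and compare a small neural network against ordinary linear regression. The data below are synthetic with a deliberately non-linear target, so the resulting scores only illustrate how such a comparison is set up; they are not a reproduction of the model or results of Chen et al.

```python
# Toy landmark-gene inference: predict a target gene's expression from 978
# "landmark" genes, comparing an MLP with linear regression. Synthetic data
# with a non-linear target; illustrative setup only, not the published study.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n_samples, n_landmarks = 2000, 978
X = rng.normal(size=(n_samples, n_landmarks))             # landmark expression values
w = rng.normal(size=20)
y = np.tanh(X[:, :20] @ w) + 0.1 * rng.normal(size=n_samples)  # non-linear target gene

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = LinearRegression().fit(X_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(256,), max_iter=200, random_state=0).fit(X_tr, y_tr)

print("linear regression R2:", r2_score(y_te, linear.predict(X_te)))
print("MLP R2:              ", r2_score(y_te, mlp.predict(X_te)))
```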

Following this study, Beltin et al. extensively investigated various biological networks in the same context of inferring gene expression levels. They set up a simplified representation of gene expression status and solved a binary classification task. To assess the relevance of a biological network, they compared expression levels inferred from different gene sets: neighboring genes in a PPI network, random genes, and all genes. However, in a study incorporating TCGA and GTEx datasets, the random-gene model outperformed the model built on a known biological network, such as StringDB [154]. While network-based approaches can add valuable insights to an analysis, this study shows that they cannot be regarded as a panacea, and a careful evaluation is required for each dataset and task. In particular, this result may not capture the full biological complexity because of the oversimplified problem setup, which did not consider relative changes in gene expression. Additionally, the incorporated biological networks may simply be unsuited to inferring gene expression profiles, because they mix expression-regulating interactions, non-expression-regulating interactions, and various in vivo and in vitro interactions.

“However, although recently sophisticated applications of deep learning showed improved accuracy, it does not reflect a general advancement. Depending on the type of NGS data, the experimental design, and the question to be answered, a proper approach and specific deep learning algorithms need to be considered. Deep learning is not a panacea. In general, to employ machine learning and systems biology methodology for a specific type of NGS data, a certain experimental design, a particular research question, the technology, and network data have to be chosen carefully.”

References

  1. Janes, K.A.; Yaffe, M.B. Data-driven modelling of signal-transduction networks. Nat. Rev. Mol. Cell Biol. 2006, 7, 820–828.
  2. Kreeger, P.K.; Lauffenburger, D.A. Cancer systems biology: A network modeling perspective. Carcinogenesis 2010, 31, 2–8.
  3. Vucic, E.A.; Thu, K.L.; Robison, K.; Rybaczyk, L.A.; Chari, R.; Alvarez, C.E.; Lam, W.L. Translating cancer ‘omics’ to improved outcomes. Genome Res. 2012, 22, 188–195.
  4. Hoadley, K.A.; Yau, C.; Wolf, D.M.; Cherniack, A.D.; Tamborero, D.; Ng, S.; Leiserson, M.D.; Niu, B.; McLellan, M.D.; Uzunangelov, V.; et al. Multiplatform analysis of 12 cancer types reveals molecular classification within and across tissues of origin. Cell 2014, 158, 929–944.
  5. Hutter, C.; Zenklusen, J.C. The cancer genome atlas: Creating lasting value beyond its data. Cell 2018, 173, 283–285.
  6. Chuang, H.Y.; Lee, E.; Liu, Y.T.; Lee, D.; Ideker, T. Network-based classification of breast cancer metastasis. Mol. Syst. Biol. 2007, 3, 140.
  7. Zhang, W.; Chien, J.; Yong, J.; Kuang, R. Network-based machine learning and graph theory algorithms for precision oncology. NPJ Precis. Oncol. 2017, 1, 25.
  8. Ngiam, K.Y.; Khor, W. Big data and machine learning algorithms for health-care delivery. Lancet Oncol. 2019, 20, e262–e273.
  9. Creixell, P.; Reimand, J.; Haider, S.; Wu, G.; Shibata, T.; Vazquez, M.; Mustonen, V.; Gonzalez-Perez, A.; Pearson, J.; Sander, C.; et al. Pathway and network analysis of cancer genomes. Nat. Methods 2015, 12, 615.
  10. Reyna, M.A.; Haan, D.; Paczkowska, M.; Verbeke, L.P.; Vazquez, M.; Kahraman, A.; Pulido-Tamayo, S.; Barenboim, J.; Wadi, L.; Dhingra, P.; et al. Pathway and network analysis of more than 2500 whole cancer genomes. Nat. Commun. 2020, 11, 729.
  11. Luo, P.; Ding, Y.; Lei, X.; Wu, F.X. deepDriver: Predicting cancer driver genes based on somatic mutations using deep convolutional neural networks. Front. Genet. 2019, 10, 13.
  12. Jiao, W.; Atwal, G.; Polak, P.; Karlic, R.; Cuppen, E.; Danyi, A.; De Ridder, J.; van Herpen, C.; Lolkema, M.P.; Steeghs, N.; et al. A deep learning system accurately classifies primary and metastatic cancers using passenger mutation patterns. Nat. Commun. 2020, 11, 728.
  13. Chaudhary, K.; Poirion, O.B.; Lu, L.; Garmire, L.X. Deep learning–based multi-omics integration robustly predicts survival in liver cancer. Clin. Cancer Res. 2018, 24, 1248–1259.
  14. Gao, F.; Wang, W.; Tan, M.; Zhu, L.; Zhang, Y.; Fessler, E.; Vermeulen, L.; Wang, X. DeepCC: A novel deep learning-based framework for cancer molecular subtype classification. Oncogenesis 2019, 8, 44.
  15. Zeng, X.; Zhu, S.; Liu, X.; Zhou, Y.; Nussinov, R.; Cheng, F. deepDR: A network-based deep learning approach to in silico drug repositioning. Bioinformatics 2019, 35, 5191–5198.
  16. Issa, N.T.; Stathias, V.; Schürer, S.; Dakshanamurthy, S. Machine and deep learning approaches for cancer drug repurposing. In Seminars in Cancer Biology; Elsevier: Amsterdam, The Netherlands, 2020.
  17. Weinstein, J.N.; Collisson, E.A.; Mills, G.B.; Shaw, K.R.M.; Ozenberger, B.A.; Ellrott, K.; Shmulevich, I.; Sander, C.; Stuart, J.M.; Network, C.G.A.R.; et al. The cancer genome atlas pan-cancer analysis project. Nat. Genet. 2013, 45, 1113.
  18. The ICGC/TCGA Pan-Cancer Analysis of Whole Genomes Consortium. Pan-cancer analysis of whole genomes. Nature 2020, 578, 82.
  19. King, M.C.; Marks, J.H.; Mandell, J.B. Breast and ovarian cancer risks due to inherited mutations in BRCA1 and BRCA2. Science 2003, 302, 643–646.
  20. Courtney, K.D.; Corcoran, R.B.; Engelman, J.A. The PI3K pathway as drug target in human cancer. J. Clin. Oncol. 2010, 28, 1075.
  21. Parker, J.S.; Mullins, M.; Cheang, M.C.; Leung, S.; Voduc, D.; Vickery, T.; Davies, S.; Fauron, C.; He, X.; Hu, Z.; et al. Supervised risk predictor of breast cancer based on intrinsic subtypes. J. Clin. Oncol. 2009, 27, 1160.
  22. Yersal, O.; Barutca, S. Biological subtypes of breast cancer: Prognostic and therapeutic implications. World J. Clin. Oncol. 2014, 5, 412.
  23. Zhao, L.; Lee, V.H.; Ng, M.K.; Yan, H.; Bijlsma, M.F. Molecular subtyping of cancer: Current status and moving toward clinical applications. Brief. Bioinform. 2019, 20, 572–584.
  24. Jones, P.A.; Issa, J.P.J.; Baylin, S. Targeting the cancer epigenome for therapy. Nat. Rev. Genet. 2016, 17, 630.
  25. Huang, S.; Chaudhary, K.; Garmire, L.X. More is better: Recent progress in multi-omics data integration methods. Front. Genet. 2017, 8, 84.
  26. Chin, L.; Andersen, J.N.; Futreal, P.A. Cancer genomics: From discovery science to personalized medicine. Nat. Med. 2011, 17, 297.

Use of Systems Biology in Anti-Microbial Drug Development

Genomics, Computational Biology and Drug Discovery for Mycobacterial Infections: Fighting the Emergence of Resistance. Asma Munir, Sundeep Chaitanya Vedithi, Amanda K. Chaplin and Tom L. Blundell. Front. Genet., 04 September 2020 | https://doi.org/10.3389/fgene.2020.00965

In an earlier review article (Waman et al., 2019), we discussed various computational approaches and experimental strategies for drug target identification and structure-guided drug discovery. In this review we discuss the impact of the era of precision medicine, where the genome sequences of pathogens can give clues about the choice of existing drugs, and repurposing of others. Our focus is directed toward combatting antimicrobial drug resistance with emphasis on tuberculosis and leprosy. We describe structure-guided approaches to understanding the impacts of mutations that give rise to antimycobacterial resistance and the use of this information in the design of new medicines.

Genome Sequences and Proteomic Structural Databases

In recent years, there have been many focused efforts to define the amino-acid sequences of the M. tuberculosis pan-genome and then to define the three-dimensional structures and functional interactions of these gene products. This work has revealed the essential genes of the bacterium and provided a better understanding of the genetic diversity among strains that might confer a selective advantage (Coll et al., 2018). This will help our understanding of the modes of antibiotic resistance within these strains and aid structure-guided drug discovery. However, only ∼10% of the ∼4128 proteins have experimentally determined structures.

Several databases have been developed that integrate the genomic and/or structural information linked to drug resistance in Mycobacteria (Table 1). These invaluable resources can contribute to a better understanding of the molecular mechanisms involved in drug resistance and improve the selection of potential drug targets.

There is a dearth of information related to structural aspects of proteins from M. leprae and their oligomeric and hetero-oligomeric organization, which has limited the understanding of physiological processes of the bacillus. The structures of only 12 proteins have been solved and deposited in the protein data bank (PDB). However, the high sequence similarity in protein coding genes between M. leprae and M. tuberculosis allows computational methods to be used for comparative modeling of the proteins of M. leprae. Mainly monomeric models using single template modeling have been defined and deposited in the Swiss Model repository (Bienert et al., 2017), in Modbase (Pieper et al., 2014), and in a collection with other infectious disease agents (Sosa et al., 2018). There is a need for multi-template modeling and building homo- and hetero-oligomeric complexes to better understand the interfaces, druggability and impacts of mutations.

We are now exploiting Vivace, a multi-template modeling pipeline developed in our lab for modeling the proteomes of M. tuberculosis (CHOPIN, see above) and M. abscessus [Mabellini Database (Skwark et al., 2019)], to model the proteome of M. leprae. We emphasize the need for understanding the protein interfaces that are critical to function. An example of this is the RNA-polymerase holoenzyme complex from M. leprae. We first modeled the structure of this hetero-hexamer complex and later deciphered the binding patterns of rifampin (Vedithi et al., 2018; Figures 1A,B). Rifampin is a known drug to treat tuberculosis and leprosy. Owing to high rifampin resistance in tuberculosis and emerging resistance in leprosy, we used an approach known as “Computational Saturation Mutagenesis” to identify sites on the protein that are less impacted by mutations. In this study, we were able to understand the association between the predicted impacts of mutations on the structure and phenotypic rifampin-resistance outcomes in leprosy.
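
The bookkeeping behind computational saturation mutagenesis is straightforward: for every residue position, generate all 19 possible substitutions, predict the stability change for each, and keep the most destabilizing value per position (the quantity mapped in Figure 2A). The sketch below shows only that enumeration; predict_ddg is a hypothetical placeholder returning dummy values, not the interface of mCSM or any other real predictor.

```python
# Bookkeeping sketch for computational saturation mutagenesis: enumerate all 19
# substitutions per residue and record the most destabilizing predicted change.
# `predict_ddg` is a hypothetical placeholder, NOT the real mCSM interface.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def predict_ddg(position, wild_type, mutant):
    """Placeholder stability predictor (kcal/mol); replace with a real tool."""
    return -0.01 * position - 0.05 * (AMINO_ACIDS.index(mutant) % 7)  # dummy values

def saturation_scan(sequence):
    worst_per_position = {}
    for pos, wt in enumerate(sequence, start=1):
        ddgs = [predict_ddg(pos, wt, aa) for aa in AMINO_ACIDS if aa != wt]
        worst_per_position[pos] = min(ddgs)   # most negative = most destabilizing
    return worst_per_position

scan = saturation_scan("MSKLRVA")             # toy fragment of a protein sequence
for pos, ddg in scan.items():
    print(f"position {pos}: worst predicted ddG = {ddg:.2f} kcal/mol")
```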

FIGURE 2

Figure 2. (A) Stability changes predicted by mCSM for systematic mutations in the ß-subunit of RNA polymerase in M. leprae. The maximum destabilizing effect from among all 19 possible mutations at each residue position is considered as a weighting factor for the color map that gradients from red (high destabilizing effects) to white (neutral to stabilizing effects) (Vedithi et al., 2020). (B) One of the known mutations in the ß-subunit of RNA polymerase, the S437H substitution, which resulted in a maximum destabilizing effect [-1.701 kcal/mol (mCSM)] among all 19 possibilities at this position. In the mutant, histidine (residue in green) forms hydrogen bonds with S434 and Q438, aromatic interactions with F431, and other ring-ring and π interactions with the surrounding residues which can impact the shape of the rifampin binding pocket and rifampin affinity to the ß-subunit [-0.826 log(affinity fold change) (mCSM-lig)]. Orange dotted lines represent weak hydrogen bond interactions. Ring-ring and intergroup interactions are depicted in cyan. Aromatic interactions are represented in sky-blue and carbonyl interactions in pink dotted lines. Green dotted lines represent hydrophobic interactions (Vedithi et al., 2020).

Examples of Understanding and Combatting Resistance

The availability of whole genome sequences in the present era has greatly enhanced our understanding of the emergence of drug resistance in infectious diseases like tuberculosis. The data generated by whole genome sequencing of clinical isolates can be screened for the presence of drug-resistance mutations. A preliminary in silico analysis of the mutations can then be used to prioritize experimental work to identify their nature.

FIGURE 3

Figure 3. (A) Mechanism of isoniazid activation and INH-NAD adduct formation. (B) Mutations mapped (Munir et al., 2019) on the structure of KatG (PDB ID:1SJ2; Bertrand et al., 2004).

Other articles related to Computational Biology, Systems Biology, and Bioinformatics on this online journal include:

20th Anniversary and the Evolution of Computational Biology – International Society for Computational Biology

Featuring Computational and Systems Biology Program at Memorial Sloan Kettering Cancer Center, Sloan Kettering Institute (SKI), The Dana Pe’er Lab

Quantum Biology And Computational Medicine

Systems Biology Analysis of Transcription Networks, Artificial Intelligence, and High-End Computing Coming to Fruition in Personalized Oncology

Read Full Post »

New anti-Malarial treatment

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Malaria Proteasome Inhibitors Could Reverse Parasite Drug Resistance

http://www.genengnews.com/gen-news-highlights/malaria-proteasome-inhibitors-could-reverse-parasite-drug-resistance/81252358/

 

http://www.genengnews.com/Media/images/GENHighlight/thumb_108676_web2680362491.jpg

This structure (bottom left) of the malaria parasite’s proteasome, obtained using the revolutionary Cryo-Electron Microscopy technique, enabled the design of a specific inhibitor (front) against the mosquito-borne malaria parasite (pictured at back). [University of Melbourne]

 

    With media attention recently focused on the spread of the Zika virus, it’s easy to forget about the mosquito-borne disease that has been credited with killing one out of every two people who have ever lived—malaria. Currently, close to 50 percent of the world’s population live in malaria-endemic areas, leading to 200–500 million new cases and close to 500,000 deaths annually (mostly children under the age of five).

    Adding to the complexities of trying to control this disease is that resistance to the most effective antimalarial drug, artemisinin, has developed in Southeast Asia, with fears it will soon reach Africa. Artemisinin-resistant species have spread to six countries in five years.

    A collaborative team of scientists from Stanford University, University of California, San Francisco, University of Melbourne, and the MRC in Cambridge have used cutting-edge technology to design a smarter drug to combat the resistant strain.

    “Artemisinin causes damage to the proteins in the malaria parasite that kill the human cell, but the parasite has developed a way to deal with that damage. So new drugs that work against resistant parasites are desperately needed,” explained coauthor Leann Tilley, Ph.D., professor and deputy head of biochemistry and molecular biology in the Bio21 Molecular Science and Biotechnology Institute at The University of Melbourne.

    Malaria is caused by the protozoan parasite from the genus Plasmodium. Five different species are known to cause malaria in humans, with P. falciparum infection leading to the most deaths. The parasite is transmitted through the bite of the female mosquito and ultimately ends up residing within the host’s red blood cells (RBCs)—replicating and then bursting forth to invade more RBCs in a recurrently timed cycle.

    “This penetration/replication/breakout cycle is rapid—every 48 hours—providing the opportunity for large numbers of mutations that can produce drug resistance,” said senior study author Matthew Bogyo, Ph.D., professor in the department of pathology at Stanford Medical School. “Consequently, several generations of antimalarial drugs have long since been rendered useless.”

    The compound that investigators developed targets the parasite’s proteasome—a protein degradation pathway that removes surplus or damaged proteins through a cascade of proteolytic reactions.

    “The parasite’s proteasome is like a shredder that chews up damaged or used-up proteins. Malaria parasites generate a lot of damaged proteins as they switch from one life stage to another and are very reliant on their proteasome, making it an excellent drug target,” Dr. Tilley noted.

    The scientists purified the proteasome from the malaria parasite and examined its activity against hundreds of different peptide sequences. From this, they were able to design inhibitors that selectively targeted the parasite proteasome while sparing the human host enzymes.

    The findings from this study were published recently in Nature through an article titled “Structure- and function-based design of Plasmodium-selective proteasome inhibitors.”

    Additionally, scientists at the MRC used a new technique called Single-Particle Cryo-Electron Microscopy to generate a three-dimensional, high-resolution structure of a protein, based on thousands of composite images.

    The researchers tested the new drug in red blood cells infected with parasites and found that it was as effective at killing the artemisinin-resistant parasites as the artemisinin-sensitive parasites.

    “The compounds we’ve derived can kill artemisinin-resistant parasites because those parasites have an increased need for highly efficient proteasomes,” Dr. Bogyo commented. “So, combining the proteasome inhibitor with artemisinin should make it possible to block the onset of resistance. That will, in turn, allow the continued use of that front-line malaria treatment, which has been so effective up until now.”

    “The new proteasome inhibitors actually complement artemisinin drugs,” Dr. Tilley added. “Artemisinins cause protein damage and proteasome inhibitors prevent the repair of protein damage. A combination of the two provides a double whammy and could rescue the artemisinins as antimalarials, restoring their activity against resistant parasites.”

    The scientists were excited by their results, as they may provide a much-needed strategy to combat the growing levels of resistance in this deadly pathogen. However, the researchers tempered their exuberance by noting that many more drug libraries need to be screened before clinical trials can begin.

    “The current drug is a good start, but it’s not yet suitable for humans. It needs to be able to be administered orally and needs to last a long time in the blood stream,” Dr. Tilley concluded.

Read Full Post »

Malaria Vaccine Efficacy

Curators: Larry H. Bernstein, MD, FCAP, and Aviva Lev-Ari, PhD, RN

LPBI

Malaria Vaccine Efficacy Could Rely on Parasite’s Genotype

NEW YORK (GenomeWeb) – A malaria vaccine may be more effective against parasites whose genotype matches that of the vaccine itself, according to researchers from Harvard University and the Fred Hutchinson Cancer Research Center.

Reporting this week in the New England Journal of Medicine, researchers evaluated malaria parasite genotypes in individuals enrolled in a phase III trial of GlaxoSmithKline’s vaccine, RTS,S/AS01.

https://www.genomeweb.com/sequencing-technology/malaria-vaccine-efficacy-could-rely-parasites-genotype

The vaccine was previously evaluated in a large phase III trial in Africa in more than 15,000 children and was found to confer “moderate protective efficacy against clinical disease and severe malaria that wanes over time,” according to the study authors.

The mechanism by which the vaccine confers protection is incompletely understood, although it is known to target a specific protein produced by the Plasmodium falciparum malaria parasite called circumsporozoite protein. However, the circumsporozoite protein contains regions where polymorphisms can occur, including a conserved tandem repeat with a length polymorphism of between 37 and 44 repeat units, and numerous polymorphisms within the C-terminal region of the protein.

Researchers hypothesized the vaccine might be less effective against malaria parasites with polymorphisms in those regions.

To test this theory, they used PCR and next-generation sequencing on both Illumina’s MiSeq instrument and Pacific Biosciences’ RS II. With the MiSeq, the researchers targeted and sequenced the circumsporozoite protein C-terminal region as well as a control region in children enrolled in the clinical trial who had become infected with malaria. They used the PacBio system to sequence the longer repeat region.

Over 4,000 samples were sequenced on the MiSeq and over 3,000 on the PacBio. Samples included patients at multiple time points after they received the vaccine.

Genetic data of the malaria parasite was evaluated from 1,181 kids between the ages of five and 17 months who received the RTS,S vaccine and 909 who received a control vaccine, all of whom had developed clinically confirmed malaria.

Over two-thirds of patients had “complex infections,” defined as infections founded by two or more distinct parasite lineages, the authors reported. Patients who received the RTS,S vaccine were more likely to have complex infections — 71 percent had complex infections compared to 61 percent of patients who received the control vaccine.

Looking at the relationship between polymorphisms to the C-terminal region and vaccine efficacy, the researchers found that one-year post vaccination, the C-terminal region in the malaria parasite matched that of the vaccine in 139 individuals, but was a mismatch in 1,951 individuals. Thus, cumulative vaccine efficacy against malaria with a perfect genotype match at the C-terminal site was 50.3 percent. For those without a perfect match, efficacy was 33.4 percent.

In addition, efficacy was higher immediately after receiving the vaccine. Through six months post vaccination, efficacy was 70.2 percent in individuals with a matched genotype and 56.3 percent in those with mismatched genotypes.
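
For readers unfamiliar with how such percentages are defined, vaccine efficacy in its simplest cumulative-incidence form is VE = 1 − (attack rate in vaccinated / attack rate in controls). The snippet below applies that definition to made-up case counts chosen only to land near the reported figures; the published estimates come from the trial's time-to-event analysis, not from this crude ratio.

```python
# Simplest definition of vaccine efficacy from cumulative incidence:
# VE = 1 - (attack rate in vaccinated / attack rate in controls).
# Hypothetical counts for illustration only; not the trial's actual data.
def vaccine_efficacy(cases_vax, n_vax, cases_ctrl, n_ctrl):
    attack_vax = cases_vax / n_vax
    attack_ctrl = cases_ctrl / n_ctrl
    return 1.0 - attack_vax / attack_ctrl

print(f"matched genotype:    VE = {vaccine_efficacy(50, 1000, 100, 1000):.1%}")
print(f"mismatched genotype: VE = {vaccine_efficacy(80, 1000, 120, 1000):.1%}")
```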

Looking at the relationship between the number of repeats and vaccine efficacy, the researchers found no significant association between increasing repeat number and vaccine efficacy.

The results suggest that among children between the ages of five and 17 months the RTS,S vaccine “has greater activity against malaria parasites with matched circumsporozoite protein allele than against mismatched malaria,” the authors concluded, and overall vaccine efficacy will depend on the genotype of the local parasite population.

In addition, the authors noted, “Genetic surveillance of circumsporozoite protein sequences in parasite populations could inform the development of future vaccine candidates targeting polymorphic malaria proteins.”

Read Full Post »

Secret Maoist Chinese Operation Conquered Malaria

Larry H. Bernstein, MD, FCAP, Curator

LPBI

Secret Maoist Chinese Operation Conquered Malaria — and Won a Nobel

10/07/2015 – Jia-Chen Fu, Emory University

http://www.scientificcomputing.com/articles/2015/10/secret-maoist-chinese-operation-conquered-malaria-%E2%80%94-and-won-nobel?et_cid=4866514&et_rid=535648082

http://www.scientificcomputing.com/sites/scientificcomputing.com/files/Secret_Maoist_Chinese_Operation_Conquered_Malaria_and_Won_a_Nobel_440.jpg

This photo taken September 23, 2011, and released by Xinhua News Agency on October 5, 2015, shows Chinese pharmacologist Tu Youyou posing with her trophy after winning the Lasker Award, a prestigious U.S. medical prize, in New York. Three scientists from Ireland, Japan and China won the 2015 Nobel Prize in medicine on October 5 for discovering drugs against malaria and other parasitic diseases that affect hundreds of millions of people every year. Tu was awarded the prize for discovering artemisinin, a drug that has helped significantly reduce the mortality rates of malaria patients. (Wang Chengyun/Xinhua via AP)

At the height of the Cultural Revolution, Project 523 — a covert operation launched by the Chinese government and headed by a young Chinese medical researcher by the name of Tu Youyou — discovered what has been the most powerful and effective antimalarial drug therapy to date.

Known in Chinese as qinghaosu and derived from the sweet wormwood (Artemisia annua L.), artemisinin was only one of several hundred substances Tu and her team of researchers culled from Chinese drugs and folk remedies and systematically tested in their search for a treatment to chloroquine-resistant malaria.

How Tu and her team discovered artemisinin tells us much about the continual Chinese effort to negotiate between traditional/modern and indigenous/foreign.

Indeed, contrary to popular assumptions that Maoist China was summarily against science and scientists, the Communist party-state needed the scientific elite for certain political and practical purposes.

Medicine, particularly when it also involved foreign relations, was one such area. In this case, it was the war in Vietnam and the scourge of malaria that led to the organization of Project 523.

North Vietnamese soldiers had to deal with disease as well as the enemy. manhhai, CC BY

https://62e528761d0685343e1c-f3d1b99a743ffa4142d9d7f1978d9686.ssl.cf2.rackcdn.com/files/97412/width668/image-20151006-7375-bhm608.jpg

A request from Vietnam and a military answer

As fighting escalated between American and Vietnamese forces throughout the 1960s, malaria became the number one affliction compromising Vietnamese soldier health. The increasing number of chloroquine-resistant malaria cases in the civilian population further heightened North Vietnamese concern.

In 1964, the North Vietnamese government approached Chinese leader Mao Tse Tung and asked for Chinese assistance in combating malaria. Mao responded, “Solving your problem is the same as solving our own.”

From the beginning, Project 523, which was classified as a top-secret state mission, was under the direction of military authorities. Although civilian agencies were invited to collaborate in May 1967, military supervision highlighted the urgent nature of the research and protected it from adverse political winds.

The original three-year plan produced by the People’s Liberation Army Research Institute aimed to “integrate far and near, integrate Chinese and Western medicines, take Chinese drugs as its priority, emphasize innovation, unify plans, divide labor to work together.”

The medical mission

Project 523 had three goals: the identification of new drug treatments for fighting chloroquine-resistant malaria, the development of long-term preventative measures against chloroquine-resistant malaria, and the development of mosquito repellents.

To achieve these ends, research on Chinese drugs and acupuncture was integral.

The decision to investigate Chinese drugs was not without precedent. Back in 1926, Chen Kehui and Carl Schmidt of the Peking Union Medical College published their original paper on ephedrine, derived from the Chinese herb mahuang. It ignited a research fire in which more than 500 scientific papers on ephedrine (for the relief of asthma) appeared around the world by 1929.

In the 1940s, state interest in the Chinese drug changshan and its antimalarial properties led to the establishment of a state-funded research institute and experimental farm in Sichuan province.

Project 523’s embrace of Chinese materia medica — the traditional body of knowledge about substances’ healing properties — is a more recent example of the efforts to “scientize” Chinese medicine through selective appropriation and detailed investigation.

Biomedical interest in Chinese drugs was not in itself new. But the institutional climate within which Project 523 investigators worked was different from earlier antimalarial research efforts. The Vietnam War had exacerbated an epidemiological crisis to which Maoist China responded with nationalist fervor by turning to its institutions of traditional Chinese medicine.

In the 1960s, such institutions were a mixing ground of specialists, many of whom possessed more than a passing familiarity with Chinese medicine and biomedicine. This ensured that qinghao research proceeded within a climate in which scientists, “who themselves had learnt the ways of appreciating traditional knowledge, worked side by side with historians of traditional medicine, who had textual learning.”

Tons of Artemisia annua are grown annually in China today. Novartis AG, CC BY-NC-ND

https://62e528761d0685343e1c-f3d1b99a743ffa4142d9d7f1978d9686.ssl.cf2.rackcdn.com/files/97373/width668/image-20151006-29239-8md0dg.jpg

Tu Youyou’s story

Tu Youyou’s research fits within this Maoist story of medical systematization and standardization.

Born in 1930, she was a medical student during the 1950s, when state efforts to make Chinese medicine scientific through the research and expertise of biomedical researchers were especially acute. She rose to the head of a malaria research group at the Beijing Academy of Traditional Chinese Medicine in 1969.

The group was composed of phytochemical researchers who studied the chemical compounds that occur naturally in plants and pharmacological researchers who focused on the science of drugs. They began with a list of over 2,000 Chinese herbal preparations, of which 640 preparations were found to have possible antimalarial activities. They worked steadily and obtained more than 380 extracts from some 200 Chinese herbs, which they then evaluated against a mouse model of malaria.

Of the 380+ extracts they had obtained, a qinghao (Artemisia annua L.) extract appeared promising, but inconsistently so. Faced with varying results, Tu and her team returned to the existing materia medica literature and reexamined each instance in which qinghao appeared in a traditional recipe.

Tu was drawn to one particular reference made by Ge Hong 葛洪 (284–363) in his fourth-century text, Emergency Prescriptions One Keeps Up One’s Sleeve. Ge Hong instructed: take a bunch of qing hao and two sheng [2 x 0.2 liter] of water for soaking it, wring it out to obtain the juice, and ingest it in its entirety.

Chinese woodcut portrait of Ge Hong. Gan Bozong via Wellcome Images, CC BY

https://62e528761d0685343e1c-f3d1b99a743ffa4142d9d7f1978d9686.ssl.cf2.rackcdn.com/files/97408/width237/image-20151006-7333-1k4ymc1.jpg

In what can be characterized as her eureka moment, Tu had the idea that “the heating involved in the conventional extraction step we had used might have destroyed the active components, and that extraction at a lower temperature might be necessary to preserve antimalarial activity.” Her hunch proved correct; once they switched to a lower-temperature procedure, Tu and her team obtained much better and more consistent antimalarial activity with qinghao. By 1971, they had obtained a nontoxic and neutral extract that was called qinghaosu or artemisinin. It was 100 percent effective against malarial parasites in animal models.

Tu’s research has drawn accolades from the international scientific community, while also igniting a debate in the Chinese-language media about the celebration of individual inventors over collective group efforts.

Tu Youyou poses with Chinese officials after the announcement of her Nobel Prize. China Daily Information Corp – CDIC/Reuters

This too, perhaps, may be part of the legacy of Maoist mass science, which demanded research that served practical needs and engaged the masses. Scientific achievement, while important, was not the be-all, end-all of scientific work. During the Cultural Revolution, it mattered that science proceed along revolutionary lines. It mattered that scientific advances resulted from collective endeavor and drew from popular sources. Does it still?

Jia-Chen Fu, Assistant Professor of Chinese, Emory University. This article was originally published on The Conversation. Read the original article.

The discovery of artemisinin (qinghaosu) and gifts from Chinese medicine

Youyou Tu

Lasker~DeBakey Clinical Medical Research Award

Nature Medicine, volume 17, number 10, October 2011, p. 1218. © 2011 Nature America, Inc.

Youyou Tu is at the Qinghaosu Research Center, Institute of Chinese Materia Medica, China Academy of Chinese Medical Sciences, Beijing, China.
e-mail: youyoutu1930cn@yahoo.com.cn

Joseph Goldstein has written in this journal that creation (through invention) and revelation (through discovery) are two different routes to advancement in the biomedical sciences (ref. 1). In my work as a phytochemist, particularly during the period from the late 1960s to the 1980s, I have been fortunate enough to travel both routes. I graduated from the Beijing Medical University School of Pharmacy in 1955. Since then, I have been involved in research on Chinese herbal medicine in the China Academy of Chinese Medical Sciences (previously known as the Academy of Traditional Chinese Medicine). From 1959 to 1962, I was released from work to participate in a training course in Chinese medicine that was especially designed for professionals with backgrounds in Western medicine. The 2.5-year training guided me to the wonderful treasure to be found in Chinese medicine and toward understanding the beauty in the philosophical thinking that underlies a holistic view of human beings and the universe.

Discovery of antimalarial effect of qinghao

Malaria, caused by Plasmodium falciparum, has been a life-threatening disease for thousands of years. After the failure of international attempts to eradicate malaria in the 1950s, the disease rebounded, largely due to the emergence of parasites resistant to the existing antimalarial drugs of the time, such as chloroquine. This created an urgent need for new antimalarial medicines. In 1967, a national project against malaria was set up in China under the leadership of the Project 523 office. My institute quickly became involved in the project and appointed me to be the head of a malaria research group comprising both phytochemical and pharmacological researchers. Our group of young investigators started working on the extraction and isolation of constituents with possible antimalarial activities from Chinese herbal materials. During the first stage of our work, we investigated more than 2,000 Chinese herb preparations and identified 640 hits that had possible antimalarial activities. More than 380 extracts obtained from ~200 Chinese herbs were evaluated against a mouse model of malaria. However, progress was not smooth, and no significant results emerged easily. The turning point came when an Artemisia annua L. extract showed a promising degree of inhibition against parasite growth. However, this observation was not reproducible in subsequent experiments and appeared to be contradictory to what was recorded in the literature. Seeking an explanation, we carried out an intensive review of the literature. The only reference relevant to the use of qinghao (the Chinese name of Artemisia annua L.) for alleviating malaria symptoms appeared in Ge Hong’s A Handbook of Prescriptions for Emergencies: “A handful of qinghao immersed with 2 liters of water, wring out the juice and drink it all” (Fig. 1). This sentence gave me the idea that the heating involved in the conventional extraction step we had used might have destroyed the active components, and that extraction at a lower temperature might be necessary to preserve antimalarial activity. Indeed, we obtained much better activity after switching to a lower temperature procedure. We subsequently separated the extract into its acidic and neutral portions and, at long last, on 4 October 1971, we obtained a nontoxic, neutral extract that was 100% effective against parasitemia in mice infected with Plasmodium berghei and in monkeys infected with Plasmodium cynomolgi. This finding represented the breakthrough in the discovery of artemisinin.

Figure 1 A Handbook of Prescriptions for Emergencies by Ge Hong (284–346 CE). (a) Ming dynasty version (1574 CE) of the handbook. (b) “A handful of qinghao immersed with 2 liters of water, wring out the juice and drink it all” is printed in the fifth line from the right. (From volume 3.)

Beyond artemisinin

Dihydroartemisinin was not initially considered a useful therapeutic agent by organic chemists because of concerns about its chemical stability. During evaluation of the artemisinin

The discovery of artemisinin was the first step in our advancement—the revelation. We then went on to experience the second step—creation—by turning the natural molecule into a drug. We had found that, in the genus Artemisia, only the species A. annua and its fresh leaves in the alabastrum stage contain abundant artemisinin. My team, however, used an Artemisia local to Beijing that contained relatively small amounts of the compound. For pharmaceutical production, we urgently required an Artemisia rich in artemisinin. The collaborators in the nationwide Project 523 found an A. annua L. native to the Sichuan province that met this requirement. The first formulation we tested in patients was tablets, which yielded unsatisfactory results. We found out in subsequent work that this was due to the poor disintegration of an inappropriately formulated tablet produced in an old compressing machine. We shifted to a new preparation—a capsule of pure artemisinin—that had satisfactory clinical efficacy. The road leading toward the creation of a new antimalarial drug opened again.

Spreading the word

In addition to problems of production and formulation, we also faced challenges regarding the dissemination of our findings to the world. The stereo-structure of artemisinin, a sesquiterpene lactone, was determined with the assistance of a team at the Institute of Biophysics, Chinese Academy of Sciences, in 1975. The structure (Fig. 3) was first published in 1977 (ref. 2), and both the new molecule and the paper were immediately cited by the Chemical Abstracts Service in the same year. However, the prevailing environment in China at the time restrained the publication of any papers concerning qinghaosu, with the exception of several published in Chinese (refs. 2–20). Fortunately, in 1979, the China National Committee of Science and Technology granted us a National Invention Certificate in recognition of the discovery of artemisinin and its antimalarial efficacy. In 1981, the fourth meeting of the Scientific Working Group on the Chemotherapy of Malaria, sponsored by the United Nations Development Programme, the World Bank and the World Health Organization (WHO), took place in Beijing (Fig. 4). During a special program for research and training in tropical diseases, a series of presentations on qinghaosu and its antimalarial properties elicited an enthusiastic response. As the first speaker of the meeting, I presented our report “Studies on the Chemistry of Qinghaosu.” The studies disclosed in this presentation were then published in 1982 (ref. 10). The efficacy of artemisinin and its derivatives in treating several thousand patients infected with malaria in China attracted worldwide attention in the 1980s (ref. 21).

From molecule to drug

During the Cultural Revolution, there were no practical ways to perform clinical trials of new drugs. So, in order to help patients with malaria, my colleagues and I bravely volunteered to be the first people to take the extract. After ascertaining that the extract was safe for human consumption, we went to the Hainan province to test its clinical efficacy, carrying out antimalarial trials with patients infected with both Plasmodium vivax and P. falciparum. These clinical trials produced encouraging results: patients treated with the extract experienced rapid disappearance of symptoms—namely fever and number of parasites in the blood—whereas patients receiving chloroquine did not. Encouraged by the clinical outcome, we moved on to investigate the isolation and purification of the active components from Artemisia (Fig. 2). In 1972, we identified a colorless, crystalline substance with a molecular weight of 282 Da, a molecular formula of C15H22O5, and a melting point of 156–157 °C as the active component of the extract. We named it qinghaosu (or artemisinin; su means “basic element” in Chinese).

Figure 2 Artemisia annua L. (a) A hand-colored drawing of qinghao in Bu Yi Lei Gong Pao Zhi Bian Lan (Ming Dynasty, 1591 CE). (b) Artemisia annua L. in the field.

Figure 3 Artemisinin. (a) Molecular structure of artemisinin. (b) A three-dimensional model of artemisinin. Carbon atoms are represented by black balls, hydrogen atoms are blue and oxygen atoms are red. The Chinese characters underneath the model read Qinghaosu.

Figure 4 Delegates at the fourth meeting of the Scientific Working Group on the Chemotherapy of Malaria in Beijing in 1981. Professor Ji Zhongpu (center, first row), president of the Academy of Traditional Chinese Medicine, delivered the opening remarks to the meeting. The author is in the second row (fourth from the left).

In keeping with Goldstein’s view, we found that dihydroartemisinin was more stable and ten times more effective than artemisinin. More importantly, there was much less disease recurrence during treatment with this derivative. Adding a hydroxyl group to the molecule also introduced more opportunities for developing new artemisinin derivatives through esterification.

My group later developed dihydroartemisinin into a new medicine. Over the past decade, my colleagues and I have explored the use of artemisinin and dihydroartemisinin for the treatment of other diseases (refs. 22–33).

The history of the discovery of qinghaosu and the knowledge we gained about the molecule and its derivatives during the course of our studies are summarized in the book Research on Qinghaosu and Its Derivatives (in Chinese) (ref. 34). In 2005, the WHO announced a switch in strategy to artemisinin combination therapy (ACT). ACT is currently widely used, saving many lives, mostly those of children in Africa. The therapy markedly reduces the symptoms of malaria because of its antigametocyte activity.

Other gifts from Chinese medicine

Artemisinin, with its unique sesquiterpene lactone created by phytochemical evolution, is a true gift from old Chinese medicine. The route to the discovery of artemisinin was short compared with those of many other phytochemical discoveries in drug development. But this is not the only instance in which the wisdom of Chinese medicine has borne fruit. Clinical studies in China have shown that arsenic, an ancient drug used in Chinese medicine, is an effective and relatively safe drug in the treatment of acute promyelocytic leukemia (APL) (ref. 35). Arsenic trioxide is now considered the first-line treatment for APL, exerting its therapeutic effect by promoting the degradation of promyelocytic leukemia protein (PML), which drives the growth of APL cells (ref. 36). Huperzine A, an effective agent for the treatment of memory dysfunction, is a novel acetylcholinesterase inhibitor derived from the Chinese medicinal herb Huperzia serrata (ref. 37), and a derivative of huperzine A is now undergoing clinical trials in Europe and the United States for the treatment of Alzheimer’s disease. However, the use of a single herb for the treatment of a specific disease is rare in Chinese medicine. Generally, the treatment is determined by a holistic characterization of the patient’s syndrome, and a prescription comprises a group of herbs specifically tailored to the syndrome. The rich correlations between syndromes and prescriptions have fueled the advancement of Chinese medicine for thousands of years.

Progress in the therapy of cardiovascular and cerebrovascular diseases has also received gifts from Chinese medicine. A key therapeutic concern for Chinese medicine is the principle of activating blood circulation to remove blood stasis, and there are several examples of this principle in action in Western medicine. Compounds derived from Chinese medicinal products—the molecules chuangxiongol and paeoniflorin—have been tested for their efficacy in preventing restenosis after percutaneous coronary intervention (PCI). A multicenter, randomized, double-blind, placebo-controlled trial (335 patients, 6 months) showed that restenosis rates were significantly reduced by the medicine as compared with the placebo (26.0% versus 47.2%) (ref. 38). Evidence supporting the therapeutic value of related strategies from Chinese medicine aimed at activating blood circulation has been obtained in the treatment of ischemic diseases (ref. 39) and in the management of myocardial ischemia-reperfusion injury (refs. 40–43). Also in relation to cardiovascular disease, a new discipline called biomechanopharmacology aims at combining the pharmacological effects of Chinese medicine with the biomechanical properties of flowing blood (ref. 44). The joint application of exercise (to increase the shear stress of blood flow) with extracts from shenlian, another Chinese medicine, shows promise for the prevention of atherosclerosis (ref. 45). And recent reports have begun to provide a glimpse into the molecular mechanisms that account for the effects of Chinese remedies. For example, a recent study identified a potential mechanism to account for the effect of salvianolic acid B, a compound from the root of Salvia miltiorrhiza, in combination with increased shear stress, on the functions of endothelial cells (ref. 46). The examples cited here represent only a sliver of the gifts or potential gifts Chinese medicine has to offer. It is my dream that Chinese medicine will help us conquer life-threatening diseases worldwide, and that people across the globe will enjoy its benefits for health promotion.

ACKNOWLEDGMENTS

I wish to express my heartfelt thanks to all my colleagues at the Academy of Traditional Chinese Medicine for their devotion to our work and for their exceptional contributions to the discovery and application of artemisinin and its derivatives. I thank my colleagues in the Shandong Provincial Institute of Chinese Medicine, the Yunnan Provincial Institute of Materia Medica, the Institute of Biophysics and the Shanghai Institute of Organic Chemistry at the Chinese Academy of Sciences, Guangzhou University of Chinese Medicine and the Academy of Military Medical Sciences for their significant contributions to Project 523. I would also like to pay my respects to the leadership at the national Project 523 office and their sound efforts in organizing the malaria project activities.

COMPETING FINANCIAL INTERESTS

The author declares no competing financial interests.

1. Goldstein, J.L. Creation and revelation: two different routes to advancement in the biomedical sciences. Nat. Med. 13, 1151–1154 (2007).
2. Collaboration Research Group for Qinghaosu. A new sesquiterpene lactone—qinghaosu [in Chinese]. Kexue Tongbao 3, 142 (1977).
3. Liu, J.M. et al. Structure and reaction of qinghaosu [in Chinese]. Acta Chimi. Sin. 37, 129–143 (1979).
4. Collaboration Research Group for Qinghaosu. Studies on new anti-malarial drug qinghaosu [in Chinese]. Yaoxue Tongbao 14, 49–53 (1979).
5. Collaboration Research Group for Qinghaosu. Antimalarial studies on qinghaosu [in Chinese]. Chin. Med. J. 92, 811–816 (1979).
6. Tu, Y.Y. The awarded Chinese invention: antimalarial drug qinghaosu [in Chinese]. Rev. World Invent. 4, 26 (1981).
7. Tu, Y.Y. et al. Studies on the constituents of Artemisia annua L [in Chinese]. Yao Xue Xue Bao 16, 366–370 (1981).
8. Tu, Y.Y., Ni, M.Y., Zhong, Y.R. & Li, L.N. Studies on the constituents of Artemisia annua L. and derivatives of artemisinin [in Chinese]. Zhongguo Zhong Yao Za Zhi 6, 31 (1981).
9. Tu, Y.Y. et al. Studies on the constituents of Artemisia annua L. (II). Planta Med. 44, 143–145 (1982).
10. Collaboration Research Group for Qinghaosu. Chemical studies on qinghaosu. J. Tradit. Chin. Med. 2, 3–8 (1982).
11. Xiao, Y.Q. & Tu, Y.Y. Isolation and identification of the lipophilic constituents from Artemisia anomala S. Moore [in Chinese]. Yao Xue Xue Bao 19, 909–913 (1984).
12. Tu, Y.Y., Yin, J.P., Ji, L., Huang, M.M. & Liang, X.T. Studies on the constituents of Artemisia annua L. (III) [in Chinese]. Chin. Tradit. Herbal Drugs 16, 200–201 (1985).
13. Wu, C.M. & Tu, Y.Y. Studies on the constituents of Artemisia apiacea Hance [in Chinese]. Chin. Tradit. Herbal Drugs 6, 2–3 (1985).
14. Tu, Y.Y., Zhu, Q.C. & Shen, X. Studies on the constituents of Young Artemisia annua L [in Chinese]. Zhongguo Zhong Yao Za Zhi 10, 419–420 (1985).
15. Wu, C.M. & Tu, Y.Y. Studies on the constituents of Artemisia gmelinii Web.exstechm [in Chinese]. Chin. Bull. Bot. 3, 34–37 (1985).
16. Wu, C.M. & Tu, Y.Y. Studies on the constituents of Artemisia argyi Levl et vant [in Chinese]. Zhongguo Zhong Yao Za Zhi. 10, 31–32 (1985).
17. Xiao, Y.Q. & Tu, Y.Y. Isolation and identification of the lipophilic constituents from Artemisia anomala S. Moore [in Chinese]. Acta Bot. Sin. 28, 307–310 (1986).
18. Tu, Y.Y. Study on authentic species of Chinese herbal drug winghao [in Chinese]. Bull. Chin. Mater. Med. 12, 2–5 (1987).
19. Yin, J.P. & Tu, Y.Y. Studies on the constituents of Artemisia eriopoda Bunge [in Chinese]. Chin. Tradit. Herbal Drugs 20, 149–150 (1989).
20. Gu, Y.C. & Tu, Y.Y. Studies on chemical constituents of Artemisia japonica Thunb [in Chinese]. Chin. Tradit. Herbal Drugs 24, 122–124 (1993).
21. Klayman, D.L. Qinghaosu (artemisinin): an antimalarial drug from China. Science 228, 1049–1055 (1985).
22. Sun, X.Z. et al. Experimental study on the immunosuppressive effects of qinghaosu and its derivatives [in Chinese]. Zhongguo Zhong Xi Yi Jie He Za Zhi 11, 37–38 (1993).
23. Yang, S.X., Xie, S.S., Ma, D., Long, Z.Z. & Tu, Y.Y. Immunologic enhancement and reconstitution by qinghaosu and its derivatives [in Chinese]. Chin. Bull. Pharm 9, 61–63 (1992).
24. Chen, P.H., Tu, Y.Y., Wang, F.Y., Li, F.W. & Yang, L. Effect of dihydroqinghaosu on the development of Plasmodium yoelii in Anopheles stephensi [in Chinese]. Ji Sheng Chong Xue Yu Ji Sheng Chong Bing Za Zhi 16, 421–424 (1998).
25. Huang, L. et al. Studies on the antipyretic and antiinflammatory effects of Artemisia annua L [in Chinese]. Zhongguo Zhong Yao Za Zhi 18, 44–48 (1993).
26. Tu, Y.Y. The development of new antimalarial drugs: qinghaosu and dihydro-qinghaosu. Chin. Med. J. 112, 976–977 (1999).
27. Xu, L.M., Chen, X.R. & Tu, Y.Y. Effect of hydroartemisinin on lupus BXSB mice [in Chinese]. Chin. J. Dermatovenerol. Integr. Tradit. West. Med. 1, 19–20 (2002).
28. Dong, Y.J. et al. Effect of dihydro-qinghaosu on autoantibody production, TNFa secretion and pathologic change of lupus nephritis in BXSB mice [in Chinese]. Zhongguo Zhong Xi Yi Jie He Za Zhi. 23, 676–679 (2003).
29. Dong, Y.J. et al. The effects of DQHS on the pathologic changes in BXSB mice lupus nephritis and the effect mechanism [in Chinese]. Chin. Pharmacol. Bull. 19, 1125–1128 (2003).
30. Tu, Y.Y. The development of the antimalarial drugs with new type of chemical structure—qinghaosu and dihydroqinghaosu. Southeast Asian J. Trop. Med. Public Health 35, 250–251 (2004).
31. Yang, L., Huang, M.M., Zhang, D. & Tu, Y.Y. Determination of scopoletin in qinghao by HPLC [in Chinese]. Chin. J. Exp. Tradit. Med. Formulae 12, 10–11 (2006).
32. Li, W.D., Dong, Y.J., Tu, Y.Y. & Lin, Z.B. Dihydroarteannuin ameliorates lupus symptom of BXSB mice by inhibiting production of TNF-alpha and blocking the signaling pathway NF-kappa B translocation. Int. Immunopharmacol. 6, 1243–1250 (2006).
33. Zhang, D., Yang, L., Yang, L.X., Huang, M.M. & Tu, Y.Y. Determination of artemisinin, arteannuin B and artemisinic acid in Artemisia annua by HPLC-UVELSD [in Chinese]. Yao Xue Xue Bao 42, 978–981 (2007).
34. Qinghao Ji Qinghaosulei Yaowu (Artemisia annua L., Artemisinin and its Derivatives) [in Chinese] (ed. Tu, Y.Y.) (Publisher of Chemical Industry, Beijing, 2009).
35. Chen, G.Q. et al. Use of arsenic trioxide (As2O3) in the treatment of acute promyelocytic leukemia (APL): I. As2O3 exerts dose-dependent dual effects on APL cells. Blood 89, 3345–3353 (1997).
36. Zhang, X.W. et al. Arsenic trioxide controls the fate of the PML-RARalpha oncoprotein by directly binding PML. Science 328, 240–243 (2010).
37. Tang, X.C. & Han, Y.F. Pharmacological profile of huperzine A, a novel acetylcholinesterase inhibitor from Chinese herb. CNS Drug Rev. 5, 281–300 (1999).
38. Chen, K.J. et al. XS0601 reduces the incidence of restenosis: a prospective study of 335 patients undergoing percutaneous coronary intervention in China. Chin. Med. J. 119, 6–13 (2006).
39. Gao, D. et al. The effect of Xuefu Zhuyu decoction on in vitro endothelial progenitor cell tube formation. Chin. J. Integr. Med. 16, 50–53 (2010).
40. Zhao, N. et al. Cardiotonic pills, a compound Chinese medicine, protects ischemia-reperfusion-induced microcirculatory disturbance and myocardial damage in rats. Am. J. Physiol. Heart Circ. Physiol. 298, H1166–H11176 (2010).
41. Xu, X.S. et al. The antioxidant Cerebralcare Granule attenuates cerebral microcirculatory disturbance during ischemia-reperfusion injury. Shock 32, 201–209 (2009).
42. Sun, K. et al. Cerebralcare Granule, a Chinese herb compound preparation, improves cerebral microcirculatory disorder and hippocampal CA1 neuron injury in gerbils after ischemia–reperfusion. J. Ethnopharmacol. 130, 398–406 (2010).
43. Han, J.Y. et al. Ameliorating effects of compounds derived from Salvia miltiorrhiza root extract on microcirculatory disturbance and target organ injury by ischemia and reperfusion. Pharmacol. Ther. 117, 280–295 (2008).
44. Liao, F. et al. Biomechanopharmacology: a new borderline discipline. Trends Pharmacol. Sci. 27, 287–289 (2006).
45. You, Y. et al. Joint preventive effects of swimming and Shenlian extract on rat atherosclerosis. Clin. Hemorheol. Microcirc. 47, 187–198 (2011).
46. Xie, L.X. et al. The effect of salvianolic acid B combined with laminar shear stress on TNF-alpha-stimulated adhesion molecule expression in human aortic endothelial cells. Clin. Hemorheol. Microcirc. 44, 245–258 (2010).

 

A race against RESISTANCE

Several African nations could strike a major blow against malaria by sacrificing the efficacy of some older drugs. Can they make it work?

BY AMY MAXMEN

It is September in southeastern Mali, and Louka Coulibaly is standing in the shade of a squat, concrete building, giving instructions to a dozen men and women perched on a wobbly wooden bench. Coulibaly, a local medical supervisor, hands out nylon backpacks, each filled with bags of pills, plastic cups and a porcelain mortar and pestle that the women pause to admire. By noon, the men and women are packing up and heading back to their respective villages on foot, bicycle and motorcycle.

The following day, they and about 1,400 other health workers throughout the region will set up shop in public spaces: under the shade of mango trees, in one-room schools, at market stands and in district health centres. They will mix and mash the pills with the mortar and pestle, dissolve them in water in a cup, and hand the bitter dandelion-coloured liquid to about 164,000 children.

The effort is part of a broad campaign to prevent malaria by providing African children with drugs usually used to treat the disease. Nearly 1.2 million healthy children from parts of Mali, Togo, Chad, Niger, Nigeria and Senegal received these drugs during the rainy season — from around July to November — when malaria usually ravages the population. The countries’ governments are deploying this intervention — known as seasonal malaria chemoprevention, or SMC — with financial support from the United States, the United Nations and the medical aid organization Médecins sans Frontières (MSF), also called Doctors Without Borders. Next year, many plan to expand the campaigns, and other countries hope to launch their own, encouraged by recommendations from the World Health Organization (WHO).

Preventive use of anti-malarial drugs is not new: tourists routinely swallow them when travelling. But public-health officials have long instructed people living in regions where the disease is endemic to refrain from taking drugs prophylactically, in part because of concerns that the parasite that causes malaria will develop resistance when many people take the medicine on a long-term basis.

That risk has not disappeared. In fact, scientists fully expect SMC to encourage widespread drug resistance. No one knows when, exactly, but it could happen within as few as five years. Until then, SMC has the power to prevent 8.8 million cases and 80,000 deaths each year if implemented in regions with high rates of seasonal malaria. That is considered a powerful enough benefit to justify losing the drugs. “Life is a risk,” says Coulibaly, a Malian hired by MSF to train local health workers. “And if you don’t take risks, you don’t win.”

The project is designed to forestall drug resistance as long as possible, and to work in concert with mosquito nets and other preventive methods. Supporters hope that the combination will significantly suppress malaria, so that even if resistance eventually spreads, the caseload should be smaller and manageable with other treatments. But SMC will not be as successful if funding and infrastructure falter — and so far, programmes have had a shaky start. Still, advocates say that the challenges can be overcome.

WHEN INTENTIONS BACKFIRE

Previous attempts at large-scale malaria chemoprevention offer lessons on what not to do. In the 1950s, David Clyde, a malaria researcher with the British Colonial Medical Service, administered the drug pyrimethamine to villagers in Tanzania. At the time, pyrimethamine had a strong track record of clearing the parasite. But with any drug, there is a slim chance that some strains of parasite will be resistant and will survive to infect others — a chance that increases when many people take the medicine in an area where the parasites are abundant and circulate year-round.

Clyde’s experiment drove this concept home: malaria rates dropped at first, but after five months, 37% of infections in the village no longer responded to the drug [1]. Eight years later, pyrimethamine resistance had spread: up to 40% of infections within 25 kilometres of the original intervention site were unresponsive.

The 1960s brought more lessons — this time, when scientists tried adding the drug chloroquine to table salt. Clinical trials had shown [2] that the salt drastically lowered malaria rates. But when the tactic was scaled up and the salt was distributed to markets in Guyana and Brazil, people consumed only what met their tastes. Others opted for untreated salt when they could, because the chloroquine made their skin itch. As a result, many people carried sub-therapeutic levels of the drug — not enough to reduce the malaria burden, but enough to promote resistance. “The salt campaigns were a disaster,” says Christopher Plowe, a malariologist at the University of Maryland School of Medicine in Baltimore.

Governments and aid organizations mostly shelved chemoprevention programmes after that, but resistance continued to grow — albeit slowly — as people used drugs to treat malaria infections. Between 1960 and 2000, chloroquine resistance crept around the globe and the malaria death toll steadily rose. That trend started to reverse around 2005, after the widespread adoption of the drug artemisinin, derived from Chinese sweet wormwood (Artemisia annua). Today, artemisinin-based drugs are the gold standard for treating malaria.

SECOND CHANCE

Alassane Dicko, a malariologist at the University of Bamako in Mali, was a graduate student in Plowe’s laboratory in 2001, when he started to think seriously about reviving chemoprevention. As a child, Dicko had lost his older brother and his best friend to malaria. Later, as a medical student working in hospitals, he was distraught at the number of children he saw dying. “You really feel it,” he says. “If we want to do anything for this country in terms of health, we need to stop malaria first.”

Dicko suggested that older antimalarials might be repurposed for prevention in places where resistance to them is not yet widespread. By using drugs seasonally, only in uninfected children and in combination rather than alone, he hoped to avoid some of the mistakes of the past. With drug combinations, parasites need to acquire several mutations to survive. These mutations usually come at a cost to the parasite, so removing the selective pressure of the drugs during the dry season would give parasites still sensitive to the treatment a chance to outcompete resistant ones.
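To see why a fitness cost plus a drug-free dry season can slow the spread of resistance, it helps to run the logic as a toy calculation. The Python sketch below is purely illustrative: the fitness cost, kill rate and season length are invented numbers, not parameters from Dicko's work.

```python
# Toy model: frequency of drug-resistant parasites under seasonal drug pressure.
# All parameter values are invented for illustration.
def next_freq(p, drug_on, cost=0.05, kill_rate=0.3):
    """One generation of selection acting on the resistant fraction p."""
    w_resistant = 1.0 - cost                              # resistance carries a fitness cost
    w_sensitive = 1.0 - kill_rate if drug_on else 1.0     # drug suppresses sensitive parasites
    mean_w = p * w_resistant + (1 - p) * w_sensitive
    return p * w_resistant / mean_w

p = 0.01                                                  # start with 1% resistant parasites
for year in range(1, 6):
    for month in range(12):
        p = next_freq(p, drug_on=(month < 4))             # drug pressure only in the rainy season
    print(f"Year {year}: resistant fraction = {p:.2f}")
```

With these made-up numbers, resistance rises during the treated months and falls back somewhat during the dry season, yet still creeps upward year on year, consistent with the article's expectation that SMC delays rather than prevents resistance.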

Dicko proposed using a mixture of sulphadoxine and pyrimethamine called SP, which was known to be relatively safe over the long term. In 2002, his team treated 130 children with SP for two months in a placebo-controlled trial in Mali [3]. The treatment reduced malaria by 68%.

Other West African scientists followed the work closely. Among them was Badara Cissé, a Senegalese researcher then pursuing his doctorate with malariologist Brian Greenwood at the London School of Hygiene and Tropical Medicine. Greenwood had been considering chemoprevention since the 1980s, and he and Cissé immediately grasped the potential in Dicko’s approach. In 2004, they began a trial in Senegal to test three monthly doses of SP plus artesunate, an artemisinin derivative. Compared with the placebo group, nearly nine out of ten malaria cases were averted [4].

With a US$4.5-million grant from the Bill & Melinda Gates Foundation in 2008, Cissé and his colleagues launched an as-yet-unpublished, 3-year clinical trial to study SP with another drug, amodiaquine (to preserve the efficacy of artemisinin). They treated nearly 200,000 children under 10 years old and found that the treated children had 83% fewer cases of malaria than controls, says Cissé. Smaller trials in other African nations reported similar findings. These are impressive numbers, especially given how recalcitrant malaria has been to preventive measures. No vaccine has ever proved fully effective against the disease, for example. And the one that is closest to approval — RTS,S — has shown disappointing results in ongoing clinical trials, with less than a 50% reduction in cases (see Nature 502, 271–272; 2013).

RESISTING THE CRITICS

SMC raised some concerns that slowed its adoption. Some health officials suggested that natural, partial immunity to the parasite — built up as a child survives multiple bouts of malaria — would be compromised. Others fretted about the potential side effects of taking the drugs regularly. But the loudest complaints were about losing the drugs to resistance.

In a cramped office in a makeshift building at the University of Dakar, Cissé explains how he was frustrated by the deliberations among public-health officials as malaria waged war on Senegal’s children. He slumps in a chair that seems much too small for him and asks, “Isn’t it selfish to sit in our offices with air conditioning, saying that we should save these drugs?” He recalls a single night, 20 years ago, when he watched five children die of malaria. There was nothing he could do to save them. “If this happened to you, you would not be debating about the fear of losing a drug,” he says.

In 2012, SMC finally won over most officials. The Cochrane Collaboration — an international group that specializes in evidence assessment — analysed results from trials in Senegal, Mali, Burkina Faso, Ghana and Gambia, and concluded [5] that SMC could prevent more than three-quarters of malaria cases in places where the disease struck seasonally. In the trials, the signs of side effects, resistance and reduced immunity were all minimal. According to another report [6], nearly 21 million children in these regions stood to gain from SMC each year. And prevention is cheaper than treatment. Each month, chemoprevention costs $1.50 per child, which pales in comparison to the costs of travel and medical care for a child who falls ill. In November 2012, the WHO published SMC-implementation guidelines that enabled countries to apply for funds from international organizations [7].
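For a rough sense of scale, the per-child cost quoted here can be turned into a back-of-the-envelope budget. In the sketch below, the four monthly courses per season and the roughly 1.2 million children treated are assumptions drawn from elsewhere in this article, and the figure covers drugs only, not delivery or staffing.

```python
# Back-of-the-envelope SMC drug cost (illustrative only; excludes delivery costs).
COST_PER_CHILD_PER_MONTH = 1.50      # US$, figure quoted in the article
COURSES_PER_SEASON = 4               # assumption: one monthly course during the rainy season
CHILDREN_REACHED = 1_200_000         # approximate 2013 coverage cited in the article

per_child_per_season = COST_PER_CHILD_PER_MONTH * COURSES_PER_SEASON
campaign_total = per_child_per_season * CHILDREN_REACHED

print(f"~${per_child_per_season:.2f} per child per season")        # ~$6.00
print(f"~${campaign_total / 1e6:.1f} million for ~1.2M children")  # ~$7.2 million
```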

SLOW START

Implementation has been a challenge, however. Mamadou Lamine Diouf, the drug-procurement manager for Senegal’s National Malaria Control Program, says that the rollout there was supposed to reach nearly 600,000 children each month, starting in July and August. But he and the US agency footing the bill for the medicine had underestimated how much time it would take to get these older drugs manufactured anew and assessed by various organizations. By early November, health workers had managed to reach only 53,000 children. “We are learning by doing,” says Diouf. “Now we know that if we don’t master this long supply chain, nothing will be possible.”

Drug delays set back chemoprevention pilots in northern Nigeria by a month. Togo’s campaign did not start until September. Burkina Faso’s project failed to launch when funds came up short. And the size of Mali’s intended intervention dropped after a coup d’état and an invasion by al-Qaeda affiliates last year sent the nation into disarray.

Still, with the lessons learned, supporters say that they will be better prepared next year (see ‘A million ounces of prevention’). In March, some countries plan to apply for funding from the Global Fund to Fight AIDS, Tuberculosis and Malaria. Scott Filler, a disease coordinator at the Global Fund, which is based in Geneva, Switzerland, says, “There are not many things that can prevent malaria in 75% of children, so we will fully support it when countries come to us.”

A MILLION OUNCES OF PREVENTION

By November 2013, seasonal malaria chemoprevention (SMC) reached almost 1.2 million children in areas that receive at least 60% of their annual rainfall in the rainy season. If SMC were scaled up to cover all areas where it might be effective, it could reach 25 million children and prevent an estimated 80,000 deaths each year.

Implemented SMC by November 2013, with number of children treated:

Mali: 344,000
Niger: 230,000
Chad: 274,000
Nigeria: 190,000
Senegal: 53,000
Togo: 88,000

(The original figure also mapped areas where more than 60% of annual rainfall falls in the rainy season and marked countries planning to implement SMC in 2014.)
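As a quick consistency check, the country-level counts listed above do sum to roughly the "almost 1.2 million" figure quoted in the box; a minimal tally:

```python
# Children reached by SMC campaigns by November 2013, as listed above.
reached = {
    "Mali": 344_000,
    "Niger": 230_000,
    "Chad": 274_000,
    "Nigeria": 190_000,
    "Senegal": 53_000,
    "Togo": 88_000,
}
print(f"Total children reached: {sum(reached.values()):,}")  # 1,179,000
```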

As the programmes continue, researchers will keep watch to see if resistance to the drugs mounts. Randomly selected people who come to hospitals to be treated for malaria in Mali, Chad and Niger will have a spot of their blood smeared on filter paper, placed in a ziplock bag and shipped to a laboratory in Bamako, where Dicko and his colleagues will look for mutations associated with resistance to SP and amodiaquine. The University of Dakar will conduct similar tests.
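In practice, molecular surveillance of this kind amounts to genotyping each blood spot at loci associated with resistance and tracking how marker prevalence changes over time. The sketch below assumes per-sample genotype calls stored as simple sets; the marker names (dhfr and dhps mutations for SP, pfcrt and pfmdr1 mutations for amodiaquine) are commonly cited examples from the malaria literature, not a description of the Bamako or Dakar laboratories' actual pipelines.

```python
# Minimal sketch: estimate the prevalence of resistance-associated markers
# in a set of genotyped dried-blood-spot samples (illustrative only).
SP_MARKERS = {"dhfr_N51I", "dhfr_C59R", "dhfr_S108N", "dhps_A437G", "dhps_K540E"}
AQ_MARKERS = {"pfcrt_K76T", "pfmdr1_N86Y"}

def marker_prevalence(samples, markers):
    """Fraction of samples carrying at least one of the given markers.

    `samples` is a list of sets, each holding the mutations called in
    one sample's parasite DNA.
    """
    if not samples:
        return 0.0
    carriers = sum(1 for calls in samples if calls & markers)
    return carriers / len(samples)

# Hypothetical genotype calls from three samples:
samples = [
    {"dhfr_S108N"},                  # partial SP-resistance genotype
    set(),                           # fully sensitive
    {"pfcrt_K76T", "dhfr_N51I"},     # markers for both drug classes
]
print("SP-marker prevalence:", marker_prevalence(samples, SP_MARKERS))
print("AQ-marker prevalence:", marker_prevalence(samples, AQ_MARKERS))
```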

For the campaigns to have a long-lasting effect, chemoprevention must work faster than the parasites acquire resistance. Supporters hope that the treatments will destroy most malaria parasites over the next several years, driving down infection rates and keeping them down even when resistance begins to spread.

Ramanan Laxminarayan, director of the Center for Disease Dynamics, Economics and Policy, a health-policy think tank in Washington DC, is sceptical. He predicts that imperfect implementation will prevent campaigns from having the benefits seen in clinical trials, and that the disease will bounce back in the end. Importantly, says Paul Milligan, a malaria researcher at the London School of Hygiene and Tropical Medicine, funding agencies must support follow-up evaluations to catch unintended effects such as increased vulnerability to malaria in children who outgrow the interventions. Plowe adds: “If we just roll this out without surveillance, we risk repeating all of the mistakes made in the past.”

Yet surveillance and drug resistance mean little to the mothers who congregate in a small village in the Koutiala region of Mali just after sunrise in September. Awa Damale, 25 years old and clad in an embroidered aqua dress and matching headscarf, arrives by donkey cart with her four children and two from another family. Five of the children swallow their medicine, but one of Damale’s sons has felt ill this week. He tests positive for malaria and gets a referral to the nearest clinic. SMC is for prevention only.

The boy’s illness may be a sign that the drugs he took last month are not 100% effective — or that he did not swallow all of the medicine — but his condition does not dampen Damale’s enthusiasm. It is the first time this year that one of her children has had malaria. Before the intervention, she constantly juggled working on the farm with caring for sick children. She does not want to hear about the possibility of the programme drying up or the drugs losing potency years down the road. Most of her children are healthy now, and that is what matters most. ■


Amy Maxmen is a freelance science journalist in New York City. Travel for this story was paid for by a grant from the Pulitzer Center on Crisis Reporting in Washington DC.
1. Plowe, C. V. Trans. R. Soc. Trop. Med. Hyg. 103, S11–S14 (2009).
2. Giglioli, G., Rutten, F. J. & Ramjattan, S. Bull. World Health Org. 36, 283–301 (1967).
3. Dicko, A. et al. Malar. J. 7, 123 (2008).
4. Cissé, B. et al. Lancet 367, 659–667 (2006).
5. Meremikwu, M. M., Donegan, S., Sinclair, D., Esu, E. & Oringanje, C. Cochrane Database Syst. Rev. http://dx.doi.org/10.1002/14651858.CD003756.pub4 (2012).
6. Cairns, M. et al. Nature Commun. 3, 881 (2012).
7. World Health Organization. Seasonal malaria chemoprevention with sulfadoxine-pyrimethamine plus amodiaquine in children: A field guide (2012).

 


Nobel Prize in Medicine – 2015

Larry H Bernstein, MD, FCAP, Curator

LPBI

 

Nobel Prize in Medicine Awarded for Drugs to Battle Malaria and Other Tropical Diseases

The Nobel Prize in medicine was awarded today to three scientists from the U.S., Japan, and China for discovering drugs to fight malaria and other tropical diseases that affect hundreds of millions of people annually.

http://www.genengnews.com/gen-news-highlights/nobel-prize-in-medicine-awarded-for-drugs-to-battle-malaria-tropical-diseases/81251819/

The prize was awarded by the Nobel judges in Stockholm to William Campbell, Ph.D., who was born in Ireland and became a U.S. citizen in 1962, Satoshi Omura, Ph.D., of Japan, and Youyou Tu, the first-ever Chinese medicine laureate.

Dr. Campbell was associated with the Merck Institute for Therapeutic Research from 1957 to 1990, and from 1984 to 1990 he was senior scientist and director for assay research and development.

Dr. Campbell, 85, is currently a research fellow emeritus at Drew University in Madison, NJ. Dr. Omura, 80, is a professor emeritus at Kitasato University in Japan and is from the central prefecture of Yamanashi. Ms. Tu, 84, is chief professor at the China Academy of Traditional Chinese Medicine.

Nobel prize recipients Dr. Campbell and Dr. Omura were cited for discovering avermectin, derivatives of which have helped lower the incidence of river blindness and lymphatic filariasis. These two diseases are caused by parasitic worms that affect millions of people in Africa and Asia.

Ms. Tu, who won the Lasker Award in 2011, was inspired by traditional Chinese remedies to find an alternative to the failing first-line therapies for malaria, quinine and chloroquine. Ms. Tu pored through ancient texts searching for herbal malaria tinctures and came upon an example that utilized the Chinese sweet wormwood plant, Artemisia annua. From this plant she was able to extract the active compound for the antimalarial drug called artemisinin—currently the first line of defense given in malaria-endemic regions that have seen resistance to other commonly used drugs, such as chloroquine. Artemisinin has greatly aided in reducing the mortality rates of malaria, a parasitic disease spread by mosquitoes that puts close to 50% of the world’s population at risk.

Efforts to eradicate the black fly date back decades. Merck developed Mectizan (ivermectin), a drug to treat river blindness, which kills the worm’s larvae and prevents the adult worms from reproducing. In 1987, Dr. P. Roy Vagelos, the chairman of Merck, reportedly decided to make Mectizan available without charge because those who needed it most could not afford to pay for it.

The oral medication ivermectin paralyzes and sterilizes the parasitic worm that causes the illness.

The disease is spread by bites of the black fly, which breeds in fast-flowing rivers. The worm can live in the human body for many years and it can grow to two feet in length, producing millions of larvae. Infected people suffer severe itching, skin nodules, and a variety of eye lesions, and in extreme cases blindness.

“The two discoveries have provided humankind with powerful new means to combat these debilitating diseases that affect hundreds of millions of people annually,” said the Nobel Committee in a statement. “The consequences in terms of improved human health and reduced suffering are immeasurable.”

 

 

 

Nobel Prize Predictions See Honors for Gene Editing Technology

By Julie Steenhuysen

http://www.medscape.com/viewarticle/851475?

Scientists selected as “Citation Laureates” rank in the top 1% of citations in their research areas.

“That is a signpost that the research wielded a lot of impact,” said Christopher King, an analyst with IP&S who helped select the winners.

Among the predicted winners for the Nobel Prize in Chemistry are Emmanuelle Charpentier of Helmholtz Center for Infection Research in Germany and Jennifer Doudna of the University of California, Berkeley. They were picked for their development of the CRISPR-Cas9 method for genome editing.

The technique has taken biology by storm, igniting fierce patent battles between start-up companies and universities, and touching off ethical debates over its potential for editing human embryos.

Missing from the list is Feng Zhang, a researcher at the MIT-Harvard Broad Institute, who owns a broad U.S. patent on the technology, which is the subject of a legal battle. King said he was aware of Zhang’s claims on the technology, but noted that his scientific citations did not rise to the level of a nomination.

Other contenders for the chemistry prize, which will be awarded on Oct. 7 in Stockholm, include John Goodenough of the University of Texas at Austin and Stanley Whittingham of Binghamton University in New York for research leading to the development of the lithium-ion battery.

Also in contention is Carolyn Bertozzi of Stanford University for her contributions to “bioorthogonal chemistry,” which refers to chemical reactions in live cells and organisms. Bertozzi’s lab is using the process to develop smart probes for medical imaging.

For the Nobel in medicine, to be announced Oct. 5, Thomson Reuters picked Kazutoshi Mori of Kyoto University and Peter Walter of the University of California, San Francisco. They showed that a mechanism known as the unfolded protein response acts as a “quality control system” inside cells, deciding whether damaged cells live or die.

Other contenders include Jeffrey Gordon of Washington University in St. Louis for showing a relationship between diet and metabolism and microbes that live in the human gut.

The group also picked a trio of researchers – Alexander Rudensky of Memorial Sloan Kettering Cancer Center, Dr. Shimon Sakaguchi of Osaka University, and Ethan Shevach of the National Institutes of Health – for discoveries relating to regulatory T cells and the function of Foxp3, a master regulator of these immune cells.

For the prizes in physics and economics, to be announced Oct. 6 and 12 respectively, Thomson Reuters predicts winners from scientists who helped pave the way for making X-ray lasers and work that helped explain the impact of policy decisions on labor markets and consumer demand.

Science enthusiasts can weigh in with their own predictions by taking part in Thomson Reuters’ “People’s Choice” prizes at StateOfInnovation.com.

 

 


Recent Insights in Drug Development

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

A Better Class of Cancer Drugs
http://www.technologynetworks.com/medchem/news.aspx?ID=183124
An SDSU chemist has developed a technique to identify potential cancer drugs that are less likely to produce side effects.
A class of therapeutic drugs known as protein kinase inhibitors has in the past decade become a powerful weapon in the fight against various life-threatening diseases, including certain types of leukemia, lung cancer, kidney cancer and squamous cell cancer of the head and neck. One problem with these drugs, however, is that they often inhibit many different targets, which can lead to side effects and complications in therapeutic use. A recent study by San Diego State University chemist Jeffrey Gustafson has identified a new technique for improving the selectivity of these drugs and possibly decreasing unwanted side effects in the future.

Why are protein kinase–inhibiting drugs so unpredictable? The answer lies in their molecular makeup.

Many of these drug candidates exhibit a phenomenon known as atropisomerism. To understand what this is, it’s helpful to understand a bit of the chemistry at work. Molecules can come in different forms that have exactly the same chemical formula and even the same bonds, just arranged differently in space. The different arrangements are mirror images of each other, with a left-handed and a right-handed arrangement. The molecules’ “handedness” is referred to as chirality. Atropisomerism is a form of chirality that arises when the spatial arrangement includes a rotatable bond, known as an axis of chirality. Picture two non-identical paper snowflakes tethered together by a rigid stick.

Some axes of chirality are rigid, while others can freely spin about their axis. In the latter case, this means that at any given time, you could have one of two different “versions” of the same molecule.

Watershed treatment

As the name suggests, kinase inhibitors interrupt the function of kinases—a particular type of enzyme—and effectively shut down the activity of proteins that contribute to cancer.

“Kinase inhibition has been a watershed for cancer treatment,” said Gustafson, who attended SDSU as an undergraduate before earning his Ph.D. in organic chemistry from Yale University, then working there as a National Institutes of Health postdoctoral fellow in chemical biology.

“However, it’s really hard to inhibit a single kinase,” he explained. “The majority of compounds identified inhibit not just one but many kinases, and that can lead to a number of side effects.”

Many kinase inhibitors possess axes of chirality that are freely spinning. The problem is that because you can’t control which “arrangement” of the molecule is present at a given time, the unwanted version could have unintended consequences.

In practice, this means that when medicinal chemists discover a promising kinase inhibitor that exists as two interchanging arrangements, they actually have two different inhibitors. Each one can have quite different biological effects, and it’s difficult to know which version of the molecule actually targets the right protein.

“I think this has really been under-recognized in the field,” Gustafson said. “The field needs strategies to weed out these side effects.”

Applying the brakes

So that’s what Gustafson did in a recently published study. He and his colleagues synthesized atropisomeric compounds known to target a particular family of kinases known as tyrosine kinases. To some of these compounds, the researchers added a single chlorine atom, which effectively served as a brake to keep the atropisomer from spinning around, locking the molecule into either a right-handed or a left-handed version.

When the researchers screened both the modified and unmodified versions against their target kinases, they found major differences in which kinases the different versions inhibited. The unmodified compound was like a shotgun blast, inhibiting a broad range of kinases. But the locked-in right-handed and left-handed versions were choosier.

“Just by locking them into one or another atropisomeric configuration, not only were they more selective, but they inhibited different kinases,” Gustafson explained.
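One simple way to express the selectivity difference described here is to count, for each compound, how many kinases in a screening panel are inhibited above a chosen threshold. The percent-inhibition values below are invented for illustration and are not data from the Gustafson study; only the overall pattern (a promiscuous free rotamer versus two choosier locked atropisomers) mirrors the result described.

```python
# Toy selectivity comparison: kinases inhibited >= 50% at a fixed dose.
# All numbers are invented for illustration.
panel = {
    "freely_rotating": {"SRC": 92, "ABL": 88, "LCK": 75, "EGFR": 64, "JAK2": 55},
    "locked_R":        {"SRC": 95, "ABL": 12, "LCK": 18, "EGFR":  9, "JAK2":  7},
    "locked_S":        {"SRC": 11, "ABL": 83, "LCK": 15, "EGFR":  8, "JAK2": 10},
}
THRESHOLD = 50  # percent inhibition

for compound, hits in panel.items():
    inhibited = [kinase for kinase, pct in hits.items() if pct >= THRESHOLD]
    print(f"{compound}: inhibits {len(inhibited)} kinase(s) -> {inhibited}")
```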

If drug makers incorporated this technique into their early drug discovery process, he said, it would help identify which version of an atropisomeric compound actually targets the kinase they want to target, cutting the potential for side effects and helping to usher drugs past strict regulatory hurdles and into the hands of waiting patients.

 

Inroads Against Leukaemia
http://www.technologynetworks.com/medchem/news.aspx?ID=183594

 

Potential for halting disease in molecule isolated from sea sponges.
A molecule isolated from sea sponges and later synthesized in the lab can halt the growth of cancerous cells and could open the door to a new treatment for leukemia, according to a team of Harvard researchers and other collaborators led by Matthew Shair, a professor of chemistry and chemical biology.

“Once we learned this molecule, named cortistatin A, was very potent and selective in terms of inhibiting the growth of AML [acute myeloid leukemia] cells, we tested it in mouse models of AML and found that it was as efficacious as any other molecule we had seen, without having deleterious effects,” Shair said. “This suggests we have identified a promising new therapeutic approach.”

It’s one that could be available to test in patients relatively soon.

“We synthesized cortistatin A and we are working to develop novel therapeutics based on it by optimizing its drug-like properties,” Shair said. “Given the dearth of effective treatments for AML, we recognize the importance of advancing it toward clinical trials as quickly as possible.”

The drug-development process generally takes years, but Shair’s lab is very close to having what is known as a development candidate that could be taken into late-stage preclinical development and then clinical trials. An industrial partner will be needed to push the technology along that path and toward regulatory approval. Harvard’s Office of Technology Development (OTD) is engaged in advanced discussions to that end.

The molecule works, Shair explained, by inhibiting a pair of nearly identical kinases, called CDK8 and CDK19, that his research indicates play a key role in the growth of AML cells.

The kinases operate as part of a poorly understood, massive structure in the nucleus of cells called the mediator complex, which acts as a bridge between transcription factors and transcriptional machinery. Inhibiting these two specific kinases, Shair and colleagues found, doesn’t shut down all transcription, but instead has gene-specific effects.

“We treated AML cells with cortistatin A and measured the effects on gene expression,” Shair said. “One of the first surprises was that it’s affecting a very small number of genes — we thought it might be in the thousands, but it’s in the low hundreds.”

When Shair, Henry Pelish, a senior research associate in chemistry and chemical biology, and then-Ph.D. student Brian Liau looked closely at which genes were affected, they discovered many were associated with DNA regulatory elements known as “super-enhancers.”

“Humans have about 220 different types of cells in their body — they all have the same genome, but they have to form things like skin and bone and liver cells,” Shair explained. “In all cells, there are a relatively small number of DNA regulatory elements, called super-enhancers. These super-enhancers drive high expression of genes, many of which dictate cellular identity. A big part of cancer is a situation where that identity is lost, and the cells become poorly differentiated and are stuck in an almost stem-cell-like state.”

While a few potential cancer treatments have attacked the disease by down-regulating such cellular identity genes, Shair and colleagues were surprised to find that their molecule actually turned up the activity of those genes in AML cells.

“Before this paper, the thought was that cancer is ramping these genes up, keeping the cells in a hyper-proliferative state and affecting cell growth in that way,” Shair said. “But our molecule is saying that’s one part of the story, and in addition cancer is keeping the dosage of these genes in a narrow range. If it’s too low, the cells die. If they are pushed too high, as with cortistatin A, they return to their normal identity and stop growing.”

Shair’s lab became interested in the molecule several years ago, shortly after it was first isolated and described by other researchers. Early studies suggested it appeared to inhibit just a handful of kinases.

“We tested approximately 400 kinases, and found that it inhibits only CDK8 and CDK19 in cells, which makes it among the most selective kinase inhibitors identified to date,” Shair said. “Having compounds that precisely hit a specific target, like cortistatin A, can help reduce side effects and increase efficacy. In a way, it shatters a dogma because we thought it wasn’t possible for a molecule to be this selective and bind in a site common to all 500 human kinases, but this molecule does it, and it does it because of its 3-D structure. What’s interesting is that most kinase-inhibitor drugs do not have this type of 3-D structure. Nature is telling us that one way to achieve this level of specificity is to make molecules more like cortistatin A.”

Shair’s team successfully synthesized the molecule, which helped them study how it worked and why it affected the growth of a very specific type of cell. Later on, with funding and drug-development expertise provided by Harvard’s Blavatnik Biomedical Accelerator, Shair’s lab created a range of new molecules that may be better suited to clinical application.

“It’s a complex process to make [cortistatin A] — 32 chemical steps,” said Shair. “But we have been able to find less complex structures that act just like the natural compound, with better drug-like properties, and they can be made on a large scale and in about half as many steps.”

“Over the course of several years, we have watched this research progress from an intriguing discovery to a highly promising development candidate,” said Isaac Kohlberg, senior associate provost and chief technology development officer. “The latest results are a real testament to Matt’s ingenuity and dedication to addressing a very tough disease.”

While there is still much work to be done — in particular, to better understand how CDK8 and CDK19 regulate gene expression — the early results have been dramatic.

“This is the kind of thing you do science for,” Shair said, “the idea that once every 10 or 20 years you might find something this interesting, that sheds new light on important, difficult problems. This gives us an opportunity to generate a new understanding of cancer and also develop new therapeutics to treat it. We’re very excited and curious to see where it goes.”

 

Seeking A Better Way To Design Drugs

http://www.technologynetworks.com/medchem/news.aspx?ID=183338

NIH funds research at Worcester Polytechnic Institute to advance a new chemical process for more effective drug development and manufacturing.
The National Institutes of Health (NIH) has awarded $346,000 to Worcester Polytechnic Institute (WPI) for a three-year research project to advance development of a chemical process that could significantly improve the ability to design new pharmaceuticals and streamline the manufacturing of existing drugs.

Led by Marion Emmert, PhD, assistant professor of chemistry and biochemistry at WPI, the research program involves early-stage technology developed in her lab that may yield a more efficient and predictable method of bonding a vital class of structures called aromatic and benzylic amines to a drug molecule.

“Seven of the top 10 pharmaceuticals in use today have these substructures, because they are so effective at creating a biologically active compound,” Emmert said. “The current processes used to add these groups are indirect and not very efficient. So we asked ourselves, can we do it better?”

For a drug to do its job in the body it must interact with a specific biological target and produce a therapeutic effect. First, the drug needs to physically attach or “bind” to the target, which is a specific part of a cell, protein, or molecule. As a result, designing a new drug is like crafting a three-dimensional jigsaw puzzle piece that fits precisely into an existing biological structure in the body. Aromatic and benzylic amines add properties to the drug that help it bind more efficiently to these biological structures.

Getting those aromatic and benzylic amines into the structure of a drug, however, is difficult. Traditionally, this requires a specialized chemical bond as precursor in a specific location of the drug’s molecular structure. “The current approach to making those bonds is indirect, requires several lengthy steps, and the outcome is not always precise or efficient,” Emmert said. “Only a small percentage of the bonds can be made in the proper place, and sometimes none at all.”

Emmert’s new approach uses novel reagents and metal catalysts to create a process that can attach amines directly, in the right place, every time. In early proof-of-principle experiments, Emmert has succeeded in making several amine bonds directly in one or two days, whereas the standard process can take two weeks with less accuracy. Over the next three years, with support from the NIH, Emmert’s team will continue to study the new catalytic processes in detail. They will also use the new process to synthesize Asacol, a common drug now in use for ulcerative colitis, and expect to significantly shorten its production.

“Some of our early data are promising, but we have a lot more work to do to understand the basic mechanisms involved in the new processes,” Emmert said. “We also have to adapt the process to molecules that could be used directly for drug development.”

 

Antiparasite Drug Developers Win Nobel

William Campbell, Satoshi Omura, and Youyou Tu have won this year’s Nobel Prize in Physiology or Medicine in recognition of their contributions to antiparasitic drug development.

By Karen Zusi and Tracy Vence | October 5, 2015

http://www.the-scientist.com//?articles.view/articleNo/44159/title/Antiparasite-Drug-Developers-Win-Nobel/

William Campbell, Satoshi Omura, and Youyou Tu have made significant contributions to treatments for river blindness, lymphatic filariasis, and malaria; today (October 5) these three scientists were jointly awarded the 2015 Nobel Prize in Physiology or Medicine in recognition of these advancements.

Tu is being recognized for her discoveries leading to the development of the antimalarial drug artemisinin. Campbell and Omura jointly received the other half of this year’s prize for their separate work leading to the discovery of the drug avermectin, which has been used to develop therapies for river blindness and lymphatic filariasis.

“These discoveries are now more than 30 years old,” David Conway, a professor of biology at the London School of Hygiene & Tropical Medicine, told The Scientist. “[These drugs] are still, today, the best two groups of compounds for antimalarial use, on the one hand, and antinematode worms and filariasis on the other.”

Omura, a Japanese microbiologist at Kitasato University in Tokyo, isolated strains of the soil bacterium Streptomyces in a search for those with promising antibacterial activity. He eventually narrowed thousands of cultures down to 50.

Now research fellow emeritus at Drew University in New Jersey, Campbell spent much of his career at Merck, where he discovered effective antiparasitic properties in one of Omura’s cultures and purified the relevant compounds into avermectin (later refined into ivermectin).

“Bill Campbell is a wonderful scientist, a wonderful man, and a great mentor for undergraduate students,” said his colleague Roger Knowles, a professor of biology at Drew University. “His ability to speak about disease mechanisms and novel strategies to help [fight] these diseases. . . . that’s been a great boon to students.”

Tu began searching traditional herbal medicine for a novel malaria treatment in the 1960s. She served as the head of Project 523, a program at the China Academy of Chinese Medical Sciences in Beijing aimed at finding new drugs for malaria. Tu successfully extracted a promising compound from the plant Artemisia annua that was highly effective against the malaria parasite. In recognition of her malaria research, Tu won a Lasker Award in 2011.

 

Optogenetics Advances in Monkeys

Researchers have selectively activated a specific neural pathway to manipulate a primate’s behavior.

By Kerry Grens | October 5, 2015

http://www.the-scientist.com//?articles.view/articleNo/44156/title/Optogenetics-Advances-in-Monkeys/

Scientists have used optogenetics to target a specific neural pathway in the brain of a macaque monkey and alter the animal’s behavior. As the authors reported in Nature Communications last month, such a feat had been accomplished only in rodents before.

Optogenetics relies on the insertion of a gene for a light-sensitive ion channel. When present in neurons, the channel can turn on or off the activity of a neuron, depending on the flavor of the channel. Previous attempts to use optogenetics in nonhuman primates affected brain regions more generally, rather than particular neural circuits. In this case, Masayuki Matsumoto of Kyoto University and colleagues delivered the channel’s gene specifically to one area of the monkey’s brain called the frontal eye field.

They found that not only did the neurons in this region respond to light shone on the brain, but the monkey’s behavior changed as well. The stimulation caused saccades—quick eye movements. “Our findings clearly demonstrate the causal relationship between the signals transmitted through the FEF-SC [frontal eye field-superior colliculus] pathway and saccadic eye movements,” Matsumoto and his colleagues wrote in their report.
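For readers curious how saccades are typically pulled out of such recordings, a common approach is to compute eye velocity from the position trace and flag samples exceeding a velocity threshold. The sketch below is a generic NumPy illustration with made-up numbers, not the analysis used in the Matsumoto paper.

```python
import numpy as np

def detect_saccades(eye_pos_deg, sample_rate_hz, velocity_threshold=100.0):
    """Mark samples where eye speed exceeds the threshold (deg/s) --
    a common, simple saccade-detection criterion."""
    velocity = np.gradient(eye_pos_deg) * sample_rate_hz   # degrees per second
    return np.abs(velocity) > velocity_threshold

# Hypothetical 1 kHz horizontal eye-position trace (degrees): steady fixation,
# then a rapid 10-degree shift around sample 500, plus a little measurement noise.
t = np.arange(1000)
trace = np.where(t < 500, 0.0, 10.0) + 0.05 * np.random.randn(1000)
saccade_samples = detect_saccades(trace, sample_rate_hz=1000)
print("Saccade detected:", saccade_samples.any())
```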

“Over the decades, electrical microstimulation and pharmacological manipulation techniques have been used as tools to modulate neuronal activity in various brain regions, permitting investigators to establish causal links between neuronal activity and behaviours,” they continued. “These methodologies, however, cannot selectively target the activity (that is, the transmitted signal) of a particular pathway connecting two regions. The advent of pathway-selective optogenetic approaches has enabled investigators to overcome this issue in rodents and now, as we have demonstrated, in nonhuman primates.”


The History of Infectious Diseases and Epidemiology in the late 19th and 20th Century

Curator: Larry H Bernstein, MD, FCAP

 

Infectious diseases are a part of the history of English, French, and Spanish colonization of the Americas, and of the slave trade. The many plagues of the New and Old Worlds that have affected the course of history from ancient to modern times were known to the Egyptians, Greeks, Chinese, crusaders, explorers, and Napoleon, and were familiar companions of war, pestilence, and epidemic. Our coverage is mainly concerned with the scientific and public health consequences of the events that preceded WWI and extended to the Vietnam War, highlighted by the invention of a worldwide public health system.

The Armed Forces Institute of Pathology (AFIP) closed its doors on September 15, 2011. It was founded as the Army Medical Museum on May 21, 1862, to collect pathological specimens along with their case histories.

The information from the case files of the pathological specimens from the Civil War was compared with Army pensions records and compiled into the six-volume Medical and Surgical History of the War of the Rebellion, an early study of wartime medicine.

In 1900, museum curator Walter Reed led the commission which proved that a mosquito was the vector for Yellow Fever, beginning the mosquito eradication campaigns throughout most of the twentieth century.

Walter Reed

Another museum curator, Frederick Russell, conducted clinical trials on the typhoid vaccine in 1907, making the U.S. Army the first army to be vaccinated against typhoid.

Increased emphasis on pathology during the twentieth century turned the museum, renamed the Armed Forces Institute of Pathology in 1949, into an international resource for pathology and the study of disease. AFIP’s pathological collections have been used, for example, in the characterization of the 1918 influenza virus in 1997.

Prior to moving to the Walter Reed Army Medical Center, the AFIP was located at the Army Medical Museum and Library on the Mall (1887-1969), and earlier as Army Medical Museum in Ford’s Theatre (1867-1886).

Army Medical Museum and Library on the Mall

This institution, originally the Library of the Surgeon General’s Office (U.S. Army), gained its present name and was transferred from the Army to the Public Health Service in 1956. In 1962, it moved to its own Bethesda site after sharing space for nearly 100 years with other Army units, first at the former Ford’s Theatre building and then at the Army Medical Museum and Library on the Mall. Rare books and other holdings that had been sent to Cleveland for safekeeping during World War II were also reunited with the main collection at that time.

The National Museum of Health and Medicine, established in 1862, inspires interest in and promotes the understanding of medicine — past, present, and future — with a special emphasis on tri-service American military medicine. As a National Historic Landmark recognized for its ongoing value to the health of the military and to the nation, the Museum identifies, collects, and preserves important and unique resources to support a broad agenda of innovative exhibits, educational programs, and scientific, historical, and medical research. NMHM is a headquarters element of the U.S. Army Medical Research and Materiel Command. NMHM’s newest exhibit installations showcase the institution’s 25-million object collection, focusing on topics as diverse as innovations in military medicine, traumatic brain injury, anatomy and pathology, military medicine during the Civil War, the assassination of Abraham Lincoln (including the bullet that killed him), human identification and a special exhibition on the Museum’s own major milestone—the 150th anniversary of the founding of the Army Medical Museum. Objects on display will include familiar artifacts and specimens: the bullet that killed Lincoln and a leg showing the effects of elephantiasis, as well as recent finds in the collection—all designed to astound visitors to the new Museum.

Today, the National Library of Medicine houses the largest collection of print and non-print materials in the history of the health sciences in the United States, and maintains an active program of exhibits and public lectures. Most of the archival and manuscript material dates from the 17th century; however, the Library owns about 200 pre-1601 Western and Islamic manuscripts. Holdings include pre-1914 books, pre-1871 journals, archives and modern manuscripts, medieval and Islamic manuscripts, a collection of printed books, manuscripts, and visual material in Japanese, Chinese, and Korean; historical prints, photographs, films, and videos; pamphlets, dissertations, theses, college catalogs, and government documents.

The oldest item in the Library is an Arabic manuscript on gastrointestinal diseases from al-Razi’s The Comprehensive Book on Medicine (Kitab al-Hawi fi al-tibb) dated 1094. Significant modern collections include the papers of U.S. Surgeons General, including C. Everett Koop, and the papers of Nobel Prize-winning scientists, particularly those connected with NIH.

As part of its Profiles in Science project, the National Library of Medicine has collaborated with the Churchill Archives Centre to digitize and make available over the World Wide Web a selection of the Rosalind Franklin Papers for use by educators and researchers. This site provides access to the portions of the Rosalind Franklin Papers, which range from 1920 to 1975. The collection contains photographs, correspondence, diaries, published articles, lectures, laboratory notebooks, and research notes.

Rosalind Franklin

“Science and everyday life cannot and should not be separated. Science, for me, gives a partial explanation of life. In so far as it goes, it is based on fact, experience, and experiment. . . . I agree that faith is essential to success in life, but I do not accept your definition of faith, i.e., belief in life after death. In my view, all that is necessary for faith is the belief that by doing our best we shall come nearer to success and that success in our aims (the improvement of the lot of mankind, present and future) is worth attaining.”

–Rosalind Franklin in a letter to Ellis Franklin, ca. summer 1940

Smallpox

Although some disliked mandatory smallpox vaccination measures, coordinated efforts against smallpox went on in the United States after 1867, and the disease continued to diminish in the wealthy countries. By 1897, smallpox had largely been eliminated from the United States. In Northern Europe a number of countries had eliminated smallpox by 1900, and by 1914, the incidence in most industrialized countries had decreased to comparatively low levels. Vaccination continued in industrialized countries, until the mid to late 1970s as protection against reintroduction. Australia and New Zealand are two notable exceptions; neither experienced endemic smallpox and never vaccinated widely, relying instead on protection by distance and strict quarantines.

In 1966 an international team, the Smallpox Eradication Unit, was formed under the leadership of an American, Donald Henderson. In 1967, the World Health Organization intensified the global smallpox eradication effort by contributing $2.4 million annually and adopted the new disease surveillance method promoted by Czech epidemiologist Karel Raška. Two-year-old Rahima Banu of Bangladesh was the last person infected with naturally occurring Variola major, in 1975.

The global eradication of smallpox was certified, based on intense verification activities in countries, by a commission of eminent scientists on 9 December 1979 and subsequently endorsed by the World Health Assembly on 8 May 1980. The first two sentences of the resolution read:

Having considered the development and results of the global program on smallpox eradication initiated by WHO in 1958 and intensified since 1967 … Declares solemnly that the world and its peoples have won freedom from smallpox, which was a most devastating disease sweeping in epidemic form through many countries since earliest time, leaving death, blindness and disfigurement in its wake and which only a decade ago was rampant in Africa, Asia and South America.

—World Health Organization, Resolution WHA33.3

Anthrax

Anthrax is an acute disease caused by the bacterium Bacillus anthracis. Most forms of the disease are lethal, and it affects both humans and other animals. Effective vaccines against anthrax are now available, and some forms of the disease respond well to antibiotic treatment.

Like many other members of the genus Bacillus, B. anthracis can form dormant endospores (often referred to as “spores” for short, but not to be confused with fungal spores) that are able to survive in harsh conditions for decades or even centuries. Such spores can be found on all continents, even Antarctica. When spores are inhaled, ingested, or come into contact with a skin lesion on a host, they may become reactivated and multiply rapidly.

Anthrax commonly infects wild and domesticated herbivorous mammals that ingest or inhale the spores while grazing. Ingestion is thought to be the most common route by which herbivores contract anthrax. Carnivores living in the same environment may become infected by consuming infected animals. Diseased animals can spread anthrax to humans, either by direct contact (e.g., inoculation of infected blood to broken skin) or by consumption of a diseased animal’s flesh.

Anthrax does not spread directly from one infected animal or person to another; it is spread by spores. These spores can be transported by clothing or shoes. The body of an animal that had active anthrax at the time of death can also be a source of anthrax spores. Owing to the hardiness of anthrax spores, and their ease of production in vitro, they are extraordinarily well suited to use (in powdered and aerosol form) as biological weapons.

Bacillus anthracis is a rod-shaped, Gram-positive, aerobic bacterium about 1 by 9 μm in size. It was shown to cause disease by Robert Koch in 1876 when he took a blood sample from an infected cow, isolated the bacteria and put them into a mouse. The bacterium normally rests in endospore form in the soil, and can survive for decades in this state. Once ingested or placed in an open wound, the bacterium begins multiplying inside the animal or human and typically kills the host within a few days or weeks. The endospores germinate at the site of entry into the tissues and then spread by the circulation to the lymphatics, where the bacteria multiply.

Robert Koch

Veterinarians can often tell a possible anthrax-induced death by its sudden occurrence, and by the dark, nonclotting blood that oozes from the body orifices. Bacteria that escape the body via oozing blood or through the opening of the carcass may form hardy spores. One spore forms per one vegetative bacterium. Once formed, these spores are very hard to eradicate.

The lethality of the anthrax disease is due to the bacterium’s two principal virulence factors: the poly-D-glutamic acid capsule, which protects the bacterium from phagocytosis by host neutrophils, and the tripartite protein toxin, called anthrax toxin. Anthrax toxin is a mixture of three protein components: protective antigen (PA), edema factor (EF), and lethal factor (LF). PA plus LF produces lethal toxin, and PA plus EF produces edema toxin. These toxins cause death and tissue swelling (edema), respectively.

To enter the cells, the edema and lethal factors use another protein produced by B. anthracis called protective antigen, which binds to two surface receptors on the host cell. A cell protease then cleaves PA into two fragments: PA20 and PA63. PA20 dissociates into the extracellular medium, playing no further role in the toxic cycle. PA63 then oligomerizes with six other PA63 fragments forming a heptameric ring-shaped structure named a prepore.

Once in this shape, the complex can competitively bind up to three EFs or LFs, forming a resistant complex. Receptor-mediated endocytosis occurs next, providing the newly formed toxic complex access to the interior of the host cell. The acidified environment within the endosome triggers the heptamer to release the LF and/or EF into the cytosol.

Edema factor is a calmodulin-dependent adenylate cyclase. Adenylate cyclase catalyzes the conversion of ATP into cyclic AMP (cAMP) and pyrophosphate. The complexation of adenylate cyclase with calmodulin removes calmodulin from stimulating calcium-triggered signaling. LF inactivates neutrophils so they cannot phagocytose bacteria. Anthrax causes vascular leakage of fluid and cells, and ultimately hypovolemic shock and septic shock.

Occupational exposure to infected animals or their products (such as skin, wool, and meat) is the usual pathway of exposure for humans. Workers who are exposed to dead animals and animal products are at the highest risk, especially in countries where anthrax is more common. Anthrax in livestock grazing on open range where they mix with wild animals still occasionally occurs in the United States and elsewhere. Many workers who deal with wool and animal hides are routinely exposed to low levels of anthrax spores, but most exposure levels are not sufficient to develop anthrax infections. The body’s natural defenses presumably can destroy low levels of exposure. These people usually contract cutaneous anthrax if they catch anything.

Historically, the most dangerous form of anthrax, inhalational anthrax, was called woolsorters’ disease because it was an occupational hazard for people who sorted wool. Today, this form of infection is extremely rare, as almost no infected animals remain. The last fatal case of natural inhalational anthrax in the United States occurred in California in 1976, when a home weaver died after working with infected wool imported from Pakistan. Gastrointestinal anthrax is exceedingly rare in the United States, with only one case on record, reported in 1942, according to the Centers for Disease Control and Prevention.

Various techniques are used for the direct identification of B. anthracis in clinical material. Firstly, specimens may be Gram stained. Bacillus spp. are quite large in size (3 to 4 μm long), they grow in long chains, and they stain Gram-positive. To confirm the organism is B. anthracis, rapid diagnostic techniques such as polymerase chain reaction-based assays and immunofluorescence microscopy may be used.

All Bacillus species grow well on 5% sheep blood agar and other routine culture media. Polymyxin-lysozyme-EDTA-thallous acetate can be used to isolate B. anthracis from contaminated specimens, and bicarbonate agar is used as an identification method to induce capsule formation. Bacillus spp. usually grow within 24 hours of incubation at 35 °C, in ambient air (room temperature) or in 5% CO2. If bicarbonate agar is used for identification, then the medium must be incubated in 5% CO2.

B. anthracis colonies are medium-large, gray, flat, and irregular with swirling projections, often referred to as having a “medusa head” appearance, and are not hemolytic on 5% sheep blood agar. The bacteria are not motile, are susceptible to penicillin, and produce a wide zone of lecithinase on egg yolk agar. Confirmatory testing to identify B. anthracis includes gamma bacteriophage testing, indirect hemagglutination, and enzyme-linked immunosorbent assay to detect antibodies. The best confirmatory precipitation test for anthrax is the Ascoli test.

Vaccines against anthrax for use in livestock and humans have had a prominent place in the history of medicine, from Pasteur’s pioneering 19th-century work with cattle (the second effective vaccine ever) to the controversial 20th century use of a modern product (BioThrax) to protect American troops against the use of anthrax in biological warfare. Human anthrax vaccines were developed by the Soviet Union in the late 1930s and in the US and UK in the 1950s. The current FDA-approved US vaccine was formulated in the 1960s.

If a person is suspected of having died from anthrax, every precaution should be taken to avoid skin contact with the potentially contaminated body and fluids exuded through natural body openings. The body should be put in strict quarantine. A blood sample should be collected, sealed in a container, and analyzed in an approved laboratory to ascertain whether anthrax is the cause of death; the body should then be incinerated. Microscopic visualization of the encapsulated bacilli, usually in very large numbers, in a blood smear stained with polychrome methylene blue (McFadyean stain) is fully diagnostic, though culture of the organism is still the gold standard for diagnosis.

Full isolation of the body is important to prevent possible contamination of others. Protective, impermeable clothing and equipment such as rubber gloves, rubber apron, and rubber boots with no perforations should be used when handling the body. Disposable personal protective equipment and filters should be autoclaved, and/or burned and buried.

Anyone working with a suspected or confirmed anthrax victim should wear respiratory equipment capable of filtering particles the size of anthrax spores or smaller. A US National Institute for Occupational Safety and Health- and Mine Safety and Health Administration-approved high-efficiency respirator, such as a half-face disposable respirator with a high-efficiency particulate air filter, is recommended.

All possibly contaminated bedding or clothing should be isolated in double plastic bags and treated as possible biohazard waste. The victim should be sealed in an airtight body bag. Dead victims who are opened and not burned provide an ideal source of anthrax spores. Cremating victims is the preferred way of handling body disposal.

Until the 20th century, anthrax infections killed hundreds of thousands of animals and people worldwide each year. French scientist Louis Pasteur developed the first effective vaccine for anthrax in 1881.

louis-pasteur

As a result of over a century of animal vaccination programs, sterilization of raw animal waste materials, and anthrax eradication programs in the United States, Canada, Russia, Eastern Europe, Oceania, and parts of Africa and Asia, anthrax infection is now relatively rare in domestic animals. Anthrax is especially rare in dogs and cats, as evidenced by a single reported case in the United States in 2001.

Anthrax outbreaks occur in some wild animal populations with some regularity. The disease is more common in countries without widespread veterinary or human public health programs. In the 21st century, anthrax is still a problem in less developed countries.

B. anthracis spores are soil-borne. Because of their long lifespan, spores are present globally and remain at the burial sites of animals killed by anthrax for many decades. Disturbed grave sites of infected animals have caused reinfection over 70 years after the animal’s interment.

Cholera

This is an acute diarrheal infection that can kill within a matter of hours if untreated. The standard treatment is oral rehydration therapy: drinking water mixed with salts and sugar. Researchers at EPFL, the Swiss Federal Institute of Technology in Lausanne, say that using rice starch instead of sugar with the rehydration salts could reduce bacterial toxicity by almost 75 percent. That would make the microbe less likely to infect a patient’s family and friends if they are exposed to any body fluids.

The World Health Organization says cholera, a waterborne bacterial disease, infects three to five million people every year, and the severe dehydration it causes leads to as many as 120,000 deaths.

Cholera is an acute diarrheal disease caused by the waterborne bacterium Vibrio cholerae, serogroup O1 or O139 (V. cholerae). Infection is mainly through ingestion of contaminated water or food. V. cholerae passes through the stomach, colonizes the upper part of the small intestine, penetrates the mucus layer, and secretes cholera toxin, which affects the small intestine.

Clinically, the majority of cholera episodes are characterized by a sudden onset of massive diarrhea and vomiting accompanied by the loss of profuse amounts of protein-free fluid with electrolytes. The resulting dehydration produces tachycardia, hypotension, and vascular collapse, which can lead to sudden death. The diagnosis of cholera is commonly established by isolating the causative organism from the stools of infected individuals.

There are an estimated 3–5 million cholera cases and 100,000–120,000 deaths due to cholera every year; a rough case-fatality calculation from these figures is sketched below.

Up to 80% of cases can be successfully treated with oral rehydration salts.

Effective control measures rely on prevention, preparedness and response.

Provision of safe water and sanitation is critical in reducing the impact of cholera and other waterborne diseases.

Oral cholera vaccines are considered an additional means to control cholera, but should not replace conventional control measures.
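
To put the case and death estimates above in perspective, the following short Python sketch (illustrative arithmetic only, using the WHO estimates quoted above rather than any new data) computes the case-fatality range they imply:

# Illustrative arithmetic only: the implied case-fatality range from the
# WHO figures quoted above (3-5 million cases, 100,000-120,000 deaths/year).
def case_fatality_rate(deaths, cases):
    """Return deaths as a percentage of cases."""
    return 100.0 * deaths / cases

lowest = case_fatality_rate(100_000, 5_000_000)    # fewest deaths, most cases
highest = case_fatality_rate(120_000, 3_000_000)   # most deaths, fewest cases
print(f"Implied case-fatality among reported cases: {lowest:.1f}%-{highest:.1f}%")
# Roughly 2% to 4% of reported cases.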

During the 19th century, cholera spread across the world from its original reservoir in the Ganges delta in India. Six subsequent pandemics killed millions of people across all continents. The current (seventh) pandemic started in South Asia in 1961, and reached Africa in 1971 and the Americas in 1991. Cholera is now endemic in many countries.

INDIA-ENVIRONMENT-POLUTION

In its extreme manifestation, cholera is one of the most rapidly fatal infectious illnesses known. Within 3–4 hours of onset of symptoms, a previously healthy person may become severely dehydrated and if not treated may die within 24 hours (WHO, 2010). The disease is one of the most researched in the world today; nevertheless, it is still an important public health problem despite more than a century of study, especially in developing tropical countries. Cholera is currently listed as one of three internationally quarantinable diseases by the World Health Organization (WHO), along with plague and yellow fever (WHO, 2000a).

Two serogroups of V. cholerae – O1 and O139 – cause outbreaks. V. cholerae O1 causes the majority of outbreaks, while O139 – first identified in Bangladesh in 1992 – is confined to South-East Asia.

Non-O1 and non-O139 V. cholerae can cause mild diarrhoea but do not generate epidemics.

The main reservoirs of V. cholerae are people and aquatic sources such as brackish water and estuaries, often associated with algal blooms. Recent studies indicate that global warming creates a favorable environment for the bacteria.

Socioeconomic and demographic factors enhance the vulnerability of a population to infection and contribute to epidemic spread. Such factors also determine the extent to which the disease will reach epidemic proportions and modulate the size of the epidemic. Known population-level (local-level) risk factors for cholera include poverty, lack of development, high population density, low education, and lack of previous exposure. Cholera diffuses rapidly in environments that lack basic infrastructure for access to safe water and proper sanitation. The cholera vibrios can survive and multiply outside the human body and can spread rapidly in environments where living conditions are overcrowded and where there is no safe disposal of solid waste, liquid waste, and human feces.

By mapping the locations of cholera victims in London in 1854, John Snow was able to trace the cause of the disease to a contaminated water source. Remarkably, this was done decades before Koch and Pasteur established the beginnings of microbiology (Koch, 1884).

John Snow’s map

Yellow Fever

Yellow fever virus was probably introduced into the New World via ships carrying slaves from West Africa. Throughout the 18th and 19th centuries, regular and devastating epidemics of yellow fever occurred across the Caribbean, Central and South America, the southern United States and Europe. The Yellow Fever Commission, founded as a consequence of excessive disease mortality during the Spanish-American War (1898), concluded that the best way to control the disease was to control the mosquito. William Gorgas successfully eradicated yellow fever from Havana by destroying larval breeding sites, and this strategy of source reduction was then used to reduce disease problems and thus finally permit construction of the Panama Canal, begun in 1904. Success was due largely to a top-down, military approach involving strict supervision and discipline (Gorgas, 1915). In 1946, an intensive Aedes aegypti eradication campaign was initiated in the Americas, which succeeded in reducing vector populations to undetectable levels throughout most of its range.

The production of an effective vaccine in the 1930s led to a change of emphasis from vector control to vaccination for the control of yellow fever. Vaccination campaigns almost eliminated urban yellow fever but incomplete coverage, as with incomplete anti-vectorial measures previously, meant the disease persisted, and outbreaks occurred in remote forest areas.

It was acknowledged by the Health Organization of the League of Nations (the forerunner to the World Health Organization (WHO)) that yellow fever was a severe burden on endemic countries. The work of Soper and the Brazilian Cooperative Yellow Fever Service (Soper, 1934, 1935a, b) began to determine the geographical extent of the disease, specifically in Brazil. Regional maps of disease outbreaks were published by Sawyer (1934), but it was not until after the formation of the WHO that a global map of yellow fever endemicity was first constructed (van Rooyen and Rhodes, 1948). This map was based on expert opinion (United Nations Relief and Rehabilitation Administration/Expert Commission on Quarantine) and serological surveys. The present-day distribution map for yellow fever is still essentially a modified version of this map.

global yellow fever risk map

Yellow fever is conspicuously absent from Asia. Although there is some evidence that other flaviviruses may offer cross-protection against yellow fever (Gordon-Smith et al., 1962), why yellow fever does not occur in Asia is still unexplained.

It has been estimated that the currently circulating strains of YFV arose in Africa within the last 1,500 years and emerged in the Americas following the slave trade approximately 300–400 years ago. These viruses then spread westwards across the continent and persist there to this day in the jungles of South America.

The 17D live-attenuated vaccine still in use today was developed in 1936, and a single dose confers immunity for at least ten years in 95% of the cases. In a bid to contain the spread of the disease, travellers to countries within endemic areas or those thought to be ‘at risk’ require a certificate of vaccination. The yellow fever certificate is the only internationally regulated certification supported by the WHO. The effectiveness of the vaccine reduces the need for anti-vectorial campaigns directed specifically against yellow fever. As the same major vector is involved, control of Aedes aegypti for dengue reduction will also reduce yellow fever transmission where both diseases co-occur, especially within urban settings.

Dengue

Probable epidemics of dengue fever have been recorded from Africa, Asia, Europe and the Americas since the early 19th century (Armstrong, 1923). Although it is rarely fatal, up to 90% of the population of an infected area can be incapacitated during the course of an epidemic (Armstrong, 1923; Siler et al., 1926). Widespread movements of troops and refugees during and after World War II introduced vectors and viruses into many new areas. Dengue fever has unsurprisingly been mistaken for yellow fever as well as other diseases including influenza, measles, typhoid and malaria. It is rarely fatal and survivors appear to have lifelong immunity to the homologous serotype.

Far more serious is dengue haemorrhagic fever (DHF), where additional symptoms develop, including haemorrhaging and shock. The mortality from DHF can exceed 30% if appropriate care is unavailable. The most significant risk factor for DHF is when secondary infection with a different serotype occurs in people who have already had, and recovered from, a primary dengue infection.

Dengue has adapted to changes in human demography very effectively. The main vector of dengue is the anthropophilic Aedes aegypti, which is found in close association with human settlements throughout the tropics, breeding mainly in containers in and around dwellings, and feeding almost exclusively on humans. As a result, dengue is essentially a disease of tropical urban areas. Before 1970, only nine countries had experienced DHF epidemics, but by 1995 this number had increased fourfold (WHO, 2001). Dengue case numbers have increased considerably since the 1960s; by the end of the 20th century an estimated 50 million cases of dengue fever and 500,000 cases of DHF were occurring every year (WHO, 2001).

The appearance of DHF stimulated large amounts of dengue research, which established the existence of the four serotypes and the range of competent vectors, and led to the adoption of Aedes aegypti control programs in some areas (particularly South-East Asia) (Kilpatrick et al., 1970).

There have been several attempts to estimate the economic impact of dengue: the 1977 epidemic in Puerto Rico was thought to have cost between $6.1 and $15.6 million ($26–$31 per clinical case) (Von Allmen et al., 1979), while the 1981 Cuban epidemic (with a total of 344 203 reported cases) cost about $103 million (around $299 per case) (Kouri et al., 1989).

There is no cure for dengue fever or for DHF. Currently, the only treatment is symptomatic, but this can reduce mortality from DHF to less than 1% (WHO, 2002). Unfortunately, the extent of dengue epidemics means that local public health services are often overwhelmed by the demands for treatment.

Malaria

Malaria is a serious and sometimes fatal disease caused by a parasite that is transmitted to humans through the bites of infected mosquitoes. People who get malaria are typically very sick, with high fevers, shaking chills, and flu-like illness. About 1,500 cases of malaria are diagnosed in the United States each year. The vast majority of cases in the United States are in travelers and immigrants returning from countries where malaria transmission occurs, many from sub-Saharan Africa and South Asia. Malaria has been noted for more than 4,000 years. It became widely recognized in Greece by the 4th century BCE, and it was responsible for the decline of many of the city-state populations. Hippocrates noted the principal symptoms. In the Susruta, a Sanskrit medical treatise, the symptoms of malarial fever were described and attributed to the bites of certain insects. A number of Roman writers attributed malarial diseases to the swamps.

Following their arrival in the New World, Spanish Jesuit missionaries learned from indigenous Indian tribes of a medicinal bark used for the treatment of fevers. With this bark, the Countess of Chinchón, the wife of the Viceroy of Peru, was cured of her fever. The bark from the tree was then called Peruvian bark and the tree was named Cinchona after the countess. The medicine from the bark is now known as the antimalarial, quinine. Along with artemisinins, quinine is one of the most effective antimalarial drugs available today.

quinquin acalisaya

Cinchona officinalis is a medicinal plant, one of several Cinchona species used for the production of quinine, which is an anti-fever agent. It is especially useful in the prevention and treatment of malaria. Cinchona calisaya is the tree most cultivated for quinine production.

There are a number of other alkaloids that are extracted from this tree. They include cinchonine, cinchonidine and quinidine (Wikipedia).

Charles Louis Alphonse Laveran, a French army surgeon stationed in Constantine, Algeria, was the first to notice parasites in the blood of a patient suffering from malaria in 1880. Laveran was awarded the Nobel Prize in 1907.

Alphonse Laveran

Camillo Golgi, an Italian neurophysiologist, established that there were at least two forms of the disease, one with tertian periodicity (fever every other day) and one with quartan periodicity (fever every third day). He also observed that the forms produced differing numbers of merozoites (new parasites) upon maturity and that fever coincided with the rupture and release of merozoites into the blood stream. He was awarded a Nobel Prize in Medicine for his discoveries in neurophysiology in 1906.

malaria_lifecycle.

Ookinete,_sporozoite,_merozoite

The Italian investigators Giovanni Battista Grassi and Raimondo Filetti first introduced the names Plasmodium vivax and P. malariae for two of the malaria parasites that affect humans in 1890. Laveran had believed that there was only one species, Oscillaria malariae. William H. Welch reviewed the subject and, in 1897, named the malignant tertian malaria parasite P. falciparum. In 1922, John William Watson Stephens described the fourth human malaria parasite, P. ovale. P. knowlesi was first described by Robert Knowles and Biraj Mohan Das Gupta in 1931 in a long-tailed macaque, but the first documented human infection with P. knowlesi was in 1965.

Anopheles mosquito

Ronald Ross, a British officer in the Indian Medical Service, was the first to demonstrate, in 1897, that malaria parasites could be transmitted from infected patients to mosquitoes. In further work with bird malaria, Ross showed that mosquitoes could transmit malaria parasites from bird to bird. This necessitated a sporogonic cycle (the time interval during which the parasite developed in the mosquito). Ross was awarded the Nobel Prize in 1902.

Ronald Ross_1899

A team of Italian investigators led by Giovanni Battista Grassi collected Anopheles claviger mosquitoes and fed them on malarial patients. The complete sporogonic cycles of Plasmodium falciparum, P. vivax, and P. malariae were demonstrated. Mosquitoes infected by feeding on a patient in Rome were sent to London in 1900, where they fed on two volunteers, both of whom developed malaria.

The construction of the Panama Canal was made possible only after yellow fever and malaria were controlled in the area. These two diseases were a major cause of death and disease among workers in the area. In 1906, there were over 26,000 employees working on the Canal. Of these, over 21,000 were hospitalized for malaria at some time during their work. By 1912, there were over 50,000 employees, and the number of hospitalized workers had decreased to approximately 5,600. Through the leadership and efforts of William Crawford Gorgas, Joseph Augustin LePrince, and Samuel Taylor Darling, yellow fever was eliminated and malaria incidence markedly reduced through an integrated program of insect and malaria control.

Gorgas-William-Crawford, MD

During the U.S. military occupation of Cuba and the construction of the Panama Canal at the turn of the 20th century, U.S. officials made great strides in the control of malaria and yellow fever. In 1914 Henry Rose Carter and Rudolph H. von Ezdorf of the USPHS requested and received funds from the U.S. Congress to control malaria in the United States. Various activities to investigate and combat malaria in the United States followed from this initial request and reduced the number of malaria cases in the United States. USPHS established malaria control activities around military bases in the malarious regions of the southern United States to allow soldiers to train year round.

U.S. President Franklin D. Roosevelt signed a bill that created the Tennessee Valley Authority (TVA) on May 18, 1933. The law gave the federal government a centralized body to control the Tennessee River’s potential for hydroelectric power and improve the land and waterways for development of the region. An organized and effective malaria control program stemmed from this new authority in the Tennessee River valley. Malaria affected 30 percent of the population in the region when the TVA was incorporated in 1933. The Public Health Service played a vital role in the research and control operations and by 1947, the disease was essentially eliminated. Mosquito breeding sites were reduced by controlling water levels and insecticide applications.

Chloroquine was discovered by a German, Hans Andersag, in 1934 at the Bayer I.G. Farbenindustrie A.G. laboratories in Elberfeld, Germany. He named his compound resochin. Through a series of lapses and confusion brought about during the war, chloroquine was finally recognized and established as an effective and safe antimalarial in 1946 by British and U.S. scientists.

Felix Hoffmann, Gerhard Domagk, Hermann Schnell_BAYER

A German chemistry student, Othmar Zeidler, synthesized DDT in 1874 for his thesis. The insecticidal property of DDT was not discovered until 1939, by Paul Müller in Switzerland. Various militaries in WWII initially utilized the new insecticide for control of louse-borne typhus. DDT was used for malaria control at the end of WWII after British, Italian, and American scientists had shown it to be effective against malaria-carrying mosquitoes. Müller won the Nobel Prize for Medicine in 1948.

Paul Muller

Malaria Control in War Areas (MCWA) was established to control malaria around military training bases in the southern United States and its territories, where malaria was still problematic. Many of the bases were established in areas where mosquitoes were abundant. MCWA aimed to prevent reintroduction of malaria into the civilian population by mosquitoes that would have fed on malaria-infected soldiers, in training or returning from endemic areas. During these activities, MCWA also trained state and local health department officials in malaria control techniques and strategies.

The National Malaria Eradication Program, a cooperative undertaking by state and local health agencies of 13 Southeastern states and the CDC, originally proposed by Louis Laval Williams, commenced operations on July 1, 1947. By the end of 1949, over 4,650,000 house spray applications had been made. In 1947, 15,000 malaria cases were reported. By 1950, only 2,000 cases were reported. By 1951, malaria was considered eliminated from the United States.

With the success of DDT, the advent of less toxic, more effective synthetic antimalarials, and the enthusiastic and urgent belief that time and money were of the essence, the World Health Organization (WHO) submitted at the World Health Assembly in 1955 an ambitious proposal for the eradication of malaria worldwide. Eradication efforts began and focused on house spraying with residual insecticides, antimalarial drug treatment, and surveillance, and would be carried out in 4 successive steps: preparation, attack, consolidation, and maintenance. Successes included elimination in nations with temperate climates and seasonal malaria transmission.

Some countries such as India and Sri Lanka had sharp reductions in the number of cases, followed by increases to substantial levels after efforts ceased, while other nations had negligible progress (such as Indonesia, Afghanistan, Haiti, and Nicaragua), and still others were excluded completely from the eradication campaign (sub-Saharan Africa). The emergence of drug resistance, widespread resistance to available insecticides, wars and massive population movements, difficulties in obtaining sustained funding from donor countries, and lack of community participation made the long-term maintenance of the effort untenable.

The goal of most current National Malaria Prevention and Control Programs, and of most malaria activities conducted in endemic countries, is to reduce the number of malaria-related cases and deaths. Reducing malaria transmission to a level at which it is no longer a public health problem is the goal of what is called malaria “control.”

The natural ecology of malaria involves malaria parasites infecting successively two types of hosts: humans and female Anopheles mosquitoes. In humans, the parasites grow and multiply first in the liver cells and then in the red cells of the blood. In the blood, successive broods of parasites grow inside the red cells and destroy them, releasing daughter parasites (“merozoites”) that continue the cycle by invading other red cells.

Anopheles mosquito

The blood stage parasites are those that cause the symptoms of malaria. When certain forms of blood stage parasites (“gametocytes”) are picked up by a female Anopheles mosquito during a blood meal, they start another, different cycle of growth and multiplication in the mosquito.

After 10-18 days, the parasites are found (as “sporozoites”) in the mosquito’s salivary glands. When the Anopheles mosquito takes a blood meal on another human, the sporozoites are injected with the mosquito’s saliva and start another human infection when they parasitize the liver cells.

Malaria. Wikipedia

A Plasmodium from the saliva of a female mosquito moving across a mosquito cell

Thus the mosquito carries the disease from one human to another (acting as a “vector”). Differently from the human host, the mosquito vector does not suffer from the presence of the parasites.

All the clinical symptoms associated with malaria are caused by the asexual erythrocytic, or blood stage, parasites. As the parasite develops in the erythrocyte, numerous known and unknown waste substances, such as hemozoin pigment and other toxic factors, accumulate in the infected red blood cell. These are dumped into the bloodstream when the infected cells lyse and release invasive merozoites. The hemozoin and other toxic factors such as glycosylphosphatidylinositol (GPI) stimulate macrophages and other cells to produce cytokines and other soluble factors, which act to produce the fever and rigors associated with malaria.

Ookinete,_sporozoite,_merozoite

Plasmodium falciparum-infected erythrocytes, particularly those with mature trophozoites, adhere to the vascular endothelium of venular blood vessel walls; when they become sequestered in the vessels of the brain, this contributes to the severe disease syndrome known as cerebral malaria, which is associated with high mortality.

Following the infective bite by the Anopheles mosquito, a period of time (the “incubation period”) goes by before the first symptoms appear. The incubation period in most cases varies from 7 to 30 days. The shorter periods are observed most frequently with P. falciparum and the longer ones with P. malariae.

malaria_lifecycle.

Antimalarial drugs taken for prophylaxis by travelers can delay the appearance of malaria symptoms by weeks or months, long after the traveler has left the malaria-endemic area. (This can happen particularly with P. vivax and P. ovale, both of which can produce dormant liver stage parasites; the liver stages may reactivate and cause disease months after the infective mosquito bite.)

The Influenza Pandemic of 1918

The Nation’s Health

If you had lived in the early twentieth century, your life expectancy would
have been much shorter than it is today. Today, life expectancy for men is 75 years;
for women, it is 80 years. In 1918, life expectancy for men was only 53 years.

Women’s life expectancy at 54 was only marginally better.

Why was life expectancy so much shorter?

During the early twentieth century, communicable diseases—that is diseases
which can spread from person to person—were widespread. Influenza and
pneumonia along with tuberculosis and gastrointestinal infections such
as diarrhea killed Americans at an alarming rate, but
non-communicable diseases such as cancer and heart disease also
exacted a heavy toll. Accidents, especially in the nation’s unregulated factories
and workshops, were also responsible for maiming and killing many workers.

High infant mortality further shortened life expectancy. In 1918, one in
five American children did not live beyond their fifth birthday. In some
cities, the situation was even worse, with thirty percent of all infants dying
before their first birthday. Childhood diseases such as diphtheria, measles,
scarlet fever and whooping cough contributed significantly to these high
death rates.

osler_at_a_bedside

By 1900, an increasing number of physicians were receiving clinical
training. This training provided doctors with new insights into disease
and specific types of diseases. [Credit: National Library of Medicine]

scarlet_fever

Quarantine signs such as this one warned visitors away from homes
with scarlet fever and other infectious diseases. [Credit: National
Library of Medicine]

Rat Proofing

Cities often sponsored Clean-Up Days. Here, Public Health Service
employees clean up San Francisco’s streets in a campaign to
eradicate bubonic plague. [Credit: Office of the Public Health
Service Historian]

cleanup days

nurse_helps_with_baby_formula

A public health nurse teaches a young mother how to sterilize
a bottle. [Credit: National Library of Medicine]

Seeking Medical Care

Feeling Sick in 1918?

If you became sick in nineteenth-century America, you might consult
a doctor, a druggist, a midwife, a folk healer, a nurse or even
your neighbor. Most of these practitioners would visit you in your home.

By 1918, these attitudes toward health care were beginning to
change. Some physicians had begun to set up offices where patients
could receive medical care, and hospitals, which emphasized sterilization
and isolation, were also becoming popular.

However, these changes were not yet universal and many Americans
still lived their entire lives without visiting a doctor.

How Did Ordinary People View Disease?

Folk Medicine:

In 1918, folk healers could be found all over America. Some of these
healers believed that diseases had a physical cause such as cold
weather, but others believed they had a supernatural cause, such as a curse.

Treatments advocated by these healers ran the gamut. Herbal remedies
were especially popular. Other popular remedies included cupping,
which entailed attaching a heated cup to the surface of the skin,
and acupuncture. Many people also wore magical objects which they
believed protected the wearer from illness.

During the influenza pandemic of 1918 when scientific medicine
failed to provide Americans with a cure or preventative, many people
turned to folk remedies and treatments.

Scientific Medicine

In the 1880s, building on developments which had been in the
making since the 1830s, a growing number of scientists and
physicians came to believe that disease was spread by
minute pathogenic organisms or germs.

Often called the bacteriological revolution, this new theory
radically transformed the practice of medicine. But while this was a
major step forward in understanding disease, doctors and scientists
continued to have only a rudimentary understanding of the differences
between different types of microbes. Many practicing physicians
did not understand the differences between bacteria and viruses
and this sharply limited their ability to understand disease
causation and disease prevention.

Drugs and Druggists:

Although the early twentieth century witnessed growing attempts
to regulate the practice of medicine, many druggists assumed
duties we associate today with physicians. Some druggists, for
example, diagnosed and prescribed treatments which they
then sold to the patient. Some of these treatments included opiates;
few actually cured diseases.

Desperate times called for desperate remedies and during the
influenza pandemic, many patients turned to these and other drugs
in the hopes that they would provide a cure.

Nurses:

Between 1890 and 1920, nursing schools multiplied and trained
nurses began to replace practical nurses. Isolation practices, sterility,
and strict routines, practices associated with professionally trained
nurses, increasingly became standard during this period. In 1918, nurses
served as the physician’s hand, assisting doctors as
they made the rounds. During the pandemic, many nurses acted
independently of doctors, treating and prescribing for patients.

Physicians:

Throughout the eighteenth and much of the nineteenth centuries,
pretty much anyone had the right to call oneself a physician. By the
late nineteenth century, growing calls for reform had begun to
transform the profession.

In 1900, every state in the Union had some type of medical registration
law with about half of all states requiring physicians to possess a
medical diploma and pass an exam before they received a license
to practice. However, grandfather clauses which exempted many older
physicians meant that many physicians who practiced in 1918
had been poorly trained.

quack_doctor

Poor training and loose regulations meant that some doctors were
little more than quacks. [Credit: National Library of Medicine]

drug_ad

Drug advertisers routinely promised quick and painless cures.
[Credit: National Library of Medicine]

While access to the profession was tightening, women and minorities,
including African-Americans, entered the profession in growing
numbers during the early twentieth century.

What Did Doctors Really Know?

Growing understanding of bacteriology enabled early twentieth-
century physicians to diagnose diseases more effectively than their
predecessors but diagnosis continued to be difficult. Influenza was
especially tricky to diagnose and many physicians may have incorrectly
diagnosed their patients, especially in the early stages of the pandemic.

Bacteriology did not revolutionize the treatment of disease. In the
pre-antibiotic era of 1918, physicians continued to rely heavily
on traditional therapeutics. During the pandemic, many physicians
used traditional treatments such as sweating which had their
roots in humoral medicine.

Reflecting the uneven structure of medical education, the level and
quality of care which physicians provided varied wildly.

The Public Health Service

Founded in 1798, the Marine Hospital Service originally provided
health care for sick and disabled seamen. By the late nineteenth
century, the growth of trade, travel and immigration networks
had led the Service to expand its mission to include protecting
the health of all Americans.

In a nation where federal and state authorities had consistently
battled for supremacy, the powers of the Public Health Service
were limited. Viewed with suspicion by many state and local
authorities, PHS officers often found themselves fighting state
and local authorities as well as epidemics—even when they had
been called in by these authorities.

chelsea marine hospital in 1918

A network of hospitals in the nation’s ports provided seamen with
access to healthcare. [Credit: Office of the Public Health Service Historian]

In 1918, there were fewer than 700 commissioned officers in the PHS.
Charged with the daunting task of protecting the health of some
106 million Americans, PHS officers were stationed in not only
the United States but also abroad.

Because few diseases could be cured, the prevention of disease
was central to the PHS mission. Under the leadership of Surgeon
General Rupert Blue, the PHS advocated the use of scientific
research, domestic and foreign quarantine, marine hospitals
and statistics to accomplish this mission. When an epidemic emerged,
the Public Health Service’s epidemiologists tracked the disease,
house by house. The 1918 influenza pandemic occurred too
rapidly for the PHS to develop a detailed study of the pandemic.

typhoid_map

This map was used to trace a smaller typhoid epidemic which erupted in
Washington, DC in 1906. [Credit: Office of the Public Health Service Historian]

The spread of disease within the US was a serious concern. However,
PHS officers were most concerned about the importation of disease into
the United States. To prevent this, ships could be, and often were,
quarantined by the PHS.

fever-quaranteen-station-1880

Travelers and immigrants to the United States were also required
to undergo a medical exam when entering the country. In 1918 alone,
700,000 immigrants underwent a medical exam at the hands of PHS
officers. Within the United States, PHS officers worked directly with
state and local departments of health to track, prevent and arrest
epidemics as they emerged. During 1918, PHS officers found themselves
battling not only influenza but also polio, typhus, typhoid, smallpox
and a range of other diseases. In 1918, the PHS operated research
laboratories stretching from Hamilton, Montana to Washington DC.
Scientific researchers at these laboratories ultimately discovered
both the causes and cures of diseases ranging from Rocky Mountain
Spotted Fever to pellagra.

Sewers and Sanitation:

In the nineteenth century, most physicians and public health experts
believed that disease was caused not by microorganisms but rather by dirt itself.

Sanitarians, as these people were called, argued that cleaning dirt-
infested cities and building better sewage systems would both prevent
and end many epidemics. At their urging, cities and towns across the United
States built better sewage systems and provided citizens with access to
clean water. By 1918, these improved water and sewage systems had greatly
contributed to a decline in gastrointestinal infections and a significant
reduction in mortality rates among infants, children and young adults.

But because diseases are caused by microorganisms, not dirt, these
tactics were not completely effective in ending all epidemics.

Sanitation: Controlling problems at source

Box 1: Sharing toilets in Uganda

A recent survey by the Ministry of Health in Uganda suggested that there is only one toilet for every 700 Ugandan pupils, compared to one for every 328 pupils in 1995. Of the 8000 schools surveyed, only 33% have separate latrines for girls. The deterioration in sanitary conditions was attributed to increased enrolment in schools. UNICEF surveyed 90 primary schools in crisis-affected districts of north and west Uganda: only 2% had adequate latrine facilities (IRIN, 1999).

Box 2: Sanitation and diarrhoeal disease

Gwatkin and Guillot (1999) have claimed that diarrhoea accounts for 11% of all deaths in the poorest 20% of all countries. This toll could be reduced by two key measures: better sanitation, to reduce the causes of water-linked diarrhoea, and more widespread use of oral rehydration therapy (ORT), to treat its effects. Improving water supplies, sanitation facilities and hygiene practices reduces diarrhoea incidence by 26%. Even more impressive, deaths due to diarrhoea are reduced by 65% with these same improvements (Esrey et al., 1991). Of the 2.2 million people who die from diarrhoea each year, many of those deaths are caused by one bacterium, Shigella. Simple hand washing with soap and water reduces transmission of Shigella and other causes of diarrhoea by 35% (Kotloff et al., 1999; Khan, 1982). ORT is effective in reducing deaths due to diarrhoea but does not prevent it. A rough illustration of what the 65% figure implies is sketched below.

http://www.who.int/water_sanitation_health/sanitproblems/en/index1.html
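
As a rough, hedged illustration of what the Box 2 figures imply, the short Python sketch below applies the quoted 65% mortality reduction to the quoted 2.2 million annual diarrhoeal deaths; it is back-of-the-envelope arithmetic, not an epidemiological model:

# Back-of-the-envelope arithmetic using only the figures quoted in Box 2:
# ~2.2 million diarrhoeal deaths per year, and a 65% reduction in deaths
# with improved water, sanitation and hygiene (Esrey et al., 1991).
annual_deaths = 2_200_000
death_reduction = 0.65                              # 65% fewer deaths

deaths_averted = annual_deaths * death_reduction    # ~1.43 million
deaths_remaining = annual_deaths - deaths_averted   # ~0.77 million
print(f"Deaths potentially averted each year: {deaths_averted:,.0f}")
print(f"Deaths remaining each year: {deaths_remaining:,.0f}")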

Garbage-A-polluted-creek

Influenza Strikes

Throughout history, influenza viruses have mutated and caused
pandemics or global epidemics. In 1890, an especially virulent influenza
pandemic struck, killing many Americans. Those who survived that
pandemic and lived to experience the 1918 pandemic tended to be
less susceptible to the disease.


Influenza ward

When it came to treating influenza patients, doctors, nurses and
druggists were at a loss. [Credit: Office of the Public Health Service Historian]

The influenza pandemic of 1918-1919 killed more people than the
Great War, known today as World War I (WWI), at somewhere
between 20 and 40 million people. It has been cited as the most
devastating epidemic in recorded world history. More people died of
influenza in a single year than in four years of the Black Death bubonic
plague from 1347 to 1351. Known as “Spanish Flu” or “La Grippe,”
the influenza of 1918-1919 was a global disaster.

Grim Reaper

The Grim Reaper by Louis Raemaekers

In the fall of 1918 the Great War in Europe was winding down and
peace was on the horizon. The Americans had joined in the fight,
bringing the Allies closer to victory against the Germans. Deep within
the trenches these men lived through some of the most brutal conditions
of life, which it seemed could not be any worse. Then, in pockets
across the globe, something erupted that seemed as benign as the
common cold. The influenza of that season, however, was far more
than a cold. In the two years that this scourge ravaged the earth,
a fifth of the world’s population was infected. The flu was most deadly
for people ages 20 to 40. This pattern of morbidity was unusual for
influenza, which is usually a killer of the elderly and young children.
It infected 28% of all Americans (Tice). An estimated 675,000
Americans died of influenza during the pandemic, ten times as
many as in the world war. Of the U.S. soldiers who died in Europe,
half of them fell to the influenza virus and not to the enemy (Deseret
News). An estimated 43,000 servicemen mobilized for WWI died
of influenza (Crosby). 1918 would go down as an unforgettable year
of suffering and death and yet of peace. As noted in the Journal
of the American Medical Association final edition of 1918:   “The 1918
has gone: a year momentous as the termination of the most cruel war
in the annals of the human race; a year which marked, the end at
least for a time, of man’s destruction of man; unfortunately a year in
which developed a most fatal infectious disease causing the death
of hundreds of thousands of human beings. Medical science for
four and one-half years devoted itself to putting men on the firing
line and keeping them there. Now it must turn with its whole might to
combating the greatest enemy of all–infectious disease,” (12/28/1918).

From Kansas to Europe and Back Again:

scourge ravaged the earth

Where did the 1918 influenza come from? And why was it so lethal?

In 1918, the Public Health Service had just begun to require state
and local health departments to provide them with reports about
diseases in their communities. The problem? Influenza wasn’t
a reportable disease.

But in early March of 1918, officials in Haskell County in Kansas
sent a worrisome report to the Public Health Service. Although
these officials knew that influenza was not a reportable disease,
they wanted the federal government to know that “18 cases
of influenza of a severe type” had been reported there.

By May, reports of severe influenza trickled in from Europe. Young
soldiers, men in the prime of life, were becoming ill in large
numbers. Most of these men recovered quickly but some developed
a secondary pneumonia of “a most virulent and deadly type.”

Within two months, influenza had spread from the military to the
civilian population in Europe. From there, the disease spread outward—to Asia, Africa, South America and, back again, to North America.

Wave After Wave:

In late August, the influenza virus probably mutated again and
epidemics now erupted in three port cities: Freetown, Sierra
Leone; Brest, France, and Boston, Massachusetts. In Boston,
dockworkers at Commonwealth Pier reported sick in massive
numbers during the last week in August. Suffering from fevers
as high as 105 degrees, these workers had severe muscle and
joint pains. For most of these men, recovery quickly followed. But
5 to 10% of these patients developed severe and massive
pneumonia. Death often followed.

Public health experts had little time to register their shock at the
severity of this outbreak. Within days, the disease had spread
outward to the city of Boston itself. By mid-September, the epidemic
had spread even further with states as far away as California, North
Dakota, Florida and Texas reporting severe epidemics.

The Unfolding of the Pandemic:

The pandemic of 1918-1919 occurred in three waves. The first
wave had occurred when mild influenza erupted in the late
spring and summer of 1918. The second wave occurred with an
outbreak of severe influenza in the fall of 1918 and the final wave
occurred in the spring of 1919.

In its wake, the pandemic would leave about twenty million dead
across the world. In America alone, about 675,000 people in
a population of 105 million would die from the disease.


Mobilizing to Fight Influenza:

Although taken unaware by the pandemic, federal, state and local
authorities quickly mobilized to fight the disease.

On September 27th, influenza became a reportable disease. However,
influenza had become so widespread by that time that most states
were unable to keep accurate records. Many simply failed to
report to the Public Health Service during the pandemic, leaving
epidemiologists to guess at the impact the disease may have
had in different areas.

World War I had left many communities with a shortage of trained
medical personnel. As influenza spread, local officials urgently
requested the Public Health Service to send nurses and doctors.
With fewer than 700 officers on duty, the Public Health Service was
unable to meet most of these requests. On the rare occasions when
the PHS was able to send physicians and nurses, they often became
ill en route. Those who did reach their destination safely often found
themselves both unprepared and unable to provide real assistance.

In October, Congress appropriated a million dollars for the Public
Health Service. The money enabled the PHS to recruit and pay
for additional doctors and nurses. The existing shortage of doctors
and nurses, caused by the war, made it difficult for the PHS to locate and hire qualified practitioners. The virulence of the disease also meant that many nurses and doctors contracted influenza
within days of being hired.

Confronted with a shortage of hospital beds, many local officials
ordered that community centers and local schools be transformed
into emergency hospitals. In some areas, the lack of doctors meant
that nursing and medical students were drafted to staff these
makeshift hospitals.

The Pandemic Hits:

Entire families became ill. In Philadelphia, a city especially hard hit,
so many children were orphaned that the Bureau of Child Hygiene
found itself overwhelmed and unable to care for them.

As the disease spread, schools and businesses emptied. Telegraph
and telephone services collapsed as operators took to their
beds. Garbage went uncollected as garbage men reported sick.
The mail piled up as postal carriers failed to come to work.

State and local departments of health also suffered from high
absentee rates. No one was left to record the pandemic’s spread
and the Public Health Service’s requests for information went
unanswered.

As the bodies accumulated, funeral parlors ran out of caskets
and bodies went uncollected in morgues.

Protecting Yourself From Influenza:

In the absence of a sure cure, fighting influenza seemed an
impossible task.

In many communities, quarantines were imposed to prevent
the spread of the disease. Schools, theaters, saloons, pool
halls and even churches were all closed. As the bodies
mounted, even funerals were held outdoors to protect mourners
against the spread of the disease.

An Emergency Hospital for Influenza Patients

The effect of the influenza epidemic was so severe that the
average life span in the US was depressed by 10 years.
The influenza virus had a profound virulence, with a mortality
rate of 2.5%, compared with less than 0.1% in previous influenza
epidemics. The death rate for 15- to 34-year-olds from
influenza and pneumonia was 20 times higher in 1918 than in
previous years (Taubenberger). People were struck
with illness on the street and died rapid deaths.
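
To see how these estimates fit together, the short Python sketch below combines figures already cited in this section (675,000 US deaths, a population of about 105 million, and an attack rate of roughly 28%); it is a consistency check on the quoted numbers, not an independent estimate:

# Illustrative arithmetic from figures already cited in this section;
# the inputs are the article's estimates, not new data.
us_population = 105_000_000     # approximate US population in 1918
us_deaths = 675_000             # estimated US influenza deaths
attack_rate = 0.28              # ~28% of Americans infected (Tice)

crude_mortality = us_deaths / us_population          # deaths per resident
implied_cases = us_population * attack_rate          # ~29.4 million cases
implied_case_fatality = us_deaths / implied_cases    # deaths per case

print(f"Crude mortality: {100 * crude_mortality:.2f}% of the population")
print(f"Implied case fatality: {100 * implied_case_fatality:.1f}% of those infected")
# About 0.64% of the whole population and roughly 2.3% of those infected,
# broadly consistent with the ~2.5% mortality rate quoted above.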

One anecdote shared of 1918 was of four women playing bridge
together late into the night. Overnight, three of the women died
from influenza (Hoagg). Others told stories of people on their way
to work suddenly developing the flu and dying within hours
(Henig). One physician writes that patients with seemingly
ordinary influenza would rapidly “develop the most viscous
type of pneumonia that has ever been seen” and later when
cyanosis appeared in the patients, “it is simply a struggle for air
until they suffocate,” (Grist, 1979). Another physician recalls
that the influenza patients “died struggling to clear their airways
of a blood-tinged froth that sometimes gushed from their nose
and mouth,” (Starr, 1976). The physicians of the time were
helpless against this powerful agent of influenza. In 1918 children
would skip rope to the rhyme (Crawford):

I had a little bird,

Its name was Enza.

I opened the window,

And in-flu-enza.

schools inspected –

The influenza pandemic circled the globe. Most of humanity felt the
effects of this strain of the influenza virus. It spread following
the path of its human carriers, along trade routes and shipping lines.
Outbreaks swept through North America, Europe, Asia, Africa, Brazil
and the South Pacific (Taubenberger). In India the mortality rate was
extremely high at around 50 deaths from influenza per 1,000
people (Brown). The Great War, with its mass movements of men
in armies and aboard ships, probably aided in its rapid diffusion
and attack. The origins of the deadly flu disease were unknown but
widely speculated upon. Some of the allies thought of the epidemic as a
biological warfare tool of the Germans. Many thought it was a result of
the trench warfare, the use of mustard gases and the generated “smoke
and fumes” of the war. A national campaign began using the ready
rhetoric of war to fight the new enemy of microscopic proportions. A
study attempted to reason why the disease had been so devastating
in certain localized regions, looking at the climate, the weather and
the racial composition of cities. They found humidity to be linked with
more severe epidemics as it “fosters the dissemination of the bacteria,”
(Committee on Atmosphere and Man, 1923). Meanwhile the new
sciences of the infectious agents and immunology were
racing to come up with a vaccine or therapy to stop the epidemics.

The experiences of people in military camps encountering the
influenza pandemic are preserved in several first-hand accounts: an
excerpt from the memoirs of a survivor of the pandemic at Camp Funston,
a letter to a fellow physician describing conditions during the influenza
epidemic at Camp Devens, and a collection of letters of a soldier
stationed at Camp Funston.

The origin of this influenza variant is not precisely known. It is thought
to have originated in China in a rare genetic shift of the influenza virus.
The recombination of its surface proteins created a virus novel to
almost everyone and a loss of herd immunity. Recently the virus
has been reconstructed from the tissue of a dead soldier and is
now being genetically characterized.

The name of Spanish Flu came from the early affliction and large
mortalities in Spain (BMJ,10/19/1918) where it allegedly killed 8
million in May (BMJ, 7/13/1918). However, a first wave of influenza
appeared early in the spring of 1918 in Kansas and in military
camps throughout the US. Few noticed the epidemic in the midst of
the war. Wilson had just given his Fourteen Points address. There was
virtually no response to or acknowledgment of the epidemics in March
and April in the military camps. It was unfortunate that no steps were
taken to prepare for the usual recrudescence of the virulent influenza
strain in the winter. The lack of action was later criticized when the
epidemic could not be ignored in the winter of 1918 (BMJ, 1918).
These first epidemics at training camps were a sign of what was
coming in greater magnitude in the fall and winter of 1918 to the
entire world.

The war brought the virus back into the US for the second wave
of the epidemic. It first arrived in Boston in September of 1918
through the port busy with war shipments of machinery and supplies.
The war also enabled the virus to spread and diffuse. Men across
the nation were mobilizing to join the military and the cause. As they
came together, they brought the virus with them and to those they
contacted. The virus killed almost 200,000 in October of 1918
alone. On November 11, 1918, the end of the war enabled a resurgence.
As people celebrated Armistice Day with parades and large parties, a
complete disaster from the public health standpoint, a rebirth of
the epidemic occurred in some cities. The flu that winter was beyond
imagination as millions were infected and thousands died. Just as
the war had affected the course of influenza, influenza affected
the war. Entire fleets were ill with the disease and men on the front
were too sick to fight. The flu was devastating to both sides, killing
more men than their own weapons could.

With the military patients coming home from the war with battle wounds
and mustard gas burns, hospital facilities and staff were taxed
to the limit. This created a shortage of physicians, especially in the
civilian sector as many had been lost for service with the military.
Since the medical practitioners were away with the troops, only
the medical students were left to care for the sick. Third and fourth
year classes were closed and the students assigned jobs as
interns or nurses (Starr, 1976). One article noted that “depletion has
been carried to such an extent that the practitioners are brought
very near the breaking point,” (BMJ, 11/2/1918). The shortage was
further compounded by the added loss of physicians to the epidemic.
In the U.S., the Red Cross had to recruit more volunteers to contribute
to the new cause at home of fighting the influenza epidemic. To respond
with the fullest utilization of nurses, volunteers and medical supplies, the
Red Cross created a National Committee on Influenza. It was involved
in both military and civilian sectors to mobilize all forces to fight Spanish
influenza (Crosby, 1989). In some areas of the US, the nursing shortage
was so acute that the Red Cross had to ask local businesses to
allow workers to have the day off if they volunteered in the hospitals
at night (Deseret News). Emergency hospitals were created to
take in the patients from the US and those arriving sick from overseas.

Chelsea Marine Hospital in 1918

Red Cross public health nurse

The pandemic affected everyone. With one-quarter of the US and
one-fifth of the world infected with influenza, it was impossible
to escape the illness. Even President Woodrow Wilson suffered
from the flu in early 1919 while negotiating the crucial Treaty of
Versailles to end the World War (Tice). Those who were
lucky enough to avoid infection had to deal with the public health
ordinances to restrain the spread of the disease.

The public health departments distributed gauze masks to be worn
in public. Stores could not hold sales, funerals were limited
to 15 minutes. Some towns required a signed certificate to
enter and railroads would not accept passengers without
them. Those who ignored the flu ordinances had to pay steep
fines enforced by extra officers (Deseret News). Bodies piled up
as the massive deaths of the epidemic ensued. Besides the
lack of health care workers and medical supplies, there was a shortage
of coffins, morticians and gravediggers (Knox). The conditions in 1918
were not so far removed from the Black Death in the era of the
bubonic plague of the Middle Ages.


In 1918-19 this deadly influenza pandemic erupted during the final
stages of World War I. Nations were already attempting to deal with
the  effects and costs of the war. Propaganda campaigns and war
restrictions and rations had been implemented by governments.
Nationalism pervaded as people accepted government authority.
This allowed the public health departments to easily step in and
implement their restrictive measures. The war also gave science
greater importance as governments relied on scientists, now armed
with the new germ theory and the development of antiseptic surgery,
to design vaccines and reduce mortalities of disease and battle
wounds. Their new technologies could preserve the men on
the front and ultimately save the world. These conditions
created by World War I, together with the current social attitudes
and ideas, led to the relatively calm response of the public and
application of scientific ideas. People allowed for strict measures
and loss of freedom during the war as they submitted to the
needs of the nation ahead of their personal needs. They had
accepted the limitations placed with rationing and drafting.
The responses of the public health officials reflected the new
allegiance to science and the wartime society. The medical
and scientific communities had developed new theories and
applied them to prevention, diagnostics and treatment of the
influenza patients.

The Medical and Scientific Conceptions of Influenza

Scientific ideas about influenza, the disease and its origins,
shaped the public health and medical responses. In 1918
infectious diseases were beginning to be unraveled. Pasteur
and Koch had solidified the germ theory of disease through
clear experiments and clever science. The bacilli responsible
for many infections such as tuberculosis and anthrax had
been visualized, isolated and identified. Koch’s postulates
had been developed to clearly link a disease to a specific
microbial agent.

Robert Koch

The petri dish was widely used to grow sterile cultures of bacteria
and investigate bacterial flora. Vaccines had been created for
bacterial infections and even the unseen rabies virus by
serial passage techniques. The immune system was explained by
Paul Ehrlich and his side-chain theory. Tests of antibodies such as
the Wassermann reaction and coagulation experiments were becoming commonplace.
Science and medicine were on their way to their complete entanglement
and fusion as scientific principles and methodologies made their way
into clinical practice, diagnostics and therapy.

The Clinical Descriptions of Influenza

Patients with the influenza disease of the epidemic were generally
characterized by common complaints associated with the flu. They had
body aches, muscle and joint pain, headache, a sore throat and an
unproductive cough with occasional harsh breathing (JAMA, 1/25/1919).

The most common sign of infection was the fever, which ranged from
100 to 104 F and lasted for a few days. The onset of the epidemic influenza
was peculiarly sudden, as people were struck down with dizziness, weakness
and pain while on duty or in the street (BMJ, 7/13/1918). After the
disease was established the mucous membranes became reddened
with sneezing. In some cases there was a hemorrhage of the
mucous membranes of the nose and bloody noses were commonly
seen. Vomiting occurred on occasion, as did diarrhea, but more
commonly there was constipation (JAMA, 10/3/1918).

The danger of an influenza infection was its tendency to progress into
the often fatal secondary bacterial infection of pneumonia. In the
patients that did not rapidly recover after three or four days of fever, there
was an “irregular pyrexia” due to bronchitis or bronchopneumonia (BMJ,
7/13/1918). The pneumonia would often appear after a period of
normal temperature with a sharp spike and expectoration of bright
red blood. The lobes of the lung became speckled with “pneumonic
consolidations.” The fatal cases developed toxemia and vasomotor
depression (JAMA, 10/3/1918). It was this tendency for secondary
complications that made this influenza infection so deadly.

pneumonia

A military hospital ward in 1918

In the medical literature characterizing the influenza disease, new
diagnostic techniques are frequently used to describe the clinical
appearance. The most basic clinical guideline was the temperature,
a record of which was kept in a table over time. Also closely
monitored was the pulse rate. One clinical account said that
“the pulse was remarkably slow,” (JAMA, 4/12/1919) while others
noted that the pulse rate did not increase as expected. With the
pulse, the respiration rate was measured and reported to provide
clues of the clinical progression.
Patients were also occasionally “roentgenographed,” or chest x-rayed
(JAMA, 1/25/1919). The discussion of clinical influenza also often
included analysis of the blood. The number of white blood cells was
counted for many patients. Leukopenia was commonly associated
with influenza. The albumin was also measured, since it was noted that
transient albuminuria was frequent in influenza patients. This was
done by urine analysis. The Wassermann reaction was another
new test of the blood for antibodies (JAMA, 10/3/1918).
These new measurements gave physicians an
image of action and knowledge using scientific instruments. They
could record precisely the progress of the influenza infection and perhaps
forecast its outcome.

The most novel of these tests were the blood and sputum cultures.
Building on the germ theory of disease, the physicians and their
associated research scientists attempted to find the culprit for this
deadly infection. Physicians would commonly order both blood and sputum
cultures of their influenza and pneumonia patients mostly for research
and investigative purposes. At the military training camp
Camp Lewis during an influenza epidemic, “in all cases of pneumonia,
a sputum study, white blood and differential count, blood culture
and urine examinations were made as routine,” (JAMA, 1/25/1919).

The bacterial flora of the nasopharynx of some patients was also cultured
since droplet infection was how the disease disseminated. The
collected swabs and specimens were inoculated onto blood agar in
petri dishes. The resulting bacterial colonies were closely studied to
find the causal organism. Commonly found were pneumococcus,
streptococcus, staphylococcus and Bacillus influenzae (JAMA, 4/12/1919).

pneumonia

These new laboratory tests used in the clinical setting brought in a solid
scientific, biological link to the practice of medicine. Medicine had
become fully scientific and technologic in its understanding and
characterization of the influenza epidemic.

Treatment and Therapy

The therapeutic remedies for influenza patients varied from the
newly developed drugs to oils and herbs. The therapy was much less
scientific than the diagnostics, as the drugs had no clear explanatory
theory of action. The treatment was largely symptomatic, aiming to
reduce fever or pain. Aspirin, or acetylsalicylic acid, was a common remedy.
For secondary pneumonia doses of epinephrine were given. To
combat the cyanosis physicians gave oxygen by mask, or some
injected it under the skin (JAMA, 10/3/1918). Others used salicin, which
reduced pain, discomfort and fever and was claimed to reduce the infectivity
of the patient. Another popular remedy was cinnamon in powder or oil form
with milk to reduce temperature (BMJ, 10/19/1918). Finally, salt of quinine
was suggested as a treatment. Most physicians agreed that the patient should
be kept in bed (BMJ, 7/13/1918). With that came the advice of plenty of
fluids and nourishment. The application of cold to the head, with
warm packs or warm drinks was also advised. Warm baths were used
as a hydrotherapeutic method in hospitals but were discarded for
lack of success (JAMA, 10/3/1918). These treatments, like the
suggested prophylactic measures of the public health officials, seemed to
originate in the common social practices and not in the growing field of
scientific medicine. It seems that as science was entering the medical
field, it served only for explanatory, diagnostic and preventative
measures such as vaccines and technical tests. This science had
little use once a person was ill.

However, a few proposed treatments did incorporate scientific ideas
of germ theory and the immune system. O’Malley and Hartman
suggested treating influenza patients with the serum of convalescent
patients, utilizing the theorized antibodies to boost the immune
system of sick patients. Other treatments included “digitalis” and the
intravenous administration of isotonic glucose and sodium bicarbonate,
which was done in military camps (JAMA, 1/4/1919). Ross and
Hund too utilized ideas about the immune system and properties of the
blood to neutralize toxins and circulate white blood cells. They believed
that the best treatment for influenza should aim to: “…neutralize or render
the intoxicant inert…and prevent the blood destruction with its destructive
leukopenia and lessened coagulability,” (JAMA, 3/1/1919). They tried
to create a therapeutic immune serum to fight infection. These therapies
built on current scientific ideas and represented the most advanced
biomedical, technological treatment, like the antitoxin to diphtheria.


In July, an American soldier said that while influenza caused a heavy
fever, it “usually only confines the patient to bed for a few days.” The
mutation of the virus changed all that. [Credit: National Library of Medicine]


An old cliché maintained that influenza was a wonderful disease as
it killed no one but provided doctors with lots of patients. The 1918
pandemic turned this saying on its head. [Credit: The Etiology of
Influenza in 1918]

During the 1890 influenza epidemic, Pfeiffer found what he
determined to be the microbial agent causing influenza.
In the sputum and respiratory tract of influenza patients in 1892,
he isolated the bacterium Bacillus influenzae, which was
accepted as the true “virus” though it was not found in localized
outbreaks (BMJ, 11/2/1918). However, in studies of the 1907-8
epidemic in the US, Lord had found the bacillus in only 3 of 20 cases.
He also found the bacillus in 30% of cultures of sputum from TB patients.
Rosenthal further refuted the finding when he found the bacillus in 1 of 6
healthy people in 1900 (JAMA, 1/18/1919). The bacillus was also
found to be present in all cases of whooping cough and many cases
of measles, chronic bronchitis and scarlet fever (JAMA, 10/5/1918).
The influenza pandemic provided scientists the opportunity to confirm
or refute this contested microbe as the cause of influenza. The sputum
studies from the Camp Lewis epidemic found only a few influenza cases
yielding the influenza bacilli, and mostly type IV pneumococcus. They
concluded that “the recent epidemic at Camp Lewis was an acute
respiratory infection and not an epidemic due to Bacillus influenzae,”
(JAMA, 1/25/1919). This finding along with others suggested to most
scientists that Pfeiffer’s bacillus was not the cause of influenza.

In the 1918-19 influenza pandemic, there was a great drive to find the
etiological agent responsible for the deadly scourge. Scientists in their
labs were working hard, using the cultures obtained from physician clinics,
to isolate the etiological agent for influenza. As a report early in the
epidemic said, “the ‘influence’ of influenza is still veiled in mystery, ”
(JAMA, 10/5/1918). The nominated Bacillus influenzae
seemed to be incorrect and scientists scrambled to isolate the true cause.
In the journals, many authors speculated on the type of agent: was
it a new microbe, was it a bacterium, was it a virus? One journal offered
that “the severity of the present pandemic, the suddenness of onset…
led to the suggestion that the disease cannot be influenza but some other
and more lethal infection,” (BMJ, 11/2/1918). However, most accepted that
the epidemic disease was influenza based on the familiar symptoms
and known pattern of disease. The respiratory disease of influenza was
understood to give warning in the late spring of its potential effects
upon its recrudescence once the weather turned cold in the winter
(BMJ, 10/19/1918). One article with foresight stated that “there can
be no question that the virus of influenza is a living organism…
it is possibly beyond the range of microscopic vision,” (BMJ, 11/16/1918).

Electron micrograph of the influenza virus

Another
article confirmed the idea of an “undiscovered virus” and noted that pneumococci
and streptococci were responsible for “the gravity of the secondary pulmonary
complications,” (BMJ, 11/2/1918). The article went on to offer the idea of a
symbiosis of virus and secondary bacterial infection combining to make it
such a severe disease.

As the investigators attempted to find the agent responsible for the influenza
pandemic, they were developing ideas of infectious microbes and the concept of the
virus. The idea of the virus as an infectious agent had been around for years.
The articles of the period refer to the “virus” in their discussion but do not
consistently use it to mean an infectious microbe distinct from bacteria. The
term virus has the same usage and application as bacillus. In 1918, a virus
was defined scientifically to be a submicroscopic infectious entity which could
be filtered but not grown in vitro. In the 1880s Pasteur developed an attenuated
vaccine for the rabies virus by serial passage, well ahead of his time. Ivanovski’s
work on the tobacco mosaic virus in the 1890s led to the discovery of the virus.
He found an infectious agent that acted as a micro-organism as it multiplied
yet which passed through the sterilizing filter as a nonmicrobe. By the 1910s
several viruses, defined as filterable infectious microbes, had been identified
as causing infectious disease (Hughes). However, the scientists were still
conceptually behind in defining a virus; they distinguished it only by size
from a bacterium and not as an obligate parasite with a distinct life cycle
dependent on infecting a host cell.

The influenza epidemic afforded the opportunity to research the etiological
agent and develop the idea of the virus. Experiments by Nicolle and Le Bailly in
Paris were the earliest suggestions that influenza was caused by a “filter-passing
virus,” (BMJ, 11/2/1918). They filtered out the bacteria from bronchial expectoration
of an influenza patient and injected the filtrate into the eyes and nose of two monkeys.
The monkeys developed a fever and a marked depression. The filtrate was later
administered subcutaneously to a volunteer, who developed typical signs of influenza.
They reasoned that the inoculated person developed influenza from the filtrate since
no one else in their quarters developed influenza (JAMA, 12/28/1918). These scientists
followed Koch’s postulates as they isolated the causal agent from patients with the
illness and used it to reproduce the same illness in animals. Through these studies,
the scientists proved that influenza was due to a submicroscopic infectious agent
and not a bacterium, refuting the claims of Pfeiffer and advancing virology. They were
on their way to discerning the virus and characterizing the orthomyxoviruses that
cause the disease of influenza.

These scientific experiments, which unraveled the cause of influenza, had immediate
preventive applications. They would assist in the effort to create an effective
vaccine to prevent influenza. This was the ultimate goal of most studies, since
vaccines were thought to be the best preventative solution in the early 20th century.
Several experiments attempted to produce vaccines, each with a different
understanding of the etiology of fatal influenza infection. A Dr. Rosenow invented
a vaccine to target the multiple bacterial agents involved from the serum of patients.
He aimed to raise immunity against the bacteria, the “common causes of death,”
and not the cause of the initial symptoms, by inoculating with the proportions found
in the lungs and sputum (JAMA, 1/4/1919). The vaccines made for the British forces
took a similar approach and were “mixed vaccines” of pneumococcus and
lethal streptococcus. The vaccine development therefore focused on the culture
results of what could be isolated from the sickest patients and lagged behind the
scientific progress.

Fading of the Pandemic:

In November, two months after the pandemic had erupted, the Public Health Service
began reporting that influenza cases were declining.

Communities slowly lifted their quarantines. Masks were discarded. Schools were
re-opened and citizens flocked to celebrate the end of World War I.

The disease continued to be a threat in communities throughout the spring of 1919.

By the time the pandemic had ended, in the summer of 1919, nearly 675,000
Americans were dead from influenza. Hundreds of thousands more were orphaned
and widowed.

The Legacy of the Pandemic

No one knows exactly how many people died during the 1918-1919 influenza
pandemic. During the 1920s, researchers estimated that 21.5 million people died
as a result of the 1918-1919 pandemic. More recent estimates place
global mortality from the 1918-1919 pandemic at anywhere between 30 and 50
million. An estimated 675,000 Americans were among the dead.

The Influenza Pandemic occurred in three waves in the United States throughout
1918 and 1919.

More Americans died from influenza than died in World War I. [Credit: National Library of Medicine]

All of these deaths caused a severe disruption in the economy. Claims against life
insurance policies skyrocketed, with one insurance company reporting a 745 percent
rise in the number of claims made. Small businesses, many of which had been unable to operate during the pandemic, went bankrupt.

Joseph Goldberger

Joseph Goldberger, one of the leading researchers in the PHS, studied influenza
during the pandemic. But Goldberger had multiple interests and influenza research
became less important to him in the years following 1918. [Credit: Office of the Public
Health Service Historian]

In the summer and fall of 1919, Americans called for the government to research
both the causes and impact of the pandemic. In response, both the federal government
and private companies, such as Metropolitan Life Insurance, dedicated money
specifically for flu research.

In an attempt to determine the effect influenza had on different communities, the Public
Health Service conducted several small epidemiological studies. These studies,
however, were conducted after the pandemic and most PHS officers
admitted that the data which was collected was probably inaccurate.

PHS scientists continued to search for the causative agent of influenza in their
laboratories as did their fellow scientists in and outside the United States.

But while there was a burst of enthusiasm for funding flu research in
1918-1919, the funds allocated for this research were actually fairly meager.
As time passed, Americans became less interested in the pandemic and its
causes. And even when funding for medical research dramatically increased
after World War II, funding for research on the 1918-1919 pandemic remained
limited.

Forgetting the 1918-1919 Pandemic:

In the years following 1919, Americans seemed eager to forget the pandemic.
Given the devastating impact of the pandemic, the reasons for this forgetfulness
are puzzling.

It is possible, however, that the pandemic’s close association with World War I
may have caused this amnesia. While more people died from the pandemic than
from World War I, the war had lasted longer than the pandemic and caused
greater and more immediate changes in American society.

Influenza also hit communities quickly. Often it disappeared within a few weeks of
its arrival. As one historian put it, “the disease moved too fast, arrived, flourished
and was gone before…many people had time to fully realize just how great
was the danger.” Small wonder, then, that many Americans forgot about the
pandemic in the years which followed.

Scientific Milestones in Understanding and Preventing Influenza:

In the early stages of the pandemic, many scientists believed that the agent
responsible for influenza was Pfeiffer’s bacillus. Autopsies and research conducted
during the pandemic ultimately led many scientists to discard this theory.

In late October of 1918, some researchers began to argue that influenza was
caused by a virus. Although scientists had understood that viruses could cause
diseases for more than two decades, virology was still very much in its infancy at
this time.

It was not until 1933 that the influenza A virus, which causes almost every type
of endemic and pandemic influenza, was isolated. Seven years later, in 1940,
the influenza B virus was isolated. The influenza C virus was finally isolated in 1950.

Influenza vaccine was first introduced as a licensed product in the United States in
1944. Because of the rapid rate of mutation of the influenza virus, the
effectiveness of a given vaccine usually lasts for only a year or two.

By the 1950s, vaccine makers were able to prepare and routinely release vaccines
which could be used in the prevention or control of future pandemics. During the
1960s, increased understanding of the virus enabled scientists to develop both
more potent and purer vaccines.

Mass production of influenza vaccines continued, however, to require several
months lead time.

Twentieth-Century Influenza Pandemics or Global Epidemics:

The pandemic which occurred in 1918-1919 was not the only influenza pandemic
of the twentieth century. Influenza returned in a pandemic form in 1957-1958
and, again, in 1968-1969.

These two later pandemics were much less severe than the 1918-1919 pandemic.
Estimated deaths within the United States for these two later pandemics
were 70,000 excess deaths (1957-1958) and 33,000 excess deaths (1968-1969).

Tuberculosis

Mycobacterium tuberculosis was first discovered in 1882 by Robert Koch and is one of almost 200 mycobacterial species which have been detected by molecular techniques. The genus Mycobacterium, a member of the Actinobacteria given its own family (the Mycobacteriaceae), includes pathogens known to cause serious diseases in mammals, including tuberculosis (caused by the M. tuberculosis complex, MTBC) and leprosy (M. leprae). Mycobacteria are grouped neither as Gram-positive nor Gram-negative bacteria. MTBC consists of M. tuberculosis, M. bovis, M. bovis BCG (bacillus Calmette-Guérin), M. africanum, M. caprae, M. microti, M. canettii and M. pinnipedii, all of which share genetic homology, with no significant variation between sequences (∼0.01 to 0.03%), although differences in phenotypes are present. Cells in the genus have a typical rod or slightly curved shape, with dimensions of 0.2 to 0.6 μm by 1 to 10 μm.

Mycobacterium tuberculosis has a waxy mycolic acid lipid complex coating on its cell surface. The cells are impervious to Gram staining, so a common staining procedure used is Ziehl-Neelsen (ZN) staining. The outer compartment of the cell wall contains lipid-linked polysaccharides, is water-soluble, and interacts with the immune system. The inner wall is impermeable. Mycobacteria have some unique qualities that are divergent from members of the Gram-positive group, such as the presence of mycolic acids in the cell wall.

MTBC and M. leprae replication occurs in the tissues of warm-blooded human hosts. This air-borne pathogen is transmitted from an active pulmonary tuberculosis patient by coughing. Droplet nuclei, approximately 1 to 5 μm in size, “meander” in the air and are transmitted to susceptible individuals by inhalation. Mycobacteria are incapable of replicating in or on inanimate objects. The risk of infection depends on the load of bacilli inhaled, the level of infectiousness of the source, the closeness of contact and the immune competency of potential hosts. Due to the size of the droplets inhaled into the lungs, the infection penetrates the defense systems of the bronchi and enters the terminal alveoli. Invading bacteria are then engulfed by alveolar macrophages and dendritic cells.

The cell-mediated immune response limits the multiplication of M. tuberculosis and halts infection. Infected individuals with strong immune systems are generally able to combat the infection within 2 to 8 weeks post-infection, when the active cell-mediated immune response stops further multiplication of M. tuberculosis. Tuberculosis infection shows several significant clinical manifestations in pulmonary and extra-pulmonary sites. Prolonged coughing, severe weight loss, night sweats, low-grade fever, dyspnoea and chest pain are the clinical symptoms of pulmonary infection.

Fort Bayard, N.M., T.B. service assignment

Fort Bayard, NM Post Hospital circa 1890

U.S. Army, General Hospital, Fort Bayard, New Mexico, General View

Tuberculosis, (Pvt.) Richard Johnson said, was “regarded as a much dreaded disease that was easily contracted by association.” In fact, so many hospital corpsmen requested transfers out that the Surgeon General established a policy that no such requests would be considered until after two years of service. Consequently, Johnson noted, “During my time there we had a high percentage of desertions.” For example, all four of the men who arrived with Johnson deserted within a year—“two of them,” he dryly observed, “owing me money.”

Four years later another young man arrived at Fort Bayard. He, too, remarked on the long journey by rail through the “desert waste of New Mexico,” and then the wagon ride over “dry desolate foothills,” to the post. But his reaction was different from Johnson’s. Capt. Earl Bruns moved from being a patient to a physician at the hospital. For Bruns Fort Bayard was “a veritable oasis in the desert, studded with shade trees, green lawns, shrubbery, and flowers.” He credited the hospital commander, Colonel (Col.) George E. Bushnell, writing that, “[i]n this one spot one man had made the desert bloom like a rose.”

Johnson’s and Bruns’ different views from 1904 and 1908, respectively, may reflect the fact that Johnson was healthy and assigned grudgingly to work at the tuberculosis hospital, whereas Bruns had few other options and came in hopes of regaining his health—or it may reflect the improvements Bushnell made during his first years in command. But every week for the more than twenty years that Fort Bayard was an Army tuberculosis hospital, workers and patients arrived with dread and foreboding, or joy and relief—or a mix of them all.

The approach Fort Bayard and George Bushnell took to tuberculosis was similar to how physicians manage the disease today in that it involved isolating the patient, treating the disease, and educating the patient and his family on how to maintain their health. The hospital offered patients sanctuary from the demands, fears, and prejudices regarding tuberculosis in the outside world. Fort Bayard treated tuberculosis patients with prolonged bed rest, fresh air, and a healthy diet, but undertaking this “rest treatment”—confining oneself to bed for months—proved difficult if not impossible for many patients. Fort Bayard involved patients’ adaptation to new lifestyles as people with tuberculosis. Finally, Fort Bayard managed patients’ transition back to the outside world.

One of the most striking aspects of Fort Bayard was that many of the medical staff had tuberculosis themselves, including George Bushnell. Tuberculosis weakened Bushnell’s lungs and shaped his life in numerous ways. He tired easily, had to carefully monitor his health, and as Earl Bruns observed, “was never a well man.” Bushnell had active tuberculosis five times in his life: the fourth time in 1919 with a breakdown from the strain of wartime work; and the fifth and the final illness in 1924 that led to his death at age 70. In 1911 he advised his superiors that, “I did not consider myself strong enough to carry on the work of commanding this Hospital and keeping myself in condition for active duty.” The War Department generally required officers in poor physical condition to retire, but the Surgeon General secured a waiver for Bushnell, because “the interests of the service would suffer by his retirement.” After a leave of absence in 1909–10, Bushnell’s annual reports on the competency of his officers included his own name on the list of those competent for hospital duty, but “unfit for active field service.”

“What would our sanatorium movement and our anti-tuberculosis crusade amount to,” wrote tuberculosis expert Adolphus Knopf, “were it not for the labors of tuberculous physicians, or one-time tuberculous physicians, who, because of their infirmity, had become interested in tuberculosis?” Well-known leaders in the antituberculosis movement such as Edward Trudeau and Lawrence Flick established their sanatoriums after they recovered from tuberculosis in order to offer others the treatment. Twenty-one of the first thirty recipients of the Trudeau Medal, established in 1926 for outstanding work in tuberculosis, had the disease. James Waring, a tuberculosis physician who arrived at a Colorado Springs sanatorium on a stretcher in 1908, later wrote, “It has been my good fortune to serve three separate and extended ‘hitches’ as a ‘bed patient,’ the time so spent numbering in all about nine years.” He, like many physicians, saw his personal experience as an asset in his practice. The three key figures in the Army tuberculosis program during World War I were Bushnell, Bruns, and Gerald Webb of Colorado Springs who started a tuberculosis sanatorium after his wife died of the disease.

Bushnell turned tuberculosis into an asset for the Army Medical Department, making Fort Bayard a center of national expertise on the disease. His personal experience with chronic pulmonary tuberculosis gave him good rapport and credibility with many of his patients. Medical officer Earl Bruns wrote that, “[H]e went among the patients and talked to them individually” and thereby provided “a living example of a cure due to rational treatment.” Bruns described how Bushnell spent his days attending to patients, carrying out administrative duties, and devoted hours to supervising the work in the gardens and grounds of Fort Bayard.

(Who’s Who in America, 1924-25; E. H. Bruns in American Review of Tuberculosis, June 1925; G. B. Webb in Outdoor Life, Sept. 1924; Lancet, Lond., 1924; Jour. Am. Med. Ass’n., 1924, p. 374.)

General George M. Sternberg

In addition to being an Army surgeon, Sternberg was also a noted bacteriologist who, in 1880, had translated Antoine Magnin’s The Bacteria, which presented the latest research in germ theory. Sternberg’s work helped prepare American understanding of Robert Koch’s pronouncement in 1882 of the existence of the tubercle bacillus (Ott 1996:55). Over the next two decades Koch’s analysis gained converts, leading to the universally accepted belief that tuberculosis represented a bacterial infection that could be diagnosed and then monitored by microscopic inspection of a patient’s sputum.

Sternberg was no doubt aware of the efforts of Edward Livingston Trudeau. Beginning in the 1870s, when he undertook his own recovery from consumption by withdrawing to the Adirondack Mountains, Trudeau had become an advocate of extended bed rest in remote, healthful environments. Quickly accepting Koch’s research, Trudeau argued that those afflicted by the tubercle bacillus could best be healed when removed from cities and placed under the care of physicians who carefully monitored their weight and sputum and who prescribed constant bed rest with exposure to fresh air. Preferring the term “sanatorium,” derived from the Latin word “to heal,” to “sanitarium,” derived from the Latin term for health, Trudeau founded his Adirondack Cottage Sanatorium at Saranac, New York, in 1885. This spawned the opening of hundreds of similar institutions throughout the country (Caldwell 1988:70).

In 1899, Fort Bayard remained within the Army under the auspices of the Army Medical Department. The Army’s decision to retain the fort, even after it had outlived its military usefulness, grew from the strong interest that General George M. Sternberg, Surgeon General of the Army, had in pulmonary tuberculosis and its treatment.
Sternberg was also aware of the relatively good health that the Army’s soldiers had enjoyed serving in the higher elevations of the American West. Members of Zebulon Pike’s expedition of 1810 and of Fremont’s exploratory parties of the 1840s had witnessed their health improve while in the Rocky Mountains.

………………………………………………………………………………………………………………………………………………..

Upon assuming command in 1904, Bushnell, who had studied botany for years, immediately began to plant flowers, shrubs, and trees. When President Theodore Roosevelt created the Gila Forest Reserve in 1905, Bushnell ensured that Fort Bayard, which adjoined the Reserve, was part of a government reforestation project. The first year alone the Forest Service gave the hospital 250 seedlings of Himalayan cedar and yellow pine. Bushnell also got approval to fence in land for pasturing dairy cattle and arranged to recultivate long-neglected garden plots. The first year he predicted that the garden would generate “about 1300 dollars worth of produce.” After the quartermaster located an underground water source, Bushnell redoubled his cultivation efforts, planting trees, flowers, and grass to mitigate the wind and dust, and “to beautify the Post.” In later years Bushnell successfully grew beans from ancient cave dwellers (Anasazi beans), and made a less successful effort to grow Giant Sequoia from California. By 1910 Fort Bayard had four acres of vegetable gardens, a greenhouse, an orchard of 200 fruit trees, and alfalfa fields and hay fields for the dairy herd of 115 Holsteins, which the Silver City Enterprise proclaimed “one of the finest in the west.” The hospital also raised all of the beef consumed at the hospital (thereby avoiding Daniel Appel’s purchasing problems) and consumed pork at small expense by feeding the pigs the waste food. The hospital laboratory raised its own Belgian hares and guinea pigs for experiments.

Bushnell oversaw years of construction at Fort Bayard. In the wake of Florence Nightingale’s writings, nineteenth-century sanitation practices stressed cleanliness and ventilation, giving rise to pavilion style hospitals, narrow one- or two-story buildings lined with windows to provide patients with ample ventilation. In March 1904, Bushnell sent the Surgeon General plans for an “open court building” in modified pavilion style (Figure 2-1).

Plan for tuberculosis patient ward, as designed by George E. Bushnell, providing fresh air porches for each patient, United States Army Tuberculosis Hospital in New Mexico.

The building consisted of a quadrangle of long, narrow dressing rooms around an open court with porches along both the exterior and interior of the building. The rooms could be used for sleeping in inclement weather and the porches allowed patients to seek sun or shade as they wished. Wide doors enabled the easy movement of beds between the rooms and the porches. “The object of this style of building is to facilitate sleeping out of doors, which is now considered so important in modern sanatoria for the treatment of tuberculosis,” Bushnell explained.

The United States escaped the cauldron of WWI until April 1917. But after years of trying to maintain neutrality, President Woodrow Wilson’s administration mobilized the nation to fight in the most deadly enterprise the world had ever seen. Modern industrialized warfare would kill millions of soldiers, sailors, and civilians and unleash disease and famine across the globe. Typhus flourished in Eastern Europe and a lethal strain of influenza exploded out of the Western Front in 1918, producing one of the worst pandemics in history. Although eclipsed by such fierce epidemics, tuberculosis also fed on the war.

Bushnell was ordered to the office of The Surgeon General on June 2, 1917, and placed in charge of the Division of Internal Medicine, and on June 13 there appeared S. G. O. Circular No. 20, Examinations for pulmonary tuberculosis in the military service, establishing a standard method of examination of the lungs for tuberculosis. Through his efforts a reexamination of all personnel already in the service was made by tuberculosis examiners and about 24,000 were rejected on that score. He had charge of the location, construction, and administration of all army tuberculosis hospitals, of which eight were built with a capacity of 8,000 patients.

With his relief from service in 1919 he took up his residence on a small farm at Bedford, Mass., where he prepared his Study of the Epidemiology of Tuberculosis (1920) and later Diseases of the Chest (1925) in collaboration with Dr. Joseph H. Pratt of Boston. As chief delegate of the National Tuberculosis Association he attended the first meeting of the International Union Against Tuberculosis in London in 1921. During the winter of 1922-23 he delivered a series of lectures on military medicine at Harvard University. In the summer of 1923 he moved to California and took up his residence at Pasadena.

………………………………………………………………………………………………………………………………………………..

In eighteen months the Selective Service registered twenty-five million men for the draft, examined ten million for military service, and enlisted more than four million soldiers, sailors, and Marines. To the dismay of many people, medical screening boards across the nation soon discovered that American men were not as strong and healthy as they had assumed. Of those eligible for military service, 30 percent were physically unfit; a number of them deemed ineligible to serve had tuberculosis. Therefore, in 1917 Surgeon General William Gorgas called George Bushnell to Washington, DC, to establish the Office of Tuberculosis in the Division of Internal Medicine, leaving Bushnell’s protégé, Earl Bruns, in charge of Fort Bayard. Given the Medical Department’s mission to maintain a strong and healthy fighting force, Bushnell’s new job was to minimize the incidence of tuberculosis among active-duty soldiers and avoid the high cost of disability pensions for men who incurred the disease during military service. It was a tall order.

Wartime tuberculosis had already received attention in 1916, when reports circulated that the French army had sent home 86,000 men with the disease, raising the specter that life in the trenches would generate hundreds of thousands of cases. One investigator found that tuberculosis rates in the British army were double those in peacetime, reversing the prewar downward trend. The head of the New York City Public Health Department, Hermann Biggs, declared that “tuberculosis
offers a problem of stupendous magnitude in France.” Subsequent studies revealed that only 20 percent or less of the French soldiers sent home with tuberculosis actually had the disease; others were either misdiagnosed or had had tuberculosis prior to entering the military and therefore had not contracted it in the trenches. The reports nevertheless galvanized public health officials to address the tuberculosis problem. The Rockefeller Foundation, for example, in cooperation with the American Red Cross, established a Commission for the Prevention of Tuberculosis in France to help the French and protect any Americans from contracting tuberculosis “over there.”

Bushnell established four “tuberculosis screens” by (1) examining all volunteers and draftees before enlistment, (2) checking recruits again in the training camps, (3) examining soldiers already in the Army for tuberculosis, and (4) screening military personnel at discharge to ensure they returned to civil life in sound condition. To implement these activities, Bushnell developed a protocol under which physicians could quickly examine men for tuberculosis as part of the larger physical examination process. He standardized the procedures for examinations throughout the Army, and crafted a narrow definition of what constituted a tuberculosis diagnosis to enable the Army to enlist as many young men as possible. Despite these efforts, soldiers developed active cases of tuberculosis throughout the war. Bushnell’s office also created eight more tuberculosis hospitals in the United States and designated three hospitals with the American Expeditionary Forces (AEF) in France to care for soldiers who developed active tuberculosis in the camps and trenches. Short of resources and knowledge, however, the Army Medical Department at times struggled just to provide beds for tuberculosis patients, let alone deliver the individual care Bushnell and his staff had provided at Fort Bayard before the war.

Overburdened medical personnel worked long hours, in often poor conditions. Thousands of tuberculosis patients resented the diagnosis and protested the conditions in which at times they were virtually warehoused. The draft, which brought millions of young men into government control and responsibility, also exposed the Army Medical Department to public scrutiny. Congress launched an investigation in 1919. World War I, which so dramatically changed the world, profoundly altered the Army’s tuberculosis program as well. It also challenged George Bushnell’s expertise. The Army’s tuberculosis expert had founded his policies on assumptions that, although widely held at the time, proved to be inaccurate and costly in lives and treasure. Wartime tuberculosis, therefore, shows the power of disease to overwhelm both knowledge and institutions.

Bushnell and his contemporaries were familiar with the concept of immunity and the power of vaccination, and the Army Medical Department vaccinated soldiers for smallpox and typhoid. Extending this concept of immunity to tuberculosis, medical officers differentiated between primary infection in childhood and secondary infection later in life. Observing that tuberculosis was often fatal for infants and young children, they reasoned that for survivors, an early infection of tuberculosis bacilli immunized a person against the disease later in life.
A “primary infection,” wrote Bushnell, gave a person some immunity, which “while not sufficient in many cases to prevent extension of disease [within the body]…is sufficient to counteract new infections from without.” In an article on “The Tuberculous Soldier,” the revered physician William Osler agreed. For years autopsies had uncovered healed tuberculosis lesions in people who had died in accidents or of other diseases. Although it was not known how many men between the ages of eighteen and forty harbored the tubercle bacillus, Osler wrote, “We do know that it is exceptional not to find a few [lesions] in the bodies of men between these ages dead of other diseases.” Thus, he argued, “In a majority of cases the germ enlists with the soldier. A few, very few, catch the disease in infected billets or barracks.” Bushnell reasoned that if adults developed tuberculosis, “they do it on account of failure of their resistance.”

At one point Bushnell told the chief surgeon of the AEF, “Personally I have no fear of the contagion of tuberculosis between adults and see no reason why patients of this kind should not be treated in the ordinary hospital.” He asserted that the “really cruel persecution of the consumptive…through the fear that he will infect others, is based on what I must characterize as highly exaggerated notions of the danger of such infection.” This, too, was the prevailing view. Boston bacteriologist Edward O. Otis, who served as a medical officer during the war, wrote that “Undue fear of the communicability of pulmonary tuberculosis from one adult to another is unwarranted in the present state of our knowledge.”
Bushnell reasoned that if men infected with tuberculosis could indeed easily spread it to others, there would be much more tuberculosis in the Army than there was. British physician Leslie Murry reasoned that although the crowded and damp conditions of trench warfare would have unfavorable effects on soldiers’ health, living outside with plenty of fresh air and good food and hygienic practices would improve their resistance to tuberculosis. Public health specialist George Thomas Palmer countered that although reactivation may not be higher in the military than in civil life, the United States had enough men without tuberculosis to bar anyone suspected of it from the military and thereby avoid an “added financial burden to the nation.” The challenge was to keep tuberculosis out of the Army and tuberculars off the disability rolls, but not to exclude so many men as to impair the nation’s ability to amass an army.

Bushnell’s views of tuberculosis immunity, contagion, interaction with military life, and the risk of overdiagnosis shaped the Army Medical Department programs for screening recruits. He knew he could not guarantee that all tuberculosis could be eliminated from the Army, but asserted that, “a sufficiently rigid selection of promising material in itself practically excludes tuberculosis.” In addition to enlisting the strongest men, Bushnell believed that a massive screening program would pay for itself by eliminating those who would later cost the government in medical services and disability benefits.

But the nation at war did not have the time or resources for the meticulous one-hour examination practiced at Fort Bayard, so Bushnell developed a protocol for civilian and military physicians to examine volunteers, draftees, trainees, and soldiers for tuberculosis in a matter of minutes. Circular No. 20 detailed how physicians should examine recruits, and became the single most important Army tuberculosis document during the war. The circular explained that the apices, or the tops of lungs, were the most common location for tuberculosis lesions, and that “the only trustworthy sign of activity in apical tuberculosis is the presence of persistent moist rales.” Circular No. 20 directed that “the presence of tubercle bacilli in the sputum is a cause for rejection,” and that “no examination for tuberculosis is complete without auscultation following a cough.” It recommended that a sputum sample “be coughed up in [the examiner’s] presence,” to ensure that it was actually from the examinee.

The last one-third of the document detailed X-ray examinations, summarizing eight different kinds of conditions that may appear and that would be grounds for rejection, and which conditions would not. By 1915, a Fort Bayard medical officer stated that X-ray technology “has become one of the most valued procedures in the diagnosis of pulmonary tuberculosis.” Medical officers F. E. Diemer and R. D. MacRae at Camp Lewis, Washington, argued in the pages of the JAMA that X-rays should be the primary diagnostic tool, not an “adjunct.” World War I ultimately, however, did encourage X-ray technology by revealing its power to thousands of physicians, stimulating the search for technical advances, and demonstrating the importance of specialization in reading X-rays. By the end of the war, the Army Medical Department had shipped to France hundreds of X-ray machines for use in Army hospitals and at the bedside, and developed various modes of X-ray equipment, including X-ray ambulances.

Calculating that it would require 600 examiners for the screening process, the Medical Department turned to training general practitioners from civil life who knew little about tuberculosis. Bushnell’s office established a six-week tuberculosis course to prepare physicians. The first course at the Army Medical School in Washington, DC, was so popular that instructors offered it at several other training camps in the country. General Hospital No. 16, operating in conjunction with Yale Medical School, also offered a course on hospital administration to train medical officers to run tuberculosis hospitals.

Public health officials and the National Tuberculosis Association asked to be informed of any tuberculous individuals being sent to their communities, including the name and address of the “party assuming responsibility for such continued treatment and care.” The journal American Medicine published an article by British tuberculosis specialist Halliday Sutherland, who expressed concern that if men declined treatment and returned home they could spread tuberculosis to their families. He suggested that the U.S. Army retain men diagnosed with tuberculosis so that the government could provide treatment and discipline them if they resisted. Members of Congress opposed simply discharging men with tuberculosis. Representative Carl Hayden of Arizona argued that such men had given up their civilian lives upon induction into the Army, only to discover “that they were afflicted with a dread disease which prevents them from earning a livelihood.” He suggested that “some provision should be made for the care of such men until they are able to provide for themselves.”

While Bushnell’s policies succeeded in suppressing tuberculosis rates in the Army, the narrow definition of a tuberculosis diagnosis explicitly allowed men with healed lesions in their lungs to serve, and the rapid screening system caused some examiners to miss cases of active disease. Bushnell recognized that “a standard, though imperfect, is believed to be an indispensable adjunct in Army tuberculosis work not only to support the examiner but also to secure the necessary uniformity of practice in the matter of discharge for tuberculosis.” Nationwide, local draft boards and training camps rejected more than 88,000 men for tuberculosis, about 2.3 percent of the 3.8 million men examined. Postwar assessments calculated that of the more than two million soldiers who went to France to serve in the AEF, only 8,717 were evacuated with a diagnosis of tuberculosis, an incidence of only 0.4 percent.

In early 1918 a strep infection in the training camps in the United States caused medical officers to send hundreds of trainees to Army hospitals misdiagnosed with tuberculosis, crowding hospitals and generating paperwork and confusion. For a time, therefore, the Office of The Surgeon General ordered that no one should be discharged for tuberculosis from the training camps unless he had bacilli in his sputum—meaning the very severe cases. More than 50 percent of the patients being sent back to the United States from France with a diagnosis of tuberculosis did not actually have the disease. Bushnell viewed such overdiagnoses as “evil,” because they took men out of the AEF and overburdened tuberculosis hospitals and naval transports, which had to segregate suspected tuberculosis cases in isolation rooms or on open decks.

Faced with what he called “leaking” of soldiers from the AEF due to erroneous tuberculosis diagnoses, Bushnell turned to a specialist for assistance, Gerald B. Webb (Figure 4-3), from Colorado Springs. An Englishman by birth, Webb had married an American, and when she developed tuberculosis the couple traveled to Colorado Springs, Colorado, for treatment. His wife struggled with the disease for ten years until her death in 1903, and afterward Webb stayed on in Colorado Springs, remarrying and building a medical practice specializing in tuberculosis. In addition to his medical practice, Webb pioneered research into the body’s immune function, searched for a tuberculosis vaccine, and was a founder of the American Association of Immunologists (1913). Still somewhat bored in Colorado Springs, Webb volunteered for the Medical Corps soon after the United States declared war and helped organize and run tuberculosis screening boards at Camp Russell, Wyoming, and Camp Bowie, Texas. Bushnell
appointed him senior tuberculosis consultant for the AEF. After meeting with Bushnell in Washington and attending the Army War Course for senior officers at Columbia University, Webb sailed to France in March 1918.

Gerald B. Webb, World War I, Gerald B. Webb Papers. Photograph courtesy of Special Collections, Tutt Library, Colorado College, Colorado Springs, Colorado.

Immunity in Tuberculosis: Further Experiments (1914)

Webb instituted a screening process similar to that in the United States, distributing Circular No. 20, and preparing an illustrated version for medical officers in the field. He established a policy directing that only patients with sputum positive for tuberculosis should be sent back to the United States. Others would be tagged “tuberculosis observation” and sent to one of three hospitals designated as tuberculosis observation centers. There, specialists—Bushnell’s “good tuberculosis men”—would distinguish tuberculosis signs from other lung problems such as bronchitis and pneumonia, determine whether a man was free of disease, and send only patients who were indeed positive for tuberculosis back to the homeland.

Webb traveled to field and base hospitals throughout France. He would typically spend three days at a hospital, examining patients, leading conferences, giving lectures, and, according to his biographer, Helen Clapesattle, “preaching his gospel of fresh air and absolute rest.” He recruited a radiologist to teach the proper reading of X-ray plates, and advocated the early detection of tuberculosis, explaining, “Just as the wounded do better if they are got to the surgeons quickly, so the tuberculosis-wounded are more likely to recover if they are spotted and sent to the doctors early.”

In the 1930s, as Webb had concluded in 1919, scientists came to recognize that early tuberculosis infections did not provide protection and that adults could be reinfected with tuberculosis and develop active disease. In the meantime, with his AEF work done, in January 1919 Webb returned to his family and medical practice in Colorado Springs. The National Tuberculosis Association recognized Webb’s war work by electing him president in 1920, and Webb set the Association on a course of tuberculosis research on the immunity question and the standardization of X-ray diagnostics. He did not return to military service, but was a mentor for young physicians Esmond Long and James Waring, who would be leaders in the Army Medical Department’s tuberculosis program during the next war.

In May 1941, as the United States stood on the brink of another world war, Benjamin Goldberg, president of the American College of Chest Physicians, recited some stunning figures at the association’s annual meeting in Cleveland, Ohio. He calculated that from 1919 to 1940 the Veterans Administration had admitted 293,761 tuberculosis patients to its hospitals. These patients had received government care and benefits for a total of 1,085,245 patient-years, at a cost of $1,185,914,489.56. Goldberg’s remarks reveal that although tuberculosis rates in the United States were declining 3 to 4 percent annually during the interwar years, the government’s burden to care for tuberculosis patients remained heavy. The Army was only three-quarters the size it was before World War I (131,000 versus 175,000 strength) and experienced no major epidemics, so that suicide and automobile accidents became the leading causes of death in the peacetime Army. Although hospital admissions of active duty personnel for tuberculosis declined during the decade, tuberculosis admissions at Fitzsimons Hospital in Denver remained constant due to a steady stream of patients who were veterans of the war. Tuberculosis, in fact, became a leading cause of disability discharges from the Army and, with nervous and mental disorders, generated the greatest amount of veterans’ benefits between the wars.

The story of tuberculosis in the Army after World War I, then, is one of increasing demand and decreasing resources, a dynamic that left Fitzsimons financially strapped even before the country entered the Great Depression. An examination of Fitzsimons’ postwar environment—the modern hospital and technology, the ever-changing landscape of veterans’ benefits, and new, invasive treatments for tuberculosis—illuminates these stresses.

President Franklin Delano Roosevelt proclaimed a “limited national emergency” on 8 September 1939, a week after Germany invaded Poland. But due to underfunding during the interwar period, one observer wrote that “to prepare for war the Medical Department had to start almost from scratch.”1 Given the lean years of the 1920s and 1930s and the Army Medical Department’s policy of discharging officers with tuberculosis from duty, Surgeon General James C. Magee had to turn to the civilian sector for a tuberculosis expert. He recruited Esmond R. Long, M.D., Ph.D., director of the Henry Phipps Institute for the Study, Prevention and Treatment of Tuberculosis in Philadelphia. He could not have made a better choice. Long was also professor of pathology at the University of Pennsylvania, director of medical research for the National Tuberculosis Association, and the youngest person to be awarded the Trudeau Medal, at age forty-two (in 1932), for his tuberculosis research.2 He would now become the Army’s point man on the disease and stand at the front lines of the Medical Department’s struggle with tuberculosis from before Pearl Harbor to well after V-J (Victory-Japan) Day.

His mission to reduce the effect of tuberculosis on the Army differed from that of Colonel (Col.) George Bushnell in the previous war because disease was less of a threat. In fact, World War II would be the first war in which more American personnel died of battle wounds than of disease. Of 405,399 recorded fatalities, battle deaths outnumbered those from disease and nonbattle injuries more than two to one: 291,557 to 113,842.3 Malaria, sexually transmitted diseases, and respiratory infections did sicken millions of soldiers, sailors, Marines, and airmen, but most survived. Thanks in part to sulfa drugs and, beginning in 1943, penicillin to treat bacterial infections, the Army Medical Department had only 14,904 deaths of 14,998,369 disease admissions worldwide, a 0.1 percent death rate.4 Tuberculosis declined, too, representing only 1 percent of Army hospital admissions for diseases—1.2 per 1,000 cases per year—a rate much lower than the 12 per 1,000 cases per year during World War I. The Medical Department concluded that “tuberculosis was not a major cause of non-effectiveness during the war.”

But Sir Arthur S. McNalty, chief medical officer of the British Ministry of Health (1935–40), called tuberculosis “one of the camp followers of war.” War abetted tuberculosis, he explained, because of the “lowering of bodily resistance and increased physical or mental strain or both.”6 It also found fertile ground in crowded barracks and camps, and ran rampant in the World War II prison camps and Nazi concentration camps. And just one active case of tuberculosis per thousand in the Army meant thousands of tuberculosis sufferers among the 11 million Americans in uniform, each of whom consumed Medical Department resources: the average hospital stay per case during the war was 113 days.7

But if tuberculosis was a camp follower, Esmond Long (Figure 8-1) was a tuberculosis follower.8 He tracked it down, studied it, and tried to prevent its spread at every stage of American involvement in the war. With war looming in 1940, the National Research Council asked Long to chair the Division of Medical Sciences, Subcommittee for Tuberculosis, to advise the government on preventing and controlling tuberculosis in both civilian and military populations during war mobilization. Once the United States entered the war, Long received a commission as a colonel in the Medical Corps and moved his family from Philadelphia to Washington, DC. Working out of the Office of The Surgeon General, Long set up a screening process with the Selective Service to keep tuberculosis out of the Army and then traveled to more than ninety induction camps to ensure adherence to the procedures. He also oversaw the expansion of tuberculosis treatment facilities in the United States, inspected Fitzsimons and other Army tuberculosis hospitals, advised medical officers on treating patients, kept abreast of research developments in the labs, monitored outbreaks of tuberculosis in the theaters of war, and wrote articles for medical and lay periodicals to publicize the Army’s antituberculosis program.

In 1945 Long traveled to the European theater to inspect hospitals caring for tubercular refugees and liberated prisoners of war (POWs). There he saw the horrors of the concentration camps at Buchenwald and Dachau where Army medical personnel cared for thousands of former prisoners sick and dying of typhus, starvation, and tuberculosis. After the war Long organized the tuberculosis control program for the Allied occupation of Germany, and returned annually in the 1950s to assess its progress. He split his time between the Army Medical Department and the Veterans Administration (VA) to supervise the transition of the federal tuberculosis treatment program from the War Department to the VA. He also helped organize and evaluate the antibiotic trials, which ultimately led to an effective cure for tuberculosis. After returning to civilian life Long continued to study tuberculosis in the Army, and he wrote the key tuberculosis chapters for the Army Medical Department’s official history of the war.

With Long as a guide, this chapter shows how war once again served as handmaiden to disease around the globe. This time the Army Medical Department assumed not only national but international responsibilities for the control of tuberculosis in military and civilian populations, among friend and foe. Long and the Army Medical Department did succeed in demoting tuberculosis from the leading cause of disability discharge for American World War I personnel (13.5 percent of discharges), to thirteenth position during the years 1942–45 (1.9 percent of all discharges), behind conditions such as psychoneuroses, ulcers, respiratory diseases, arthritis, and other diseases.9 But this achievement required continued vigilance, an Army-wide surveillance program, and dedicated personnel and resources. The first step was to keep tuberculosis out of the Army.

After war broke out in Europe, Congress passed the National Defense Act of 1940, which established the first peacetime military draft in U.S. history, increasing Army strength eightfold from 210,000 in September 1939 to almost 1.7 million (1,686,403) by December 1941. This resulted in a 75 percent rise in the number of patients in military hospitals, straining the Medical Department, which had only seven general hospitals and 119 station hospitals in 1939.

Figure 8-1. Esmond R. Long, who directed the Army tuberculosis program during World War II. Photograph courtesy of the National Library of Medicine, Image #B017302.

Soon appropriating freely, pledging “all of the resources of the country” to meet the crisis, the War Department was constantly readjusting to meet the escalating emergency.

The National Research Council Committee on Medicine, Subcommittee on Tuberculosis, chaired by Long, met for the first time on 24 July 1940 and prioritized its responsibilities: first, develop recommendations on how to screen draft registrants for tuberculosis; second, screen civilians in federal service and wartime industries; third, figure out how to care for people rejected by the draft for the disease; and finally, help civilian and military agencies prepare for tuberculosis in war refugee populations. In its first nine-hour meeting, the subcommittee decided on centralized tuberculosis screening centers at 200 recruiting stations and generated a list of tuberculosis specialists nationwide to evaluate recruits and interpret X-rays at those centers. Subcommittee members stressed the importance of maintaining good records for processing any subsequent benefits claims and, most importantly, called for X-ray screening of all inductees—not just those who looked like they might have tuberculosis.

The War Department leadership initially rejected such comprehensive screening of inductees as expensive and time-consuming. The fact that tuberculosis death rates in the country had fallen two-thirds from 140 per 100,000 people in 1917 to 45 per 100,000 people in 1941, and in the Army from 4.6 per 1,000 in 1922 to 1.4 per 1,000 in 1940, may have led to complacency. But Long, his colleagues, and the national tuberculosis community, mindful of the cost to the nation in sickness, death, and disability benefits in the previous war, persisted. The American College of Chest Physicians asked in July 1940, “Shall We Spread or Eliminate Tuberculosis in the Army?” and their president, Benjamin Goldberg, reported that the VA had spent almost $1.2 billion on tuberculosis patients through 1940. One medical officer calculated that 31 percent of all veterans who died as a result of World War I service and whose dependents received benefits had died of tuberculosis. Even the lay press chimed in with a TIME magazine article, “TB Warning,” that stressed the importance of chest X-rays.16 Advocates pointed out that X-ray technology was more available and less expensive than in the previous war, and radiologists were more plentiful and skillful. They were also confident that new technology, such as the development of a lens that allowed the direct and rapid photography of a fluoroscopic image and new 4 x 5 inch films, which made storage and transport easier than that of the 11 x 14 inch films, rendered screening more practical than in 1917–18.

The Army Medical Department agreed with the National Research Council subcommittee. Since 1934 it had required X-rays for all Army personnel assigned overseas, but it had not yet convinced the War Department on universal screening. In June 1941, Brigadier General (Brig. Gen.) Charles Hillman, Chief, Office of The Surgeon General Professional Service Division, told the National Tuberculosis Association chairman, C. M. Hendricks, that “the desirability of routine X-rays had long been recognized by the Surgeon General’s Office,” but “considerations other than medical entered the picture and the character of induction examinations had to be adapted to the limitations of time, place, and available equipment.” When Fitzsimons informed Hillman later that new recruits were arriving at the hospital with tuberculosis, he responded almost plaintively: “I am working with the Adjutant General to devise some method by which every volunteer for enlistment in the Regular Army will have a chest X-ray and serological test before acceptance.” He asked for all available evidence of sick recruits, explaining that “data on Regular Army men of short service now in Fitzsimons with tuberculosis will help me get the thing across.” As the data and advice accumulated, in January 1942 the Adjutant General required that all voluntary applicants and reenlisting men be given chest X-rays. Finally, on 15 March 1942, mobilization regulations made chest X-rays mandatory in all induction physicals.

With universal screening in place, Long, as chief of the tuberculosis branch in the Office of The Surgeon General, oversaw the screening process and faced a task similar to that of George Bushnell in 1917–18: finding the fine line that would exclude as much tuberculosis as possible from the Army without rejecting too many healthy men. Conscious of his predecessor’s miscalculations, Long was careful not to criticize Bushnell’s tuberculosis program, at one point noting that World War I medical officers were “not to be reproached for not having knowledge that came into existence only later, any more than the chief of the Army air service in 1917 is to be reproached because more efficient airplanes are available now than then.”

The wartime emergency produced a public health campaign regarding tuberculosis and other disease threats. A War Department pamphlet, What Every Citizen Should Know about Wartime Medicine, presented the issue as one of maintaining troop health and limiting public costs. “The strenuous activity of soldiering is likely to cause extension of an incipient (early) tuberculous invasion of the lungs, or to precipitate the breakdown and reactivation of arrested cases,” it explained. Such illness could result in disability “and the necessity of providing long care of these patients in military hospitals where they must remain isolated from nontuberculous patients.” The Public Health Service also created a tuberculosis office to handle the expected increase in tuberculosis, and, as the National Research Council Subcommittee recommended, gave war industry workers chest examinations.

As military and civilian screening boards found thousands of people with active tuberculosis and sent many of them to tuberculosis sanatoriums and hospitals, they generated what a public health nurse referred to as “potentially the greatest case finding program that workers in tuberculosis control have ever known.” At the same time, however, war mobilization drew civilian medical personnel into the military, reducing staffing in home front institutions. Army medical personnel ultimately numbered more than 688,000, including 48,000 physicians in the Medical Corps, 14,000 dentists in the Dental Corps, and 56,000 nurses in the Army Nurse Corps—a large portion of the nation’s medical professionals.27 To maintain his nursing staff, VA Director Frank Hynes even asked the Army Nurse Corps in May 1942 not to hire VA nurses away from his hospitals.

Army tuberculosis rates during World War II, while lower than during World War I, showed a similar “U” curve, with high rates at the beginning of the war as the Selective Service built up the military forces and cases that had eluded screening became active during training or combat (Figure 8-2). Tuberculosis rates fell as radiologists became more proficient at identifying tuberculosis infections, then rose sharply again at the end of the war as discharge examinations found people who had developed active tuberculosis during their service. Postwar studies also revealed a seemingly paradoxical phenomenon: during the war, military personnel serving overseas had lower tuberculosis rates than those serving in the United States, yet higher rates when they returned home.

Figure 8-2. Chart comparing the incidence curves of tuberculosis in the Army during World War I and World War II. From Esmond R. Long, “Tuberculosis,” in John Boyd Coates, Robert S. Anderson, and W. Paul Havens, eds., Internal Medicine in World War II, Medical Department, U.S. Army in World War II, vol. 2, Infectious Diseases (Washington, DC: Office of The Surgeon General, Department of the Army, 1961), chart 17, p. 335. Available at http://history.amedd.army.mil/booksdocs/wwii/infectiousdisvolii/chapter11chart17.pdf.

The Medical Department of the United States Army in the World War. Communicable and Other Diseases. Washington: U. S. Government Printing Office, 1928, vol. IX, pp. 171-202.
Letter, The Adjutant General, to Commanding Generals of all Corps Areas and Departments, 25 Oct. 1940, subject: Chest X-rays on Induction Examinations.
M. R. No. 1-9, Standards of Physical Examination During Mobilization, 31 Aug. 1940 and 15 Mar. 1942.
Long, E. R.: Exclusion of Tuberculosis. Physical Standards for Induction and Appointment. [Official record.]
Long, E. R., and Stearns, W. H.: Physical Examination at Induction; Standards With Respect to Tuberculosis Induction and Their Application as Illustrated by a Review of 53,400 X-ray Films of Men in the Army of the United States. Radiology 41: 144-150, August 1943.
Long, Esmond R., and Jablon, Seymour: Tuberculosis in the Army of the United States in World War II. An Epidemiological Study with an Evaluation of X-ray Screening. Washington: U. S. Government Printing Office, 1955.

It is estimated that, before roentgen examination became mandatory (MR No. 1-9, 15 March 1942), one million men had been accepted without this form of examination. Where roentgen examination was practiced, it resulted in a rejection rate of about 1 percent for tuberculosis. Applying this figure, it can be estimated that some 10,000 men were accepted who would have been rejected if they had been subjected to chest roentgen-ray study. Various studies have shown that approximately one-half of these would have been cases of active tuberculosis.

http://history.amedd.army.mil/booksdocs/wwii/PM4/CH14.Tuberculosis.htm
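
A minimal back-of-the-envelope check of the estimate quoted above, using only the passage’s own rounded figures (the variable names here are illustrative, not from the source):

# Rough check of the estimate quoted above; all inputs are the passage's rounded figures.
accepted_without_xray = 1_000_000   # men inducted before chest X-rays became mandatory (MR No. 1-9)
tb_rejection_rate = 0.01            # ~1 percent rejected for tuberculosis where X-rays were used
active_fraction = 0.5               # roughly half of such cases estimated to be active disease

missed_cases = accepted_without_xray * tb_rejection_rate    # men accepted who would have been rejected
active_cases = missed_cases * active_fraction                # of those, estimated active tuberculosis

print(f"Accepted men who would have been rejected: {missed_cases:,.0f}")   # ~10,000
print(f"Estimated active tuberculosis among them:  {active_cases:,.0f}")   # ~5,000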

Troops who developed tuberculosis were not discovered until their separation examinations, conducted when they were once again in the United States.

In the end, the screening process rejected 171,300 men for tuberculosis as the primary cause (thousands more had tuberculosis in addition to the disqualifying condition), and Long calculated that this saved the government millions of dollars in hospitalization costs. After the war, however, Long identified two factors that allowed tuberculous men into the Army: the failure to screen all inductees until March 1942, and the 4 x 5 inch stereoscopic (fluorographic) films, which were used in the interest of speed but which Long believed caused examiners to miss about 10 percent of minimal tuberculosis lesions in recruits. To better understand the latter problem he had two radiologists read the same X-rays and found substantial disagreement between their findings. Long therefore concluded that “if the induction films had each been read by two different radiologists, undoubtedly many more of the men who had tuberculosis at entry could have been excluded from service.” The Army ultimately discharged 15,387 enlisted men for tuberculosis during the war, which earned it thirteenth position as a cause of disability discharge.

American military forces fought in nine theaters of war—five in the Pacific and Asia, the other four in North Africa, the Mediterranean, Europe, and the Middle East. The Allies gave priority to defeating Germany and Italy in Europe beginning with operations in North Africa and the Mediterranean. After fighting in Tunisia in 1942–43, the Allies invaded Sicily on 10 July 1943, and moved up the Italian peninsula. By April 1944—in preparation for the D-Day invasion on 6 June 1944—the United States had more than 3 million soldiers in Europe, supported by 258,000 medical personnel managing a total of 318 hospitals with 252,050 beds. The war against Japan got off to a slower start as U.S. military forces developed the means to execute an island war across vast expanses of ocean. After fighting began in the Southwest Pacific, military forces grew from 62,500 troops in March 1942 to 670,000 in the summer of 1944 with 60,140 medical personnel. Even though military personnel developed tuberculosis in all of the nine theaters, the numbers were not high and tuberculosis was not a major military problem. In the Southwest Pacific theater, for example, only sixty-four of more than 40,000 hospital admissions were for the disease.

Tuberculosis was of the greatest consequence in the North Africa and Mediterranean theaters, in part due to poor screening early in the war, but also because, according to historian Charles Wiltse, it was the theater “in which the lessons of ground combat were learned by the Medical Department as much as by the line troops.” In general, medical personnel learned the importance of treating battle casualties as promptly as possible and keeping hospitals and clearing stations mobile and far forward to shorten evacuation and turnaround times. With regard to tuberculosis, the Medical Department had to relearn the World War I lesson of the importance of having skilled practitioners—or “good tuberculosis men”—in theater. They also ascertained which treatments were appropriate close to the battle lines and which were not, and when and how best to evacuate tubercular patients to the United States.

When soldiers with tuberculosis began to appear at Army medical stations in North Africa in late 1942, Major General (Maj. Gen.) Paul R. Hawley, chief of medical services for the European theater of operations, called for a tuberculosis specialist. On Long’s recommendation, Hawley appointed Col. Theodore Badger (Figure 8-3) as a senior consultant in tuberculosis on 2 January 1943. A professor of medicine at Harvard Medical School, Badger had served in the Navy during World War I and then attended Yale and Harvard, where he earned his medical degree. As chief of the medical service of the 5th General Hospital (GH), organized out of Harvard, Badger would play a role similar to that played by Gerald Webb during World War I—medical specialist, teacher, and troubleshooter.

Assessing the tuberculosis situation in the Mediterranean theater, Badger identified five hazards: (1) the development of active disease in American troops who had not been X-rayed upon induction; (2) association with British troops and civilians who had not been screened for tuberculosis; (3) drinking of nonpasteurized and possibly infected milk that could transmit tuberculosis; (4) battlefield conditions that could activate soldiers’ latent infections; and (5) the undetermined effects of other respiratory infections.41 Badger soon got the Army to use pasteurized milk and to establish X-ray centers with the proper equipment and trained staff, but he was not able to examine the thousands of American soldiers in the war zone. To gauge the extent of the tuberculosis problem he therefore arranged for a mobile X-ray unit to conduct spot surveys of troops in the field. Three examinations of some 3,000 troops each found only about 1 percent with signs of tuberculosis. To avoid losing manpower, Badger reported in mid-1943 that “up to the present time no individual has been removed from duty because of X-ray findings, and follow-up study has, so far, not indicated the necessity for it.” Badger planned to recheck those with suspicious films every few months to see if the signs had advanced. Badger recommended that patients with pleural effusion, the accumulation of fluid between the layers of the membranes that line the lungs and chest cavity that often indicates tuberculosis, be evacuated back to the United States. He also ended the practice of transporting some tuberculosis patients sitting up.

As the first true air war, World War II saw the introduction of air evacuation when Army aeromedical squadrons deployed in early 1943. After successful trials in the Pacific and North Africa, air evacuation increased so that during the Battle of the Bulge (1944–45), some patients arrived in U.S. hospitals within three days of being wounded. Some medical officers were concerned about the effects of transporting tuberculosis patients by air, where they would be exposed to high speeds, jolting, and reduced air pressure. Tuberculosis specialists in New Mexico and Colorado therefore studied 143 white male military patients, twenty-two to twenty-eight years old, with active tuberculosis who were flown to Army hospitals in nonpressurized air ambulances, watching for any signs of trouble. Fearing the worst, they instead found that “severe discomfort, pulmonary hemorrhage, and spontaneous pneumothorax did not occur in the series either during or following the flight,” and concluded that air transport up to 10,000 feet was safe and preferable to time-consuming travel by water. By the end of the war the consensus was that rapid air evacuation to the United States also reduced the need to give a tuberculosis patient a pneumothorax in the field.

From the roof of Fitzsimons’ new building in April 1943, Rocky Mountain News reporter John Stephenson could see the Rocky Mountain Arsenal, the Denver Ordnance Plant, and Lowry Field, “places where the Army studies how to kill people.” But, he wrote, “The Army is merciful. It lets the right-hand of justice know what the left hand of mercy is doing at Fitzsimons General Hospital.” The largest Army hospital in the world, Fitzsimons had 322 buildings on 600 acres, paved streets with traffic lights, a post office, barbershop, pharmacy school, dental school, print shop, bakery, fire department, and chapel. It was, wrote Stephenson, “a city of 10,000.”61 No longer a liability, Fitzsimons was the pride of the Army Medical Department. One Army inspector reported that “it is apparent that no expense has been spared in this extraordinary building or in the general equipment and maintenance of the whole hospital plant.”62 As Congressman Lawrence Lewis had hoped, Fitzsimons’ mission now extended beyond caring for tuberculosis patients to meeting the general medical and surgical needs of the wider military community in the Denver region.

During the war the hospital maintained about 3,500 beds, reaching its highest daily patient population after the war—3,719 on 3 February 1946. The annual occupancy, calculated in patient days, increased from 603,683 in 1942 to a high of 1,097,760 in 1945, about 85 percent of capacity.

With the reduction of tuberculosis in the Army over the years, the percentage of tuberculosis patients among all those at Fitzsimons had declined from 80 to 90 percent in the 1920s to 40 to 50 percent in the late 1930s. As the Army grew, it now rose again. During the war Fitzsimons admitted more than 8,100 patients with tuberculosis. In fact, in 1943, only eighteen patients had battle injuries; the rest were in the hospital for illness and noncombat injuries. Unlike during the previous war, however, the Medical Department now had a network of more than fifty veterans’ hospitals to which it could transfer patients too disabled by tuberculosis or other disease or injury to return to duty. Now, instead of allowing patients to stay in the service and receive the benefit of hospitalization in the hope that they would recover and return to duty, the Medical Department discharged patients to VA hospitals as soon as they were determined to be unfit for military service, thereby reserving capacity for active-duty personnel. Maj. D. P. Greenlee had returned from a training course in penicillin therapy at Bushnell General Hospital in Utah to supervise the administration of the new drug for a variety of infections. He soon reported a cure rate of 93 percent. There were fewer victories in tuberculosis treatment.

During the war about one-quarter of all tuberculosis patients were treated with pneumothorax. Fitzsimons surgeon Col. John B. Grow and other surgeons also tried lung resection to treat tuberculosis, with few patient deaths. In 1946, however, when Grow’s staff contacted thirty patients who had had such surgery, they found that half of them were doing well, but three others had died, seven were seriously ill, and the rest were still under treatment. “It was felt that pulmonary resection in the presence of positive sputum was extremely hazardous and the indications were consequently narrowed down.”

Outside the operating rooms, the “City of 10,000” had a rich social life with people arriving at the post from all corners of the country. With Congressman Lewis’s acquisition of the School for Medical Technicians, Fitzsimons assumed the role of medical trainer, offering six- to twelve-week courses in technical training for dental, laboratory, X-ray, surgical, clinical, and pharmacy assistants. By 1946 the School had graduated more than 28,000 such technicians to serve around the world. The Women’s Army Corps arrived at Fitzsimons in February 1944 when 165 women attended the medical technicians school as part of the first coeducational class.74 Members of the Women’s Army Corps, rehabilitation aides, Education Department staff, dietitians, as well as nurses increased the female presence at Fitzsimons, as did activities of welfare organizations such as the Gold Star Mothers, the Red Cross, and the Junior League. Fitzsimons’ patients and staff also enjoyed visits from celebrities, including Jack Benny, Miss America, Gary Cooper, Dorothy Lamour, and other entertainers such as the big band leader Fred Waring and his Pennsylvanians, the Denver Symphony Orchestra, and an African American Methodist Church children’s choir from Denver. Like communities across the country, the hospital participated in war bond campaigns and had a huge war garden that produced thousands of ears of sweet corn and bushels of other vegetables.

Despite national mobilization and generous congressional funding, the Army could not escape the strain on its hospitals. By July 1944, Fitzsimons had reached capacity, so the Medical Department designated two more hospitals as specialty centers for tuberculosis. Earl Bruns’ widow Caroline, who lived in Denver at the time, was no doubt pleased when the department named Bruns General Hospital in Santa Fe, New Mexico, in honor of her husband. Bruns, along with Moore General Hospital in Swannanoa, North Carolina, cared for enlisted patients with minimal or suspected tuberculosis.

As Allied troops liberated France in 1944 and crossed into Germany they encountered thousands of refugees or “displaced persons”—escaped prisoners from Nazi concentration camps, exhausted and terrified Jews, slave laborers, political prisoners, Allied POWs, and other victims. The Nazi camps that held these people served as incubators for diseases such as tuberculosis and typhus, and the frightened, sick, and starved refugees inundated Army hospitals in late 1944 and early 1945. Theodore Badger reported one of the first waves that arrived on 18 December 1944 when 304 men, most of them Russians, came to the 50th GH in Commercy, France. They had been in the Nazi labor camps for the mines and heavy industries, where thousands died and survivors were malnourished and sick. All of the 304 had tuberculosis, 90 percent with moderate or advanced disease. Four were dead on arrival, eight more died in the first week, and one-third of the patients would die by May.96 Alarmed, Gen. Hawley, Chief Surgeon of the European Theater of Operations, ordered that all displaced civilians and recovered military personnel be examined for signs of tuberculosis “to establish the gravity of the situation.” The situation was dire. At one time the 46th GH had more than 1,000 tuberculosis patients, all recovered Allied POWs, causing Esmond Long to remark that the hospital “had the largest number of tuberculosis patients of any Army hospital in the world.”

The 46th GH from Portland, Oregon, which had cared for tuberculosis patients in the Mediterranean theater, also stood on the front lines of the tuberculosis problem in Europe. Serving at Besancon, France, the hospital would receive the Meritorious Service Unit Plaque and Col. J. G. Strohm, the commanding officer, the Bronze Star Medal for service during the liberation of France. During the spring of 1945, the 46th GH admitted 2,472 Russians, forty-one Poles, and 128 Yugoslav POWs and former slave laborers freed by American forces. The influx began on 12 March and within four days the 46th GH had admitted 1,200 such patients.

“The hospital staff was agast [sic] at the terrible physical condition of these people,” reported the hospital commander.99 When Badger visited the 46th GH in March 1945 he said the patients “constitute one of the most seriously affected groups with tuberculosis and malnutrition that I have ever seen,” explaining that most of them suffered “acute fulminating, rapidly fatal disease, mixed with chronic, slowly progressive, fibrotic tuberculosis.” Medical personnel (Figure 8-4) cared for these patients as best they could, comforting many of them as they died. They began the rest treatment with some men but, as Badger reported, convincing Allied POWs to submit to absolute bed rest after months of confinement was “practically impossible.” Badger was able to report that after a month “those men who did not die of acute tuberculosis showed marked improvement.”

Figure 8-4. 46th General Hospital nurses who cared for former prisoners of war. Photograph courtesy of Oregon Health Sciences University, Historical Collections and Archives, Portland, Oregon.

26th General Hospital, World War II, North Africa.

In late 1944 Hawley requested 100,000 additional hospital beds for the displaced persons and POWs he expected to encounter after the German surrender, but Gen. George Marshall and Secretary of War Henry L. Stimson denied the request, believing they could not spare resources of that magnitude. The European Theater, they decided, must use German medical personnel and hospitals to care for the prisoners. Only after the war did American hospital units transfer their equipment and supplies to German civilians and Allies for their use.

The liberation of Europe also freed American POWs, who, not surprisingly, had higher rates of tuberculosis than other American military personnel. Captured British medical officer Capt. A. L. Cochrane cared for some of them in the prison where he was confined and noted sardonically that imprisonment was “an excellent place to study tuberculosis; [and] to learn the vast importance of food in human health and happiness.” German prison guards gave POWs only 1,000 to 1,500 calories per day, so Red Cross food parcels, which provided an additional 1,500 daily calories per person, were critical to preventing malnutrition and physical breakdown. Cochrane observed that the American and British POWs received the most parcels and had the lowest tuberculosis rates in the camp, while the Russians received nothing at all and had the highest rates. During the eighteen months that French POWs received the Red Cross parcels, he noted, just two men of 1,200 developed tuberculosis, but when parcels for the French ceased to arrive in 1945, their tuberculosis rate rose to equal that of the Russians. The situation, he concluded, showed the “vast importance of nutrition in the incidence of tuberculosis.” Not all Americans got their parcels, though. William H. Balzer, with an American artillery unit, was captured in February 1943, and remembered how German guards stole the Americans’ packages. Balzer survived imprisonment but never recovered from the ordeal. Severely disabled (70 percent), he died in 1960 on his forty-sixth birthday.

Exact tuberculosis rates among American POWs are not known because the rush of events surrounding the liberation of prisoners from German and Japanese control prevented a systematic X-ray survey. Rates did appear to be higher, though, for prisoners of the Japanese than for prisoners of the Germans. Long reported that about 0.6 percent of recovered troops from European POW camps had tuberculosis, whereas data from the Pacific theater suggested that 1 percent of recovered prisoners had tuberculosis. Moreover, an analysis of the chest X-rays done at West Coast debarkation hospitals revealed that 101 (or 2.7 percent) of 3,742 former POWs of the Japanese showed evidence of active tuberculosis. John R. Bumgarner was a tuberculosis ward officer at Sternberg General Hospital in Manila, the Philippines, before the war. A POW for forty-two months after the Japanese invasion, he described his experience in Parade of the Dead. Bumgarner did what he could to care for many of the 13,000 prisoners in the camp, but knew that “my patients were poorly diagnosed and poorly treated.” The narrow cots were so close together, he wrote, “the crowding and the breathing of air loaded with this bacilliary miasma from coughing ensured that those mistakenly segregated would be infected.”

Bumgarner was able to stay relatively healthy throughout his imprisonment. His luck ended, however, because “on my way home across the Pacific I had the first symptoms of tuberculosis.” Severe chest pain and subsequent X-rays at Letterman Hospital in San Francisco revealed active disease. “I had gone through more than four years of hell—now this!” Discharged on disability for tuberculosis in September 1946, he began to work at the Medical College of Virginia but soon had a lung hemorrhage. This time it took eight years of rest, surgery, and new antibiotic treatment for him to recover. By 1956, however, Bumgarner had married his sweetheart, Evelyn, and begun a medical career in cardiology that lasted for thirty years.

Tuberculosis continued to take its toll on POWs for years after the war. The VA followed POWs as a special group because, explained Long, of “the hardships that many of these men endured, and the notorious tendency for tuberculosis to make its appearance years after the acquisition of infection.” A follow-up study published in 1954 reported that for American POWs during the six years after liberation tuberculosis was the second highest cause of death, after accidents.

If the challenges Army medical personnel faced in caring for sick and starving POWs and refugees were unprecedented, the scale of disease and suffering they encountered in the Nazi concentration camps was almost unimaginable. Allied troops had heard about secret and deadly camps but were not prepared for what they found. As the Allies converged on Berlin from the East and the West, the Nazis evacuated thousands of prisoners—most of them Jews seized from across Europe, as well as POWs—to interior camps to hide their crimes and prevent the inmates from falling into Allied hands. These evacuations became death marches as SS (abbreviation of Schutzstaffel, which stood for “defense squadron”) guards beat and murdered people, and failed to feed them for days on end. Survivors were crowded into camps such as Buchenwald and Dachau making them even more chaotic and deadly. Americans, therefore, liberated camps that were riven with disease, especially typhus, tuberculosis, and malnutrition.

The Allies liberated Buchenwald on 11 April 1945. The following day the world learned that Franklin Roosevelt had died. Americans then liberated Dachau on 29 April, the day Italian partisans executed Mussolini in Milan, and the next day Hitler killed himself in his bunker. Dachau (Figure 8-5) had been the first of hundreds of concentration camps in the German Reich to which the Nazis sent political enemies, the disabled, people accused of socially deviant behavior, and, increasingly after the Kristallnacht pogroms of 1938, Jewish men, women, and children. In January 1945 Dachau held 67,000 prisoners, but with troops of the Seventh U.S. Army approaching, the SS began evacuating and killing prisoners. Capt. Marcus J. Smith, a medical officer in his thirties, arrived at Dachau on 30 April 1945, the day after liberation, as part of a small team trained to treat persons displaced by the war. Horror greeted him outside the camp in a train of forty boxcars loaded with more than two thousand corpses. Smith called the frost that had formed on the bodies in the intense cold “Nature’s shroud.” Inside Dachau he encountered more grotesque piles of naked, skeletal bodies of prisoners and scattered, mutilated bodies of German guards.

Figure 8-5. Dachau survivors gather by the moat to greet American liberators, 29 April 1945. Photograph courtesy of the United States Holocaust Memorial Museum, Washington, DC.

Smith found more than 30,000 prisoners, mostly Jews of forty nationalities, and all men except for about 300 women the SS had kept in a brothel. They were in desperate condition. Typhus and dysentery raged, at least half of the prisoners were starving, and hundreds had advanced tuberculosis. “The well, the sick, the dying, and the dead lie next to each other in these poorly ventilated, unheated, dark, stinking buildings,” Smith told his wife. The men were “malnourished and emaciated, their diseases in all stages of development: early, late, and terminal.” He wondered, “What am I going to write in my notebook?” and then started a list of needed supplies: clothes, shoes, socks, towels, bedding, beds, soap, toilet paper, more latrines, and new quarters. He almost despaired. “What are we going to do with the starving patients? How will we care for them without sterile bandages, gloves, bedpans, urinals, thermometers, and all the basic material? How do we manage without an organization? No interns, no nursing staff, no ambulances, no bathtubs, no laboratories, no charts, and no orderlies, no administrator, and no doctors.… I feel helpless and empty. I cannot think of anything like this in modern medical history.”

American efforts did prevent a deadly typhus epidemic from sweeping postwar Europe and helped contain tuberculosis rates in Germany, but the Nazis had created a human catastrophe so immense that even the most dedicated efforts would at times fall short.

Faced with horror on such a scale, Smith and other Army Medical Department personnel assigned to the concentration camps threw themselves into the work of cleansing, comforting, treating, and nurturing their patients. American commanders called in at least six Army evacuation hospitals (EH) to care for the sick and dying in the liberated camps. EH No. 116 and EH No. 127 began arriving at Dachau on 2 May with some forty medical officers, forty nurses, and 220 enlisted men. Consulting with Smith and his team, the units set up in the former SS guard barracks. They tore out partitions to create larger wards, scrubbed the walls and floors with Cresol solution, sprayed them with dichloro-diphenyl-trichloroethane (DDT), and then set up cots to create two hospitals of 1,200 beds each. Medical staff also discovered physician-prisoners who had cared for the sick and injured as well as they could, and could now advise and assist, and in some cases translate for the medical staff. In two days the hospitals were ready to admit patients by triage, segregating them by disease and prognosis. Laurence Ball, the EH No. 116 commander, noted that more than 900 patients had “two or more diseases, such as malnutrition, typhus, diarrhea, and tuberculosis.” Staff bathed and deloused them, gave them clean pajamas, and put them to bed.

Death by overeating was but one of the dangers that the prisoners faced. During May 1945, American hospitals at Dachau had more than 4,000 typhus patients and lost 2,226 to typhus and other diseases. Typhus, a rickettsial disease transmitted by body lice, had a mortality rate as high as 40 percent. With no medical cure, treatment consisted of supportive care—keeping patients clean and nourished—to mitigate effects of prolonged fever, such as the breakdown of tissue into gangrene. The Americans knew that typhus had taken three million lives in Eastern Europe after World War I, but now they had a means of prevention and better weapons—a typhus vaccine and DDT. On 2 May, the day the evacuation hospitals arrived, the commander of the Seventh Army imposed quarantines for typhus and tuberculosis, and summoned the U.S. Typhus Commission, which had controlled a typhus outbreak in Naples, Italy. A typhus team arrived the next day and began to immunize American personnel and dust them with DDT. On 7 May staff began to vaccinate inmates but kept typhus patients isolated for at least twenty-one days from the onset of illness to prevent transmission to others. This meant that the Americans did not immediately enter the inner camp barracks—the worst, most typhus-infested part of the camp—nor did they quickly relieve crowding there for fear of spreading typhus-bearing lice. It took over a week for personnel to prepare more spacious and clean quarters.

Smith wrote his lists, reported to his wife, and kept track of the daily death toll, finding comfort as the number of people who died daily fell from 200 during the first week to twenty by the end of May. Another medical officer performed autopsies. He chose ten of the dead bodies, five from the death train and five from the camp yard, to see what had caused their deaths. All had typhus and extreme malnutrition, eight had advanced tuberculosis, and some bodies had signs of fractures and head injuries.

Survivors in Dachau, 1 May 1945.

By the end of May, conditions at Dachau had improved. Typhus was abating and American officials began to release groups of inmates by nationality. Beyond Dachau, the U.S. Typhus Commission tracked down new cases of typhus in civilian and military populations, deloused one million people, sprayed fifteen tons of DDT, and created a cordon sanitaire on the Rhine requiring all who crossed from Germany to be vaccinated and dusted to prevent the spread of disease. Thus the Army averted a broader typhus epidemic.138 The tuberculosis situation was more complicated and presented the Americans with a conundrum. What to do with thousands of people suffering from a long-term, infectious, and deadly disease?

As with the American POWs, tuberculosis continued to follow Dachau survivors into their new lives. Thousands of Jewish survivors emigrated to what would become the state of Israel. Fifteen years after liberation, the Israeli Minister of Health reported that although concentration camp survivors comprised only 25 percent of the population, they accounted for 65 percent of the tuberculosis cases in the country. Tuberculosis continued to thrive in Europe as well.

Historian Albert Cowdrey has credited the American actions with preventing a number of postwar scourges: “No one can prove that a great typhus epidemic, mass deaths of prisoners of war, or widespread outbreaks of disease among the German population would have taken place without the efforts of Army doctors of the field forces and the military government.” But, he continued, “conditions were ripe for such tragedies to occur, and Army medics brought both professional knowledge and military discipline to forestalling what might have been the last calamities of the war in Europe.” Thus, as usual, in public health the good news is no news at all.

Thousands of men survived the Vietnam War because of the quality of their hospital care. US hospitals in Vietnam were the best that could be deployed, incorporating several improvements from previous field hospitals. Army doctors were better trained, and they had good facilities at the semi-permanent base camps. As a result, more advanced surgical procedures were possible: more laparotomies, thoracotomies, vascular repairs (including even aortic and carotid repairs), advanced neurosurgery for head wounds, and other medical procedures. Blood transfusions were performed, with massive quantities of blood available for seriously wounded patients; some patients received as many as 50 units of blood. Advances in equipment resulted in the development of intensive care units with mechanical ventilators. There were far more medications available for particular diseases than in earlier conflicts.

With about 30 physicians assigned, the 12th Evacuation Hospital could keep four or five operating tables going all day, and two or three all night. A common practice was delayed primary closure for wounds with a high likelihood of infection. Instead of stitching the wound closed immediately, dirt and contaminants were flushed out, bleeding was controlled, dead flesh was removed (debrided), the wound was packed with sterile gauze, and antibiotics were administered. For a few days the patient healed, while nurses changed the bandages and made sure the wound did not get worse. Then doctors removed any remaining contaminants or dead flesh and stitched up the wound. This procedure reduced the incidence of infection compared to immediate wound closure, at the cost of a larger scar.

In any given year in Vietnam, about one soldier in three was hospitalized for disease. The main causes for hospitalization were malaria, psychiatric problems, and ordinary fevers. Although many men fell sick, competent care was available and most recovered quickly and returned to duty.

The war spurred advances in surgery and medical trauma research. New surgical techniques allowed limbs that previously would have been amputated to remain functional. Nurse anesthetist Rosemary Sue Smith recalled the development of new blood-handling procedures:

We started separating blood into its components, because we were getting a lot of aggregates that were causing a lot of disseminated intravascular coagulopathy in patients, and causing a lot of blood clots, and pulmonary thrombosis, and a lot of ARDS, Adult Respiratory Distress Syndrome, which started in Da Nang and was called Da Nang Lung initially. It has developed into today being called Adult Respiratory Distress Syndrome, and they did a lot of research on this, and they were having us separate our blood into its components, into fresh frozen plasma and into platelets, and then we started doing blood tests to see which the patients would need. If their platelets were low, or if their blood clotting factors were low, we would just give them the particular products. We actually started breaking these products down and administering them in the Vietnam War, and it’s carried over into civilian life now. They’re used today in acute trauma to prevent disseminated intravascular coagulopathy and prevent Adult Respiratory Distress Syndrome on massive traumas that have to be naturally resuscitated with blood and blood products.

In the 1960s, intensive care was still quite new and the 12th had only one (later two) intensive care wards fully equipped and staffed. A key piece of equipment was the ventilator, then called “respirator.” Ventilators worked on pure oxygen until 1969, when research revealed physiological problems from prolonged breathing of pure oxygen. Early ventilators required considerable maintenance; valves needed frequent cleaning or the machines broke down.

Antibiotics were important because of the wide variety of bacteria and large number of penetrating wounds; in the face of a possible systemic infection (the development of sepsis), antibiotics were delivered through an IV. Nurse Rosie Parmeter recalled having to prepare antibiotics to be delivered through an IV several times a day for each patient, a necessary but time-consuming task.

About two-thirds of patients cared for by the 12th were US military; the other third were mainly Vietnamese but also included nonmilitary Americans and Free World Military Assistance Forces personnel. Staff regularly dealt with the Vietnamese, both military and civilian, enemy and friendly. There were wards set aside for enemy prisoners (who were stabilized, then transferred to hospitals at POW camps) and civilians. Wounded South Vietnamese Army soldiers were also stabilized and transferred to hospitals run by the Army of the Republic of Vietnam (ARVN). Civilian patients often stayed longer because the war swamped the available hospitals for Vietnamese civilians.

Through the years of the Vietnam War, US forces sustained 313,616 wounded in action; at peak strength, there were 26 American hospitals. The 12th Evacuation Hospital was at Cu Chi for 4 years and treated just over 37,000 patients. Records for the 12th are incomplete, but the average died-of-wounds rate in Vietnam was about 2.8% of patients who reached a hospital alive. Applied to the 12th, that rate amounted to about 1,036 patients, including prisoners and Vietnamese as well as Americans. But over 36,000 people survived and could return home because of the treatment they received at the 12th Evac.
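
The 1,036 figure cited above is simply the theater-wide died-of-wounds rate applied to the 12th Evac’s caseload; a minimal sketch of that arithmetic, using only the numbers quoted in the passage (variable names are illustrative, not from the source):

# Applying the theater-wide died-of-wounds rate to the 12th Evac's caseload (figures from the passage).
patients_treated = 37_000     # patients treated by the 12th Evacuation Hospital at Cu Chi (approximate)
died_of_wounds_rate = 0.028   # ~2.8% of wounded who reached a hospital alive died of their wounds

estimated_deaths = patients_treated * died_of_wounds_rate
estimated_survivors = patients_treated - estimated_deaths

print(f"Estimated deaths at the 12th Evac: {estimated_deaths:,.0f}")     # ~1,036
print(f"Estimated survivors:               {estimated_survivors:,.0f}")  # roughly 36,000, as the passage notes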

Sources:

Fort Bayard,  by David Kammer, Establishment of Fort Bayard Army Post
http://newmexicohistory.org/places/fort-bayard
George Ensign Bushnell, Colonel, Medical Corps, U. S. Army
THE ARMY MEDICAL BULLETIN, NUMBER 50 (OCTOBER 1939)
http://history.amedd.army.mil/biographies/bushnell
Chapter One, The Early Years: Fort Bayard, New Mexico
http://www.cs.amedd.army.mil/borden/FileDownloadpublic.aspx?docid=332041d7-dbd2-4edf-823f-29a66c0b65ef
Dachau concentration camp (Wikipedia)
http://en.wikipedia.org/wiki/Dachau_concentration_camp
Office of Medical History – United States Army
Esmond R. Long, M. D., TUBERCULOSIS IN WORLD WAR I
Chapter 14 – Tuberculosis
http://history.amedd.army.mil/booksdocs/wwii/PM4/CH14.Tuberculosis.htm

Chapter Four, Tuberculosis in World War I
Chapter Five, “A Gigantic Task”: Treating and Paying for Tuberculosis in the Interwar Period
Chapter Six, “Good Tuberculosis Women”: Tuberculosis Nursing during the Interwar Period
Chapter Seven, Surviving the Great Depression: Fitzsimons and the New Deal
Chapter Eight, Camp Follower: Tuberculosis in World War II
“Good Tuberculosis Men”: The Army Medical Department’s Struggle with Tuberculosis, Carol R. Byerly
http://www.cs.amedd.army.mil/borden/FileDownloadpublic.aspx?docid=986faf8a-b833-46a8-a251-00f72c91da2f

The Global Distribution of Yellow Fever and Dengue
D.J. Rogers, A.J. Wilson, S.I. Hay, and A.J. Graham
Adv Parasitol. 2006; 62: 181–220. http://dx.doi.org/10.1016/S0065-308X(05)62006-4

http://www.historyofvaccines.org/content/timelines/yellow-fever

History of yellow fever
http://en.wikipedia.org/wiki/History_of_yellow_fever

Additional Reading:

Open Wound: The Tragic Obsession of William Beaumont.
Jason Karlawish.
Univ Mich Press. 2011.

The Great Influenza.
John M. Barry.
Penguin. 2004.

Flu. The story of the great influenza pandemic of 1918 and
the search for the virus that caused it.
Gina Kolata.
Touchstone. 1999

Pestilence. A Medieval Tale of Plague.
Jeani Rector.
The Horror Zine. 2012.

Knife Man: The extraordinary life of John Hunter, Father of Modern Surgery
Wendy Moore.
Broadway Books. 2005

Hospital.
Julie Salamon.
Penguin Press. 2008.

Overdosed America.
John Abramson.
Harper. 2004.

Sick.
Jonathan Cohn.
Harper Collins. 2007.

Read Full Post »

Antimalarial flow synthesis closer to commercialisation.

Read Full Post »