Treatment for Chronic Leukemias [2.4.4B]

Larry H. Bernstein, MD, FCAP, Author, Curator, Editor

http://pharmaceuticalintelligence.com/2015/8/11/larryhbern/Treatment-for-Chronic-Leukemias-[2.4.4B]

2.4.4B1 Treatment for CML

Chronic Myelogenous Leukemia Treatment (PDQ®)

http://www.cancer.gov/cancertopics/pdq/treatment/CML/Patient/page4

Treatment Option Overview

Key Points for This Section

There are different types of treatment for patients with chronic myelogenous leukemia.

Six types of standard treatment are used:

  1. Targeted therapy
  2. Chemotherapy
  3. Biologic therapy
  4. High-dose chemotherapy with stem cell transplant
  5. Donor lymphocyte infusion (DLI)
  6. Surgery

New types of treatment are being tested in clinical trials.

Patients may want to think about taking part in a clinical trial.

Patients can enter clinical trials before, during, or after starting their cancer treatment.

Follow-up tests may be needed.

There are different types of treatment for patients with chronic myelogenous leukemia.

Different types of treatment are available for patients with chronic myelogenous leukemia (CML). Some treatments are standard (the currently used treatment), and some are being tested in clinical trials. A treatment clinical trial is a research study meant to help improve current treatments or obtain information about new treatments for patients with cancer. When clinical trials show that a new treatment is better than the standard treatment, the new treatment may become the standard treatment. Patients may want to think about taking part in a clinical trial. Some clinical trials are open only to patients who have not started treatment.

Six types of standard treatment are used:

Targeted therapy

Targeted therapy is a type of treatment that uses drugs or other substances to identify and attack specific cancer cells without harming normal cells. Tyrosine kinase inhibitors are targeted therapy drugs used to treat chronic myelogenous leukemia.

Imatinib mesylate, nilotinib, dasatinib, and ponatinib are tyrosine kinase inhibitors that are used to treat CML.

See Drugs Approved for Chronic Myelogenous Leukemia for more information.

Chemotherapy

Chemotherapy is a cancer treatment that uses drugs to stop the growth of cancer cells, either by killing the cells or by stopping them from dividing. When chemotherapy is taken by mouth or injected into a vein or muscle, the drugs enter the bloodstream and can reach cancer cells throughout the body (systemic chemotherapy). When chemotherapy is placed directly into the cerebrospinal fluid, an organ, or a body cavity such as the abdomen, the drugs mainly affect cancer cells in those areas (regional chemotherapy). The way the chemotherapy is given depends on the type and stage of the cancer being treated.

See Drugs Approved for Chronic Myelogenous Leukemia for more information.

Biologic therapy

Biologic therapy is a treatment that uses the patient’s immune system to fight cancer. Substances made by the body or made in a laboratory are used to boost, direct, or restore the body’s natural defenses against cancer. This type of cancer treatment is also called biotherapy or immunotherapy.

See Drugs Approved for Chronic Myelogenous Leukemia for more information.

High-dose chemotherapy with stem cell transplant

High-dose chemotherapy with stem cell transplant is a method of giving high doses of chemotherapy and replacing blood-forming cells destroyed by the cancer treatment. Stem cells (immature blood cells) are removed from the blood or bone marrow of the patient or a donor and are frozen and stored. After the chemotherapy is completed, the stored stem cells are thawed and given back to the patient through an infusion. These reinfused stem cells grow into (and restore) the body’s blood cells.

See Drugs Approved for Chronic Myelogenous Leukemia for more information.

Donor lymphocyte infusion (DLI)

Donor lymphocyte infusion (DLI) is a cancer treatment that may be used after stem cell transplant. Lymphocytes (a type of white blood cell) from the stem cell transplant donor are removed from the donor’s blood and may be frozen for storage. The donor’s lymphocytes are thawed if they were frozen and then given to the patient through one or more infusions. The lymphocytes see the patient’s cancer cells as not belonging to the body and attack them.

Surgery

Splenectomy is surgery to remove the spleen.

What’s new in chronic myeloid leukemia research and treatment?

http://www.cancer.org/cancer/leukemia-chronicmyeloidcml/detailedguide/leukemia-chronic-myeloid-myelogenous-new-research

Combining the targeted drugs with other treatments

Imatinib and other drugs that target the BCR-ABL protein have proven to be very effective, but by themselves these drugs don’t help everyone. Studies are now in progress to see if combining these drugs with other treatments, such as chemotherapy, interferon, or cancer vaccines (see below), might be better than either one alone. One study showed that giving interferon with imatinib worked better than giving imatinib alone. The 2 drugs together had more side effects, though. It is also not clear if this combination is better than treatment with other tyrosine kinase inhibitors (TKIs), such as dasatinib and nilotinib. A study going on now is looking at combining interferon with nilotinib.

Other studies are looking at combining other drugs, such as cyclosporine or hydroxychloroquine, with a TKI.

New drugs for CML

Because researchers now know the main cause of CML (the BCR-ABL gene and its protein), they have been able to develop many new drugs that might work against it.

In some cases, CML cells develop a change in the BCR-ABL oncogene known as a T315I mutation, which makes them resistant to many of the current targeted therapies (imatinib, dasatinib, and nilotinib). Ponatinib is the only TKI that can work against T315I mutant cells. More drugs aimed at this mutation are now being tested.

Other drugs called farnesyl transferase inhibitors, such as lonafarnib and tipifarnib, seem to have some activity against CML and patients may respond when these drugs are combined with imatinib. These drugs are being studied further.

Other drugs being studied in CML include the histone deacetylase inhibitor panobinostat and the proteasome inhibitor bortezomib (Velcade).

Several vaccines are now being studied for use against CML.

2.4.4B2 Chronic Lymphocytic Leukemia

Chronic Lymphocytic Leukemia Treatment (PDQ®)

General Information About Chronic Lymphocytic Leukemia

Key Points for This Section

  1. Chronic lymphocytic leukemia is a type of cancer in which the bone marrow makes too many lymphocytes (a type of white blood cell).
  2. Leukemia may affect red blood cells, white blood cells, and platelets.
  3. Older age can affect the risk of developing chronic lymphocytic leukemia.
  4. Signs and symptoms of chronic lymphocytic leukemia include swollen lymph nodes and tiredness.
  5. Tests that examine the blood, bone marrow, and lymph nodes are used to detect (find) and diagnose chronic lymphocytic leukemia.
  6. Certain factors affect treatment options and prognosis (chance of recovery).

Chronic lymphocytic leukemia is a type of cancer in which the bone marrow makes too many lymphocytes (a type of white blood cell).

Chronic lymphocytic leukemia (also called CLL) is a blood and bone marrow disease that usually gets worse slowly. CLL is one of the most common types of leukemia in adults. It often occurs during or after middle age; it rarely occurs in children.

http://www.cancer.gov/images/cdr/live/CDR755927-750.jpg

Anatomy of the bone; drawing shows spongy bone, red marrow, and yellow marrow. A cross section of the bone shows compact bone and blood vessels in the bone marrow. Also shown are red blood cells, white blood cells, platelets, and a blood stem cell.

Anatomy of the bone. The bone is made up of compact bone, spongy bone, and bone marrow. Compact bone makes up the outer layer of the bone. Spongy bone is found mostly at the ends of bones and contains red marrow. Bone marrow is found in the center of most bones and has many blood vessels. There are two types of bone marrow: red and yellow. Red marrow contains blood stem cells that can become red blood cells, white blood cells, or platelets. Yellow marrow is made mostly of fat.

Leukemia may affect red blood cells, white blood cells, and platelets.

Normally, the body makes blood stem cells (immature cells) that become mature blood cells over time. A blood stem cell may become a myeloid stem cell or a lymphoid stem cell.

A myeloid stem cell becomes one of three types of mature blood cells:

  1. Red blood cells that carry oxygen and other substances to all tissues of the body.
  2. White blood cells that fight infection and disease.
  3. Platelets that form blood clots to stop bleeding.

A lymphoid stem cell becomes a lymphoblast cell and then one of three types of lymphocytes (white blood cells):

  1. B lymphocytes that make antibodies to help fight infection.
  2. T lymphocytes that help B lymphocytes make antibodies to fight infection.
  3. Natural killer cells that attack cancer cells and viruses.

http://www.cancer.gov/images/cdr/live/CDR526538-750.jpg

Blood cell development; drawing shows the steps a blood stem cell goes through to become a red blood cell, platelet, or white blood cell. A myeloid stem cell becomes a red blood cell, a platelet, or a myeloblast, which then becomes a granulocyte (the types of granulocytes are eosinophils, basophils, and neutrophils). A lymphoid stem cell becomes a lymphoblast and then becomes a B-lymphocyte, T-lymphocyte, or natural killer cell.

Blood cell development. A blood stem cell goes through several steps to become a red blood cell, platelet, or white blood cell.

In CLL, too many blood stem cells become abnormal lymphocytes and do not become healthy white blood cells. The abnormal lymphocytes may also be called leukemia cells. The lymphocytes are not able to fight infection very well. Also, as the number of lymphocytes increases in the blood and bone marrow, there is less room for healthy white blood cells, red blood cells, and platelets. This may cause infection, anemia, and easy bleeding.

This summary is about chronic lymphocytic leukemia. See the following PDQ summaries for more information about leukemia:

  • Adult Acute Lymphoblastic Leukemia Treatment.
  • Childhood Acute Lymphoblastic Leukemia Treatment.
  • Adult Acute Myeloid Leukemia Treatment.
  • Childhood Acute Myeloid Leukemia/Other Myeloid Malignancies Treatment.
  • Chronic Myelogenous Leukemia Treatment.
  • Hairy Cell Leukemia Treatment

Older age can affect the risk of developing chronic lymphocytic leukemia.

Anything that increases your risk of getting a disease is called a risk factor. Having a risk factor does not mean that you will get cancer; not having risk factors doesn’t mean that you will not get cancer. Talk with your doctor if you think you may be at risk. Risk factors for CLL include the following:

  • Being middle-aged or older, male, or white.
  • A family history of CLL or cancer of the lymph system.
  • Having relatives who are Russian Jews or Eastern European Jews.

Signs and symptoms of chronic lymphocytic leukemia include swollen lymph nodes and tiredness.

Usually CLL does not cause any signs or symptoms and is found during a routine blood test. Signs and symptoms may be caused by CLL or by other conditions. Check with your doctor if you have any of the following:

  • Painless swelling of the lymph nodes in the neck, underarm, stomach, or groin.
  • Feeling very tired.
  • Pain or fullness below the ribs.
  • Fever and infection.
  • Weight loss for no known reason.

Tests that examine the blood, bone marrow, and lymph nodes are used to detect (find) and diagnose chronic lymphocytic leukemia.

The following tests and procedures may be used:

Physical exam and history: An exam of the body to check general signs of health, including checking for signs of disease, such as lumps or anything else that seems unusual. A history of the patient’s health habits and past illnesses and treatments will also be taken.

Complete blood count (CBC) with differential: A procedure in which a sample of blood is drawn and checked for the following:

  • The number of red blood cells and platelets.
  • The number and type of white blood cells.
  • The amount of hemoglobin (the protein that carries oxygen) in the red blood cells.
  • The portion of the blood sample made up of red blood cells.

Results from the Phase 3 Resonate™ Trial

Significantly improved progression free survival (PFS) vs ofatumumab in patients with previously treated CLL

  • Patients taking IMBRUVICA® had a 78% statistically significant reduction in the risk of disease progression or death compared with patients who received ofatumumab
  • In patients with previously treated del 17p CLL, median PFS was not yet reached with IMBRUVICA® vs 5.8 months with ofatumumab (HR 0.25; 95% CI: 0.14, 0.45)

Significantly prolonged overall survival (OS) with IMBRUVICA® vs ofatumumab in patients with previously treated CLL

  • In patients with previously treated CLL, those taking IMBRUVICA® had a 57% statistically significant reduction in the risk of death compared with those who received ofatumumab (HR 0.43; 95% CI: 0.24, 0.79; P<0.05)

Typical treatment of chronic lymphocytic leukemia

http://www.cancer.org/cancer/leukemia-chroniclymphocyticcll/detailedguide/leukemia-chronic-lymphocytic-treating-treatment-by-risk-group

Treatment options for chronic lymphocytic leukemia (CLL) vary greatly, depending on the person’s age, the disease risk group, and the reason for treating (for example, which symptoms it is causing). Many people live a long time with CLL, but in general it is very difficult to cure, and early treatment hasn’t been shown to help people live longer. Because of this and because treatment can cause side effects, doctors often advise waiting until the disease is progressing or bothersome symptoms appear, before starting treatment.

If treatment is needed, factors that should be taken into account include the patient’s age, general health, and prognostic factors such as the presence of chromosome 17 or chromosome 11 deletions or high levels of ZAP-70 and CD38.

Initial treatment

Patients who might not be able to tolerate the side effects of strong chemotherapy (chemo) are often treated with chlorambucil alone or with a monoclonal antibody targeting CD20, such as rituximab (Rituxan) or obinutuzumab (Gazyva). Other options include rituximab alone or a corticosteroid such as prednisone.

In stronger and healthier patients, there are many options for treatment. Commonly used treatments include:

  • FCR: fludarabine (Fludara), cyclophosphamide (Cytoxan), and rituximab
  • Bendamustine (sometimes with rituximab)
  • FR: fludarabine and rituximab
  • CVP: cyclophosphamide, vincristine, and prednisone (sometimes with rituximab)
  • CHOP: cyclophosphamide, doxorubicin, vincristine (Oncovin), and prednisone
  • Chlorambucil combined with prednisone, rituximab, obinutuzumab, or ofatumumab
  • PCR: pentostatin (Nipent), cyclophosphamide, and rituximab
  • Alemtuzumab (Campath)
  • Fludarabine (alone)

Other drugs or combinations of drugs may also be used.

If the only problem is an enlarged spleen or swollen lymph nodes in one region of the body, localized treatment with low-dose radiation therapy may be used. Splenectomy (surgery to remove the spleen) is another option if the enlarged spleen is causing symptoms.

Sometimes very high numbers of leukemia cells in the blood cause problems with normal circulation. This is called leukostasis. Chemo may not lower the number of cells until a few days after the first dose, so before the chemo is given, some of the cells may be removed from the blood with a procedure called leukapheresis. This treatment lowers blood counts right away. The effect lasts only for a short time, but it may help until the chemo has a chance to work. Leukapheresis is also sometimes used before chemo if there are very high numbers of leukemia cells (even when they aren’t causing problems) to prevent tumor lysis syndrome (this was discussed in the chemotherapy section).

Some people who have very high-risk disease (based on prognostic factors) may be referred for possible stem cell transplant (SCT) early in treatment.

Second-line treatment of CLL

If the initial treatment is no longer working or the disease comes back, another type of treatment may help. If the initial response to the treatment lasted a long time (usually at least a few years), the same treatment can often be used again. If the initial response wasn’t long-lasting, using the same treatment again isn’t as likely to be helpful. The options will depend on what the first-line treatment was and how well it worked, as well as the person’s health.

Many of the drugs and combinations listed above may be options as second-line treatments. For many people who have already had fludarabine, alemtuzumab seems to be helpful as second-line treatment, but it carries an increased risk of infections. Other purine analog drugs, such as pentostatin or cladribine (2-CdA), may also be tried. Newer drugs such as ofatumumab, ibrutinib (Imbruvica), and idelalisib (Zydelig) may be other options.

If the leukemia responds, stem cell transplant may be an option for some patients.

Some people may have a good response to first-line treatment (such as fludarabine) but may still have some evidence of a small number of leukemia cells in the blood, bone marrow, or lymph nodes. This is known as minimal residual disease. CLL can’t be cured, so doctors aren’t sure if further treatment right away will be helpful. Some small studies have shown that alemtuzumab can sometimes help get rid of these remaining cells, but it’s not yet clear if this improves survival.

Treating complications of CLL

One of the most serious complications of CLL is a change (transformation) of the leukemia to a high-grade or aggressive type of non-Hodgkin lymphoma called diffuse large cell lymphoma. This happens in about 5% of CLL cases, and is known as Richter syndrome. Treatment is often the same as it would be for lymphoma (see our document called Non-Hodgkin Lymphoma for more information), and may include stem cell transplant, as these cases are often hard to treat.

Less often, CLL may transform to prolymphocytic leukemia. As with Richter syndrome, these cases can be hard to treat. Some studies have suggested that certain drugs such as cladribine (2-CdA) and alemtuzumab may be helpful.

In rare cases, patients with CLL may have their leukemia transform into acute lymphocytic leukemia (ALL). If this happens, treatment is likely to be similar to that used for patients with ALL (see our document called Leukemia: Acute Lymphocytic).

Acute myeloid leukemia (AML) is another rare complication in patients who have been treated for CLL. Drugs such as chlorambucil and cyclophosphamide can damage the DNA of blood-forming cells. These damaged cells may go on to become cancerous, leading to AML, which is very aggressive and often hard to treat (see our document called Leukemia: Acute Myeloid).

CLL can cause problems with low blood counts and infections. Treatment of these problems was discussed in the section “Supportive care in chronic lymphocytic leukemia.”

Treatments other than Chemotherapy for Leukemias and Lymphomas

Author, Curator, Editor: Larry H. Bernstein, MD, FCAP

2.5.1 Radiation Therapy 

http://www.lls.org/treatment/types-of-treatment/radiation-therapy

Radiation therapy, also called radiotherapy or irradiation, can be used to treat leukemia, lymphoma, myeloma and myelodysplastic syndromes. The type of radiation used for radiotherapy (ionizing radiation) is the same that’s used for diagnostic x-rays. Radiotherapy, however, is given in higher doses.

Radiotherapy works by damaging the genetic material (DNA) within cells, which prevents them from growing and reproducing. Although the radiotherapy is directed at cancer cells, it can also damage nearby healthy cells. However, current methods of radiotherapy have been improved upon, minimizing “scatter” to nearby tissues. Therefore its benefit (destroying the cancer cells) outweighs its risk (harming healthy cells).

When radiotherapy is used for blood cancer treatment, it’s usually part of a treatment plan that includes drug therapy. Radiotherapy can also be used to relieve pain or discomfort caused by an enlarged liver, lymph node(s) or spleen.

Radiotherapy, either alone or with chemotherapy, is sometimes given as conditioning treatment to prepare a patient for a blood or marrow stem cell transplant. The most common types used to treat blood cancer are external beam radiation (see below) and radioimmunotherapy.
External Beam Radiation

External beam radiation is the type of radiotherapy used most often for people with blood cancers. A focused radiation beam is delivered outside the body by a machine called a linear accelerator, or linac for short. The linear accelerator moves around the body to deliver radiation from various angles. Linear accelerators make it possible to decrease or avoid skin reactions and deliver targeted radiation to lessen “scatter” of radiation to nearby tissues.

The dose (total amount) of radiation used during treatment depends on various factors regarding the patient, disease and reason for treatment, and is established by a radiation oncologist. You may receive radiotherapy during a series of visits, spread over several weeks (from two to 10 weeks, on average). This approach, called dose fractionation, lessens side effects. External beam radiation does not make you radioactive.

2.5.2  Bone marrow (BM) transplantation

http://www.nlm.nih.gov/medlineplus/ency/article/003009.htm

There are three kinds of bone marrow transplants:

Autologous bone marrow transplant: The term auto means self. Stem cells are removed from you before you receive high-dose chemotherapy or radiation treatment. The stem cells are stored in a freezer (cryopreservation). After high-dose chemotherapy or radiation treatments, your stem cells are put back in your body to make (regenerate) normal blood cells. This is called a rescue transplant.

Allogeneic bone marrow transplant: The term allo means other. Stem cells are removed from another person, called a donor. Most times, the donor’s genes must at least partly match your genes. Special blood tests are done to see if a donor is a good match for you. A brother or sister is most likely to be a good match. Sometimes parents, children, and other relatives are good matches. Donors who are not related to you may be found through national bone marrow registries.

Umbilical cord blood transplant: This is a type of allogeneic transplant. Stem cells are removed from a newborn baby’s umbilical cord right after birth. The stem cells are frozen and stored until they are needed for a transplant. Umbilical cord blood cells are very immature so there is less of a need for matching. But blood counts take much longer to recover.

Before the transplant, chemotherapy, radiation, or both may be given. This may be done in two ways:

Ablative (myeloablative) treatment: High-dose chemotherapy, radiation, or both are given to kill any cancer cells. This also kills all healthy bone marrow that remains, and allows new stem cells to grow in the bone marrow.

Reduced intensity treatment, also called a mini transplant: Patients receive lower doses of chemotherapy and radiation before a transplant. This allows older patients, and those with other health problems, to have a transplant.

A stem cell transplant is usually done after chemotherapy and radiation are complete. The stem cells are delivered into your bloodstream, usually through a tube called a central venous catheter. The process is similar to getting a blood transfusion. The stem cells travel through the blood into the bone marrow. Most times, no surgery is needed.

Donor stem cells can be collected in two ways:

  • Bone marrow harvest. This minor surgery is done under general anesthesia. This means the donor will be asleep and pain-free during the procedure. The bone marrow is removed from the back of both hip bones. The amount of marrow removed depends on the weight of the person who is receiving it.
  • Leukapheresis. First, the donor is given 5 days of shots to help stem cells move from the bone marrow into the blood. During leukapheresis, blood is removed from the donor through an IV line in a vein. The part of white blood cells that contains stem cells is then separated in a machine and removed to be later given to the recipient. The red blood cells are returned to the donor.

Why the Procedure is Performed

A bone marrow transplant replaces bone marrow that either is not working properly or has been destroyed (ablated) by chemotherapy or radiation. Doctors believe that for many cancers, the donor’s white blood cells can attack any remaining cancer cells, similar to when white cells attack bacteria or viruses when fighting an infection.

Your doctor may recommend a bone marrow transplant if you have:

  • Certain cancers, such as leukemia, lymphoma, and multiple myeloma
  • A disease that affects the production of bone marrow cells, such as aplastic anemia, congenital neutropenia, severe immunodeficiency syndromes, sickle cell anemia, or thalassemia
  • Had chemotherapy that destroyed your bone marrow

2.5.3 Autologous stem cell transplantation

Phase II trial of 131I-B1 (anti-CD20) antibody therapy with autologous stem cell transplantation for relapsed B cell lymphomas

O.W. Press, F. Appelbaum, P.J. Martin, et al.
http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(95)92225-3/abstract

25 patients with relapsed B-cell lymphomas were evaluated with trace-labelled doses (2·5 mg/kg, 185-370 MBq [5-10 mCi]) of 131I-labelled anti-CD20 (B1) antibody in a phase II trial. 22 patients achieved 131I-B1 biodistributions delivering higher doses of radiation to tumor sites than to normal organs and 21 of these were treated with therapeutic infusions of 131I-B1 (12·765-29·045 GBq) followed by autologous hemopoietic stem cell reinfusion. 18 of the 21 treated patients had objective responses, including 16 complete remissions. One patient died of progressive lymphoma and one died of sepsis. Analysis of our phase I and II trials with 131I-labelled B1 reveal a progression-free survival of 62% and an overall survival of 93% with a median follow-up of 2 years. 131I-anti-CD20 (B1) antibody therapy produces complete responses of long duration in most patients with relapsed B-cell lymphomas when given at maximally tolerated doses with autologous stem cell rescue.

Autologous (Self) Transplants

http://www.leukaemia.org.au/treatments/stem-cell-transplants/autologous-self-transplants

An autologous transplant (or rescue) is a type of transplant that uses the person’s own stem cells. These cells are collected in advance and returned at a later stage. They are used to replace stem cells that have been damaged by high doses of chemotherapy, used to treat the person’s underlying disease.

In most cases, stem cells are collected directly from the bloodstream. While stem cells normally live in your marrow, a combination of chemotherapy and a growth factor (a drug that stimulates stem cells) called Granulocyte Colony Stimulating Factor (G-CSF) is used to expand the number of stem cells in the marrow and cause them to spill out into the circulating blood. From here they can be collected from a vein by passing the blood through a special machine called a cell separator, in a process similar to dialysis.

Most of the side effects of an autologous transplant are caused by the conditioning therapy used. Although they can be very unpleasant at times it is important to remember that most of them are temporary and reversible.

Procedure of Hematopoietic Stem Cell Transplantation

Hematopoietic stem cell transplantation (HSCT) is the transplantation of multipotent hematopoietic stem cells, usually derived from bone marrow, peripheral blood, or umbilical cord blood. It may be autologous (the patient’s own stem cells are used) or allogeneic (the stem cells come from a donor).

Hematopoietic Stem Cell Transplantation

Author: Ajay Perumbeti, MD, FAAP; Chief Editor: Emmanuel C Besa, MD
http://emedicine.medscape.com/article/208954-overview

Hematopoietic stem cell transplantation (HSCT) involves the intravenous (IV) infusion of autologous or allogeneic stem cells to reestablish hematopoietic function in patients whose bone marrow or immune system is damaged or defective.

The image below illustrates an algorithm for typically preferred hematopoietic stem cell transplantation cell source for treatment of malignancy.

An algorithm for typically preferred hematopoietic stem cell transplantation cell source for treatment of malignancy: If a matched sibling donor is not available, then a MUD is selected; if a MUD is not available, then choices include a mismatched unrelated donor, umbilical cord donor(s), and a haploidentical donor.
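The preference order described in that algorithm can be transcribed into a few lines of illustrative Python. This is only a sketch of the flow stated in the caption (matched sibling first, then MUD, then the remaining alternatives, which the caption does not rank further); it is not clinical guidance, and the donor-type labels are simply those used above.

```python
# Illustrative transcription of the donor-preference flow described above.
# Not clinical guidance; real donor selection weighs many patient-specific factors.

FIRST_CHOICE = "matched sibling donor"
SECOND_CHOICE = "matched unrelated donor (MUD)"
# The caption lists these as alternatives without ranking them further.
REMAINING_OPTIONS = {
    "mismatched unrelated donor",
    "umbilical cord donor(s)",
    "haploidentical donor",
}

def preferred_cell_source(available):
    """Return the preferred donor type(s) given a set of available donor types."""
    if FIRST_CHOICE in available:
        return {FIRST_CHOICE}
    if SECOND_CHOICE in available:
        return {SECOND_CHOICE}
    return available & REMAINING_OPTIONS  # any of the remaining alternatives

print(preferred_cell_source({"haploidentical donor", "matched unrelated donor (MUD)"}))
# -> {'matched unrelated donor (MUD)'}
```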

Supportive Therapies

2.5.4  Blood transfusions – risks and complications of a blood transfusion

  • Allogeneic transfusion reaction (acute or delayed hemolytic reaction)
  • Allergic reaction
  • Viruses and infectious diseases

The risk of catching a virus from a blood transfusion is very low.

HIV. Your risk of getting HIV from a blood transfusion is lower than your risk of getting killed by lightning. Only about 1 in 2 million donations might carry HIV and transmit HIV if given to a patient.

Hepatitis B and C. The risk of having a donation that carries hepatitis B is about 1 in 205,000. The risk for hepatitis C is 1 in 2 million. If you receive blood during a transfusion that carries a hepatitis virus, you’ll likely develop the infection.

Variant Creutzfeldt-Jakob disease (vCJD). This disease is the human version of Mad Cow Disease. It’s a very rare, yet fatal brain disorder. There is a possible risk of getting vCJD from a blood transfusion, although the risk is very low. Because of this, people who may have been exposed to vCJD aren’t eligible blood donors.

  • Fever
  • Iron Overload
  • Lung Injury
  • Graft-Versus-Host Disease

Graft-versus-host disease (GVHD) is a condition in which white blood cells in the new blood attack your tissues.

2.5.5 Erythropoietin

Erythropoietin, also known as EPO, is a glycoprotein hormone that controls erythropoiesis, or red blood cell production. It is a cytokine (protein signaling molecule) for erythrocyte (red blood cell) precursors in the bone marrow. Human EPO has a molecular weight of 34 kDa.

Also called hematopoietin or hemopoietin, it is produced by interstitial fibroblasts in the kidney in close association with peritubular capillary and proximal convoluted tubule. It is also produced in perisinusoidal cells in the liver. While liver production predominates in the fetal and perinatal period, renal production is predominant during adulthood. In addition to erythropoiesis, erythropoietin also has other known biological functions. For example, it plays an important role in the brain’s response to neuronal injury.[1] EPO is also involved in the wound healing process.[2]

Exogenous erythropoietin is produced by recombinant DNA technology in cell culture. Several different pharmaceutical agents are available with a variety of glycosylation patterns, and are collectively called erythropoiesis-stimulating agents (ESA). The specific details for labelled use vary between the package inserts, but ESAs have been used in the treatment of anemia in chronic kidney disease, anemia in myelodysplasia, and in anemia from cancer chemotherapy. Boxed warnings include a risk of death, myocardial infarction, stroke, venous thromboembolism, and tumor recurrence.[3]

2.5.6  G-CSF (granulocyte-colony stimulating factor)

Granulocyte-colony stimulating factor (G-CSF or GCSF), also known as colony-stimulating factor 3 (CSF 3), is a glycoprotein that stimulates the bone marrow to produce granulocytes and stem cells and release them into the bloodstream.

There are different types, including

  • Lenograstim (Granocyte)
  • Filgrastim (Neupogen, Zarzio, Nivestim, Ratiograstim)
  • Long acting (pegylated) filgrastim (pegfilgrastim, Neulasta) and lipegfilgrastim (Longquex)

Pegylated G-CSF stays in the body for longer so you have treatment less often than with the other types of G-CSF.

2.5.7  Plasma Exchange (plasmapheresis)

http://emedicine.medscape.com/article/1895577-overview

Plasmapheresis is a term used to refer to a broad range of procedures in which extracorporeal separation of blood components results in a filtered plasma product.[1, 2] The filtering of plasma from whole blood can be accomplished via centrifugation or semipermeable membranes.[3] Centrifugation takes advantage of the different specific gravities inherent to various blood products such as red cells, white cells, platelets, and plasma.[4] Membrane plasma separation uses differences in particle size to filter plasma from the cellular components of blood.[3]

Traditionally, in the United States, most plasmapheresis takes place using automated centrifuge-based technology.[5] In certain instances, in particular in patients already undergoing hemodialysis, plasmapheresis can be carried out using semipermeable membranes to filter plasma.[4]

In therapeutic plasma exchange, using an automated centrifuge, filtered plasma is discarded and red blood cells along with replacement colloid such as donor plasma or albumin is returned to the patient. In membrane plasma filtration, secondary membrane plasma fractionation can selectively remove undesired macromolecules, which then allows for return of the processed plasma to the patient instead of donor plasma or albumin. Examples of secondary membrane plasma fractionation include cascade filtration,[6] thermofiltration, cryofiltration,[7] and low-density lipoprotein pheresis.

The Apheresis Applications Committee of the American Society for Apheresis periodically evaluates potential indications for apheresis and categorizes them from I to IV based on the available medical literature. The following are some of the indications, and their categorization, from the society’s 2010 guidelines.[2]

  • The only Category I indication for hemopoietic malignancy is Hyperviscosity in monoclonal gammopathies

2.5.8  Platelet Transfusions

Indications for platelet transfusion in children with acute leukemia

Scott Murphy, Samuel Litwin, Leonard M. Herring, Penelope Koch, et al.
Am J Hematol Jun 1982; 12(4): 347–356
http://onlinelibrary.wiley.com/doi/10.1002/ajh.2830120406/abstract;jsessionid=A6001D9D865EA1EBC667EF98382EF20C.f03t01
http://dx.doi.org/10.1002/ajh.2830120406

In an attempt to determine the indications for platelet transfusion in thrombocytopenic patients, we randomized 56 children with acute leukemia to one of two regimens of platelet transfusion. The prophylactic group received platelets when the platelet count fell below 20,000 per mm3 irrespective of clinical events. The therapeutic group was transfused only when significant bleeding occurred and not for thrombocytopenia alone. The time to first bleeding episode was significantly longer and the number of bleeding episodes were significantly reduced in the prophylactic group. The survival curves of the two groups could not be distinguished from each other. Prior to the last month of life, the total number of days on which bleeding was present was significantly reduced by prophylactic therapy. However, in the terminal phase (last month of life), the duration of bleeding episodes was significantly longer in the prophylactic group. This may have been due to a higher incidence of immunologic refractoriness to platelet transfusion. Because of this terminal bleeding, comparison of the two groups for total number of days on which bleeding was present did not show a significant difference over the entire study period.

Clinical and Laboratory Aspects of Platelet Transfusion Therapy
Yuan S, Goldfinger D
http://www.uptodate.com/contents/clinical-and-laboratory-aspects-of-platelet-transfusion-therapy

INTRODUCTION — Hemostasis depends on an adequate number of functional platelets, together with an intact coagulation (clotting factor) system. This topic covers the logistics of platelet use and the indications for platelet transfusion in adults. The approach to the bleeding patient, refractoriness to platelet transfusion, and platelet transfusion in neonates are discussed elsewhere.

Pooled Platelets – A single unit of platelets can be isolated from every unit of donated blood, by centrifuging the blood within the closed collection system to separate the platelets from the red blood cells (RBC). The number of platelets per unit varies according to the platelet count of the donor; a yield of 7 x 10^10 platelets is typical [1]. Since this number is inadequate to raise the platelet count in an adult recipient, four to six units are pooled to allow transfusion of 3 to 4 x 10^11 platelets per transfusion [2]. These are called whole blood-derived or random donor pooled platelets.
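As a back-of-the-envelope check on this pooling arithmetic, the short Python sketch below uses the typical yield figure quoted above (7 x 10^10 platelets per unit) to show why roughly four to six units are pooled for an adult dose. The numbers are the cited typical values, not a transfusion-service specification.

```python
# Rough pooling arithmetic for whole blood-derived platelet units.
# Yield and dose figures are the typical values quoted in the text above.
import math

UNIT_YIELD = 7e10                 # platelets per whole blood-derived unit (typical)
ADULT_DOSE_RANGE = (3e11, 4e11)   # platelets per pooled transfusion

def units_needed(target_dose, unit_yield=UNIT_YIELD):
    """Smallest whole number of units whose combined yield reaches the target dose."""
    return math.ceil(target_dose / unit_yield)

low, high = ADULT_DOSE_RANGE
print(f"{units_needed(low)} to {units_needed(high)} units per pooled transfusion")
# -> "5 to 6 units per pooled transfusion", consistent with the 4 to 6 units typically pooled
```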

Advantages of pooled platelets include lower cost and ease of collection and processing (a separate donation procedure and pheresis equipment are not required). The major disadvantage is recipient exposure to multiple donors in a single transfusion and logistic issues related to bacterial testing.

Apheresis (single donor) Platelets – Platelets can also be collected from volunteer donors in the blood bank, in a one- to two-hour pheresis procedure. Platelets and some white blood cells are removed, and red blood cells and plasma are returned to the donor. A typical apheresis platelet unit provides the equivalent of six or more units of platelets from whole blood (ie, 3 to 6 x 10^11 platelets) [2]. In larger donors with high platelet counts, up to three units can be collected in one session. These are called apheresis or single donor platelets.

Advantages of single donor platelets are exposure of the recipient to a single donor rather than multiple donors, and the ability to match donor and recipient characteristics such as HLA type, cytomegalovirus (CMV) status, and blood type for certain recipients.

Both pooled and apheresis platelets contain some white blood cells (WBC) that were collected along with the platelets. These WBC can cause febrile non-hemolytic transfusion reactions (FNHTR), alloimmunization, and transfusion-associated graft-versus-host disease (ta-GVHD) in some patients.

Platelet products also contain plasma, which can be implicated in adverse reactions including transfusion-related acute lung injury (TRALI) and anaphylaxis. (See ‘Complications of platelet transfusion’.)

Hematologic Malignancies, Table of Contents

Writer and Curator:  Larry H. Bernstein, MD, FCAP

Hematologic Malignancies 

Not excluding lymphomas [solid tumors]

The following series of articles are discussions of current identifications, classification, and treatments of leukemias, myelodysplastic syndromes and myelomas.

2.4 Hematological Malignancies

2.4.1 Ontogenesis of blood elements

Erythropoiesis

White blood cell series: myelopoiesis

Thrombocytogenesis

2.4.2 Classification of hematopoietic cancers

Primary Classification

Acute leukemias

Myelodysplastic syndromes

Acute myeloid leukemia

Acute lymphoblastic leukemia

Myeloproliferative Disorders

Chronic myeloproliferative disorders

Chronic myelogenous leukemia and related disorders

Myelofibrosis, including chronic idiopathic

Polycythemia, including polycythemia rubra vera

Thrombocytosis, including essential thrombocythemia

Chronic lymphoid leukemia and other lymphoid leukemias

Lymphomas

Non-Hodgkin Lymphoma

Hodgkin lymphoma

Lymphoproliferative disorders associated with immunodeficiency

Plasma Cell dyscrasias

Mast cell disease and Histiocytic neoplasms

Secondary Classification

Nuance – PathologyOutlines

2.4.3 Diagnostics

Computer-aided diagnostics

Back-to-Front Design

Realtime Clinical Expert Support

Regression: A richly textured method for comparison and classification of predictor variables

Converting Hematology Based Data into an Inferential Interpretation

A model for Thalassemia Screening using Hematology Measurements

Measurement of granulocyte maturation may improve the early diagnosis of the septic state.

The automated malnutrition assessment.

Molecular Diagnostics

Genomic Analysis of Hematological Malignancies

Next-generation sequencing in hematologic malignancies: what will be the dividends?

Leveraging cancer genome information in hematologic malignancies.

p53 mutations are associated with resistance to chemotherapy and short survival in hematologic malignancies

Genomic approaches to hematologic malignancies

2.4.4 Treatment of hematopoietic cancers

2.4.4.1 Treatments for leukemia by type

2.4.4.2 Acute lymphocytic leukemias

2.4.4.3 Treatment of Acute Lymphoblastic Leukemia

Acute Lymphoblastic Leukemia

Gene-Expression Patterns in Drug-Resistant Acute Lymphoblastic Leukemia Cells and Response to Treatment

Leukemias Treatment & Management

Treatments and drugs

2.4.5 Acute Myeloid Leukemia

New treatment approaches in acute myeloid leukemia: review of recent clinical studies

Novel approaches to the treatment of acute myeloid leukemia.

Current treatment of acute myeloid leukemia

Adult Acute Myeloid Leukemia Treatment (PDQ®)

2.4.6 Treatment for CML

Chronic Myelogenous Leukemia Treatment (PDQ®)

What`s new in chronic myeloid leukemia research and treatment?

2.4.7 Chronic Lymphocytic Leukemia

Chronic Lymphocytic Leukemia Treatment (PDQ®)

Results from the Phase 3 Resonate™ Trial

Typical treatment of chronic lymphocytic leukemia

2.4.8 Lymphoma treatment

2.4.8.1 Overview

2.4.8.2 Chemotherapy

………………………………..

Chapter 6

Total body irradiation (TBI)

Bone marrow (BM) transplantation

Autologous stem cell transplantation

Hematopoietic stem cell transplantation

Supportive Therapies

Blood transfusions

Erythropoietin

G-CSF (granulocyte-colony stimulating factor)

Plasma exchange (plasmapheresis)

Platelet transfusions

Steroids

Artificial Intelligence Versus the Scientist: Who Will Win?

Will DARPA Replace the Human Scientist: Not So Fast, My Friend!

Writer, Curator: Stephen J. Williams, Ph.D.

Article ID #168: Artificial Intelligence Versus the Scientist: Who Will Win?. Published on 3/2/2015

WordCloud Image Produced by Adam Tubman

Last month’s Science article by Jia You, “DARPA Sets Out to Automate Research”[1], gave a glimpse of how science could be conducted in the future: without scientists. The article focused on the U.S. Defense Advanced Research Projects Agency (DARPA) program called “Big Mechanism”, a $45 million effort to develop computer algorithms which read scientific journal papers with the ultimate goal of extracting enough information to design hypotheses and the next set of experiments,

all without human input.

The head of the project, artificial intelligence expert Paul Cohen, says the overall goal is to help scientists cope with the complexity of massive amounts of information. As Paul Cohen stated for the article:

“Just when we need to understand highly connected systems as systems, our research methods force us to focus on little parts.”

The Big Mechanism project aims to design computer algorithms to critically read journal articles, much as scientists do, to determine what and how the information contributes to the knowledge base.

As a proof of concept DARPA is attempting to model Ras-mutation driven cancers using previously published literature in three main steps:

  1. Natural Language Processing: Machines read literature on cancer pathways and convert information to computational semantics and meaning

One team is focused on extracting details on experimental procedures, mining certain phraseology to judge a paper’s worth (for example, hedged phrases like ‘we suggest’ or ‘suggests a role in’ might be scored as weak, whereas assertive phrases like ‘we prove’ or ‘provide evidence’ might flag an article as worthwhile to curate); a minimal sketch of this phrase-scoring idea appears after the list below. Another team, led by a computational linguistics expert, will design systems to map the meanings of sentences.

  2. Integrate each piece of knowledge into a computational model to represent the Ras pathway in oncogenesis.
  3. Produce hypotheses and propose experiments based on the knowledge base, which can be experimentally verified in the laboratory.
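To make the phrase-based triage idea in step 1 concrete, here is a minimal Python sketch of how hedged versus assertive phrases might be scored. The phrase lists and weights are illustrative assumptions, not the actual Big Mechanism implementation, which uses far more sophisticated natural language processing.

```python
# Minimal sketch of phrase-based evidence scoring for sentences in a paper.
# Phrase lists and weights are illustrative only; the real DARPA Big Mechanism
# systems use far more sophisticated natural language processing.

WEAK_PHRASES = ("we suggest", "suggest a role in", "may be involved in")
STRONG_PHRASES = ("we prove", "provide evidence", "we demonstrate")

def evidence_score(sentence: str) -> int:
    """+1 for each assertive phrase, -1 for each hedged phrase found in the sentence."""
    text = sentence.lower()
    strong = sum(text.count(p) for p in STRONG_PHRASES)
    weak = sum(text.count(p) for p in WEAK_PHRASES)
    return strong - weak

passage = [
    "We provide evidence that mutant Ras constitutively activates the MAPK cascade.",
    "These data suggest a role in resistance to upstream inhibition.",
]
print([evidence_score(s) for s in passage])  # -> [1, -1]
```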

The Human No Longer Needed? Not So Fast, My Friend!

The problems the DARPA research teams are encountering include the following:

  • Need for data verification
  • Text mining and curation strategies
  • Incomplete knowledge base (past, current and future)
  • Molecular biology does not necessarily “require causal inference” as other fields do

Verification

Notice this verification step (step 3) requires physical lab work, as do all other ‘omics strategies and other computational biology projects. As with high-throughput microarray screens, verification is usually needed in the form of qPCR, or interesting genes are validated in a phenotypic (expression) system. In addition, there has been an ongoing issue surrounding the validity and reproducibility of some research studies and data.

See Importance of Funding Replication Studies: NIH on Credibility of Basic Biomedical Studies

Therefore, as DARPA attempts to recreate the Ras pathway from published literature and suggest new pathways/interactions, it will be necessary to experimentally confirm certain points (protein interactions or modification events, signaling events) in order to validate their computer model.

Text-Mining and Curation Strategies

The Big Mechanism Project is starting very small; this reflects some of the challenges of scale in this project. Researchers were given only six-paragraph-long passages and a rudimentary model of the Ras pathway in cancer and then asked to automate a text mining strategy to extract as much useful information as possible. Unfortunately this strategy could be fraught with issues frequently encountered in the biocuration community, namely:

Manual or automated curation of scientific literature?

Biocurators, the scientists who painstakingly sort through the voluminous scientific literature to extract and then organize relevant data into accessible databases, have debated whether manual, automated, or a combination of both curation methods [2] achieves the highest accuracy for extracting the information needed to enter in a database. Abigail Cabunoc, a lead developer for the Ontario Institute for Cancer Research’s WormBase (a database of nematode genetics and biology) and Lead Developer at Mozilla Science Lab, noted on her blog, covering the lively debate on biocuration methodology at the Seventh International Biocuration Conference (#ISB2014), that the massive amounts of information will require a Herculean effort regardless of the methodology.

Although I will have a future post on the advantages/disadvantages and tools/methodologies of manual vs. automated curation, there is a great article on researchinformation.info, “Extracting More Information from Scientific Literature”, and also see “The Methodology of Curation for Scientific Research Findings” and “Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison” for manual curation methodologies, and “A MOD(ern) perspective on literature curation” for a nice workflow paper on the International Society for Biocuration site.

The Big Mechanism team decided on a fully automated approach to text-mine their limited literature set for relevant information; however, they were able to extract only 40% of the relevant information from these six paragraphs relative to the given model. Although the investigators were happy with this percentage, most biocurators, whether using a manual or automated method to extract information, would consider 40% a low success rate. Biocurators, regardless of method, have reported the ability to extract 70-90% of relevant information from the whole literature (for example, for the Comparative Toxicogenomics Database) [3-5].
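For readers unfamiliar with how such percentages are computed, the 40% and 70-90% figures are essentially extraction recall against a manually curated gold standard. The tiny Python sketch below illustrates the calculation; the pathway assertions are invented purely for illustration.

```python
# Extraction recall: fraction of gold-standard facts recovered by the automated
# pipeline. The assertions below are hypothetical placeholders, not real data.

gold_standard = {
    "KRAS activates RAF1",
    "RAF1 phosphorylates MEK1",
    "MEK1 phosphorylates ERK2",
    "ERK2 translocates to the nucleus",
    "NF1 inactivates KRAS",
}

extracted_by_pipeline = {
    "KRAS activates RAF1",
    "MEK1 phosphorylates ERK2",
}

recall = len(gold_standard & extracted_by_pipeline) / len(gold_standard)
print(f"recall = {recall:.0%}")  # -> "recall = 40%"
```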

Incomplete Knowledge Base

In an earlier posting (actually was a press release for our first e-book) I had discussed the problem with the “data deluge” we are experiencing in scientific literature as well as the plethora of ‘omics experimental data which needs to be curated.

Tackling the problem of scientific and medical information overload

Figure. The number of papers listed in PubMed (disregarding reviews) during ten-year periods has steadily increased since 1970.

Analyzing and sharing the vast amounts of scientific knowledge has never been so crucial to innovation in the medical field. The publication rate has steadily increased from the 70’s, with a 50% increase in the number of original research articles published from the 1990’s to the previous decade. This massive amount of biomedical and scientific information has presented the unique problem of an information overload, and the critical need for methodology and expertise to organize, curate, and disseminate this diverse information for scientists and clinicians. Dr. Larry Bernstein, President of Triplex Consulting and previously chief of pathology at New York’s Methodist Hospital, concurs that “the academic pressures to publish, and the breakdown of knowledge into “silos”, has contributed to this knowledge explosion and although the literature is now online and edited, much of this information is out of reach to the very brightest clinicians.”

Traditionally, organization of biomedical information has been the realm of the literature review, but most reviews are performed years after discoveries are made and, given the rapid pace of new discoveries, this is appearing to be an outdated model. In addition, most medical searches are dependent on keywords, hence adding more complexity to the investigator in finding the material they require. Third, medical researchers and professionals are recognizing the need to converse with each other, in real-time, on the impact new discoveries may have on their research and clinical practice.

These issues require a people-based strategy, having expertise in a diverse and cross-integrative number of medical topics to provide the in-depth understanding of the current research and challenges in each field as well as providing a more conceptual-based search platform. To address this need, human intermediaries, known as scientific curators, are needed to narrow down the information and provide critical context and analysis of medical and scientific information in an interactive manner powered by web 2.0 with curators referred to as the “researcher 2.0”. This curation offers better organization and visibility to the critical information useful for the next innovations in academic, clinical, and industrial research by providing these hybrid networks.

Yaneer Bar-Yam of the New England Complex Systems Institute was not confident that using details from past knowledge could produce adequate roadmaps for future experimentation, and noted for the article, “The expectation that the accumulation of details will tell us what we want to know is not well justified.”

In a recent post I curated findings from four lung cancer omics studies and presented some graphics on the bioinformatic analysis of the novel genetic mutations resulting from these studies (see link below)

Multiple Lung Cancer Genomic Projects Suggest New Targets, Research Directions for Non-Small Cell Lung Cancer

which showed that, while multiple genetic mutations and related pathway ontologies were well documented in the lung cancer literature, many significant genetic mutations and pathways identified in the genomic studies had little literature attributed to them.

This ‘literomics’ analysis reveals a large gap between our knowledge base and the data resulting from large translational ‘omic’ studies.

Different Literature Analysis Approaches Yield Different Perspectives

A ‘literomics’ approach focuses on what we do NOT know about genes, proteins, and their associated pathways, while a text-mining machine learning algorithm focuses on building a knowledge base to determine the next line of research or what needs to be measured. Using each approach can give us different perspectives on ‘omics data.

Deriving Causal Inference

Ras is one of the best studied and characterized oncogenes, and the mechanisms behind Ras-driven oncogenesis are well understood. This, according to computational biologist Larry Hunt of Smart Information Flow Technologies, makes Ras a great starting point for the Big Mechanism project. As he states, “Molecular biology is a good place to try [developing a machine learning algorithm] because it’s an area in which common sense plays a minor role.”

Even though some may think the project wouldn’t be able to tackle other mechanisms that involve epigenetic factors, UCLA’s expert in causality Judea Pearl, Ph.D. (head of the UCLA Cognitive Systems Lab) feels it is possible for machine learning to bridge this gap. As summarized from his lecture at Microsoft:

“The development of graphical models and the logic of counterfactuals have had a marked effect on the way scientists treat problems involving cause-effect relationships. Practical problems requiring causal information, which long were regarded as either metaphysical or unmanageable can now be solved using elementary mathematics. Moreover, problems that were thought to be purely statistical, are beginning to benefit from analyzing their causal roots.”

According to him, the investigator must first

1) articulate the assumptions, and

2) define the research question in counterfactual terms.

Then it is possible to design an inference system, using the causal calculus, that tells the investigator what they need to measure.
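As one concrete (and standard) example of what this calculus delivers: if a set of measured covariates Z satisfies Pearl’s back-door criterion relative to a cause X and an effect Y, the interventional distribution can be computed from purely observational quantities, which tells the investigator that Z is what must be measured.

```latex
% Back-door adjustment (Pearl). If Z blocks every back-door path from X to Y,
% the effect of intervening on X is identifiable from observational data:
P\bigl(y \mid \mathrm{do}(x)\bigr) = \sum_{z} P\bigl(y \mid x, z\bigr)\, P(z)
```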

To watch a video of Dr. Judea Pearl’s April 2013 lecture at Microsoft Research Machine Learning Summit 2013 (“The Mathematics of Causal Inference: with Reflections on Machine Learning”), click here.

The key for the Big Mechanism Project may be in correcting for the variables among studies, in essence building a model system which may not rely on fully controlled conditions. Dr. Peter Spirtes from Carnegie Mellon University in Pittsburgh, PA is developing a project called the TETRAD project with two goals: 1) to specify and prove under what conditions it is possible to reliably infer causal relationships from background knowledge and statistical data not obtained under fully controlled conditions, and 2) to develop, analyze, implement, test and apply practical, provably correct computer programs for inferring causal structure under conditions where this is possible.

In summary, such projects and algorithms will tell investigators what, and possibly how, to measure.

So for now it seems we are still needed.

References

  1. You J: Artificial intelligence. DARPA sets out to automate research. Science 2015, 347(6221):465.
  2. Biocuration 2014: Battle of the New Curation Methods [http://blog.abigailcabunoc.com/biocuration-2014-battle-of-the-new-curation-methods]
  3. Davis AP, Johnson RJ, Lennon-Hopkins K, Sciaky D, Rosenstein MC, Wiegers TC, Mattingly CJ: Targeted journal curation as a method to improve data currency at the Comparative Toxicogenomics Database. Database : the journal of biological databases and curation 2012, 2012:bas051.
  4. Wu CH, Arighi CN, Cohen KB, Hirschman L, Krallinger M, Lu Z, Mattingly C, Valencia A, Wiegers TC, John Wilbur W: BioCreative-2012 virtual issue. Database : the journal of biological databases and curation 2012, 2012:bas049.
  5. Wiegers TC, Davis AP, Mattingly CJ: Collaborative biocuration–text-mining development task for document prioritization for curation. Database : the journal of biological databases and curation 2012, 2012:bas037.

Other posts on this site include: Artificial Intelligence, Curation Methodology, Philosophy of Science

Inevitability of Curation: Scientific Publishing moves to embrace Open Data, Libraries and Researchers are trying to keep up

A Brief Curation of Proteomics, Metabolomics, and Metabolism

The Methodology of Curation for Scientific Research Findings

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

The growing importance of content curation

Data Curation is for Big Data what Data Integration is for Small Data

Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation The Art of Scientific & Medical Curation

Exploring the Impact of Content Curation on Business Goals in 2013

Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison

conceived: NEW Definition for Co-Curation in Medical Research

Reconstructed Science Communication for Open Access Online Scientific Curation

Search Results for ‘artificial intelligence’

 The Simple Pictures Artificial Intelligence Still Can’t Recognize

Data Scientist on a Quest to Turn Computers Into Doctors

Vinod Khosla: “20% doctor included”: speculations & musings of a technology optimist or “Technology will replace 80% of what doctors do”

Where has reason gone?


The Union of Biomarkers and Drug Development


Author and Curator: Larry H. Bernstein, MD, FCAP

There has been consolidation going on for over a decade in both the pharmaceutical and the diagnostics industries, and at the same time the rules for health care delivery are being rewritten.  I shall try to work through a clear picture of these events, which are not coincidental.

Key notables:

  1. A growing segment of the US population is reaching Medicare age
  2. There is also a large underserved population in both metropolitan and nonurban areas and a fragmentation of the middle class after a growth slowdown in the economy since the 2008 deep recession.
  3. The deep recession affecting worldwide economies was only buffered by availability of oil or natural gas.
  4. In addition, a self-destructive strategy of cutting spending on a national scale withdrew the support that would have bolstered infrastructure renewal.
  5. There has been dramatic success in the clinical diagnostics industry, despite its long history of being viewed as a loss leader; this has recently been followed by a pharmaceutical industry faced with an inability to introduce new products, leading to more competition from off-patent medications.
  6. The introduction of the Accountable Care Act has opened opportunities for improved care, despite political opposition, and has probably sustained opportunity in the healthcare market.

Let’s take a look at this three-headed serpent – Pharma, Diagnostics, New Entity – and where each of the following fits in:

  • The patient
  • Insurance
  • The physician

Part 1. The Concept

When Illumina Buys Roche: The Dawning Of The Era Of Diagnostics Dominance

Robert J. Easton, Alain J. Gilbert, Olivier Lesueur, Rachel Laing, and Mark Ratner
http://PharmaMedtechBI.com    | IN VIVO: The Business & Medicine Report Jul/Aug 2014; 32(7).

  • With current technology and resources, a well-funded IVD company can create and pursue a strategy of information gathering and informatics application to create medical knowledge, enabling it to assume the risk and manage certain segments of patients
  • We see the first step in the process as the emergence of new specialty therapy companies coming from an IVD legacy, most likely focused in cancer, infection, or critical care

When Illumina Inc. acquired the regulatory consulting firm Myraqa, a specialist in in vitro diagnostics (IVD), in July, the press release characterized the deal as one that would bolster Illumina’s in-house capabilities for clinical readiness and help prepare for its next growth phase in regulated markets. That’s not surprising given the US Food and Drug Administration’s (FDA) approval a year and a half ago of its MiSeq next-generation sequencer for clinical use. But the deal could also suggest Illumina is beginning to move along the path toward taking on clinical risk – that is, eventually

  • advising physicians and patients, which would mean facing regulators directly

Such a move – by Illumina, another life sciences tools firm, or an information specialist from the high-tech universe – is inevitable given

  • the emerging power of diagnostics and traditional health care players’ reluctance to themselves take on such risk.

Alternatively, we believe that a well-funded diagnostics company could establish this position. Either way, such a champion would establish dominion over and earn a higher valuation than less-aggressive players who

  • only supply compartmentalized drug and device solutions.

Diagnostics companies have long been dogged by a fundamental issue:

  1. they are viewed and valued more along the lines of a commodity business than as firms that deliver a unique product or service
  2. diagnostics companies are in position to do just that today because they are now advantaged by having access to more data points.
  3. if they were to cobble together the right capabilities, diagnostics companies would have the ability to turn information into true medical knowledge

Example: PathGEN PathChip

nucleic-acid-based platform detects 296 viruses, bacteria, fungi & parasites

http://ow.ly/d/2GvQ
http://ow.ly/DSORV

This puts the diagnostics player in an unfamiliar realm, where it must ask what value it offers compared with a therapeutic. The key is that diagnostics can now offer unique information and potentially unique tools to capture that information. To do so, the diagnostics player has to create information from the data it generates, and then supply that knowledge to users who will value and act on it. Complex genomic tests, as much as the physical examination, may be the first meaningful touch point for physicians’ classification of disease.

Even if lab tests are more expensive, they are a cheaper means of deciding what to do first for a patient than the trial and error of prescribing medication without adequate information. Information is gaining in value as the amount of treatment data available on genomically characterizable subpopulations increases. In such a circumstance, the leverage comes from being able to apply a proprietary diagnostics platform – and, importantly, the data it generates. It is the ability to perform that advisory function that will add tremendous value above what any test provides.

Integrated Diagnostics Inc. and Biodesix Inc., with mass spectrometry, have tools for unraveling disease processes, and numerous players are quite visibly in, or are getting into, the business of providing medical knowledge and clinical decision support in pursuit of a huge payout for those who actually solve important disease mysteries. Of course, one has to ask whether MS/MS is sufficient for the assigned task, and also whether the technology is ready for the kind of workload experienced in a clinical service as opposed to a research setting.  My impression (as a reviewer) is that it is not yet time to take this seriously.

Roche has not realized its intent with Ventana, failing to deliver on the promise of boosting Roche’s pipeline, which was a significant factor in the high price Roche paid. The combined company was to be “uniquely positioned to further expand Ventana’s business globally and together develop more cost-efficient, differentiated, and targeted medicines.”  On the other hand, Biodesix decided to use VeriStrat to look back and analyze important trial data to try to ascertain which subset of patients would benefit from ficlatuzumab. The predictive effect for the otherwise unimpressive trial results was observed in both progression-free survival and overall survival endpoints, and encouraged the companies to conduct a proof-of-concept study of ficlatuzumab in combination with Tarceva in advanced non-small cell lung cancer (NSCLC) patients selected using the VeriStrat test.

A second phase of IVD evolution will be far more challenging to pharma, when the most accomplished companies begin to assemble and integrate much broader data
sets, thereby gaining knowledge sufficient to actually manage patients and dictate therapy, including drug selection. No individual physician has or will have access to all of this information on thousands of patients, combined with the informatics to tease out from trillions of data points the optimal personalized medical approach. When the IVD-origin knowledge integrator amasses enough data and understanding to guide therapy decisions in large categories, particularly drug choices, it will become more valuable than any of the drug suppliers.

This is an apparent reversal of fortune. The pharmaceutical industry has been considered the valued provider, while the IVD manufacturer has been the low-valued cousin. Now, it is by the ability to make drug administration more accurate that the IVD company can control the drug bill, to the detriment of drug developers, by finding algorithms that generate equal-to-innovative-drug outcomes using generics for most patients, thereby limiting the margins of drug suppliers and the upside for new drug discovery and development.

It is here that there appears to be a misunderstanding of the whole picture of the development of the healthcare industry.  The pharmaceutical industry had high added value only insofar as it could replace market leaders for treatment before or at the time of patent expiration, which largely depended on either introducing a new class of drug or relieving the current drug in its class of undesired toxicities or “side effects”.  Otherwise, the drug armamentarium was time-limited to the expiration date. In other words, the value depended on a window without competition.  In addition, as the regulation of healthcare costs tightened under managed care, new products that were deemed only marginally better could be substituted with “off-patent” drug products.

The other misunderstanding is related to the IVD sector.  Laboratory tests in the 1950s were manual, and they could be done by “technicians” who might not have completed specialized training in clinical laboratory sciences.  The first sign of progress was the introduction of continuous-flow chemistry, with a sampling probe, tubing to bring the reacting reagents together, the timing of the reaction controlled by a coiled glass tube, and the colored product then introduced into a UV-visible photometer.  Within perhaps a decade, the Technicon SMA 12 and SMA 6 instruments were introduced, which could do up to 18 tests from a single sample.

Part 2. Emergence of an IVD Clinical Automated Diagnostics Industry

Why tests are ordered

  1. Screening
  2. Diagnosis
  3. Monitoring

Historical Perspective

Case in Point 1:  Outstanding Contributions in Clinical Chemistry. 1991. Arthur Karmen.

Dr. Karmen was born in New York City in 1930. He graduated from the Bronx High School of Science in 1946 and earned an A.B. and M.D. in 1950 and 1954, respectively, from New York University. In 1952, while a medical student working on a summer project at Memorial-Sloan Kettering, he used paper chromatography of amino acids to demonstrate the presence of glutamic-oxaloacetic and glutamic-pyruvic transaminases (aspartate and alanine aminotransferases) in serum and blood. In 1954, he devised the spectrophotometric method for measuring aspartate aminotransferase in serum, which, with minor modifications, is still used for diagnostic testing today. When developing this assay, he studied the reaction of NADH with serum and demonstrated the presence of lactate and malate dehydrogenases, both of which were also later used in diagnosis. Using the spectrophotometric method, he found that aspartate aminotransferase increased in the period immediately after an acute myocardial infarction and did the pilot studies that showed its diagnostic utility in heart and liver diseases.  This became as important as the EKG. It was replaced in cardiology usage by the MB isoenzyme of creatine kinase, which was driven by Burton Sobel’s work on infarct size, and later by the troponins.
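For readers unfamiliar with the assay, its principle (stated here from standard clinical chemistry practice rather than from the award citation above) is a coupled reaction: the oxaloacetate produced by the transaminase is reduced by malate dehydrogenase at the expense of NADH, so the fall in absorbance at 340 nm tracks enzyme activity.

\[
\text{L-aspartate} + \alpha\text{-ketoglutarate} \xrightarrow{\text{AST}} \text{oxaloacetate} + \text{L-glutamate}
\]
\[
\text{oxaloacetate} + \text{NADH} + \text{H}^{+} \xrightarrow{\text{MDH}} \text{L-malate} + \text{NAD}^{+}, \qquad
\text{AST activity} \propto -\frac{dA_{340}}{dt}
\]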

Case in Point 2:  Arterial Blood Gases.  Van Slyke. National Academy of Sciences.

The test is used to determine the pH of the blood, the partial pressure of carbon dioxide and oxygen, and the bicarbonate level. Many blood gas analyzers will also report concentrations of lactate, hemoglobin, several electrolytes, oxyhemoglobin, carboxyhemoglobin and methemoglobin. ABG testing is mainly used in pulmonology and critical care medicine to assess gas exchange across the alveolar-capillary membrane.
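The reported quantities are tied together by the Henderson-Hasselbalch relationship for the bicarbonate buffer system, reproduced here for reference (bicarbonate in mmol/L, PCO2 in mm Hg, 0.03 being the solubility coefficient of CO2):

\[
\mathrm{pH} = 6.1 + \log_{10}\!\left(\frac{[\mathrm{HCO_3^-}]}{0.03 \times P_{\mathrm{CO_2}}}\right)
\]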

DONALD DEXTER VAN SLYKE died on May 4, 1971, after a long and productive career that spanned three generations of biochemists and physicians. He left behind not only a bibliography of 317 journal publications and 5 books, but also more than 100 persons who had worked with him and distinguished themselves in biochemistry and academic medicine. His doctoral thesis, with Gomberg at the University of Michigan, was published in the Journal of the American Chemical Society in 1907.  Van Slyke received an invitation from Dr. Simon Flexner, Director of the Rockefeller Institute, to come to New York for an interview. In 1911 he spent a year in Berlin with Emil Fischer, who was then the leading chemist of the scientific world. He was particularly impressed by Fischer’s performing all laboratory operations quantitatively – a procedure Van followed throughout his life. Prior to going to Berlin, he published the classic nitrous acid method for the quantitative determination of primary aliphatic amino groups, the first of the many gasometric procedures devised by Van Slyke, which made possible the determination of amino acids. It was the primary method used to study the amino acid composition of proteins for years before chromatography. Thus, his first seven postdoctoral years were centered on the development of better methodology for protein composition and amino acid metabolism.

With his colleague G. M. Meyer, he first demonstrated that amino acids, liberated during digestion in the intestine, are absorbed into the bloodstream, that they are removed by the tissues, and that the liver alone possesses the ability to convert the amino acid nitrogen into urea.  From the study of the kinetics of urease action, Van Slyke and Cullen developed equations that depended upon two reactions: (1) the combination of enzyme and substrate in stoichiometric proportions and (2) the decomposition of that combination into the end products. Published in 1914, this formulation, involving two velocity constants, was similar to that arrived at contemporaneously by Michaelis and Menten in Germany in 1913.
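For reference, the rate law referred to here, in the familiar form that the Van Slyke-Cullen treatment parallels, is shown below in standard modern notation (not the 1914 formulation itself), where v is the initial velocity, [S] the substrate concentration, and Km the substrate concentration at half-maximal velocity:

\[
v = \frac{V_{\max}[\mathrm{S}]}{K_m + [\mathrm{S}]}
\]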

He transferred to the Rockefeller Institute’s Hospital in 1914, under Dr. Rufus Cole, where “Men who were studying disease clinically had the right to go as deeply into its fundamental nature as their training allowed, and in the Rockefeller Institute’s Hospital every man who was caring for patients should also be engaged in more fundamental study”.  The study of diabetes was already under way by Dr. F. M. Allen, but patients inevitably died of acidosis.  Van Slyke reasoned that if incomplete oxidation of fatty acids in the body led to the accumulation of acetoacetic and beta-hydroxybutyric acids in the blood, then a reaction would result between these acids and the bicarbonate ions that would lead to a lower-than-normal bicarbonate concentration in blood plasma. The problem thus became one of devising an analytical method that would permit the quantitative determination of bicarbonate concentration in small amounts of blood plasma.  He ingeniously devised a volumetric glass apparatus that was easy to use and required less than ten minutes for the determination of the total carbon dioxide in one cubic centimeter of plasma.  It also was soon found to be an excellent apparatus with which to determine blood oxygen concentrations, thus leading to measurements of the percentage saturation of blood hemoglobin with oxygen. This found extensive application in the study of respiratory diseases, such as pneumonia and tuberculosis. It also led to the quantitative study of cyanosis and a monograph on the subject by C. Lundsgaard and Van Slyke.

In all, Van Slyke and his colleagues published twenty-one papers under the general title “Studies of Acidosis,” beginning in 1917 and ending in 1934. They included not only chemical manifestations of acidosis, but Van Slyke, in No. 17 of the series (1921), elaborated and expanded the subject to describe in chemical terms the normal and abnormal variations in the acid-base balance of the blood. This was a landmark in understanding acid-base balance pathology.  Within seven years after Van moved to the Hospital, he had published a total of fifty-three papers, thirty-three of them coauthored with clinical colleagues.

In 1920, Van Slyke and his colleagues undertook a comprehensive investigation of gas and electrolyte equilibria in blood. McLean and Henderson at Harvard had made preliminary studies of blood as a physico-chemical system, but realized that Van Slyke and his colleagues at the Rockefeller Hospital had superior techniques and the facilities necessary for such an undertaking. A collaboration thereupon began between the two laboratories, which resulted in rapid progress toward an exact physico-chemical description of the role of hemoglobin in the transport of oxygen and carbon dioxide, of the distribution of diffusible ions and water between erythrocytes and plasma,
and of factors such as degree of oxygenation of hemoglobin and hydrogen ion concentration that modified these distributions. In this Van Slyke revised his volumetric gas analysis apparatus into a manometric method.  The manometric apparatus proved to give results that were from five to ten times more accurate.

A series of papers on the CO2 titration curves of oxy- and deoxyhemoglobin, of oxygenated and reduced whole blood, and of blood subjected to different degrees of oxygenation and on the distribution of diffusible ions in blood resulted.  These developed equations that predicted the change in distribution of water and diffusible ions between blood plasma and blood cells when there was a change in pH of the oxygenated blood. A significant contribution of Van Slyke and his colleagues was the application of the Gibbs-Donnan Law to the blood – regarded as a two-phase system, in which one phase (the erythrocytes) contained a high concentration of nondiffusible negative ions, i.e., those associated with hemoglobin, and cations that were not freely exchangeable between cells and plasma. By changing the pH through varying the CO2 tension, the concentration of negative hemoglobin charges changed by a predictable amount. This, in turn, changed the distribution of diffusible anions such as Cl− and HCO3− in order to restore the Gibbs-Donnan equilibrium. Redistribution of water occurred to restore osmotic equilibrium. The experimental results confirmed the predictions of the equations.
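The Gibbs-Donnan condition embodied in those equations can be stated compactly: at equilibrium the concentration ratios of the diffusible anions across the red cell membrane are equal (a standard statement of the law, added here for clarity):

\[
\frac{[\mathrm{Cl^-}]_{\text{cell}}}{[\mathrm{Cl^-}]_{\text{plasma}}} = \frac{[\mathrm{HCO_3^-}]_{\text{cell}}}{[\mathrm{HCO_3^-}]_{\text{plasma}}} = r
\]

A change in the net charge on hemoglobin, driven by pH and CO2 tension, therefore forces a redistribution of chloride, bicarbonate, and water between cells and plasma (the chloride shift).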

As a spin-off from the physico-chemical study of the blood, Van undertook, in 1922, to put the concept of buffer value of weak electrolytes on a mathematically exact basis.
This proved to be useful in determining buffer values of mixed, polyvalent, and amphoteric electrolytes, and put the understanding of buffering on a quantitative basis. A
monograph in Medicine entitled “Observation on the Courses of Different Types of Bright’s Disease, and on the Resultant Changes in Renal Anatomy,” was a landmark that
related the changes occurring at different stages of renal deterioration to the quantitative changes taking place in kidney function. During this period, Van Slyke and R. M. Archibald identified glutamine as the source of urinary ammonia. During World War II, Van and his colleagues documented the effect of shock on renal function and, with R. A. Phillips, developed a simple method, based on specific gravity, suitable for use in the field.
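The buffer value mentioned at the start of this passage is the amount of strong base required to raise the pH of a solution by one unit; for a single weak acid of total concentration C and dissociation constant Ka it takes the form below (a standard presentation given here for reference, not quoted from the 1922 paper):

\[
\beta = \frac{dB}{d\,\mathrm{pH}} = 2.303\, C\, \frac{K_a[\mathrm{H^+}]}{\left(K_a + [\mathrm{H^+}]\right)^{2}}
\]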

Over 100 of Van’s 300 publications were devoted to methodology. The importance of Van Slyke’s contribution to clinical chemical methodology cannot be overestimated.
These included the blood organic constituents (carbohydrates, fats, proteins, amino acids, urea, nonprotein nitrogen, and phospholipids) and the inorganic constituents (total cations, calcium, chlorides, phosphate, and the gases carbon dioxide, carbon monoxide, and nitrogen). It was said that a Van Slyke manometric apparatus was almost all the special equipment needed to perform most of the clinical chemical analyses customarily performed prior to the introduction of photocolorimeters and spectrophotometers for such determinations.

The progress made in the medical sciences in genetics, immunology, endocrinology, and antibiotics during the second half of the twentieth century obscures at times the progress that was made in basic and necessary biochemical knowledge during the first half. Methods capable of giving accurate quantitative chemical information on biological material had to be painstakingly devised; basic questions on chemical behavior and metabolism had to be answered; and, finally, those factors that adversely modified the normal chemical reactions in the body so that abnormal conditions arise that we characterize as disease states had to be identified.

Viewed in retrospect, he combined in one scientific lifetime (1) basic contributions to the chemistry of body constituents and their chemical behavior in the body, (2) a chemical understanding of physiological functions of certain organ systems (notably the respiratory and renal), and (3) how such information could be exploited in the
understanding and treatment of disease. That outstanding additions to knowledge in all three categories were possible was in large measure due to his sound and broadly based chemical preparation, his ingenuity in devising means of accurate measurements of chemical constituents, and the opportunity given him at the Hospital of the Rockefeller Institute to study disease in company with physicians.

In addition, he found time to work collaboratively with Dr. John P. Peters of Yale on the classic, two-volume Quantitative Clinical Chemistry. In 1922, John P. Peters, who had just gone to Yale from Van Slyke’s laboratory as an Associate Professor of Medicine, was asked by a publisher to write a modest handbook for clinicians describing useful chemical methods and discussing their application to clinical problems. It was originally to be called “Quantitative Chemistry in Clinical Medicine.” He soon found that it was going to be a bigger job than he could handle alone and asked Van Slyke to join him in writing it. Van agreed, and the two men proceeded to draw up an outline and divide up the writing of the first drafts of the chapters between them. They also agreed to exchange each chapter until it met the satisfaction of both. At the time it was published in 1931, it contained practically all that could be stated with confidence about those aspects of disease that could be and had been studied by chemical means. It was widely accepted throughout the medical world as the “Bible” of quantitative clinical chemistry, and to this day some of the chapters have not become outdated.

History of Laboratory Medicine at Yale University.

The roots of the Department of Laboratory Medicine at Yale can be traced back to John Peters, the head of what he called the “Chemical Division” of the Department of Internal Medicine, subsequently known as the Section of Metabolism, who co-authored with Donald Van Slyke the landmark 1931 textbook Quantitative Clinical Chemistry (2.3); and to Pauline Hald, research collaborator of Dr. Peters who subsequently served as Director of Clinical Chemistry at Yale-New Haven Hospital for many years. In 1947, Miss Hald reported the very first flame photometric measurements of sodium and potassium in serum (4). This study helped to lay the foundation for modern studies of metabolism and their application to clinical care.

The Laboratory Medicine program at Yale had its inception in 1958 as a section of Internal Medicine under the leadership of David Seligson. In 1965, Laboratory Medicine achieved autonomous section status and in 1971, became a full-fledged academic department. Dr. Seligson, who served as the first Chair, pioneered modern automation and computerized data processing in the clinical laboratory. In particular, he demonstrated the feasibility of discrete sample handling for automation that is now the basis of virtually all automated chemistry analyzers. In addition, Seligson and Zetner demonstrated the first clinical use of atomic absorption spectrophotometry. He was one of the founding members of the major Laboratory Medicine academic society, the Academy of Clinical Laboratory Physicians and Scientists.


Case in Point 3.  Nathan Gochman.  Developer of Automated Chemistries.

Nathan Gochman, PhD, has over 40 years of experience in the clinical diagnostics industry. This includes academic teaching and research, and 30 years in the pharmaceutical and in vitro diagnostics industry. He has managed R & D, technical marketing and technical support departments. As a leader in the industry he was President of the American Association for Clinical Chemistry (AACC) and the National Committee for Clinical Laboratory Standards (NCCLS, now CLSI). He is currently a Consultant to investment firms and IVD companies.


The clinical laboratory has become so productive, particularly in chemistry and immunology, and the labor, instrument, and reagent costs so well determined, that today a physician’s medical decisions are 80% determined by the clinical laboratory.  Medical information systems have lagged far behind.  Why is that?  Because the decision for an MIS has historically been based on billing capture.  Moreover, the historical use of chemical profiles was quite good at validating healthy status in an outpatient population, but the profiles became restricted under Diagnosis Related Groups.  Thus, it came to be that diagnostics were considered a “commodity”.  In order to be competitive, a laboratory had to provide “high complexity” tests that were drawn in by a large volume of “moderate complexity” tests.

Part 3. Biomarkers in Medical Practice

Case in Point 1.

A Solid Prognostic Biomarker

HDL-C: Target of Therapy or Fuggedaboutit?

Steven E. Nissen, MD, MACC, Peter Libby, MD

November 06, 2014

Steven E. Nissen, MD, MACC: I am Steve Nissen, chairman of the Department of Cardiovascular Medicine at the Cleveland Clinic. I am here with Dr Peter Libby, chief of cardiology at the Brigham and Women’s Hospital and professor of medicine at Harvard Medical School. We are going to discuss high-density lipoprotein cholesterol (HDL-C), a topic that has been very controversial recently. Peter, HDL-C has been a pretty good biomarker. The question is whether it is a good target.

Peter Libby, MD: Since the early days in Berkeley, when they were doing ultracentrifugation, and when it was reinforced and put on the map by the Framingham Study,[1] we have known that HDL-C is an extremely good biomarker of prospective cardiovascular risk with an inverse relationship with all kinds of cardiovascular events. That is as solid a finding as you can get in observational epidemiology. It is a very reliable prospective marker. It’s natural that the pharmaceutical industry and those of us who are interested in risk reduction would focus on HDL-C as a target. That is where the controversies come in.

Dr Nissen: It has been difficult. My view is that the trials that have attempted to modulate HDL-C or the drugs they used have been flawed. Although the results have not been promising, the jury is still out. Torcetrapib, the cholesteryl ester transfer protein (CETP) inhibitor developed by Pfizer, had an off-target toxicity.[2] Niacin is not very effective, and there are a lot of downsides to the drug. That has been an issue, but people are still working on this. We have done some studies. We did our ApoA-1 Milano infusion study[3] about a decade ago, which showed very promising results with respect to shrinking plaques in coronary arteries. I remain open to the possibility that the right drug in the right trial will work.

Dr Libby: What do you do with the genetic data that have come out in the past couple of years? Sekar Kathiresan masterminded and organized an enormous collaboration[4] in which they looked, with contemporary genetics, at whether HDL had the genetic markers of being a causal risk factor. They came up empty-handed.

Dr Nissen: I am cautious about interpreting those data, like I am cautious about interpreting animal studies of atherosclerosis. We have both lived through this problem in which something works extremely well in animals but doesn’t work in humans, or it doesn’t work in animals but it works in humans. The genetic studies don’t seal the fate of HDL. I have an open mind about this. Drugs are complex. They work by complex mechanisms. It is my belief that what we have to do is test these hypotheses in well-designed clinical trials, which are rigorously performed with drugs that are clean—unlike torcetrapib—and don’t have off-target toxicities.

An Unmet Need: High Lp(a) Levels

Dr Nissen: I’m going to push back on that and make a couple of points. The HPS2-THRIVE study was flawed. They studied the wrong people. It was not a good study, and AIM-HIGH[8] was underpowered. I am not putting people on niacin. What do you do with a patient whose Lp(a) is 200 mg/dL?

Dr Libby: I’m waiting for the results of the PCSK9 and anacetrapib studies. You can tell me about evacetrapib.[9] Reducing Lp(a) is an unmet medical need. We both care for kindreds with high Lp(a) levels and premature coronary artery disease. We have no idea what to do with them other than to treat them with statins and lower their LDL-C levels.

Dr Nissen: I have taken a more cautious approach with respect to taking people off of niacin. If I have patients who are doing well and tolerating it (depending on why it was started), I am discontinuing niacin in some people. I am starting very few people on the drug, but I worry about the quality of the trial.

Dr Libby: So you are of the “don’t start don’t stop” school?

Dr Nissen: Yes. It’s difficult when the trial is fatally flawed. There were 11,000 patients from China in this study. I have known for years that if you give niacin to people of Asiatic ethnic descent, they have terrible flushing and they won’t continue the drug. One question is, what was the adherence? The adverse events would have been tolerable had there been efficacy. The concern here is that this study was destined to fail because they studied a low LDL/high HDL population, a group of people for whom niacin just isn’t used.

Triglycerides and HDL: Do We Have It Backwards?

Dr Libby: What about the recent genetic[10] and epidemiologic data that support triglycerides, and apolipoprotein C3 in particular as a causal risk factor? Have we been misled through all of the generations in whom we have been adjusting triglycerides for HDL-C and saying that triglycerides are not a causal risk factor because once we adjust for HDL, the risk goes away? Do you think we got it backwards?

Dr Nissen: The tricky factor here is that because of this intimate inverse relationship between triglycerides and HDL, we may be talking about the same phenomenon. That is one of the reasons that I am not certain we are not going to be able to find a therapy. What if you had a therapy that lowered triglycerides and raised HDL-C? Could that work? Could that combination be favorable? I want answers from rigorous, well-designed clinical trials that ask the right questions in the right populations. I am disappointed, just as I have been disappointed by the fibrate trials.[11,12] There is a class of drugs that raises HDL-C a little and lowers triglycerides a lot.

Dr Nissen: But the gemfibrozil studies (VA-HIT[13] and Helsinki Heart[14]) showed benefit.

The Dyslipidemia Bar Has Been Raised

Dr Libby: Those studies were from the pre-statin era. We both were involved in trials in which patients were on high-dose statins at baseline. Do you think that this is too high a bar?

Dr Nissen: The bar has been raised, and for the pharmaceutical industry, the studies that we need to find out whether lowering triglycerides or raising HDL is beneficial are going to be large. We are doing a study with evacetrapib. It has 12,000 patients. It’s fully enrolled. Evacetrapib is a very clean-looking drug. It doesn’t have such a long biological half-life as anacetrapib, so I am very encouraged that it won’t have that baggage of being around for 2-4 years. We’ve got a couple of shots on goal here. Don’t forget that we have multiple ongoing studies of HDL-C infusion therapies that are still under development. Those have some promise too. The jury is still out.

Dr Libby: We agree on the need to do rigorous, large-scale endpoint trials. Do the biomarker studies, but don’t wait to start the endpoint trial because that’s the proof in the pudding.

Dr Nissen: Exactly. We have had a little controversy about HDL-C. We often agree, but not always, and we may have a different perspective. Thanks for joining me in this interesting discussion of what will continue to be a controversial topic for the next several years until we get the results of the current ongoing trials.

Case in Point 2.

NSTEMI? Honesty in Coding and Communication?

Melissa Walton-Shirley

November 07, 2014

The complaint at ER triage: Weakness, fatigue, near syncope of several days’ duration, vomiting, and decreased sensorium.

The findings: O2sat: 88% on room air. BP: 88 systolic. Telemetry: Sinus tachycardia 120 bpm. Blood sugar: 500 mg/dL. Chest X ray: atelectasis. Urinalysis: pyuria. ECG: T-wave-inversion anterior leads. Echocardiography: normal left ventricular ejection fraction (LVEF) and wall motion. Troponin I: 0.3 ng/mL. CT angiography: negative for pulmonary embolism (PE). White blood cell count: 20K with left shift. Blood cultures: positive for Gram-negative rods.

The treatment: Intravenous fluids and IV levofloxacin—changed to ciprofloxacin.

The communication at discharge: “You had a severe urinary-tract infection and grew bacteria in your bloodstream. Also, you’ve had a slight heart attack. See your cardiologist immediately upon discharge – no more than 5 days from now.”

The diagnoses coded at discharge: Urosepsis and non-ST segment elevation MI (NSTEMI) 410.1.

One year earlier: This moderately obese patient was referred to our practice for a preoperative risk assessment. The surgery planned was a technically simple procedure, but due to the need for precise instrumentation, general endotracheal anesthesia (GETA) was being considered. The patient was diabetic, overweight, and short of air. A stress exam was equivocal for CAD due to poor exercise tolerance and suboptimal imaging. Upon further discussion, symptoms were progressive; therefore, cardiac cath was recommended, revealing angiographically normal coronaries and a predictably elevated left ventricular end diastolic pressure (LVEDP) in the mid-20s range. The patient was given a diagnosis of diastolic dysfunction, a prescription for better hypertension control, and an in-depth discussion of exercise and the Mediterranean and DASH diets for weight loss. Symptoms improved with a low dose of diuretic. The surgery was completed without difficulty. Upon follow-up visit, the patient felt well, had lost a few pounds, and blood pressure was well controlled.

Five days after ER workup: While out of town, the patient developed profound weakness and went to the ER as described above. Fast forward to our office visit in the designated time frame of “no longer than 5 days’ postdischarge,” where the patient and family asked me about the “slight heart attack” that literally came on the heels of a normal coronary angiogram.

But the patient really didn’t have a “heart attack,” did they? The cardiologist aptly stated in his progress notes that it was likely a nonspecific troponin I leak. Yet the hospitalist framed the diagnosis of NSTEMI as item number 2 in the final diagnoses.

The motivations of personnel who code charts are largely innocent and likely a direct result of a lack of understanding of the coding system on our part as healthcare providers. I have a feeling, though, that hospitals aren’t anxious to correct this misperception, due to an opportunity for increased reimbursement. I contacted the director of a coding department for a large hospital, who prefers to remain anonymous. She explained that NSTEMI ICD-9 code 410.1 falls in DRG 282 with a weight of 0.7562. The diagnosis of “demand ischemia,” code 411.89, a slightly less inappropriate code for a nonspecific troponin I leak, falls in DRG 311 with a weight of 0.5662. To determine reimbursement, one must multiply the weight by the average hospital Medicare base rate of $5370. Keep in mind that each hospital’s base rate and corresponding payment will vary. The difference in reimbursement for a large hospital bill between these two coding choices is substantial, at over $1000 ($4060 vs $3040).
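The reimbursement arithmetic she describes is easy to reproduce. The short Python sketch below uses only the weights and base rate quoted above; the labels are illustrative, and actual base rates vary by hospital.

```python
# Reproduce the DRG payment comparison quoted above.
# Payment = DRG relative weight x hospital Medicare base rate.

BASE_RATE = 5370.0  # average hospital Medicare base rate cited in the text, USD

drg_weights = {
    "NSTEMI (ICD-9 410.1, DRG 282)": 0.7562,
    "Demand ischemia (ICD-9 411.89, DRG 311)": 0.5662,
}

payments = {label: weight * BASE_RATE for label, weight in drg_weights.items()}

for label, payment in payments.items():
    print(f"{label}: ${payment:,.0f}")

difference = (payments["NSTEMI (ICD-9 410.1, DRG 282)"]
              - payments["Demand ischemia (ICD-9 411.89, DRG 311)"])
print(f"Difference per admission: ${difference:,.0f}")  # roughly $1,020
```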

Although hospitals that are already reeling from shrinking revenues will make more money on the front end by coding the troponin leak incorrectly as an NSTEMI, when multiple unnecessary tests are generated to follow up on a nondiagnostic troponin leak, the available Centers for Medicare & Medicaid Services (CMS) reimbursement pie shrinks in the long run. Furthermore, this inappropriate categorization generates extreme concern on behalf of patients and family members that is often never laid to rest. The emotional toll of a “heart-attack” diagnosis has an impact on work fitness, quality of life, cost of medication, and the cost of future testing. If the patient lives for another 100 years, they will likely still list a “heart attack” in their medical history.

As a cardiologist, I resent the loose utilization of one of “my” heart-attack codes when it wasn’t that at all. At discharge, we need to develop a better way of communicating what exactly did happen. Equally important, we need to communicate what exactly didn’t happen as well.

Case in Point 3.

Blood Markers Predict CKD Heart Failure 

Published: Oct 3, 2014 | Updated: Oct 3, 2014

Elevated levels of high-sensitivity troponin T (hsTnT) and N-terminal pro-B-type natriuretic peptide (NT-proBNP) strongly predicted heart failure in patients with chronic kidney disease followed for a median of close to 6 years, researchers reported.

Compared with patients with the lowest blood levels of hsTnT, those with the highest had a nearly five-fold higher risk for developing heart failure and the risk was 10-fold higher in patients with the highest NT-proBNP levels compared with those with the lowest levels of the protein, researcher Nisha Bansal, MD, of the University of Washington in Seattle, and colleagues wrote online in the Journal of the American Society of Nephrology.

A separate study, published online in the Journal of the American Medical Association earlier in the week, also examined the comorbid conditions of heart and kidney disease, finding no benefit to the practice of treating cardiac surgery patients who developed acute kidney injury with infusions of the antihypertensive drug fenoldopam.

The study, reported by researcher Giovanni Landoni, MD, of the IRCCS San Raffaele Scientific Institute, Milan, Italy, and colleagues, was stopped early “for futility,” according to the authors, and the incidence of hypotension during drug infusion was significantly higher in patients infused with fenoldopam than placebo (26% vs. 15%; P=0.001).

Blood Markers Predict CKD Heart Failure

The study in patients with mild to moderate chronic kidney disease (CKD) was conducted to determine if blood markers could help identify patients at high risk for developing heart failure.

Heart failure is the most common cardiovascular complication among people with renal disease, occurring in about a quarter of CKD patients.

The two markers, hsTnT and NT-proBNP, are associated with overworked cardiac myocytes and have been shown to predict heart failure in the general population.

However, Bansal and colleagues noted, the markers have not been widely used in diagnosing heart failure among patients with CKD due to concerns that reduced renal excretion may raise levels of these markers, so that elevated levels would not reflect an actual increase in heart muscle strain.

To better understand the importance of elevated concentrations of hsTnT and NT-proBNP in CKD patients, the researchers examined their association with incident heart failure events in 3,483 participants in the ongoing observational Chronic Renal Insufficiency Cohort (CRIC) study.

All participants were recruited from June 2003 to August 2008, and all were free of heart failure at baseline. The researchers used Cox regression to examine the association of baseline levels of hsTnT and NT-proBNP with incident heart failure after adjustment for demographic influences, traditional cardiovascular risk factors, markers of kidney disease, pertinent medication use, and mineral metabolism markers.
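For readers unfamiliar with the method, the sketch below shows what a Cox proportional hazards fit of this kind can look like in Python using the lifelines package. The data frame is synthetic and the column names are hypothetical placeholders; this is not the CRIC dataset or the authors' code.

```python
# Minimal sketch of a Cox proportional hazards model for incident heart failure,
# fit on a small synthetic dataset with hypothetical column names.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
log_hstnt = rng.normal(2.0, 1.0, n)        # hypothetical log-transformed hsTnT
log_ntprobnp = rng.normal(4.5, 1.2, n)     # hypothetical log-transformed NT-proBNP

# In this toy model, higher marker levels shorten the time to heart failure.
hazard = np.exp(0.5 * log_hstnt + 0.4 * log_ntprobnp)
time_to_hf = rng.exponential(20.0 / hazard)
follow_up = np.minimum(time_to_hf, 6.0)            # administrative censoring at ~6 years
incident_hf = (time_to_hf <= 6.0).astype(int)      # 1 = heart failure event observed

df = pd.DataFrame({
    "follow_up_years": follow_up,
    "incident_hf": incident_hf,
    "log_hstnt": log_hstnt,
    "log_ntprobnp": log_ntprobnp,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="follow_up_years", event_col="incident_hf")
cph.print_summary()   # the exp(coef) column gives the adjusted hazard ratios
```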

At baseline, hsTnT levels ranged from ≤5.0 to 378.7 pg/mL and NT-proBNP levels ranged from ≤5 to 35,000 pg/mL. Compared with patients who had undetectable hsTnT, those in the highest quartile (>26.5 pg/mL) had a significantly higher rate of heart failure (hazard ratio 4.77; 95% CI 2.49-9.14).

Compared with those in the lowest NT-proBNP quintile (<47.6 pg/mL), patients in the highest quintile (>433.0 pg/mL) experienced an almost 10-fold increase in heart failure risk (HR 9.57; 95% CI 4.40-20.83).

The researchers noted that these associations remained robust after adjustment for potential confounders and for the other biomarker, suggesting that while hsTnT and NT-proBNP are complementary, they may be indicative of distinct biological pathways for heart failure.

Even Modest Increases in NT-proBNP Linked to Heart Failure

The findings are consistent with an earlier analysis that included 8,000 patients with albuminuria in the Prevention of REnal and Vascular ENd-stage Disease (PREVEND) study, which showed that hsTnT was associated with incident cardiovascular events, even after adjustment for eGFR and severity of albuminuria.

“Among participants in the CRIC study, those with the highest quartile of detectable hsTnT had a twofold higher odds of left ventricular hypertrophy compared with those in the lowest quartile,” Bansal and colleagues wrote, adding that the findings were similar after excluding participants with any cardiovascular disease at baseline.

Even modest elevations in NT-proBNP were associated with significantly increased rates of heart failure, including in subgroups stratified by eGFR, proteinuria, and diabetic status.

“NT-proBNP regulates blood pressure and body fluid volume by its natriuretic and diuretic actions, arterial dilation, and inhibition of the renin-aldosterone-angiotensin system and increased levels of this marker likely reflect myocardial stress induced by subclinical changes in volume or pressure, even in persons without clinical disease,” the researchers wrote.

The researchers concluded that further studies are needed to develop and validate risk prediction tools for clinical heart failure in patients with CKD, and to determine the potential role of these two biomarkers in a heart failure risk prediction and prevention strategy.

Fenoldopam ‘Widely Promoted’ in AKI Cardiac Surgery Setting

The JAMA study examined whether the selective dopamine D1 receptor agonist fenoldopam mesylate can reduce the need for dialysis in cardiac surgery patients who develop acute kidney injury (AKI).

Fenoldopam induces vasodilation of the renal, mesenteric, peripheral, and coronary arteries, and, unlike dopamine, it has no significant affinity for D2 receptors, meaning that it theoretically induces greater vasodilation in the renal medulla than in the cortex, the researchers wrote.

“Because of these hemodynamic effects, fenoldopam has been widely promoted for the prevention and therapy of AKI in the United States and many other countries with apparent favorable results in cardiac surgery and other settings,” Landoni and colleagues wrote.

The drug was approved in 1997 by the FDA for the indication of in-hospital, short-term management of severe hypertension. It has not been approved for renal indications, but is commonly used off-label in cardiac surgery patients who develop AKI.

Although a meta-analysis of randomized trials, conducted by the researchers, indicated a reduction in the incidence and progression of AKI associated with the treatment, Landoni and colleagues wrote that the absence of a definitive trial “leaves clinicians uncertain as to whether fenoldopam should be prescribed after cardiac surgery to prevent deterioration in renal function.”

To address this uncertainty, the researchers conducted a prospective, randomized, parallel-group trial in 667 patients treated at 19 hospitals in Italy from March 2008 to April 2013.

All patients had been admitted to ICUs after cardiac surgery with early acute kidney injury (≥50% increase of serum creatinine level from baseline or low output of urine for ≥6 hours). A total of 338 received fenoldopam by continuous intravenous infusion for a total of 96 hours or until ICU discharge, while 329 patients received saline infusions.

The primary end point was the rate of renal replacement therapy, and secondary end points included mortality (intensive care unit and 30-day mortality) and the rate of hypotension during study drug infusion.

Study Showed No Benefit, Was Stopped Early

Yale Lampoon – AA Liebow.   1954

Not As a Doctor
[Fourth Year]

These lyrics, sung by John Cole, Jack Gariepy and Ed Ransenhofer to music borrowed from Gilbert and Sullivan’s The Mikado, lampooned Averill Liebow, M.D., a pathologist noted for his demands on students. (CPC stands for clinical pathology conference.)

If you want to know what this is,
it’s a medical CPC
Where we give the house staff
the biz, for there’s no one so
wise as we!
We pathologists show them how,
Although it is too late now.
Our art is a sacred cow!

American physician, born 1911 in Stryj, Galicia, Austria (now in Ukraine); died 1978.

Averill Abraham Liebow, born in Austria, was the “founding father” of pulmonary pathology in the United States. He started his career as a pathologist at Yale, where he remained for many years. In 1968 he moved to the University of California School of Medicine, San Diego, where he taught for 7 years as Professor and Chairman, Department of Pathology.

His studies include many classic studies of lung diseases. Best known of these is his famous classification of interstitial lung disease. He also published papers on sclerosing pneumocytoma, pulmonary alveolar proteinosis, meningothelial-like nodules, pulmonary hypertension, pulmonary veno-occlusive disease, lymphomatoid granulomatosis, pulmonary Langerhans cell histiocytosis, pulmonary epithelioid hemangioendothelioma and pulmonary hyalinizing granuloma.

As a Lieutenant Colonel in the US Army Medical Corps, he was a member of the Atomic Bomb Casualty Commission, which studied the effects of the atomic bomb in Hiroshima and Nagasaki.

We thank Sanjay Mukhopadhyay, M.D., for information submitted.

When I was a resident at UCSD, Dr. Liebow held “Organ Recitals” every morning, including Mother’s Day.  The organs had to be presented in a specified order… heart, lung, and so forth.  On one occasion, we needed a heart for purification of human lactate dehydrogenase for a medical student project, so I presented the lung out of order.  Dr. Liebow asked where the heart was, and I told the group it was normal and that I had frozen it for enzyme purification (smiles).  His reply: in the future, show it to me first.  He was generous to those who showed interest.  As I was also doing research in Nathan Kaplan’s laboratory, he made special arrangements for me to mentor Deborah Peters, the daughter of a pulmonary physician and granddaughter of the Peters who collaborated with Van Slyke.  I have mentored many students with great reward since then.  He could look at a slide and tell you what the x-ray looked like.  I didn’t encounter that again until he sent me to the Armed Forces Institute of Pathology, Washington, DC, during the Vietnam War and Watergate, where I worked in Orthopedic Pathology with Lent C. Johnson.  Johnson would not review a case without the x-ray, and he taught the radiologists.

Part 4

My Cancer Genome from Vanderbilt University: Matching Tumor Mutations to Therapies & Clinical Trials

Reporter: Aviva Lev-Ari, PhD, RN



GenomOncology and Vanderbilt-Ingram Cancer Center (VICC) today announced a partnership for the exclusive commercial development of a decision support tool based on My Cancer Genome™, an online precision cancer medicine knowledge resource for physicians, patients, caregivers and researchers.

Through this collaboration, GenomOncology and VICC will enhance My Cancer Genome through the development of a new genomics content management tool. The MyCancerGenome.org website will remain free and open to the public. In addition, GenomOncology will develop a decision support tool based on My Cancer Genome™ data that will enable automated interpretation of mutations in the genome of a patient’s tumor, providing actionable results in hours versus days.

Vanderbilt-Ingram Cancer Center (VICC) launched My Cancer Genome™ in January 2011 as an integral part of their Personalized Cancer Medicine Initiative that helps physicians and researchers track the latest developments in precision cancer medicine and connect with clinical research trials. This web-based information tool is designed to quickly educate clinicians on the rapidly expanding list of genetic mutations that impact cancers and enable the research of treatment options based on specific mutations. For more information on My Cancer Genome™, visit www.mycancergenome.org/about/what-is-my-cancer-genome.

Therapies based on the specific genetic alterations that underlie a patient’s cancer not only result in better outcomes but often have fewer adverse reactions.

Up front fee

Nominal fee covers installation support, configuring the Workbench to your specification, designing and developing custom report(s) and training your team.

Per sample fee

GenomOncology is paid on signed-out clinical reports. This philosophy aligns GenomOncology with your Laboratory as we are incentivized to offer world-class support and solutions to differentiate your clinical NGS program. There is no annual license fee.

Part 5

Clinical Trial Services: Foundation Medicine & EmergingMed to Partner

Reporter: Aviva Lev-Ari, PhD, RN



Foundation Medicine and EmergingMed said today that they will partner to offer clinical trial navigation services for health care providers and their patients who have received one of Foundation Medicine’s tumor genomic profiling tests.

The firms will provide concierge services to help physicians

  • identify appropriate clinical trials for patients
  • based on the results of FoundationOne or FoundationOne Heme.

“By providing clinical trial navigation services, we aim to facilitate

  • timely and accurate clinical trial information and enrollment support services for physicians and patients,
  • enabling greater access to treatment options based on the unique genomic profile of a patient’s cancer

Currently, there are over 800 candidate therapies that target genomic alterations in clinical trials,

  • but “patients and physicians must identify and act on relevant options
  • when the patient’s clinical profile is aligned with the often short enrollment window for each trial.

These investigational therapies offer a favorable second option for patients whose cancer has progressed or returned following standard treatment.  The new service is unique in providing notification when new clinical trials emerge that match a patient’s genomic and clinical profile.

Google signs on to Foundation Medicine cancer Dx by offering tests to employees

By Emily Wasserman

Diagnostics luminary Foundation Medicine ($FMI) is generating some upward momentum, fueled by growing revenues and the success of its clinical tests. Tech giant Google ($GOOG) has taken note and is signing onto the company’s cancer diagnostics by offering them to employees.

Foundation Medicine CEO Michael Pellini said during the company’s Q3 earnings call that Google will start covering its DNA tests for employees and their family members suffering from cancer as part of its health benefits portfolio, Reuters reports.

Both sides stand to benefit from the deal, as Google looks to keep a leg up on Silicon Valley competitors and Foundation Medicine expands its cancer diagnostics platform. Last month, Apple ($AAPL) and Facebook ($FB) announced that they would begin covering the cost of egg freezing for female employees. A diagnostics partnership and attractive health benefits could work wonders for Google’s employee retention rates and bottom line.

In the meantime, Cambridge, MA-based Foundation Medicine is charging full speed ahead with its cancer diagnostics platform after filing for an IPO in September 2013. The company chalked up 6,428 clinical tests during Q3 2014, an eye-popping 149% increase year over year, and brought in total revenue for the quarter of $16.4 million–a 100% leap from last year. Foundation Medicine credits the promising numbers in part to new diagnostic partnerships and extended coverage for its tests.

In January, the company teamed up with Novartis ($NVS) to help the drugmaker evaluate potential candidates for its cancer therapies. In April, Foundation Medicine announced that it would develop a companion diagnostic test for a Clovis Oncology ($CLVS) drug under development to treat patients with ovarian cancer, building on an ongoing collaboration between the two companies.

Foundation Medicine also has its sights set on China’s growing diagnostics market, inking a deal in October with WuXi PharmaTech ($WX) that allows the company to perform lab testing for its FoundationOne assay at WuXi’s Shanghai-based Genome Center.

Foundation Medicine gave a nod to the deal with Google during a corporate earnings call on Wednesday, according to a person who listened in. Pellini said Google employees were made aware of this new benefit last week.

Foundation Medicine teams with MD Anderson for new trial of cancer Dx

Second study to see if targeted therapy can change patient outcomes

August 15, 2014 | By FierceDiagnostics

Foundation Medicine ($FMI) is teaming up with the MD Anderson Cancer Center in Texas for a new trial of the Cambridge, MA-based company’s molecular diagnostic cancer test, which matches targeted therapies to individual patients.

The study is called IMPACT2 (Initiative for Molecular Profiling and Advanced Cancer Therapy) and is designed to build on results from the first IMPACT study, which found

  • 40% of the 1,144 patients enrolled had an identifiable genomic alteration.

The company said that

  • by matching specific gene alterations to therapies,
  • 27% of patients in the first study responded versus
  • 5% with an unmatched treatment, and
  • “progression-free survival” was longer in the matched group.

The FoundationOne molecular diagnostic test

  • combines genetic sequencing and data gathering
  • to help oncologists choose the best treatment for individual patients.

Costing $5,800 per test, FoundationOne’s technology can uncover a large number of genetic alterations for 200 cancer-related genes,

  • blending genomic sequencing, information and clinical practice.

“Based on the IMPACT1 data, a validated, comprehensive profiling approach has already been adopted by many academic and community-based oncology practices,” Vincent Miller, chief medical officer of Foundation Medicine, said in a release. “This study has the potential to yield sufficient evidence necessary to support broader adoption across most newly diagnosed metastatic tumors.”

The company got a boost last month when the New York State Department of Health approved Foundation Medicine’s two initial cancer tests: the FoundationOne test and FoundationOne Heme, which creates a genetic profile for blood cancers. Typically,

  • diagnostics companies struggle to win insurance approval for their tests
  • even after they gain a regulatory approval, leaving revenue growth relatively flat.

However, Foundation Medicine reported earlier this week its Q2 revenue reached $14.5 million compared to $5.9 million for the same period a year ago. Still,

  • net losses continue to soar as the company ramps up
  • its commercial and business development operations,
  • hitting $13.7 million versus a $10.1 million deficit in the second quarter of 2013.

Oncology

There has been a remarkable transformation in our understanding of

  • the molecular genetic basis of cancer and its treatment during the past decade or so.

In depth genetic and genomic analysis of cancers has revealed that

  • each cancer type can be sub-classified into many groups based on the genetic profiles and
  • this information can be used to develop new targeted therapies and treatment options for cancer patients.

This panel will explore the technologies that are facilitating our understanding of cancer, and

  • how this information is being used in novel approaches for clinical development and treatment.
Oncology – Reported by Dr. Aviva Lev-Ari, Founder, Leaders in Pharmaceutical Intelligence

Opening Speaker & Moderator:

Lynda Chin, M.D.
Department Chair, Department of Genomic Medicine
MD Anderson Cancer Center

  • Who pays for PM?
  • potential of big data, analytics, and expert systems, so that not every MD needs to see all cases; profile the disease so similar patients get the same treatment
  • business model: IP, Discovery, sharing, ownership — yet accelerate therapy
  • security of healthcare data
  • segmentation of patient population
  • management of data and tracking innovations
  • platforms to be shared for innovations
  • study to be longitudinal,
  • How do we reconcile course of disease with PM
  • phenotyping the disease vs. a patient waiting for cure/treatment

Panelists:

Roy Herbst, M.D., Ph.D.
Ensign Professor of Medicine and Professor of Pharmacology;
Chief of Medical Oncology, Yale Cancer Center and Smilow Cancer Hospital

Developing new drugs to match patient, disease and drug – finding the right patient for the right clinical trial

  • match patient to drugs
  • partnerships: out of 100 screened patients, 10 had the gene and 5 were able to attend the trial; without the biomarker, all 100 patients would have been enrolled for a drug that was wrong for them (except the 5)
  • patients wants to participate in trials next to home NOT to have to travel — now it is in the protocol
  • Annotated Databases – clinical Trial informed consent – adaptive design of Clinical Trial vs protocol
  • even Academic MD can’t read the reports on Genomics
  • patients are treated in the community — more training to MDs
  • Five companies collaborating – comparison of 6 drugs in the same class
  • if drug exist and you have the patient — you must apply PM

Summary and Perspective:

The current changes in biotechnology have been reviewed, with an open question about the relationship between In Vitro Diagnostics and Biopharmaceuticals, and the potential, particularly in cancer and infectious diseases, to add value in targeted therapy by matching patients to the treatment most likely to give a favorable outcome.

This reviewer does not see the major diagnostics leaders moving into the domain of direct patient care, even though there are signals in that direction.  The Roche example is perhaps the most interesting: Roche had already become the elephant in the room after the introduction of Valium, subsequently bought Boehringer Mannheim Diagnostics to gain entry into the IVD market, and established a huge presence in molecular diagnostics early.  If it did anything to gain a foothold in the treatment realm, it would more likely forge a relationship with Foundation Medicine.  Abbott Laboratories, more than a decade ago, was overextended; it had become the leader in IVD as a result of its specialty tests, but it fell into difficulties with quality control of its products in the high-volume testing market and ceded ground to Olympus and Roche, and in the mid-volume market to Beckman and Siemens.  Of course, DuPont and Kodak, pioneering companies in IVD, both left the market.

The biggest challenge in the long run is the ability to eliminate the many treatments that would be failures for a large number of patients. That has already met the proof of concept.  However, when you look at the size of the subgroups, we are not anywhere near a large-scale endeavor.  In addition, a great deal remains to be worked out that is not related to genomic expression by the "classic" model, but has to take into account the emerging knowledge and greater understanding of the regulation of cell metabolism, not only in cancer but also in chronic inflammatory diseases.

Read Full Post »

The Evolution of Clinical Chemistry in the 20th Century

Curator: Larry H. Bernstein, MD, FCAP

Article ID #164: The Evolution of Clinical Chemistry in the 20th Century. Published on 12/13/2014

WordCloud Image Produced by Adam Tubman

This is a subchapter in the series on developments in diagnostics in the period from 1880 to 1980.

Otto Folin: America’s First Clinical Biochemist

(Extracted from Samuel Meites, AACC History Division; Apr 1996)

Forward by Wendell T. Caraway, PhD.

The first introduction to Folin comes with the Folin-Wu protein-free filtrate, a technique for removing proteins from whole blood or plasma that resulted in water-clear solutions suitable for the determination of glucose, creatinine, uric acid, non-protein nitrogen, and chloride. The major active ingredient used in the precipitation of protein was sodium tungstate prepared "according to Folin". Folin-Wu sugar tubes were used for the determination of glucose. From these and subsequent encounters, we learned that Folin was a pioneer in methods for the chemical analysis of blood.  The determination of uric acid in serum was the Benedict method, in which protein-free filtrate was mixed with solutions of sodium cyanide and arsenophosphotungstic acid and then heated in a water bath to develop a blue color.  A thorough review of the literature revealed that Folin and Denis had published, in 1912, a method for uric acid in which they used sodium carbonate rather than sodium cyanide; this method was modified and largely superseded the "cyanide" method.

Notes from the author.

Modern clinical chemistry began with the application of 20th century quantitative analysis and instrumentation to measure constituents of blood and urine, and relating the values obtained to human health and disease. In the United States, the first impetus propelling this new area of biochemistry was provided by the 1912 papers of Otto Folin.  The only precedent for these stimulating findings was his own earlier and certainly classic papers on the quantitative composition of urine, the laws governing its composition, and studies on the catabolic end products of protein, which led to his ingenious concept of endogenous and exogenous metabolism.  He had already determined blood ammonia in 1902.  This work preceded the entry of Stanley Benedict and Donald Van Slyke into biochemistry.  Once all three of them were active contributors, the future of clinical biochemistry was ensured. Those who consult the early volumes of the Journal of Biological Chemistry will discover the direction that the work of Otto Folin gave to biochemistry.  This modest, unobtrusive man of Harvard was a powerful stimulus and inspiration to others.

Quantitatively, in the years of his scientific productivity, 1897-1934, Otto Folin published 151 (+ 1) journal articles including a chapter in Abderhalden's handbook and one in Hammarsten's Festschrift, but excluding his doctoral dissertation, his published abstracts, and several articles in the proceedings of the Association of Life Insurance Directors of America. He also wrote one monograph on food preservatives and produced five editions of his laboratory manual. He published four articles while studying in Europe (1896-98), 28 while at the McLean Hospital (1900-7), and 119 at Harvard (1908-34). In his banner year of 1912 he published 20 papers. His peak period from 1912-15 included 15 papers, the monograph, and most of the work on the first edition of his laboratory manual.

The quality of Otto Folin's life's work relates to its impact on biochemistry, particularly clinical biochemistry.  Otto's two brilliant collaborators, Willey Denis and Hsien Wu, must be acknowledged.  Without Denis, Otto could not have achieved so rapidly the introduction and popularization of modern blood analysis in the U.S. It would be pointless to conjecture how far Otto would have progressed without this pair.

His work provided the basis of the modern approach to the quantitative analysis of blood and urine through improved methods that reduced the body fluid volume required for analysis. He also applied these methods to metabolic studies on tissues as well as body fluids. Because his interests lay in protein metabolism, his major contributions were directed toward measuring nitrogenous waste or end products. His most dramatic achievement is illustrated by the study of blood nitrogen retention in nephritis and gout.

Folin introduced colorimetry, turbidimetry, and the use of color filters into quantitative clinical biochemistry. He initiated and applied ingeniously conceived reagents and chemical reactions that paved the way for a host of studies by his contemporaries. He introduced the use of phosphomolybdate for detecting phenolic compounds, and phosphotungstate for uric acid.  These, in turn, led to the quantitation of epinephrine, tyrosine, tryptophan, and cystine in protein. The molybdate suggested to Fiske and SubbaRow the determination of phosphate as phosphomolybdate, and the tungsten led to the use of tungstic acid as a protein precipitant.  Phosphomolybdate became the key reagent in the blood sugar method.  Folin resurrected the abandoned Jaffe reaction and established creatine and creatinine analysis. He also laid the groundwork for the discovery of creatine phosphate. Clinical chemistry owes to him the introduction of Nessler's reagent, permutit, Lloyd's reagent, gum ghatti, and preservatives for standards, such as benzoic acid and formaldehyde. Among his distinguished graduate investigators were Bloor, Doisy, Fiske, Shaffer, SubbaRow, Sumner, and Wu.

A Golden Age of Clinical Chemistry: 1948–1960

Louis Rosenfeld
Clinical Chemistry 2000; 46(10): 1705–1714

The 12 years from 1948 to 1960 were notable for introduction of the Vacutainer tube, electrophoresis, radioimmunoassay, and the AutoAnalyzer. Also appearing during this interval were new organizations, publications, programs, and services that established a firm foundation for the professional status of clinical chemists. It was a golden age.

Except for photoelectric colorimeters, the clinical chemistry laboratories in 1948—and in many places even later—were not very different from those of 1925. The basic technology and equipment were essentially unchanged. There was lots of glassware of different kinds—pipettes, burettes, wooden racks of test tubes, funnels, filter paper, cylinders, flasks, and beakers—as well as visual colorimeters, centrifuges, water baths, an exhaust hood for evaporating organic solvents after extractions, a microscope for examining urine sediments, a double-pan analytical beam balance for weighing reagents and standard chemicals, and perhaps a pH meter. The most complicated apparatus was the Van Slyke volumetric gas device—manually operated. The emphasis was on classical chemical and biological techniques that did not require instrumentation.

The unparalleled growth and wide-ranging research that began after World War II and have continued into the new century, often aided by government funding for biomedical research and development as civilian health has become a major national goal, have impacted the operations of the clinical chemistry laboratory. The years from 1948 to 1960 were especially notable for the innovative technology that produced better methods for the investigation of many diseases, in many cases leading to better treatment.

AUTOMATION IN CLINICAL CHEMISTRY: CURRENT SUCCESSES AND TRENDS FOR THE FUTURE
Pierangelo Bonini
Pure & Appl. Chem. 1982; 54(11): 2017–2030

The history of automation in clinical chemistry is the history of how and when technological progress, in the field of analytical methodology as well as in the field of instrumentation, has helped clinical chemists to mechanize their procedures and to control them.

GENERAL STEPS OF A CLINICAL CHEMISTRY PROCEDURE
1 – PRELIMINARY TREATMENT (DEPROTEINIZATION)
2 – SAMPLE + REAGENT(S)
3 – INCUBATION
4 – READING
5 – CALCULATION
Fig. 1 General steps of a clinical chemistry procedure
Especially in the classic clinical chemistry methods, a preliminary treatment of the sample (in most cases a deproteinization) was an essential step. This was a major constraint on the first tentative steps in automation, and we will see how this problem was faced and which new problems arose from avoiding deproteinization. Mixing samples and reagents is the next step; then there is a more or less prolonged incubation at different temperatures and finally the reading, which means detection of modifications of some physical property of the mixture; in most cases the development of a colour reveals the reaction but, as is well known, many other possibilities exist; finally the result is calculated.
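As an illustration of how the reading and calculation steps of Fig. 1 translate into the data handling an automated analyzer repeats for every sample, the sketch below corrects a measured absorbance for the sample blank and converts it to a concentration against standards carried through the same procedure. This is a minimal illustration only; the function names and all numeric values are hypothetical, not taken from the paper or from any instrument.

```python
# Minimal sketch of the "reading" and "calculation" steps of Fig. 1 (hypothetical values).
# Standards are assumed to have passed through the same sample + reagent + incubation steps.

def calibration_factor(standard_absorbances, standard_concentrations):
    """Least-squares slope through the origin: concentration per unit absorbance."""
    num = sum(a * c for a, c in zip(standard_absorbances, standard_concentrations))
    den = sum(a * a for a in standard_absorbances)
    return num / den

def concentration(sample_absorbance, blank_absorbance, factor):
    """Subtract the sample blank, then apply the calibration factor."""
    return (sample_absorbance - blank_absorbance) * factor

factor = calibration_factor([0.12, 0.24, 0.48], [10.0, 20.0, 40.0])  # standards in mg/dL
print(round(concentration(0.30, 0.02, factor), 1))  # corrected A = 0.28 -> ~23.3 mg/dL
```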

Some 25 years ago, Skeggs (1) presented his paper on continuous flow automation, which became the basis of very successful instruments still used all over the world. In continuous flow automation the reactions take place in a hydraulic route common to all samples.

Standards and samples enter the analytical stream segmented by air bubbles and, as they circulate, specific chemical reactions and physical manipulations continuously take place in the stream. Finally, after the air bubbles are vented, the colour intensity, proportional to the solute concentration, is monitored in a detector flow cell.

It is evident that the most important aim of automation is to process as many samples as possible, correctly, in as short a time as possible. This result has been obtained thanks to many technological advances, both in analytical methodology and in instrument technology.

ANALYTICAL METHODOLOGY –
– VERY ACTIVE ENZYMATIC REAGENTS
– SHORTER REACTION TIME
– KINETIC AND FIXED TIME REACTIONS
– NO NEED OF DEPROTEINIZATION
– SURFACTANTS
– AUTOMATIC SAMPLE BLANK CALCULATION
– POLYCHROMATIC ANALYSIS

The introduction of very active enzymatic reagents for the determination of substrates resulted in shorter reaction times and, in many cases, made it possible to avoid deproteinization. Reaction times are also reduced by using kinetic and fixed-time reactions instead of end-point reactions; in this case, the measurement of the sample blank does not need a separate tube with a separate reaction mixture. Deproteinization can also be avoided by using certain surfactants in the reagent mixture. An automatic calculation of sample blanks is also possible by using polychromatic analysis. As can be seen from this figure, reduction of reaction times and elimination of tedious operations like deproteinization are the main results of this analytical progress.

Many relevant improvements in mechanics and optics over the last
twenty years and the tremendous advance in electronics have largely
contributed to the instrumental improvement of clinical chemistry automation.

A recent interesting innovation in the field of centrifugal analyzers consists in the possibility of adding another reagent to an already mixed sample-reagent solution. This innovation allows a preincubation to be made and sample blanks to be read before adding the starter reagent. The possibility of measuring absorbances in cuvettes positioned longitudinally to the light path, realized in a recent model of centrifugal analyzers, is claimed to be advantageous for reading absorbances in non-homogeneous solutions, for avoiding any influence of reagent volume errors on the absorbance, and for obtaining more suitable calculation factors. Interest in fluorimetric assays is growing, especially in connection with drug immunofluorimetric assays, and this technology has recently been applied to centrifugal analyzers as well. A xenon lamp generates a high-energy light, reflected by a holographic mirror grating operated by a stepping motor. The selected wavelength of the exciting light passes through a slit and reaches the rotating cuvettes. Fluorescence is then filtered, read by means of a photomultiplier, and compared to the continuously monitored fluorescence of an appropriate reference compound. In this way, any instability due either to the electro-optical devices or to changes in the physicochemical properties of the solution is corrected.

…more…

Dr. Yellapragada Subbarow – ATP – Energy for Life

One of the observations Dr. SubbaRow made while testing the phosphorus method seemed to provide a clue to the mystery of what happens to blood sugar when insulin is administered. Biochemists began investigating the problem when Frederick Banting showed that injections of insulin, the pancreatic hormone, keep blood sugar under control and keep diabetics alive.

SubbaRow worked for 18 months on the problem, often dieting and starving along with animals used in experiments. But the initial observations were finally shown to be neither significant nor unique and the project had to be scrapped in September 1926.

Out of the ashes of this project however arose another project that provided the key to the ancient mystery of muscular contraction. Living organisms resist degeneration and destruction with the help of muscles, and biochemists had long believed that a hypothetical inogen provided the energy required for the flexing of muscles at work.

Two researchers at Cambridge University in the United Kingdom confirmed that lactic acid is formed when muscles contract, and Otto Meyerhof of Germany showed that this lactic acid is a breakdown product of glycogen, the animal starch stored all over the body, particularly in the liver, kidneys and muscles. When Professor Archibald Hill of University College London demonstrated that conversion of glycogen to lactic acid partly accounts for the heat produced during muscle contraction, everybody assumed that glycogen was the inogen. And the 1922 Nobel Prize for medicine and physiology was divided between Hill and Meyerhof.

But how is glycogen converted to lactic acid? Embden, another German biochemist, advanced the hypothesis that blood sugar and phosphorus combine to form a hexose phosphoric ester which breaks down glycogen in the muscle to lactic acid.

In the midst of the insulin experiments, it occurred to Fiske and SubbaRow that Embden’s hypothesis would be supported if normal persons were found to have more hexose phosphate in their muscle and liver than diabetics. For diabetes is the failure of the body to use sugar. There would be little reaction between sugar and phosphorus in a diabetic body. If Embden was right, hexose (sugar) phosphate level in the muscle and liver of diabetic animals should rise when insulin is injected.

Fiske and SubbaRow rendered some animals diabetic by removing their pancreas in the spring of 1926, but they could not record any rise in the organic phosphorus content of muscles or livers after insulin was administered to the animals. Sugar phosphates were indeed produced in their animals but they were converted so quickly by enzymes to lactic acid that Fiske and SubbaRow could not detect them with methods then available. This was fortunate for science because, in their mistaken belief that Embden was wrong, they began that summer an extensive study of organic phosphorus compounds in the muscle “to repudiate Meyerhof completely”.

The departmental budget was so poor that SubbaRow often waited on the back streets of Harvard Medical School at night to capture cats he needed for the experiments. When he prepared the cat muscles for estimating their phosphorus content, SubbaRow found he could not get a constant reading in the colorimeter. The intensity of the blue colour went on rising for thirty minutes. Was there something in muscle which delayed the colour reaction? If yes, the time for full colour development should increase with the increase in the quantity of the sample. But the delay was not greater when the sample was 10 c.c. instead of 5 c.c. The only other possibility was that muscle had an organic compound which liberated phosphorus as the reaction in the colorimeter proceeded. This indeed was the case, it turned out. It took a whole year.

The mysterious colour delaying substance was a compound of phosphoric acid and creatine and was named Phosphocreatine. It accounted for two-thirds of the phosphorus in the resting muscle. When they put muscle to work by electric stimulation, the Phosphocreatine level fell and the inorganic phosphorus level rose correspondingly. It completely disappeared when they cut off the blood supply and drove the muscle to the point of “fatigue” by continued electric stimulation. And, presto! It reappeared when the fatigued muscle was allowed a period of rest.

Phosphocreatine created a stir among the scientists present when Fiske unveiled it before the American Society of Biological Chemists at Rochester in April 1927. The Journal of the American Medical Association hailed the discovery in an editorial. The Rockefeller Foundation awarded a fellowship that helped SubbaRow to live comfortably for the first time since his arrival in the United States. All of Harvard Medical School was caught up with an enthusiasm that would be a life-time memory for contemporary students. The students were in awe of the medium-sized, slightly stoop-shouldered, “coloured” man regarded as one of the School’s top research workers.

SubbaRow’s carefully conducted series of experiments disproved Meyerhof’s assumptions about the glycogen-lactic acid cycle. His calculations fully accounted for the heat output during muscle contraction. Hill had not been able to fully account for this in terms of Meyerhof’s theory. Clearly the Nobel Committee was in haste in awarding the 1922 physiology prize, but the biochemistry orthodoxy led by Meyerhof and Hill themselves was not too eager to give up their belief in glycogen as the prime source of muscular energy.

Fiske and SubbaRow were fully upheld and the Meyerhof-Hill­ theory finally rejected in 1930 when a Danish physiologist showed that muscles can work to exhaustion without the aid of glycogen or the stimulation of lactic acid.

Fiske and SubbaRow had meanwhile followed a substance that was formed by the combination of phosphorus, liberated from Phosphocreatine, with an unidentified compound in muscle. SubbaRow isolated it and identified it as a chemical in which adenylic acid was linked to two extra molecules of phosphoric acid. By the time he completed the work to the satisfaction of Fiske, it was August 1929 when Harvard Medical School played host to the 13th International Physiological Congress.

ATP was presented to the gathered scientists before the Congress ended. To the dismay of Fiske and SubbaRow, a German science journal, published 16 days before the Congress opened, arrived in Boston a few days later. It carried a letter from Karl Lohmann of Meyerhof’s laboratory, saying he had isolated from muscle a compound of adenylic acid linked to two molecules of phosphoric acid!

While Archibald Hill never adjusted himself to the idea that the basis of his Nobel Prize work had been demolished, Otto Meyerhof and his associates had seen the importance of the Phosphocreatine discovery and plunged into follow-up studies in competition with Fiske and SubbaRow. Two associates of Hill had in fact stumbled upon Phosphocreatine at about the same time as Fiske and SubbaRow, but their loyalty to the Meyerhof-Hill theory acted as blinkers, and their hasty and premature publications reveal their confusion about both the nature and significance of Phosphocreatine.

The discovery of ATP and its significance helped reveal the full story of muscular contraction: Glycogen arriving in muscle gets converted into lactic acid, which is siphoned off to the liver for re-synthesis of glycogen. This cycle yields three molecules of ATP and is important in delivering usable food energy to the muscle. Glycolysis, or the breakup of glycogen, is relatively slow in getting started, and in any case muscle can retain ATP only in small quantities. In the interval between the beginning of muscle activity and the arrival of fresh ATP from glycolysis, Phosphocreatine maintains the ATP supply by re-synthesizing it as fast as its energy terminals are used up by muscle for its activity.

Muscular contraction made possible by ATP helps us not only to move our limbs and lift weights but keeps us alive. The heart is after all a muscle pouch and millions of muscle cells embedded in the walls of arteries keep the life-sustaining blood pumped by the heart coursing through body organs. ATP even helps get new life started by powering the sperm’s motion toward the egg as well as the spectacular transformation of the fertilized egg in the womb.

Archibald Hill for long denied any role for ATP in muscle contraction, saying ATP has not been shown to break down in the intact muscle. This objection was also met in 1962 when University of Pennsylvania scientists showed that muscles can contract and relax normally even when glycogen and Phosphocreatine are kept under check with an inhibitor.

Michael Somogyi

Michael Somogyi was born in Reinsdorf, Austria-Hungary, in 1883. He received a degree in chemical engineering from the University of Budapest, and after spending some time there as a graduate assistant in biochemistry, he immigrated to the United States. From 1906 to 1908 he was an assistant in biochemistry at Cornell University.

Returning to his native land in 1908, he became head of the Municipal Laboratory in Budapest, and in 1914 he was granted his Ph.D. After World War I, the politically unstable situation in his homeland led him to return to the United States where he took a job as an instructor in biochemistry at Washington University in St. Louis, Missouri. While there he assisted Philip A. Shaffer and Edward Adelbert Doisy, Sr., a future Nobel Prize recipient, in developing a new method for the preparation of insulin in sufficiently large amounts and of sufficient purity to make it a viable treatment for diabetes. This early work with insulin helped foster Somogyi’s lifelong interest in the treatment and cure of diabetes. He was the first biochemist appointed to the staff of the newly opened Jewish Hospital, and he remained there as the director of their clinical laboratory until his retirement in 1957.

Arterial Blood Gases.  Van Slyke.

The test is used to determine the pH of the blood, the partial pressure of carbon dioxide and oxygen, and the bicarbonate level. Many blood gas analyzers will also report concentrations of lactate, hemoglobin, several electrolytes, oxyhemoglobin, carboxyhemoglobin and methemoglobin. ABG testing is mainly used in pulmonology and critical care medicine to assess gas exchange across the alveolar-capillary membrane.
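The measured quantities are tied together by the Henderson-Hasselbalch relationship for the bicarbonate buffer system, given here in its standard clinical form (with PCO2 in mmHg and 0.03 mmol/L per mmHg as the CO2 solubility coefficient); it is background for the Van Slyke story that follows rather than part of the excerpt:

$$ \mathrm{pH} = 6.1 + \log_{10}\frac{[\mathrm{HCO_3^-}]}{0.03 \times P_{\mathrm{CO_2}}} $$

For a normal arterial sample with bicarbonate of 24 mmol/L and PCO2 of 40 mmHg, this gives pH = 6.1 + log10(24/1.2) = 6.1 + log10(20) ≈ 7.40.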

DONALD DEXTER VAN SLYKE died on May 4, 1971, after a long and productive career that spanned three generations of biochemists and physicians. He left behind not only a bibliography of 317 journal publications and 5 books, but also more than 100 persons who had worked with him and distinguished themselves in biochemistry and academic medicine. His doctoral thesis, with Gomberg at the University of Michigan, was published in the Journal of the American Chemical Society in 1907.  Van Slyke received an invitation from Dr. Simon Flexner, Director of the Rockefeller Institute, to come to New York for an interview. In 1911 he spent a year in Berlin with Emil Fischer, who was then the leading chemist of the scientific world. He was particularly impressed by Fischer’s practice of performing all laboratory operations quantitatively, a procedure Van followed throughout his life. Prior to going to Berlin, he published the classic nitrous acid method for the quantitative determination of primary aliphatic amino groups, the first of the many gasometric procedures devised by Van Slyke, which made possible the determination of amino acids. It was the primary method used to study the amino acid composition of proteins for years before chromatography. Thus, his first seven postdoctoral years were centered around the development of better methodology for protein composition and amino acid metabolism.

With his colleague G. M. Meyer, he first demonstrated that amino acids, liberated during digestion in the intestine, are absorbed into the bloodstream, that they are removed by the tissues, and that the liver alone possesses the ability to convert the amino acid nitrogen into urea.  From the study of the kinetics of urease action, Van Slyke and Cullen developed equations that depended upon two reactions: (1) the combination of enzyme and substrate in stoichiometric proportions and (2) the breakdown of that combination into the end products. Published in 1914, this formulation, involving two velocity constants, was similar to that arrived at contemporaneously by Michaelis and Menten in Germany in 1913.
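For reference, the familiar modern statement of such a two-step formulation is the Michaelis-Menten form (shown here in today's notation rather than Van Slyke and Cullen's original equations):

$$ v = \frac{V_{\max}\,[S]}{K_m + [S]} $$

where v is the initial velocity, Vmax the limiting velocity at saturating substrate, and Km, the substrate concentration at half-maximal velocity, a ratio of the rate constants for formation and breakdown of the enzyme-substrate combination.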

He transferred to the Rockefeller Institute’s Hospital in 1914, under Dr. Rufus Cole, where “Men who were studying disease clinically had the right to go as deeply into its fundamental nature as their training allowed, and in the Rockefeller Institute’s Hospital every man who was caring for patients should also be engaged in more fundamental study”.  The study of diabetes was already under way by Dr. F. M. Allen, but patients inevitably died of acidosis.  Van Slyke reasoned that if incomplete oxidation of fatty acids in the body led to the accumulation of acetoacetic and beta-hydroxybutyric acids in the blood, then a reaction would result between these acids and the bicarbonate ions that would lead to a lower-than-normal bicarbonate concentration in blood plasma. The problem thus became one of devising an analytical method that would permit the quantitative determination of bicarbonate concentration in small amounts of blood plasma.  He ingeniously devised a volumetric glass apparatus that was easy to use and required less than ten minutes for the determination of the total carbon dioxide in one cubic centimeter of plasma.  It also was soon found to be an excellent apparatus by which to determine blood oxygen concentrations, thus leading to measurements of the percentage saturation of blood hemoglobin with oxygen. This found extensive application in the study of respiratory diseases, such as pneumonia and tuberculosis. It also led to the quantitative study of cyanosis and a monograph on the subject by C. Lundsgaard and Van Slyke.

In all, Van Slyke and his colleagues published twenty-one papers under the general title “Studies of Acidosis,” beginning in 1917 and ending in 1934. They included not only chemical manifestations of acidosis, but Van Slyke, in No. 17 of the series (1921), elaborated and expanded the subject to describe in chemical terms the normal and abnormal variations in the acid-base balance of the blood. This was a landmark in understanding acid-base balance pathology.  Within seven years after Van moved to the Hospital, he had published a total of fifty-three papers, thirty-three of them coauthored with clinical colleagues.

In 1920, Van Slyke and his colleagues undertook a comprehensive investigation of gas and electrolyte equilibria in blood. McLean and Henderson at Harvard had made preliminary studies of blood as a physico-chemical system, but realized that Van Slyke and his colleagues at the Rockefeller Hospital had superior techniques and the facilities necessary for such an undertaking. A collaboration thereupon began between the two laboratories, which resulted in rapid progress toward an exact physico-chemical description of the role of hemoglobin in the transport of oxygen and carbon dioxide, of the distribution of diffusible ions and water between erythrocytes and plasma, and of factors such as degree of oxygenation of hemoglobin and hydrogen ion concentration that modified these distributions. In this Van Slyke revised his volumetric gas analysis apparatus into a manometric method.  The manometric apparatus proved to give results that were from five to ten times more accurate.

A series of papers on the CO2 titration curves of oxy- and deoxyhemoglobin, of oxygenated and reduced whole blood, and of blood subjected to different degrees of oxygenation, and on the distribution of diffusible ions in blood resulted.  These developed equations that predicted the change in distribution of water and diffusible ions between blood plasma and blood cells when there was a change in pH of the oxygenated blood. A significant contribution of Van Slyke and his colleagues was the application of the Gibbs-Donnan Law to the blood, regarded as a two-phase system in which one phase (the erythrocytes) contained a high concentration of nondiffusible negative ions, i.e., those associated with hemoglobin, and cations that were not freely exchangeable between cells and plasma. By changing the pH through varying the CO2 tension, the concentration of negative hemoglobin charges changed by a predictable amount. This, in turn, changed the distribution of diffusible anions such as Cl- and HCO3- in order to restore the Gibbs-Donnan equilibrium. Redistribution of water occurred to restore osmotic equilibrium. The experimental results confirmed the predictions of the equations.
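The relation being applied is, in its standard form (a textbook statement, not a quotation from these papers), the Gibbs-Donnan condition that the concentration ratios of the diffusible anions across the red cell membrane be equal:

$$ r = \frac{[\mathrm{Cl^-}]_{\mathrm{cells}}}{[\mathrm{Cl^-}]_{\mathrm{plasma}}} = \frac{[\mathrm{HCO_3^-}]_{\mathrm{cells}}}{[\mathrm{HCO_3^-}]_{\mathrm{plasma}}} $$

so that when a change in CO2 tension alters the charge carried by hemoglobin, chloride and bicarbonate redistribute (the chloride shift) and water follows to restore osmotic equilibrium, which is exactly the behavior the equations predicted.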

As a spin-off from the physico-chemical study of the blood, Van undertook, in 1922, to put the concept of buffer value of weak electrolytes on a mathematically exact basis.

This proved to be useful in determining buffer values of mixed, polyvalent, and amphoteric electrolytes, and put the understanding of buffering on a quantitative basis. A monograph in Medicine entitled “Observation on the Courses of Different Types of Bright’s Disease, and on the Resultant Changes in Renal Anatomy,” was a landmark that related the changes occurring at different stages of renal deterioration to the quantitative changes taking place in kidney function. During this period, Van Slyke and R. M. Archibald identified glutamine as the source of urinary ammonia. During World War II, Van and his colleagues documented the effect of shock on renal function and, with R. A. Phillips, developed a simple method, based on specific gravity, suitable for use in the field.
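The buffer value introduced in that 1922 paper is, in its now-standard textbook form (reproduced from general sources rather than from the paper itself), the increment of strong base dB required to raise the pH by dpH; for a single weak acid of total concentration C and dissociation constant Ka:

$$ \beta = \frac{dB}{d\mathrm{pH}} = 2.303\left(\frac{C\,K_a\,[\mathrm{H^+}]}{\left(K_a + [\mathrm{H^+}]\right)^2} + [\mathrm{H^+}] + [\mathrm{OH^-}]\right) $$

The buffer term is maximal when pH = pKa, and because the expression is additive it made it straightforward to compare and sum the contributions of mixed, polyvalent, and amphoteric electrolytes.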

Over 100 of Van’s 300 publications were devoted to methodology. The importance of Van Slyke’s contribution to clinical chemical methodology cannot be overestimated. These included the blood organic constituents (carbohydrates, fats, proteins, amino acids, urea, nonprotein nitrogen, and phospholipids) and the inorganic constituents (total cations, calcium, chlorides, phosphate, and the gases carbon dioxide, carbon monoxide, and nitrogen). It was said that a Van Slyke manometric apparatus was almost all the special equipment needed to perform most of the clinical chemical analyses customarily performed prior to the introduction of photocolorimeters and spectrophotometers for such determinations.

The progress made in the medical sciences in genetics, immunology, endocrinology, and antibiotics during the second half of the twentieth century obscures at times the progress that was made in basic and necessary biochemical knowledge during the first half. Methods capable of giving accurate quantitative chemical information on biological material had to be painstakingly devised; basic questions on chemical behavior and metabolism had to be answered; and, finally, those factors that adversely modified the normal chemical reactions in the body so that abnormal conditions arise that we characterize as disease states had to be identified.

Viewed in retrospect, he combined in one scientific lifetime (1) basic contributions to the chemistry of body constituents and their chemical behavior in the body, (2) a chemical understanding of physiological functions of certain organ systems (notably the respiratory and renal), and (3) how such information could be exploited in the understanding and treatment of disease. That outstanding additions to knowledge in all three categories were possible was in large measure due to his sound and broadly based chemical preparation, his ingenuity in devising means of accurate measurements of chemical constituents, and the opportunity given him at the Hospital of the Rockefeller Institute to study disease in company with physicians.

In addition, he found time to work collaboratively with Dr. John P. Peters of Yale on the classic, two-volume Quantitative Clinical Chemistry. In 1922, John P. Peters, who had just gone to Yale from Van Slyke’s laboratory as an Associate Professor of Medicine, was asked by a publisher to write a modest handbook for clinicians describing useful chemical methods and discussing their application to clinical problems. It was originally to be called “Quantitative Chemistry in Clinical Medicine.” He soon found that it was going to be a bigger job than he could handle alone and asked Van Slyke to join him in writing it. Van agreed, and the two men proceeded to draw up an outline and divide up the writing of the first drafts of the chapters between them. They also agreed to exchange each chapter until it met the satisfaction of both. At the time it was published in 1931, it contained practically all that could be stated with confidence about those aspects of disease that could be and had been studied by chemical means. It was widely accepted throughout the medical world as the “Bible” of quantitative clinical chemistry, and to this day some of the chapters have not become outdated.

Paul Flory

Paul J. Flory was born in Sterling, Illinois, in 1910. He attended Manchester College, an institution for which he retained an abiding affection. He did his graduate work at Ohio State University, earning his Ph.D. in 1934. He was awarded the Nobel Prize in Chemistry in 1974, largely for his work in the area of the physical chemistry of macromolecules.

Flory worked as a newly minted Ph.D. for the DuPont Company in the Central Research Department with Wallace H. Carothers. This early experience with practical research instilled in Flory a lifelong appreciation for the value of industrial application. His work with the Air Force Office of Scientific Research and his later support for the Industrial Affiliates program at Stanford University demonstrated his belief in the need for theory and practice to work hand-in-hand.

Following the death of Carothers in 1937, Flory joined the University of Cincinnati’s Basic Science Research Laboratory. After the war Flory taught at Cornell University from 1948 until 1957, when he became executive director of the Mellon Institute. In 1961 he joined the chemistry faculty at Stanford, where he would remain until his retirement.

Among the high points of Flory’s years at Stanford were his receipt of the National Medal of Science (1974), the Priestley Award (1974), the J. Willard Gibbs Medal (1973), the Peter Debye Award in Physical Chemistry (1969), and the Charles Goodyear Medal (1968). He also traveled extensively, including working tours to the U.S.S.R. and the People’s Republic of China.

Abraham Savitzky

Abraham Savitzky was born on May 29, 1919, in New York City. He received his bachelor’s degree from the New York State College for Teachers in 1941. After serving in the U.S. Air Force during World War II, he obtained a master’s degree in 1947 and a Ph.D. in 1949 in physical chemistry from Columbia University.

In 1950, after working at Columbia for a year, he began a long career with the Perkin-Elmer Corporation. Savitzky started with Perkin-Elmer as a staff scientist who was chiefly concerned with the design and development of infrared instruments. By 1956 he was named Perkin-Elmer’s new product coordinator for the Instrument Division, and as the years passed, he continued to gain more and more recognition for his work in the company. Most of his work with Perkin-Elmer focused on computer-aided analytical chemistry, data reduction, infrared spectroscopy, time-sharing systems, and computer plotting. He retired from Perkin-Elmer in 1985.

Abraham Savitzky holds seven U.S. patents pertaining to computerization and chemical apparatus. During his long career he presented numerous papers and wrote several manuscripts, including “Smoothing and Differentiation of Data by Simplified Least Squares Procedures.” This paper, which is the collaborative effort of Savitzky and Marcel J. E. Golay, was published in volume 36 of Analytical Chemistry, July 1964. It is one of the most famous, respected, and heavily cited articles in its field. In recognition of his many significant accomplishments in the field of analytical chemistry and computer science, Savitzky received the Society of Applied Spectroscopy Award in 1983 and the Williams-Wright Award from the Coblenz Society in 1986.

Samuel Natelson

Samuel Natelson attended City College of New York and received his B.S. in chemistry in 1928. As a graduate student, Natelson attended New York University, receiving a Sc.M. in 1930 and his Ph.D. in 1931. After receiving his Ph.D., he began his career teaching at Girls Commercial High School. While maintaining his teaching position, Natelson joined the Jewish Hospital of Brooklyn in 1933. Working as a clinical chemist for Jewish Hospital, Natelson first conceived of the idea of a society by and for clinical chemists. Natelson worked to organize the nine charter members of the American Association of Clinical Chemists, which formally began in 1948. A pioneer in the field of clinical chemistry, Samuel Natelson has become a role model for the clinical chemist. Natelson developed the usage of microtechniques in clinical chemistry. During this period, he served as a consultant to the National Aeronautics and Space Administration in the 1960s, helping analyze the effect of weightless atmospheres on astronauts’ blood. Natelson spent his later career as chair of the biochemistry department at Michael Reese Hospital and as a lecturer at the Illinois Institute of Technology.

Arnold Beckman

Arnold Orville Beckman (April 10, 1900 – May 18, 2004) was an American chemist, inventor, investor, and philanthropist. While a professor at Caltech, he founded Beckman Instruments based on his 1934 invention of the pH meter, a device for measuring acidity, later considered to have “revolutionized the study of chemistry and biology”.[1] He also developed the DU spectrophotometer, “probably the most important instrument ever developed towards the advancement of bioscience”.[2] Beckman funded the first transistor company, thus giving rise to Silicon Valley.[3]

He earned his bachelor’s degree in chemical engineering in 1922 and his master’s degree in physical chemistry in 1923. For his master’s degree he studied the thermodynamics of aqueous ammonia solutions, a subject introduced to him by T. A. White. Beckman decided to go to Caltech for his doctorate. He stayed there for a year, before returning to New York to be near his fiancée, Mabel. He found a job with Western Electric’s engineering department, the precursor to the Bell Telephone Laboratories. Working with Walter A. Shewhart, Beckman developed quality control programs for the manufacture of vacuum tubes and learned about circuit design. It was here that Beckman discovered his interest in electronics.

In 1926 the couple moved back to California and Beckman resumed his studies at Caltech. He became interested in ultraviolet photolysis and worked with his doctoral advisor, Roscoe G. Dickinson, on an instrument to find the energy of ultraviolet light. It worked by shining the ultraviolet light onto a thermocouple, converting the incident heat into electricity, which drove a galvanometer. After receiving a Ph.D. in photochemistry in 1928 for this application of quantum theory to chemical reactions, Beckman was asked to stay on at Caltech as an instructor and then as a professor. Linus Pauling, another of Roscoe G. Dickinson’s graduate students, was also asked to stay on at Caltech.

During his time at Caltech, Beckman was active in teaching at both the introductory and advanced graduate levels. Beckman shared his expertise in glass-blowing by teaching classes in the machine shop. He also taught classes in the design and use of research instruments. Beckman dealt first-hand with the chemists’ need for good instrumentation as manager of the chemistry department’s instrument shop. Beckman’s interest in electronics made him very popular within the chemistry department at Caltech, as he was very skilled in building measuring instruments.

Over the time that he was at Caltech, the focus of the department increasingly moved towards pure science and away from chemical engineering and applied chemistry. Arthur Amos Noyes, head of the chemistry division, encouraged both Beckman and chemical engineer William Lacey to be in contact with real-world engineers and chemists, and Robert Andrews Millikan, Caltech’s president, referred technical questions from government and business to Beckman.

Sunkist Growers was having problems with its manufacturing process. Lemons that were not saleable as produce were made into pectin or citric acid, with sulfur dioxide used as a preservative. Sunkist needed to know the acidity of the product at any given time. Chemist Glen Joseph at Sunkist was attempting to measure the hydrogen-ion concentration in lemon juice electrochemically, but sulfur dioxide damaged hydrogen electrodes, and non-reactive glass electrodes produced weak signals and were fragile.

Joseph approached Beckman, who proposed that instead of trying to increase the sensitivity of his measurements, he amplify his results. Beckman, familiar with glassblowing, electricity, and chemistry, suggested a design for a vacuum-tube amplifier and ended up building a working apparatus for Joseph. The glass electrode used to measure pH was placed in a grid circuit in the vacuum tube, producing an amplified signal which could then be read by an electronic meter. The prototype was so useful that Joseph requested a second unit.
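The measurement problem Beckman solved is visible in the electrode response itself. A glass electrode develops a potential that follows the Nernst relation, only about 59 mV per pH unit at room temperature, across the very high internal resistance of the glass membrane, which is why a high-input-impedance vacuum-tube amplifier, rather than a more sensitive galvanometer, was the practical answer. In round numbers (standard electrochemistry, not taken from Beckman's patent):

$$ E = E^{0} - \frac{2.303\,RT}{F}\,\mathrm{pH} \;\approx\; E^{0} - 59.2\ \mathrm{mV} \times \mathrm{pH} \quad \text{at } 25\,^{\circ}\mathrm{C} $$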

Beckman saw an opportunity, and rethinking the project, decided to create a complete chemical instrument which could be easily transported and used by nonspecialists. By October 1934, he had registered patent application U.S. Patent No. 2,058,761 for his “acidimeter”, later renamed the pH meter. Although it was priced expensively at $195, roughly the starting monthly wage for a chemistry professor at that time, it was significantly cheaper than the estimated cost of building a comparable instrument from individual components, about $500. The original pH meter weighed in at nearly 7 kg, but was a substantial improvement over a benchful of delicate equipment. The earliest meter had a design glitch, in that the pH readings changed with the depth of immersion of the electrodes, but Beckman fixed the problem by sealing the glass bulb of the electrode. The pH meter is an important device for measuring the pH of a solution, and by 11 May 1939, sales were successful enough that Beckman left Caltech to become the full-time president of National Technical Laboratories. By 1940, Beckman was able to take out a loan to build his own 12,000 square foot factory in South Pasadena.

In 1940, the equipment needed to analyze emission spectra in the visible spectrum could cost a laboratory as much as $3,000, a huge amount at that time. There was also growing interest in examining ultraviolet spectra beyond that range. In the same way that he had created a single easy-to-use instrument for measuring pH, Beckman made it a goal to create an easy-to-use instrument for spectrophotometry. Beckman’s research team, led by Howard Cary, developed several models.

The new spectrophotometers used a prism to spread light into its absorption spectra and a phototube to “read” the spectra and generate electrical signals, creating a standardized “fingerprint” for the material tested. With Beckman’s model D, later known as the DU spectrophotometer, National Technical Laboratories successfully created the first easy-to-use single instrument containing both the optical and electronic components needed for ultraviolet-absorption spectrophotometry. The user could insert a sample, dial up the desired frequency, and read the amount of absorption of that frequency from a simple meter. It produced accurate absorption spectra in both the ultraviolet and the visible regions of the spectrum with relative ease and repeatable accuracy. The National Bureau of Standards ran tests to certify that the DU’s results were accurate and repeatable and recommended its use.
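The quantity the DU reports is the absorbance of the Beer-Lambert law, stated here in its general form (the instrument-specific calibration details are not part of this account):

$$ A = \log_{10}\frac{I_{0}}{I} = \varepsilon\, l\, c $$

so that for a substance of known molar absorptivity ε, measured in a cuvette of path length l, the reading converts directly into a concentration c.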

Beckman’s DU spectrophotometer has been referred to as the “Model T” of scientific instruments: “This device forever simplified and streamlined chemical analysis, by allowing researchers to perform a 99.9% accurate biological assessment of a substance within minutes, as opposed to the weeks required previously for results of only 25% accuracy.” Nobel laureate Bruce Merrifield is quoted as calling the DU spectrophotometer “probably the most important instrument ever developed towards the advancement of bioscience.”

Development of the spectrophotometer also had direct relevance to the war effort. The role of vitamins in health was being studied, and scientists wanted to identify Vitamin A-rich foods to keep soldiers healthy. Previous methods involved feeding rats for several weeks, then performing a biopsy to estimate Vitamin A levels. The DU spectrophotometer yielded better results in a matter of minutes. The DU spectrophotometer was also an important tool for scientists studying and producing the new wonder drug penicillin. By the end of the war, American pharmaceutical companies were producing 650 billion units of penicillin each month. Much of the work done in this area during World War II was kept secret until after the war.

Beckman also developed the infrared spectrophotometer, first the IR-1; then, in 1953, he redesigned the instrument. The result was the IR-4, which could be operated using either a single or a double beam of infrared light. This allowed a user to take both the reference measurement and the sample measurement at the same time.

Beckman Coulter Inc., is an American company that makes biomedical laboratory instruments. Founded by Caltech professor Arnold O. Beckman in 1935 as National Technical Laboratories to commercialize a pH meter that he had invented, the company eventually grew to employ over 10,000 people, with $2.4 billion in annual sales by 2004. Its current headquarters are in Brea, California.

In the 1940s, Beckman changed the name to Arnold O. Beckman, Inc. to sell oxygen analyzers, the Helipot precision potentiometer, and spectrophotometers. In the 1950s, the company name changed to Beckman Instruments, Inc.

Beckman was contacted by Paul Rosenberg. Rosenberg worked at MIT’s Radiation Laboratory. The lab was part of a secret network of research institutions in both the United States and Britain that were working to develop radar, “radio detecting and ranging”. The project was interested in Beckman because of the high quality of the tuning knobs or “potentiometers” which were used on his pH meters. Beckman had trademarked the design of the pH meter knobs, under the name “helipot” for “helical potentiometer”. Rosenberg had found that the helipot was more precise, by a factor of ten, than other knobs. He redesigned the knob to have a continuous groove, so that the contact could not be jarred loose.

Beckman instruments were also used by the Manhattan Project to measure radiation in gas-filled, electrically charged ionization chambers in nuclear reactors.
The pH meter was adapted to do the job with a relatively minor adjustment – substituting an input-load resistor for the glass electrode. As a result, Beckman Instruments developed a new product, the micro-ammeter.

After the war, Beckman developed oxygen analyzers that were used to monitor conditions in incubators for premature babies. Doctors at Johns Hopkins University used them to determine recommendations for healthy oxygen levels for incubators.

Beckman himself was approached by California governor Goodwin Knight to head a Special Committee on Air Pollution, to propose ways to combat smog. At the end of 1953, the committee made its findings public. The “Beckman Bible” advised key steps to be taken immediately:

In 1955, Beckman established the seminal Shockley Semiconductor Laboratory as a division of Beckman Instruments to begin commercializing the semiconductor transistor technology invented by Caltech alumnus William Shockley. The Shockley Laboratory was established in nearby Mountain View, California, and thus, “Silicon Valley” was born.

Beckman also saw that computers and automation offered a myriad of opportunities for integration into instruments, and the development of new instruments.

The Arnold and Mabel Beckman Foundation was incorporated in September 1977.  At the time of Beckman’s death, the Foundation had given more than 400 million dollars to a variety of charities and organizations. In 1990, it was considered one of the top ten foundations in California, based on annual gifts. Donations chiefly went to scientists and scientific causes as well as Beckman’s alma maters. He is quoted as saying, “I accumulated my wealth by selling instruments to scientists,… so I thought it would be appropriate to make contributions to science, and that’s been my number one guideline for charity.”

Wallace H. Coulter

Engineer, Inventor, Entrepreneur, Visionary

Wallace Henry Coulter was an engineer, inventor, entrepreneur and visionary. He was co-founder and Chairman of Coulter® Corporation, a worldwide medical diagnostics company headquartered in Miami, Florida. The two great passions of his life were applying engineering principles to scientific research, and embracing the diversity of world cultures. The first passion led him to invent the Coulter Principle™, the reference method for counting and sizing microscopic particles suspended in a fluid.
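The principle itself is electrical: a particle passing through a small aperture filled with electrolyte displaces its own volume of conducting fluid, and the resulting resistance pulse is, to first order, proportional to the particle volume. A commonly quoted small-particle approximation (an idealized form, not Coulter's own patent language) is

$$ \Delta R \;\propto\; \frac{\rho_{f}\,V_{p}}{A^{2}} $$

where ρf is the resistivity of the electrolyte, Vp the particle volume, and A the cross-sectional area of the aperture; counting the pulses counts the cells, and the pulse-height distribution sizes them.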

This invention served as the cornerstone for automating the labor intensive process of counting and testing blood. With his vision and tenacity, Wallace Coulter, was a founding father in the field of laboratory hematology, the science and study of blood. His global viewpoint and passion for world cultures inspired him to establish over twenty international subsidiaries. He recognized that it was imperative to employ locally based staff to service his customers before this became standard business strategy.

Wallace’s first attempts to patent his invention were turned away by more than one attorney who believed “you cannot patent a hole”. Persistent as always, Wallace finally applied for his first patent in 1949 and it was issued on October 20, 1953. That same year, two prototypes were sent to the National Institutes of Health for evaluation. Shortly after, the NIH published its findings in two key papers, citing improved accuracy and convenience of the Coulter method of counting blood cells. That same year, Wallace publicly disclosed his invention in his one and only technical paper at the National Electronics Conference, “High Speed Automatic Blood Cell Counter and Cell Size Analyzer”.

Leonard Skeggs was the inventor of the first continuous flow analyser, way back in 1957. This groundbreaking event completely changed the way that chemistry was carried out. Many of the laborious tests that dominated lab work could be automated, increasing productivity and freeing personnel for other, more challenging tasks.

Continuous flow analysis and its offshoots and descendants are an integral part of modern chemistry. It might therefore be some consolation to Leonard Skeggs to know that not only was he the beneficiary of an appellation with a long and fascinating history, he also created a revolution in wet chemistry that is still with us today.

Technicon

The AutoAnalyzer is an automated analyzer using a flow technique called continuous flow analysis (CFA), first made by the Technicon Corporation. The instrument was invented in 1957 by Leonard Skeggs, PhD, and commercialized by Jack Whitehead’s Technicon Corporation. The first applications were for clinical analysis, but methods for industrial analysis soon followed. The design is based on separating a continuously flowing stream with air bubbles.

In continuous flow analysis (CFA) a continuous stream of material is divided by air bubbles into discrete segments in which chemical reactions occur. The continuous stream of liquid samples and reagents is combined and transported in tubing and mixing coils. The tubing passes the samples from one apparatus to the next, with each apparatus performing a different function, such as distillation, dialysis, extraction, ion exchange, heating, incubation, and subsequent recording of a signal. An essential principle of the system is the introduction of air bubbles. The air bubbles segment each sample into discrete packets and act as a barrier between packets to prevent cross-contamination as they travel down the length of the tubing. The air bubbles also assist mixing by creating turbulent flow (bolus flow), and provide operators with a quick and easy check of the flow characteristics of the liquid. Samples and standards are treated in an exactly identical manner as they travel the length of the tubing, eliminating the necessity of a steady-state signal. However, since the presence of bubbles creates an almost square-wave profile, bringing the system to steady state does not significantly decrease throughput (third-generation CFA analyzers average 90 or more samples per hour), and it is desirable in that steady-state signals (chemical equilibrium) are more accurate and reproducible.
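
The throughput figure quoted above follows from simple timing arithmetic. The sketch below is a back-of-the-envelope illustration with assumed sample and wash times, not Technicon’s actual cycle parameters.

```python
# Back-of-the-envelope throughput for a segmented continuous-flow channel.
# The timings are illustrative assumptions, not manufacturer specifications.

def cfa_throughput(sample_s: float, wash_s: float) -> float:
    """Samples per hour for a given sample aspiration time and inter-sample wash."""
    cycle_s = sample_s + wash_s
    return 3600.0 / cycle_s

# A 30 s sample with a 10 s wash gives the ~90 samples/hour quoted
# for third-generation CFA analyzers.
print(cfa_throughput(sample_s=30, wash_s=10))   # -> 90.0
```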

A continuous flow analyzer (CFA) consists of different modules, including a sampler, pump, mixing coils, optional sample treatments (dialysis, distillation, heating, etc.), a detector, and a data generator. Most continuous flow analyzers depend on color reactions read with a flow-through photometer; however, methods have also been developed that use ion-selective electrodes (ISE), flame photometry, ICAP, fluorometry, and so forth.

Flow injection analysis (FIA) was introduced in 1975 by Ruzicka and Hansen.
Jaromir (Jarda) Ruzicka is a Professor of Chemistry (Emeritus at the University of Washington and Affiliate at the University of Hawaii), and a member of the Danish Academy of Technical Sciences. Born in Prague in 1934, he graduated from the Department of Analytical Chemistry, Faculty of Sciences, Charles University. In 1968, when the Soviets occupied Czechoslovakia, he emigrated to Denmark. There he joined the Technical University of Denmark, where, ten years later, he received a newly created Chair in Analytical Chemistry. When Jarda met Elo Hansen, they invented flow injection.

The first generation of FIA technology, termed flow injection (FI), was inspired by the AutoAnalyzer technique invented by Skeggs in the early 1950s. While Skeggs’ AutoAnalyzer uses air segmentation to separate a flowing stream into numerous discrete segments, establishing a long train of individual samples moving through a flow channel, FIA systems separate each sample from the next with a carrier reagent. And while the AutoAnalyzer mixes sample homogeneously with reagents, in all FIA techniques sample and reagents are merged to form a concentration gradient that yields the analysis results.
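
Ruzicka and Hansen characterized that controlled concentration gradient with a dispersion coefficient, D = C0/Cmax, the ratio of the injected sample concentration to the peak concentration seen at the detector. The sketch below computes D from a simulated detector trace; the peak shape and all numbers are assumptions for illustration only, not data from any real FIA run.

```python
# A minimal illustration of the FIA dispersion coefficient D = C0 / Cmax.
# The Gaussian-like peak is an assumption; real FIA peaks are skewed by the
# parabolic flow profile in the carrier stream.
import numpy as np

c0 = 1.00                                   # injected concentration (arbitrary units)
t = np.linspace(0, 60, 601)                 # seconds after injection
trace = c0 * 0.35 * np.exp(-((t - 20) / 6.0) ** 2)   # simulated detector response

c_max = trace.max()
dispersion = c0 / c_max
print(f"Cmax = {c_max:.2f}, D = {dispersion:.1f}")    # D of about 2.9
```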

Arthur Karmen.

Dr. Karmen was born in New York City in 1930. He graduated from the Bronx High School of Science in 1946 and earned an A.B. and M.D. in 1950 and 1954, respectively, from New York University. In 1952, while a medical student working on a summer project at Memorial Sloan-Kettering, he used paper chromatography of amino acids to demonstrate the presence of glutamic-oxaloacetic and glutamic-pyruvic transaminases (aspartate and alanine aminotransferases) in serum and blood. In 1954, he devised the spectrophotometric method for measuring aspartate aminotransferase in serum, which, with minor modifications, is still used for diagnostic testing today. When developing this assay, he studied the reaction of NADH with serum and demonstrated the presence of lactate and malate dehydrogenases, both of which were also later used in diagnosis. Using the spectrophotometric method, he found that aspartate aminotransferase increased in the period immediately after an acute myocardial infarction and did the pilot studies that showed its diagnostic utility in heart and liver diseases. For a time the test was as important diagnostically as the EKG. In cardiology it was later replaced by the MB isoenzyme of creatine kinase, driven by Burton Sobel’s work on infarct size, and later still by the troponins.
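
The arithmetic behind the Karmen assay is a direct application of the Beer-Lambert law: NADH absorbs at 340 nm, so the rate at which A340 falls as the coupled reaction consumes NADH can be converted into enzyme activity. The sketch below shows that conversion; the cuvette and sample volumes are illustrative assumptions, not Karmen’s original protocol.

```python
# A hedged sketch of the NADH-coupled kinetic calculation: the fall in A340 per
# minute is converted to U/L (micromoles of substrate per minute per litre of serum).
# Reaction volumes below are assumed for illustration.

NADH_MILLIMOLAR_ABSORPTIVITY = 6.22   # L/(mmol*cm) at 340 nm

def ast_activity_u_per_l(delta_a340_per_min: float,
                         total_volume_ml: float = 3.0,
                         sample_volume_ml: float = 0.2,
                         path_length_cm: float = 1.0) -> float:
    """Convert the rate of A340 decrease into enzyme activity in U/L."""
    return (delta_a340_per_min * total_volume_ml * 1000.0) / (
        NADH_MILLIMOLAR_ABSORPTIVITY * sample_volume_ml * path_length_cm)

# An absorbance fall of 0.012 per minute corresponds to roughly 29 U/L
# under these assumed volumes.
print(round(ast_activity_u_per_l(0.012)))
```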

Nathan Gochman.  Developer of Automated Chemistries.

Nathan Gochman, PhD, has over 40 years of experience in the clinical diagnostics industry. This includes academic teaching and research, and 30 years in the pharmaceutical and in vitro diagnostics industry. He has managed R & D, technical marketing and technical support departments. As a leader in the industry he was President of the American Association for Clinical Chemistry (AACC) and the National Committee for Clinical Laboratory Standards (NCCLS, now CLSI). He is currently a Consultant to investment firms and IVD companies.

William Sunderman

William Sunderman was a doctor and scientist who lived a remarkable century and beyond, making medical advances, playing his Stradivarius violin at Carnegie Hall at 99, and being honored as the nation’s oldest worker at 100.

He developed a method for measuring glucose in the blood, the Sunderman Sugar Tube, and was one of the first doctors to use insulin to bring a patient out of a diabetic coma. He established quality-control techniques for medical laboratories that ended the wide variation in the results of laboratories doing the same tests.

He taught at several medical schools and founded and edited the journal Annals of Clinical and Laboratory Science. In World War II, he was a medical director for the Manhattan Project, which developed the atomic bomb.

Dr. Sunderman was president of the American Society of Clinical Pathologists and a founding governor of the College of American Pathologists. He also helped organize the Association of Clinical Scientists and was its first president.

Yale Department of Laboratory Medicine

The roots of the Department of Laboratory Medicine at Yale can be traced back to John Peters, the head of what he called the “Chemical Division” of the Department of Internal Medicine, subsequently known as the Section of Metabolism, who co-authored with Donald Van Slyke the landmark 1931 textbook Quantitative Clinical Chemistry; and to Pauline Hald, research collaborator of Dr. Peters who subsequently served as Director of Clinical Chemistry at Yale-New Haven Hospital for many years. In 1947, Miss Hald reported the very first flame photometric measurements of sodium and potassium in serum. This study helped to lay the foundation for modern studies of metabolism and their application to clinical care.

The Laboratory Medicine program at Yale had its inception in 1958 as a section of Internal Medicine under the leadership of David Seligson. In 1965, Laboratory Medicine achieved autonomous section status and in 1971, became a full-fledged academic department. Dr. Seligson, who served as the first Chair, pioneered modern automation and computerized data processing in the clinical laboratory. In particular, he demonstrated the feasibility of discrete sample handling for automation that is now the basis of virtually all automated chemistry analyzers. In addition, Seligson and Zetner demonstrated the first clinical use of atomic absorption spectrophotometry. He was one of the founding members of the major Laboratory Medicine academic society, the Academy of Clinical Laboratory Physicians and Scientists.

The discipline of clinical chemistry and the broader field of laboratory medicine, as they are practiced today, are attributed in no small part to Seligson’s vision and creativity.

Born in Philadelphia in 1916, Seligson graduated from the University of Maryland and received a D.Sc. from Johns Hopkins University and an M.D. from the University of Utah. In 1953, he served as a captain in the U.S. Army and as chief of the Hepatic and Metabolic Disease Laboratory at Walter Reed Army Medical Center.

Recruited to Yale and Grace-New Haven Hospital in 1958 from the University of Pennsylvania as professor of internal medicine at the medical school and the first director of clinical laboratories at the hospital, Seligson subsequently established the infrastructure of the Department of Laboratory Medicine, creating divisions of clinical chemistry, microbiology, transfusion medicine (blood banking) and hematology – each with its own strong clinical, teaching and research programs.

Challenging the continuous flow approach, Seligson designed, built and validated “discrete sample handling” instruments wherein each sample was treated independently, which allowed better choice of methods and greater efficiency. Today continuous flow has essentially disappeared and virtually all modern automated clinical laboratory instruments are based upon discrete sample handling technology.

Seligson was one of the early visionaries who recognized the potential for computers in the clinical laboratory. One of the first applications of a digital computer in the clinical laboratory occurred in Seligson’s department at Yale, and shortly thereafter data were being transmitted directly from the laboratory computer to data stations on the patient wards. Now, such laboratory information systems represent the standard of care.

He was also among the first to highlight the clinical importance of test specificity and accuracy, as compared to simple reproducibility. One of his favorite slides showed almost perfectly reproducible results for 10 successive measurements of blood sugar obtained with what was then the most widely used and popular analytical instrument. However, he would note, the answer was wrong; the assay was not accurate.
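
Seligson’s distinction is easy to demonstrate numerically: a tight coefficient of variation says nothing about bias. The figures below are invented solely to illustrate the point.

```python
# Reproducibility (precision) versus accuracy: ten repeat glucose measurements
# can agree closely with one another and still all be wrong. Invented numbers.
import statistics

true_glucose = 100.0                                   # mg/dL, reference value
repeats = [126.1, 125.8, 126.3, 125.9, 126.0,
           126.2, 125.7, 126.1, 126.0, 125.9]          # tight cluster, but biased

precision_cv = statistics.stdev(repeats) / statistics.mean(repeats) * 100
bias = statistics.mean(repeats) - true_glucose

print(f"CV = {precision_cv:.2f}%   (highly reproducible)")
print(f"bias = {bias:+.1f} mg/dL  (but inaccurate)")
```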

Seligson established one of the nation’s first residency programs focused on laboratory medicine or clinical pathology, and also developed a teaching curriculum in laboratory medicine for medical students. In so doing, he created a model for the modern practice of laboratory medicine in an academic environment, and his trainees spread throughout the country as leaders in the field.

Ernest Cotlove

Ernest Cotlove’s scientific and medical career started at NYU where, after finishing his medical degree in 1943, he pursued studies in renal physiology and chemistry. His outstanding ability to acquire knowledge and conduct innovative investigations earned him an invitation from James Shannon, then Director of the National Heart Institute at NIH. He continued studies of renal physiology and chemistry until 1953, when he became Head of the Clinical Chemistry Laboratories in the new Department of Clinical Pathology being developed by George Z. Williams during the Clinical Center’s construction. Dr. Cotlove seized the opportunity to design and equip the most advanced and functional clinical chemistry facility in the country.

Dr. Cotlove’s career exemplified the progress seen in medical research and technology. He designed the electronic chloridometer that bears his name, in spite of published reports that such an approach was theoretically impossible. He used this innovative skill to develop new instruments and methods at the Clinical Center. Many recognized him as an expert in clinical chemistry, computer programming, systems design for laboratory operations, and automation of analytical instruments.

Effects of Automation on Laboratory Diagnosis

George Z. Williams

There are four primary effects of laboratory automation on the practice of medicine: The range of laboratory support is being greatly extended to both diagnosis and guidance of therapeutic management; the new feasibility of multiphasic periodic health evaluation promises effective health and manpower conservation in the future; and substantially lowered unit cost for laboratory analysis will permit more extensive use of comprehensive laboratory medicine in everyday practice. There is, however, a real and growing danger of naive acceptance of and overconfidence in the reliability and accuracy of automated analysis and computer processing without critical evaluation. Erroneous results can jeopardize the patient’s welfare. Every physician has the responsibility to obtain proof of accuracy and reliability from the laboratories which serve his patients.

Mario Werner

Dr. Werner received his medical degree from the University of Zurich, Switzerland in 1956. After specializing in internal medicine at the University Clinic in Basel, he came to the United States–as a fellow of the Swiss Academy of Medical Sciences–to work at NIH and at the Rockefeller University. From 1964 to 1966, he served as chief of the Central Laboratory at the Klinikum Essen, Ruhr-University, Germany. In 1967, he returned to the US, joining the Division of Clinical Pathology and Laboratory Medicine at the University of California, San Francisco, as an assistant professor. Three years later, he became Associate Professor of Pathology and Laboratory Medicine at Washington University in St. Louis, where he was instrumental in establishing the training program in laboratory medicine. In 1972, he was appointed Professor of Pathology at The George Washington University in Washington, DC.

Norbert Tietz

Professor Norbert W. Tietz received the degree of Doctor of Natural Sciences from the Technical University Stuttgart, Germany, in 1950. In 1954 he immigrated to the United States where he subsequently held positions or appointments at several Chicago area institutions including the Mount Sinai Hospital Medical Center, Chicago Medical School/University of Health Sciences and Rush Medical College.

Professor Tietz is best known as the editor of the Fundamentals of Clinical Chemistry. This book, now in its sixth edition, remains a primary information source for both students and educators in laboratory medicine. It was the first modern textbook that integrated clinical chemistry with the basic sciences and pathophysiology.

Throughout his career, Dr. Tietz taught a range of students from the undergraduate through post-graduate level including (1) medical technology students, (2) medical students, (3) clinical chemistry graduate students, (4) pathology residents, and (5) practicing chemists. For example, in the late 1960’s he began the first master’s of science degree program in clinical chemistry in the United States at the Chicago Medical School. This program subsequently evolved into one of the first Ph.D. programs in clinical chemistry.

Automation and other recent developments in clinical chemistry.

Griffiths J.

http://www.ncbi.nlm.nih.gov/pubmed/1344702

The decade 1980 to 1990 was the most progressive period in the short, but turbulent, history of clinical chemistry. New techniques and the instrumentation needed to perform assays have opened a chemical Pandora’s box. Multichannel analyzers, the basic spectrophotometric key to automated laboratories, have become almost perfect. The extended use of the antigen-monoclonal antibody reaction with increasingly sensitive labels has extended analyte detection routinely into the picomole/liter range. Devices that aid the automation of serum processing and distribution of specimens are emerging. Laboratory computerization has significantly matured, permitting better integration of laboratory instruments, improving communication between laboratory personnel and the patient’s physician, and facilitating the use of expert systems and robotics in the chemistry laboratory.

Automation and Expert Systems in a Core Clinical Chemistry Laboratory
Streitberg, GT, et al.  JALA 2009;14:94–105

Clinical pathology or laboratory medicine has a great influence on clinical decisions, and 60–70% of the most important decisions on admission, discharge, and medication are based on laboratory results. As we learn more about clinical laboratory results and incorporate them in outcome optimization schemes, the laboratory will play a more pivotal role in the management of patients and their eventual outcomes. It has been stated that the development of information technology and automation in laboratory medicine has allowed laboratory professionals to keep pace with the growth in workload.

The reasons to automate and the impacts of automation are similar, and they include reduction in errors, increases in productivity, and improvements in safety. Advances in technology in clinical chemistry, including total laboratory automation, call for changes in job responsibilities to include skills in information technology, data management, instrumentation, patient preparation for diagnostic analysis, interpretation of pathology results, and dissemination of knowledge and information to patients and other health staff, as well as skills in research.

The clinical laboratory has become so productive, particularly in chemistry and immunology, and its labor, instrument, and reagent costs so well determined, that today a physician’s medical decisions are said to be 80% determined by the clinical laboratory. Medical information systems have lagged far behind. Why is that? Because the decision for an MIS has historically been based on billing capture. Moreover, the historical use of chemical profiles was quite good at validating healthy status in an outpatient population, but the profiles became restricted under Diagnosis Related Groups. Thus diagnostics came to be considered a “commodity”. To be competitive, a laboratory had to provide “high complexity” tests that were drawn in by a large volume of “moderate complexity” tests.

Read Full Post »

The History of Infectious Diseases and Epidemiology in the late 19th and 20th Century

Curator: Larry H Bernstein, MD, FCAP

 

Infectious diseases are a part of the history of the English, French, and Spanish colonization of the Americas, and of the slave trade. The many plagues of the New and Old Worlds that have affected the course of history from ancient to modern times were known to the Egyptians, Greeks, Chinese, crusaders, explorers, and Napoleon, and were familiar companions of war, pestilence, and epidemic. Our coverage is mainly concerned with the scientific and public health consequences of these events, which preceded WWI and extended to the Vietnam War, and is highlighted by the invention of a worldwide public health system.

The Armed Forces Institute of Pathology (AFIP) closed its doors on September 15, 2011. It was founded as the Army Medical Museum on May 21, 1862, to collect pathological specimens along with their case histories.

The information from the case files of the pathological specimens from the Civil War was compared with Army pensions records and compiled into the six-volume Medical and Surgical History of the War of the Rebellion, an early study of wartime medicine.

In 1900, museum curator Walter Reed led the commission which proved that a mosquito was the vector for Yellow Fever, beginning the mosquito eradication campaigns throughout most of the twentieth century.

Walter Reed

Another museum curator, Frederick Russell, conducted clinical trials on the typhoid vaccine in 1907, making the U.S. Army the first army vaccinated against typhoid.

Increased emphasis on pathology during the twentieth century turned the museum, renamed the Armed Forces Institute of Pathology in 1949, into an international resource for pathology and the study of disease. AFIP’s pathological collections have been used, for example, in the characterization of the 1918-influenza virus in 1997.

Prior to moving to the Walter Reed Army Medical Center, the AFIP was located at the Army Medical Museum and Library on the Mall (1887-1969), and earlier as Army Medical Museum in Ford’s Theatre (1867-1886).

Army Medical Museum and Library on the Mall

This institution, originally the Library of the Surgeon General’s Office (U.S. Army), gained its present name and was transferred from the Army to the Public Health Service in 1956. In 1962, it moved to its own Bethesda site after sharing space for nearly 100 years with other Army units, first at the former Ford’s Theatre building and then at the Army Medical Museum and Library on the Mall. Rare books and other holdings that had been sent to Cleveland for safekeeping during World War II were also reunited with the main collection at that time.

The National Museum of Health and Medicine, established in 1862, inspires interest in and promotes the understanding of medicine — past, present, and future — with a special emphasis on tri-service American military medicine. As a National Historic Landmark recognized for its ongoing value to the health of the military and to the nation, the Museum identifies, collects, and preserves important and unique resources to support a broad agenda of innovative exhibits, educational programs, and scientific, historical, and medical research. NMHM is a headquarters element of the U.S. Army Medical Research and Materiel Command. NMHM’s newest exhibit installations showcase the institution’s 25-million object collection, focusing on topics as diverse as innovations in military medicine, traumatic brain injury, anatomy and pathology, military medicine during the Civil War, the assassination of Abraham Lincoln (including the bullet that killed him), human identification and a special exhibition on the Museum’s own major milestone—the 150th anniversary of the founding of the Army Medical Museum. Objects on display will include familiar artifacts and specimens: the bullet that killed Lincoln and a leg showing the effects of elephantiasis, as well as recent finds in the collection—all designed to astound visitors to the new Museum.

Today, the National Library of Medicine houses the largest collection of print and non-print materials in the history of the health sciences in the United States, and maintains an active program of exhibits and public lectures. Most of the archival and manuscript material dates from the 17th century; however, the Library owns about 200 pre-1601 Western and Islamic manuscripts. Holdings include pre-1914 books, pre-1871 journals, archives and modern manuscripts, medieval and Islamic manuscripts, a collection of printed books, manuscripts, and visual material in Japanese, Chinese, and Korean; historical prints, photographs, films, and videos; pamphlets, dissertations, theses, college catalogs, and government documents.

The oldest item in the Library is an Arabic manuscript on gastrointestinal diseases from al-Razi’s The Comprehensive Book on Medicine (Kitab al-Hawi fi al-tibb) dated 1094. Significant modern collections include the papers of U.S. Surgeons General, including C. Everett Koop, and the papers of Nobel Prize-winning scientists, particularly those connected with NIH.

As part of its Profiles in Science project, the National Library of Medicine has collaborated with the Churchill Archives Centre to digitize and make available over the World Wide Web a selection of the Rosalind Franklin Papers for use by educators and researchers. This site provides access to the portions of the Rosalind Franklin Papers, which range from 1920 to 1975. The collection contains photographs, correspondence, diaries, published articles, lectures, laboratory notebooks, and research notes.

Rosalind Franklin

“Science and everyday life cannot and should not be separated. Science, for me, gives a partial explanation of life. In so far as it goes, it is based on fact, experience, and experiment. . . . I agree that faith is essential to success in life, but I do not accept your definition of faith, i.e., belief in life after death. In my view, all that is necessary for faith is the belief that by doing our best we shall come nearer to success and that success in our aims (the improvement of the lot of mankind, present and future) is worth attaining.”

–Rosalind Franklin in a letter to Ellis Franklin, ca. summer 1940

Smallpox

Although some disliked mandatory smallpox vaccination measures, coordinated efforts against smallpox went on in the United States after 1867, and the disease continued to diminish in the wealthy countries. By 1897, smallpox had largely been eliminated from the United States. In Northern Europe a number of countries had eliminated smallpox by 1900, and by 1914, the incidence in most industrialized countries had decreased to comparatively low levels. Vaccination continued in industrialized countries, until the mid to late 1970s as protection against reintroduction. Australia and New Zealand are two notable exceptions; neither experienced endemic smallpox and never vaccinated widely, relying instead on protection by distance and strict quarantines.

In 1966 an international team, the Smallpox Eradication Unit, was formed under the leadership of an American, Donald Henderson. In 1967, the World Health Organization intensified global smallpox eradication by contributing $2.4 million annually to the effort, and adopted the new disease surveillance method promoted by Czech epidemiologist Karel Raška. Two-year-old Rahima Banu of Bangladesh was the last person infected with naturally occurring Variola major, in 1975.

The global eradication of smallpox was certified, based on intense verification activities in countries, by a commission of eminent scientists on 9 December 1979 and subsequently endorsed by the World Health Assembly on 8 May 1980. The first two sentences of the resolution read:

Having considered the development and results of the global program on smallpox eradication initiated by WHO in 1958 and intensified since 1967 … Declares solemnly that the world and its peoples have won freedom from smallpox, which was a most devastating disease sweeping in epidemic form through many countries since earliest time, leaving death, blindness and disfigurement in its wake and which only a decade ago was rampant in Africa, Asia and South America.

—World Health Organization, Resolution WHA33.3

Anthrax

Anthrax is an acute disease caused by the bacterium Bacillus anthracis. Most forms of the disease are lethal, and it affects both humans and other animals. Effective vaccines against anthrax are now available, and some forms of the disease respond well to antibiotic treatment.

Like many other members of the genus Bacillus, B. anthracis can form dormant endospores (often referred to as “spores” for short, but not to be confused with fungal spores) that are able to survive in harsh conditions for decades or even centuries. Such spores can be found on all continents, even Antarctica. When spores are inhaled, ingested, or come into contact with a skin lesion on a host, they may become reactivated and multiply rapidly.

Anthrax commonly infects wild and domesticated herbivorous mammals that ingest or inhale the spores while grazing. Ingestion is thought to be the most common route by which herbivores contract anthrax. Carnivores living in the same environment may become infected by consuming infected animals. Diseased animals can spread anthrax to humans, either by direct contact (e.g., inoculation of infected blood to broken skin) or by consumption of a diseased animal’s flesh.

Anthrax does not spread directly from one infected animal or person to another; it is spread by spores. These spores can be transported by clothing or shoes. The body of an animal that had active anthrax at the time of death can also be a source of anthrax spores. Owing to the hardiness of anthrax spores, and their ease of production in vitro, they are extraordinarily well suited to use (in powdered and aerosol form) as biological weapons.

Bacillus anthracis is a rod-shaped, Gram-positive, aerobic bacterium about 1 by 9 μm in size. It was shown to cause disease by Robert Koch in 1876 when he took a blood sample from an infected cow, isolated the bacteria and put them into a mouse. The bacterium normally rests in endospore form in the soil, and can survive for decades in this state. Once ingested or placed in an open wound, the bacterium begins multiplying inside the animal or human and typically kills the host within a few days or weeks. The endospores germinate at the site of entry into the tissues and then spread by the circulation to the lymphatics, where the bacteria multiply.

Robert Koch

Veterinarians can often tell a possible anthrax-induced death by its sudden occurrence, and by the dark, nonclotting blood that oozes from the body orifices. Bacteria that escape the body via oozing blood or through the opening of the carcass may form hardy spores. One spore forms per one vegetative bacterium. Once formed, these spores are very hard to eradicate.

The lethality of the anthrax disease is due to the bacterium’s two principal virulence factors: the poly-D-glutamic acid capsule, which protects the bacterium from phagocytosis by host neutrophils, and the tripartite protein toxin, called anthrax toxin. Anthrax toxin is a mixture of three protein components: protective antigen (PA), edema factor (EF), and lethal factor (LF). PA plus LF produces lethal toxin, and PA plus EF produces edema toxin. These toxins cause death and tissue swelling (edema), respectively.

To enter the cells, the edema and lethal factors use another protein produced by B. anthracis called protective antigen, which binds to two surface receptors on the host cell. A cell protease then cleaves PA into two fragments: PA20 and PA63. PA20 dissociates into the extracellular medium, playing no further role in the toxic cycle. PA63 then oligomerizes with six other PA63 fragments forming a heptameric ring-shaped structure named a prepore.

Once in this shape, the complex can competitively bind up to three EFs or LFs, forming a resistant complex. Receptor-mediated endocytosis occurs next, providing the newly formed toxic complex access to the interior of the host cell. The acidified environment within the endosome triggers the heptamer to release the LF and/or EF into the cytosol.

Edema factor is a calmodulin-dependent adenylate cyclase. Adenylate cyclase catalyzes the conversion of ATP into cyclic AMP (cAMP) and pyrophosphate. The complexation of adenylate cyclase with calmodulin removes calmodulin from stimulating calcium-triggered signaling. LF inactivates neutrophils so they cannot phagocytose bacteria. Anthrax causes vascular leakage of fluid and cells, and ultimately hypovolemic shock and septic shock.

Occupational exposure to infected animals or their products (such as skin, wool, and meat) is the usual pathway of exposure for humans. Workers who are exposed to dead animals and animal products are at the highest risk, especially in countries where anthrax is more common. Anthrax in livestock grazing on open range where they mix with wild animals still occasionally occurs in the United States and elsewhere. Many workers who deal with wool and animal hides are routinely exposed to low levels of anthrax spores, but most exposure levels are not sufficient to develop anthrax infections. The body’s natural defenses presumably can destroy low levels of exposure. These people usually contract cutaneous anthrax if they catch anything.

Throughout history, the most dangerous form of inhalational anthrax was called woolsorters’ disease because it was an occupational hazard for people who sorted wool. Today, this form of infection is extremely rare, as almost no infected animals remain. The last fatal case of natural inhalational anthrax in the United States occurred in California in 1976, when a home weaver died after working with infected wool imported from Pakistan. Gastrointestinal anthrax is exceedingly rare in the United States, with only one case on record, reported in 1942, according to the Centers for Disease Control and Prevention.

Various techniques are used for the direct identification of B. anthracis in clinical material. Firstly, specimens may be Gram stained. Bacillus spp. are quite large in size (3 to 4 μm long), they grow in long chains, and they stain Gram-positive. To confirm the organism is B. anthracis, rapid diagnostic techniques such as polymerase chain reaction-based assays and immunofluorescence microscopy may be used.

All Bacillus species grow well on 5% sheep blood agar and other routine culture media. Polymyxin-lysozyme-EDTA-thallous acetate can be used to isolate B. anthracis from contaminated specimens, and bicarbonate agar is used as an identification method to induce capsule formation. Bacillus spp. usually grow within 24 hours of incubation at 35 °C, in ambient air (room temperature) or in 5% CO2. If bicarbonate agar is used for identification, then the medium must be incubated in 5% CO2.

B. anthracis colonies are medium-large, gray, flat, and irregular with swirling projections, often referred to as having a “medusa head” appearance, and are not hemolytic on 5% sheep blood agar. The bacteria are not motile, are susceptible to penicillin, and produce a wide zone of lecithinase on egg yolk agar. Confirmatory testing to identify B. anthracis includes gamma bacteriophage testing, indirect hemagglutination, and enzyme-linked immunosorbent assay to detect antibodies. The best confirmatory precipitation test for anthrax is the Ascoli test.

Vaccines against anthrax for use in livestock and humans have had a prominent place in the history of medicine, from Pasteur’s pioneering 19th-century work with cattle (the second effective vaccine ever) to the controversial 20th century use of a modern product (BioThrax) to protect American troops against the use of anthrax in biological warfare. Human anthrax vaccines were developed by the Soviet Union in the late 1930s and in the US and UK in the 1950s. The current FDA-approved US vaccine was formulated in the 1960s.

If a person is suspected as having died from anthrax, every precaution should be taken to avoid skin contact with the potentially contaminated body and fluids exuded through natural body openings. The body should be put in strict quarantine and then incinerated. A blood sample should then be collected and sealed in a container and analyzed in an approved laboratory to ascertain if anthrax is the cause of death. Microscopic visualization of the encapsulated bacilli, usually in very large numbers, in a blood smear stained with polychrome methylene blue (McFadyean stain) is fully diagnostic, though culture of the organism is still the gold standard for diagnosis.

Full isolation of the body is important to prevent possible contamination of others. Protective, impermeable clothing and equipment such as rubber gloves, rubber apron, and rubber boots with no perforations should be used when handling the body. Disposable personal protective equipment and filters should be autoclaved, and/or burned and buried.

Anyone working with anthrax in a suspected or confirmed victim should wear respiratory equipment capable of filtering this size of particle or smaller. The US National Institute for Occupational Safety and Health – and Mine Safety and Health Administration-approved high-efficiency respirator, such as a half-face disposable respirator with a high-efficiency particulate air filter, is recommended.

All possibly contaminated bedding or clothing should be isolated in double plastic bags and treated as possible biohazard waste. The victim should be sealed in an airtight body bag. Dead victims who are opened and not burned provide an ideal source of anthrax spores. Cremating victims is the preferred way of handling body disposal.

Until the 20th century, anthrax infections killed hundreds of thousands of animals and people worldwide each year. French scientist Louis Pasteur developed the first effective vaccine for anthrax in 1881.

Louis Pasteur

As a result of over a century of animal vaccination programs, sterilization of raw animal waste materials, and anthrax eradication programs in United States, Canada, Russia, Eastern Europe, Oceania, and parts of Africa and Asia, anthrax infection is now relatively rare in domestic animals. Anthrax is especially rare in dogs and cats, as is evidenced by a single reported case in the United States in 2001.

Anthrax outbreaks occur in some wild animal populations with some regularity. The disease is more common in countries without widespread veterinary or human public health programs. In the 21st century, anthrax is still a problem in less developed countries.

B. anthracis bacterial spores are soil-borne. Because of their long lifespan, spores are present globally and remain at the burial sites of animals killed by anthrax for many decades. Disturbed grave sites of infected animals have caused reinfection over 70 years after the animal’s interment.

Cholera

Cholera is an acute diarrheal infection that can kill within a matter of hours if untreated. The mainstay of treatment is oral rehydration therapy — drinking water mixed with salts and sugar. Researchers at EPFL — the Swiss Federal Institute of Technology in Lausanne — say that using rice starch instead of sugar with the rehydration salts could reduce bacterial toxicity by almost 75 percent. That would make the microbe less likely to infect a patient’s family and friends if they are exposed to any body fluids.

The World Health Organization says cholera, a water-borne bacterium, infects three to five million people every year, and the severe dehydration it causes leads to as many as 120,000 deaths.

Cholera is an acute diarrheal disease caused by the water borne bacteria Vibrio cholerae O1 or O139 (V. cholerae). Infection is mainly through ingestion of contaminated water or food. The V cholerae passes through the stomach, colonizes the upper part of the small intestine, penetrates the mucus layer, and secretes cholera toxin which affects the small intestine.

Clinically, the majority of cholera episodes are characterized by a sudden onset of massive diarrhea and vomiting accompanied by the loss of profuse amounts of protein-free fluid with electrolytes. The resulting dehydration produces tachycardia, hypotension, and vascular collapse, which can lead to sudden death. The diagnosis of cholera is commonly established by isolating the causative organism from the stools of infected individuals.

There are an estimated 3–5 million cholera cases and 100,000–120,000 deaths due to cholera every year.

Up to 80% of cases can be successfully treated with oral rehydration salts.

Effective control measures rely on prevention, preparedness and response.

Provision of safe water and sanitation is critical in reducing the impact of cholera and other waterborne diseases.

Oral cholera vaccines are considered an additional means to control cholera, but should not replace conventional control measures.

During the 19th century, cholera spread across the world from its original reservoir in the Ganges delta in India. Six subsequent pandemics killed millions of people across all continents. The current (seventh) pandemic started in South Asia in 1961, and reached Africa in 1971 and the Americas in 1991. Cholera is now endemic in many countries.

India: environmental pollution

In its extreme manifestation, cholera is one of the most rapidly fatal infectious illnesses known. Within 3–4 hours of onset of symptoms, a previously healthy person may become severely dehydrated and if not treated may die within 24 hours (WHO, 2010). The disease is one of the most researched in the world today; nevertheless, it is still an important public health problem despite more than a century of study, especially in developing tropical countries. Cholera is currently listed as one of three internationally quarantinable diseases by the World Health Organization (WHO), along with plague and yellow fever (WHO, 2000a).

Two serogroups of V. cholerae – O1 and O139 – cause outbreaks. V. cholerae O1 causes the majority of outbreaks, while O139 – first identified in Bangladesh in 1992 – is confined to South-East Asia.

Non-O1 and non-O139 V. cholerae can cause mild diarrhoea but do not generate epidemics.

The main reservoirs of V. cholerae are people and aquatic sources such as brackish water and estuaries, often associated with algal blooms. Recent studies indicate that global warming creates a favorable environment for the bacteria.

Socioeconomic and demographic factors enhance the vulnerability of a population to infection and contribute to epidemic spread. Such factors also determine the extent to which the disease will reach epidemic proportions and modulate the size of the epidemic. Known population-level (local-level) risk factors for cholera include poverty, lack of development, high population density, low education, and lack of previous exposure. Cholera diffuses rapidly in environments that lack basic infrastructure for access to safe water and proper sanitation. The cholera vibrios can survive and multiply outside the human body and can spread rapidly in environments where living conditions are overcrowded and where there is no safe disposal of solid waste, liquid waste, and human feces.

Mapping the locations of cholera victims, John Snow was able to trace the cause of the disease to a contaminated water source. Remarkably, this was done some 30 years before Koch and Pasteur established the beginnings of microbiology (Koch, 1884).

John Snow’s map

Yellow Fever

Yellow fever virus was probably introduced into the New World via ships carrying slaves from West Africa. Throughout the 18th and 19th centuries, regular and devastating epidemics of yellow fever occurred across the Caribbean, Central and South America, the southern United States and Europe. The Yellow Fever Commission, founded as a consequence of excessive disease mortality during the Spanish–American War (1898), concluded that the best way to control the disease was to control the mosquito. William Gorgas successfully eradicated yellow fever from Havana by destroying larval breeding sites, and this strategy of source reduction was then successfully used to reduce disease problems and thus finally permit the construction of the Panama Canal, begun in 1904. Success was due largely to a top-down, military approach involving strict supervision and discipline (Gorgas, 1915). In 1946, an intensive Aedes aegypti eradication campaign was initiated in the Americas, which succeeded in reducing vector populations to undetectable levels throughout most of its range.

The production of an effective vaccine in the 1930s led to a change of emphasis from vector control to vaccination for the control of yellow fever. Vaccination campaigns almost eliminated urban yellow fever but incomplete coverage, as with incomplete anti-vectorial measures previously, meant the disease persisted, and outbreaks occurred in remote forest areas.

It was acknowledged by the Health Organization of the League of Nations (the forerunner to the World Health Organization (WHO)) that yellow fever was a severe burden on endemic countries. The work of Soper and the Brazilian Cooperative Yellow Fever Service (Soper, 1934, 1935a, b) began to determine the geographical extent of the disease, specifically in Brazil. Regional maps of disease outbreaks were published by Sawyer (1934), but it was not until after the formation of the WHO that a global map of yellow fever endemicity was first constructed (van Rooyen and Rhodes, 1948). This map was based on expert opinion (United Nations Relief and Rehabilitation Administration/Expert Commission on Quarantine) and serological surveys. The present-day distribution map for yellow fever is still essentially a modified version of this map.

Global yellow fever risk map

Yellow fever is conspicuously absent from Asia. Although there is some evidence that other flaviviruses may offer cross-protection against yellow fever (Gordon-Smith et al., 1962), why yellow fever does not occur in Asia is still unexplained.

It has been estimated that the currently circulating strains of YFV arose in Africa within the last 1,500 years and emerged in the Americas following the slave trade approximately 300–400 years ago. These viruses then spread westwards across the continent and persist there to this day in the jungles of South America.

The 17D live-attenuated vaccine still in use today was developed in 1936, and a single dose confers immunity for at least ten years in 95% of the cases. In a bid to contain the spread of the disease, travellers to countries within endemic areas or those thought to be ‘at risk’ require a certificate of vaccination. The yellow fever certificate is the only internationally regulated certification supported by the WHO. The effectiveness of the vaccine reduces the need for anti-vectorial campaigns directed specifically against yellow fever. As the same major vector is involved, control of Aedes aegypti for dengue reduction will also reduce yellow fever transmission where both diseases co-occur, especially within urban settings.

Dengue

Probable epidemics of dengue fever have been recorded from Africa, Asia, Europe and the Americas since the early 19th century (Armstrong, 1923). Although it is rarely fatal, up to 90% of the population of an infected area can be incapacitated during the course of an epidemic (Armstrong, 1923; Siler et al., 1926). Widespread movements of troops and refugees during and after World War II introduced vectors and viruses into many new areas. Dengue fever has unsurprisingly been mistaken for yellow fever as well as other diseases including influenza, measles, typhoid and malaria. It is rarely fatal and survivors appear to have lifelong immunity to the homologous serotype.

Far more serious is dengue haemorrhagic fever (DHF), where additional symptoms develop, including haemorrhaging and shock. The mortality from DHF can exceed 30% if appropriate care is unavailable. The most significant risk factor for DHF is when secondary infection with a different serotype occurs in people who have already had, and recovered from, a primary dengue infection.

Dengue has adapted to changes in human demography very effectively. The main vector of dengue is the anthropophilic Aedes aegypti, which is found in close association with human settlements throughout the tropics, breeding mainly in containers in and around dwellings, and feeding almost exclusively on humans. As a result, dengue is essentially a disease of tropical urban areas. Before 1970, only nine countries had experienced DHF epidemics, but by 1995 this number had increased fourfold (WHO, 2001). Dengue case numbers have increased considerably since the 1960s; by the end of the 20th century an estimated 50 million cases of dengue fever and 500,000 cases of DHF were occurring every year (WHO, 2001).

The appearance of DHF stimulated large amounts of dengue research, which established the existence of the four serotypes and the range of competent vectors, and led to the adoption of Aedes aegypti control programs in some areas (particularly South-East Asia) (Kilpatrick et al., 1970).

There have been several attempts to estimate the economic impact of dengue: the 1977 epidemic in Puerto Rico was thought to have cost between $6.1 and $15.6 million ($26–$31 per clinical case) (Von Allmen et al., 1979), while the 1981 Cuban epidemic (with a total of 344 203 reported cases) cost about $103 million (around $299 per case) (Kouri et al., 1989).

There is no cure for dengue fever or for DHF. Currently, the only treatment is symptomatic, but this can reduce mortality from DHF to less than 1% (WHO, 2002). Unfortunately, the extent of dengue epidemics means that local public health services are often overwhelmed by the demands for treatment.

Malaria

Malaria is a serious and sometimes fatal disease caused by a parasite transmitted to people through the bites of infected mosquitoes. People who get malaria are typically very sick with high fevers, shaking chills, and flu-like illness. About 1,500 cases of malaria are diagnosed in the United States each year. The vast majority of cases in the United States are in travelers and immigrants returning from countries where malaria transmission occurs, many from sub-Saharan Africa and South Asia. Malaria has been noted for more than 4,000 years. It became widely recognized in Greece by the 4th century BCE, and it was responsible for the decline of many of the city-state populations. Hippocrates noted the principal symptoms. In the Susruta, a Sanskrit medical treatise, the symptoms of malarial fever were described and attributed to the bites of certain insects. A number of Roman writers attributed malarial diseases to the swamps.

Following their arrival in the New World, Spanish Jesuit missionaries learned from indigenous Indian tribes of a medicinal bark used for the treatment of fevers. With this bark, the Countess of Chinchón, the wife of the Viceroy of Peru, was cured of her fever. The bark from the tree was then called Peruvian bark and the tree was named Cinchona after the countess. The medicine from the bark is now known as the antimalarial, quinine. Along with artemisinins, quinine is one of the most effective antimalarial drugs available today.

Cinchona calisaya (quinquina)

Cinchona officinalis is a medicinal plant, one of several Cinchona species used for the production of quinine, which is an anti-fever agent. It is especially useful in the prevention and treatment of malaria. Cinchona calisaya is the tree most cultivated for quinine production.

There are a number of other alkaloids that are extracted from this tree. They include cinchonine, cinchonidine and quinidine  (Wikipedia)

Charles Louis Alphonse Laveran, a French army surgeon stationed in Constantine, Algeria, was the first to notice parasites in the blood of a patient suffering from malaria in 1880. Laveran was awarded the Nobel Prize in 1907.

Alphonse Laveran

Camillo Golgi, an Italian neurophysiologist, established that there were at least two forms of the disease, one with tertian periodicity (fever every other day) and one with quartan periodicity (fever every third day). He also observed that the forms produced differing numbers of merozoites (new parasites) upon maturity and that fever coincided with the rupture and release of merozoites into the blood stream. He was awarded a Nobel Prize in Medicine for his discoveries in neurophysiology in 1906.

Malaria life cycle

Ookinete, sporozoite, merozoite

The Italian investigators Giovanni Battista Grassi and Raimondo Filetti first introduced the names Plasmodium vivax and P. malariae for two of the malaria parasites that affect humans in 1890. Laveran had believed that there was only one species, Oscillaria malariae. William H. Welch reviewed the subject and, in 1897, named the malignant tertian malaria parasite P. falciparum. In 1922, John William Watson Stephens described the fourth human malaria parasite, P. ovale. P. knowlesi was first described by Robert Knowles and Biraj Mohan Das Gupta in 1931 in a long-tailed macaque, but the first documented human infection with P. knowlesi was in 1965.

Anopheles mosquito

Ronald Ross, a British officer in the Indian Medical Service, was the first to demonstrate, in 1897, that malaria parasites could be transmitted from infected patients to mosquitoes. In further work with bird malaria, Ross showed that mosquitoes could transmit malaria parasites from bird to bird. This required a sporogonic cycle (the time interval during which the parasite developed in the mosquito). Ross was awarded the Nobel Prize in 1902.

Ronald Ross, 1899

A team of Italian investigators led by Giovanni Battista Grassi collected Anopheles claviger mosquitoes and fed them on malarial patients. The complete sporogonic cycles of Plasmodium falciparum, P. vivax, and P. malariae were demonstrated. Mosquitoes infected by feeding on a patient in Rome were sent to London in 1899, where they fed on two volunteers, both of whom developed malaria.

The construction of the Panama Canal was made possible only after yellow fever and malaria were controlled in the area. These two diseases were a major cause of death and disease among workers in the area. In 1906, there were over 26,000 employees working on the Canal. Of these, over 21,000 were hospitalized for malaria at some time during their work. By 1912, there were over 50,000 employees, and the number of hospitalized workers had decreased to approximately 5,600. Through the leadership and efforts of William Crawford Gorgas, Joseph Augustin LePrince, and Samuel Taylor Darling, yellow fever was eliminated and malaria incidence markedly reduced through an integrated program of insect and malaria control.

William Crawford Gorgas, MD

During the U.S. military occupation of Cuba and the construction of the Panama Canal at the turn of the 20th century, U.S. officials made great strides in the control of malaria and yellow fever. In 1914 Henry Rose Carter and Rudolph H. von Ezdorf of the USPHS requested and received funds from the U.S. Congress to control malaria in the United States. Various activities to investigate and combat malaria in the United States followed from this initial request and reduced the number of malaria cases in the United States. USPHS established malaria control activities around military bases in the malarious regions of the southern United States to allow soldiers to train year round.

U.S. President Franklin D. Roosevelt signed a bill that created the Tennessee Valley Authority (TVA) on May 18, 1933. The law gave the federal government a centralized body to control the Tennessee River’s potential for hydroelectric power and improve the land and waterways for development of the region. An organized and effective malaria control program stemmed from this new authority in the Tennessee River valley. Malaria affected 30 percent of the population in the region when the TVA was incorporated in 1933. The Public Health Service played a vital role in the research and control operations and by 1947, the disease was essentially eliminated. Mosquito breeding sites were reduced by controlling water levels and insecticide applications.

Chloroquine was discovered by a German, Hans Andersag, in 1934 at the Bayer I.G. Farbenindustrie A.G. laboratories in Elberfeld, Germany. He named his compound resochin. Through a series of lapses and confusion brought about during the war, chloroquine was finally recognized and established as an effective and safe antimalarial in 1946 by British and U.S. scientists.

Felix Hoffmann, Gerhard Domagk, Hermann Schnell (Bayer)

A German chemistry student, Othmer Zeidler, synthesized DDT in 1874, for his thesis. The insecticidal property of DDT was not discovered until 1939 by Paul Müller in Switzerland. Various militaries in WWII utilized the new insecticide initially for control of louse-borne typhus. DDT was used for malaria control at the end of WWII after it had proven effective against malaria-carrying mosquitoes by British, Italian, and American scientists. Müller won the Nobel Prize for Medicine in 1948.

Paul Müller

Malaria Control in War Areas (MCWA) was established to control malaria around military training bases in the southern United States and its territories, where malaria was still problematic. Many of the bases were established in areas where mosquitoes were abundant. MCWA aimed to prevent reintroduction of malaria into the civilian population by mosquitoes that would have fed on malaria-infected soldiers, in training or returning from endemic areas. During these activities, MCWA also trained state and local health department officials in malaria control techniques and strategies.

The National Malaria Eradication Program, a cooperative undertaking by state and local health agencies of 13 Southeastern states and the CDC, originally proposed by Louis Laval Williams, commenced operations on July 1, 1947. By the end of 1949, over 4,650,000 house spray applications had been made. In 1947, 15,000 malaria cases were reported. By 1950, only 2,000 cases were reported. By 1951, malaria was considered eliminated from the United States.

With the success of DDT, the advent of less toxic, more effective synthetic antimalarials, and the enthusiastic and urgent belief that time and money were of the essence, the World Health Organization (WHO) submitted at the World Health Assembly in 1955 an ambitious proposal for the eradication of malaria worldwide. Eradication efforts began and focused on house spraying with residual insecticides, antimalarial drug treatment, and surveillance, and would be carried out in 4 successive steps: preparation, attack, consolidation, and maintenance. Successes included elimination in nations with temperate climates and seasonal malaria transmission.

Some countries such as India and Sri Lanka had sharp reductions in the number of cases, followed by increases to substantial levels after efforts ceased, while other nations had negligible progress (such as Indonesia, Afghanistan, Haiti, and Nicaragua), and still others were excluded completely from the eradication campaign (sub-Saharan Africa). The emergence of drug resistance, widespread resistance to available insecticides, wars and massive population movements, difficulties in obtaining sustained funding from donor countries, and lack of community participation made the long-term maintenance of the effort untenable.

The goal of most current National Malaria Prevention and Control Programs and most malaria activities conducted in endemic countries is to reduce the number of malaria-related cases and deaths. To reduce malaria transmission to a level where it is no longer a public health problem is the goal of what is called malaria “control.”

The natural ecology of malaria involves malaria parasites infecting successively two types of hosts: humans and female Anopheles mosquitoes. In humans, the parasites grow and multiply first in the liver cells and then in the red cells of the blood. In the blood, successive broods of parasites grow inside the red cells and destroy them, releasing daughter parasites (“merozoites”) that continue the cycle by invading other red cells.

Anopheles mosquito

The blood stage parasites are those that cause the symptoms of malaria. When certain forms of blood stage parasites (“gametocytes”) are picked up by a female Anopheles mosquito during a blood meal, they start another, different cycle of growth and multiplication in the mosquito.

After 10-18 days, the parasites are found (as “sporozoites”) in the mosquito’s salivary glands. When the Anopheles mosquito takes a blood meal on another human, the sporozoites are injected with the mosquito’s saliva and start another human infection when they parasitize the liver cells.

Malaria. Wikipedia

A Plasmodium from the saliva of a female mosquito moving across a mosquito cell

Thus the mosquito carries the disease from one human to another (acting as a “vector”). Unlike the human host, the mosquito vector does not suffer from the presence of the parasites.

All the clinical symptoms associated with malaria are caused by the asexual erythrocytic or blood stage parasites. When the parasite develops in the erythrocyte, numerous known and unknown waste substances such as hemozoin pigment and other toxic factors accumulate in the infected red blood cell. These are dumped into the bloodstream when the infected cells lyse and release invasive merozoites. The hemozoin and other toxic factors such as glycosylphosphatidylinositol (GPI) stimulate macrophages and other cells to produce cytokines and other soluble factors which act to produce the fever and rigors associated with malaria.

Ookinete, sporozoite, merozoite

Plasmodium falciparum-infected erythrocytes, particularly those with mature trophozoites, adhere to the vascular endothelium of venular blood vessel walls; when they become sequestered in the vessels of the brain, this is a factor in causing the severe disease syndrome known as cerebral malaria, which is associated with high mortality.

Following the infective bite by the Anopheles mosquito, a period of time (the “incubation period”) goes by before the first symptoms appear. The incubation period in most cases varies from 7 to 30 days. The shorter periods are observed most frequently with P. falciparum and the longer ones with P. malariae.

Malaria life cycle

Antimalarial drugs taken for prophylaxis by travelers can delay the appearance of malaria symptoms by weeks or months, long after the traveler has left the malaria-endemic area. (This can happen particularly with P. vivax and P. ovale, both of which can produce dormant liver stage parasites; the liver stages may reactivate and cause disease months after the infective mosquito bite.)

The Influenza Pandemic of 1918

The Nation’s Health

If you had lived in the early twentieth century, your life expectancy would
have been much shorter than it is today. Today, life expectancy for men is 75 years;
for women, it is 80 years. In 1918, life expectancy for men was only 53 years.

Women’s life expectancy at 54 was only marginally better.

Why was life expectancy so much shorter?

During the early twentieth century, communicable diseases—that is, diseases
which can spread from person to person—were widespread. Influenza and
pneumonia, along with tuberculosis and gastrointestinal infections such
as diarrhea, killed Americans at an alarming rate, but
non-communicable diseases such as cancer and heart disease also
exacted a heavy toll. Accidents, especially in the nation’s unregulated factories
and workshops, were also responsible for maiming and killing many workers.

High infant mortality further shortened life expectancy. In 1918, one in
five American children did not live beyond their fifth birthday. In some
cities, the situation was even worse, with thirty percent of all infants dying
before their first birthday. Childhood diseases such as diphtheria, measles,
scarlet fever and whooping cough contributed significantly to these high
death rates.

Osler at a bedside

By 1900, an increasing number of physicians were receiving clinical
training. This training provided doctors with new insights into disease
and specific types of diseases. [Credit: National Library of Medicine]

Scarlet fever

Quarantine signs such as this one warned visitors away from homes
with scarlet fever and other infectious diseases. [Credit: National
Library of Medicine]

Rat Proofing

Cities often sponsored Clean-Up Days. Here, Public Health Service
employees clean up San Francisco’s streets in a campaign to
eradicate bubonic plague. [Credit: Office of the Public Health
Service Historian]

Clean-up days

Nurse helps with baby formula

A public health nurse teaches a young mother how to sterilize
a bottle. [Credit: National Library of Medicine]

Seeking Medical Care

Feeling Sick in 1918?

If you became sick in nineteenth-century America, you might consult
a doctor, a druggist, a midwife, a folk healer, a nurse or even
your neighbor. Most of these practitioners would visit you in your home.

By 1918, these attitudes toward health care were beginning to
change. Some physicians had begun to set up offices where patients
could receive medical care, and hospitals, which emphasized sterilization
and isolation, were also becoming popular.

However, these changes were not yet universal and many Americans
still lived their entire lives without visiting a doctor.

How Did Ordinary People View Disease?

Folk Medicine:

In 1918, folk healers could be found all over America. Some of these
healers believed that diseases had a physical cause, such as cold
weather, but others believed disease had a supernatural cause, such as a curse.

Treatments advocated by these healers ran the gamut. Herbal remedies
were especially popular. Other popular remedies included cupping,
which entailed attaching a heated cup to the surface of the skin,
and acupuncture. Many people also wore magical objects which they
believed protected the wearer from illness.

During the influenza pandemic of 1918 when scientific medicine
failed to provide Americans with a cure or preventative, many people
turned to folk remedies and treatments.

Scientific Medicine

In the 1880s, building on developments which had been in the
making since the 1830s, a growing number of scientists and
physicians came to believe that disease was spread by
minute pathogenic organisms or germs.

Often called the bacteriological revolution, this new theory
radically transformed the practice of medicine. But while this was a
major step forward in understanding disease, doctors and scientists
continued to have only a rudimentary understanding of the differences
between different types of microbes. Many practicing physicians
did not understand the differences between bacteria and viruses
and this sharply limited their ability to understand disease
causation and disease prevention.

Drugs and Druggists:

Although the early twentieth century witnessed growing attempts
to regulate the practice of medicine, many druggists assumed
duties we associate today with physicians. Some druggists, for
example, diagnosed and prescribed treatments which they
then sold to the patient. Some of these treatments included opiates;
few actually cured diseases.

Desperate times called for desperate remedies and during the
influenza pandemic, many patients turned to these and other drugs
in the hopes that they would provide a cure.

Nurses:

Between 1890 and 1920, nursing schools multiplied and trained
nurses began to replace practical nurses. Isolation practices,
sterility, and strict routines, practices associated with professionally
trained nurses, increasingly became standard during this period. In 1918, nurses served as the physician’s hand, assisting doctors as
they made the rounds. During the pandemic, many nurses acted
independently of doctors, treating and prescribing for patients.

Physicians:

Throughout the eighteenth and much of the nineteenth centuries,
practically anyone could call themselves a physician. By the
late nineteenth century, growing calls for reform had begun to
transform the profession.

In 1900, every state in the Union had some type of medical registration
law with about half of all states requiring physicians to possess a
medical diploma and pass an exam before they received a license
to practice. However, grandfather clauses which exempted many older
physicians meant that many physicians who practiced in 1918
had been poorly trained.

Quack doctor

Poor training and loose regulations meant that some doctors were
little more than quacks. [Credit: National Library of Medicine]

Drug ad

Drug advertisers routinely promised quick and painless cures.
[Credit: National Library of Medicine]

While access to the profession was tightening, women and minorities,
including African-Americans, entered the profession in growing
numbers during the early twentieth century.

What Did Doctors Really Know?

Growing understanding of bacteriology enabled early twentieth-
century physicians to diagnose diseases more effectively than their
predecessors but diagnosis continued to be difficult. Influenza was
especially tricky to diagnose and many physicians may have incorrectly
diagnosed their patients, especially in the early stages of the pandemic.

Bacteriology did not revolutionize the treatment of disease. In the
pre-antibiotic era of 1918, physicians continued to rely heavily
on traditional therapeutics. During the pandemic, many physicians
used traditional treatments such as sweating which had their
roots in humoral medicine.

Reflecting the uneven structure of medical education, the level and
quality of care which physicians provided varied wildly.

The Public Health Service

Founded in 1798, the Marine Hospital Service originally provided
health care for sick and disabled seamen. By the late nineteenth
century, the growth of trade, travel and immigration networks
had led the Service to expand its mission to include protecting
the health of all Americans.

In a nation where federal and state authorities had consistently
battled for supremacy, the powers of the Public Health Service
were limited. Viewed with suspicion by many state and local
authorities, PHS officers often found themselves fighting state
and local authorities as well as epidemics—even when they had
been called in by these authorities.

Chelsea Marine Hospital in 1918

A network of hospitals in the nation’s ports provided seamen with
access to healthcare. [Credit: Office of the Public Health Service Historian]

In 1918, there were fewer than 700 commissioned officers in the PHS.
Charged with the daunting task of protecting the health of some
106 million Americans, PHS officers were stationed not only in
the United States but also abroad.

Because few diseases could be cured, the prevention of disease
was central to the PHS mission. Under the leadership of Surgeon
General Rupert Blue, the PHS advocated the use of scientific
research, domestic and foreign quarantine, marine hospitals
and statistics to accomplish this mission. When an epidemic emerged,
the Public Health Service’s epidemiologists tracked the disease,
house by house. The 1918 influenza pandemic occurred too
rapidly for the PHS to develop a detailed study of the pandemic.

Typhoid map

This map was used to trace a smaller typhoid epidemic which erupted in
Washington, DC in 1906. [Credit: Office of the Public Health Service Historian]

The spread of disease within the US was a serious concern. However,
PHS officers were most concerned about the importation of disease into
the United States. To prevent this, ships could be, and often were,
quarantined by the PHS.

Fever quarantine station, 1880

Travelers and immigrants to the United States were also required
to undergo a medical exam when entering the country. In 1918 alone,
700,000 immigrants underwent a medical exam at the hands of PHS
officers. Within the United States, PHS officers worked directly with
state and local departments of health to track, prevent and arrest
epidemics as they emerged. During 1918, PHS officers found themselves
battling not only influenza but also polio, typhus, typhoid, smallpox
and a range of other diseases. In 1918, the PHS operated research
laboratories stretching from Hamilton, Montana to Washington DC.
Scientific researchers at these laboratories ultimately discovered
both the causes and cures of diseases ranging from Rocky Mountain
Spotted Fever to pellagra.

Sewers and Sanitation:

In the nineteenth century, most physicians and public health experts
believed that disease was caused not by microorganisms but rather by dirt itself.

Sanitarians, as these people were called, argued that cleaning dirt-
infested cities and building better sewage systems would both prevent
and end many epidemics. At their urging, cities and towns across the United
States built better sewage systems and provided citizens with access to
clean water. By 1918, these improved water and sewage systems had greatly
contributed to a decline in gastrointestinal infections and a significant
reduction in mortality rates among infants, children and young adults.

But because diseases are caused by microorganisms, not dirt, these
tactics were not completely effective in ending all epidemics.

Sanitation: Controlling problems at source

Box 1: Sharing toilets in Uganda

A recent survey by the Ministry of Health in Uganda suggested that there is only one toilet for every 700 Ugandan pupils, compared to one for every 328 pupils in 1995. Of the 8000 schools surveyed, only 33% have separate latrines for girls. The deterioration in sanitary conditions was attributed to increased enrolment in schools. UNICEF surveyed 90 primary schools in crisis-affected districts of north and west Uganda: only 2% had adequate latrine facilities (IRIN, 1999).

Box 2: Sanitation and diarrhoeal disease

Gwatkin and Guillot (1999) have claimed that diarrhoea accounts for 11% of all deaths in the poorest 20% of all countries. This toll could be reduced by key measures: better sanitation to reduce the cause of water-linked diarrhoea, and more widespread use of oral rehydration therapy (ORT) to treat its effects. Improving water supplies, sanitation facilities and hygiene practices reduces diarrhoea incidence by 26%. Even more impressive, deaths due to diarrhoea are reduced by 65% with these same improvements (Esrey et al., 1991). Of the 2.2 million people that die from diarrhoea each year, many of those deaths are caused by one bacterium, Shigella. Simple hand washing with soap and water reduces Shigella and other diarrhoea transmission by 35% (Kotloff et al., 1999; Khan, 1982). ORT is effective in reducing deaths due to diarrhoea but does not prevent it.

http://www.who.int/water_sanitation_health/sanitproblems/en/index1.html
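
As a rough illustration of the scale implied by the figures in Box 2 (a simple calculation using only the numbers quoted above, not an independent estimate), a minimal sketch in Python:

# Illustrative arithmetic only; the inputs are the figures quoted in Box 2.
annual_diarrhoea_deaths = 2_200_000  # ~2.2 million deaths per year
death_reduction = 0.65               # ~65% fewer deaths with improved water, sanitation and hygiene (Esrey et al., 1991)
deaths_potentially_averted = annual_diarrhoea_deaths * death_reduction
print(f"{deaths_potentially_averted:,.0f}")  # ~1,430,000 deaths per year potentially averted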

Garbage: a polluted creek

Influenza Strikes

Throughout history, influenza viruses have mutated and caused
pandemics or global epidemics. In 1890, an especially virulent influenza
pandemic struck, killing many Americans. Those who survived that
pandemic and lived to experience the 1918 pandemic tended to be
less susceptible to the disease.


Influenza ward

When it came to treating influenza patients, doctors, nurses and
druggists were at a loss. [Credit: Office of the Public Health Service Historian]

The influenza pandemic of 1918-1919 killed more people than the
Great War, known today as World War I (WWI), at somewhere
between 20 and 40 million people. It has been cited as the most
devastating epidemic in recorded world history. More people died of
influenza in a single year than in the four years of the Black Death (bubonic
plague) from 1347 to 1351. Known as “Spanish Flu” or “La Grippe,”
the influenza of 1918-1919 was a global disaster.

The Grim Reaper by Louis Raemaekers

In the fall of 1918 the Great War in Europe was winding down and
peace was on the horizon. The Americans had joined in the fight,
bringing the Allies closer to victory against the Germans. Deep within
the trenches these men lived through some of the most brutal conditions
of life, which it seemed could not be any worse. Then, in pockets
across the globe, something erupted that seemed as benign as the
common cold. The influenza of that season, however, was far more
than a cold. In the two years that this scourge ravaged the earth,
a fifth of the world’s population was infected. The flu was most deadly
for people ages 20 to 40. This pattern of morbidity was unusual for
influenza which is usually a killer of the elderly and young children.
It infected 28% of all Americans (Tice). An estimated 675,000
Americans died of influenza during the pandemic, ten times as
many as in the world war. Of the U.S. soldiers who died in Europe,
half of them fell to the influenza virus and not to the enemy (Deseret
News). An estimated 43,000 servicemen mobilized for WWI died
of influenza (Crosby). 1918 would go down as an unforgettable year
of suffering and death and yet of peace. As noted in the final edition
of the Journal of the American Medical Association for 1918: “The 1918
has gone: a year momentous as the termination of the most cruel war
in the annals of the human race; a year which marked, the end at
least for a time, of man’s destruction of man; unfortunately a year in
which developed a most fatal infectious disease causing the death
of hundreds of thousands of human beings. Medical science for
four and one-half years devoted itself to putting men on the firing
line and keeping them there. Now it must turn with its whole might to
combating the greatest enemy of all–infectious disease,” (12/28/1918).

From Kansas to Europe and Back Again:

scourge ravaged the earth

Where did the 1918 influenza come from? And why was it so lethal?

In 1918, the Public Health Service had just begun to require state
and local health departments to provide it with reports about
diseases in their communities. The problem? Influenza wasn’t
a reportable disease.

But in early March of 1918, officials in Haskell County in Kansas
sent a worrisome report to the Public Health Service. Although
these officials knew that influenza was not a reportable disease,
they wanted the federal government to know that “18 cases
of influenza of a severe type” had been reported there.

By May, reports of severe influenza trickled in from Europe. Young
soldiers, men in the prime of life, were becoming ill in large
numbers. Most of these men recovered quickly but some developed
a secondary pneumonia of “a most virulent and deadly type.”

Within two months, influenza had spread from the military to the
civilian population in Europe. From there, the disease spread outward—to Asia, Africa, South America and, back again, to North America.

Wave After Wave:

In late August, the influenza virus probably mutated again and
epidemics now erupted in three port cities: Freetown, Sierra
Leone; Brest, France; and Boston, Massachusetts. In Boston,
dockworkers at Commonwealth Pier reported sick in massive
numbers during the last week in August. Suffering from fevers
as high as 105 degrees, these workers had severe muscle and
joint pains. For most of these men, recovery quickly followed. But
5 to 10% of these patients developed severe and massive
pneumonia. Death often followed.

Public health experts had little time to register their shock at the
severity of this outbreak. Within days, the disease had spread
outward to the city of Boston itself. By mid-September, the epidemic
had spread even further with states as far away as California, North
Dakota, Florida and Texas reporting severe epidemics.

The Unfolding of the Pandemic:

The pandemic of 1918-1919 occurred in three waves. The first
wave had occurred when mild influenza erupted in the late
spring and summer of 1918. The second wave occurred with an
outbreak of severe influenza in the fall of 1918 and the final wave
occurred in the spring of 1919.

In its wake, the pandemic would leave about twenty million dead
across the world. In America alone, about 675,000 people in
a population of 105 million would die from the disease.

Mobilizing to Fight Influenza:

Although taken unaware by the pandemic, federal, state and local
authorities quickly mobilized to fight the disease.

On September 27th, influenza became a reportable disease. However,
influenza had become so widespread by that time that most states
were unable to keep accurate records. Many simply failed to
report to the Public Health Service during the pandemic, leaving
epidemiologists to guess at the impact the disease may have
had in different areas.

World War I had left many communities with a shortage of trained
medical personnel. As influenza spread, local officials urgently
requested the Public Health Service to send nurses and doctors.
With fewer than 700 officers on duty, the Public Health Service was
unable to meet most of these requests. On the rare occasions when
the PHS was able to send physicians and nurses, they often became
ill en route. Those who did reach their destination safely often found
themselves both unprepared and unable to provide real assistance.

In October, Congress appropriated a million dollars for the Public
Health Service. The money enabled the PHS to recruit and pay
for additional doctors and nurses. The existing shortage of doctors
and nurses, caused by the war, made it difficult for the PHS to locate and hire qualified practitioners. The virulence of the disease also meant that many nurses and doctors contracted influenza
within days of being hired.

Confronted with a shortage of hospital beds, many local officials
ordered that community centers and local schools be transformed
into emergency hospitals. In some areas, the lack of doctors meant
that nursing and medical students were drafted to staff these
makeshift hospitals.

The Pandemic Hits:

Entire families became ill. In Philadelphia, a city especially hard hit,
so many children were orphaned that the Bureau of Child Hygiene
found itself overwhelmed and unable to care for them.

As the disease spread, schools and businesses emptied. Telegraph
and telephone services collapsed as operators took to their
beds. Garbage went uncollected as garbage men reported sick.
The mail piled up as postal carriers failed to come to work.

State and local departments of health also suffered from high
absentee rates. No one was left to record the pandemic’s spread
and the Public Health Service’s requests for information went
unanswered.

As the bodies accumulated, funeral parlors ran out of caskets
and bodies went uncollected in morgues.

Protecting Yourself From Influenza:

In the absence of a sure cure, fighting influenza seemed an
impossible task.

In many communities, quarantines were imposed to prevent
the spread of the disease. Schools, theaters, saloons, pool
halls and even churches were all closed. As the bodies
mounted, even funerals were held outdoors to protect mourners
against the spread of the disease.

An Emergency Hospital for Influenza Patients

The effect of the influenza epidemic was so severe that the
average life span in the US was depressed by 10 years.
The influenza virus had a profound virulence, with a mortality
rate of 2.5%, compared with previous influenza epidemics, in which
it was less than 0.1%. The death rate from influenza and
pneumonia for 15- to 34-year-olds was 20 times higher in 1918 than in
previous years (Taubenberger). People were struck
with illness on the street and died rapid deaths.
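
As a rough back-of-the-envelope check (an illustration added here, not a figure taken from the sources cited), the attack rate and case-fatality figures quoted in this article are consistent in order of magnitude with the US death toll cited above. A minimal sketch in Python, assuming the approximate values quoted in the text:

# Illustrative arithmetic only; the inputs are the approximate figures quoted in this article.
us_population = 105_000_000  # approximate US population in 1918
attack_rate = 0.28           # ~28% of Americans infected (Tice)
case_fatality = 0.025        # ~2.5% mortality among those infected
estimated_deaths = us_population * attack_rate * case_fatality
print(f"{estimated_deaths:,.0f}")  # ~735,000, the same order of magnitude as the ~675,000 US deaths usually cited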

One anecdote from 1918 tells of four women who played bridge
together late into the night. Overnight, three of the women died
from influenza (Hoagg). Others told stories of people on their way
to work suddenly developing the flu and dying within hours
(Henig). One physician writes that patients with seemingly
ordinary influenza would rapidly “develop the most viscous
type of pneumonia that has ever been seen” and later when
cyanosis appeared in the patients, “it is simply a struggle for air
until they suffocate,” (Grist, 1979). Another physician recalls
that the influenza patients “died struggling to clear their airways
of a blood-tinged froth that sometimes gushed from their nose
and mouth,” (Starr, 1976). The physicians of the time were
helpless against this powerful agent of influenza. In 1918 children
would skip rope to the rhyme (Crawford):

I had a little bird,

Its name was Enza.

I opened the window,

And in-flu-enza.

Schools inspected

The influenza pandemic circled the globe. Most of humanity felt the
effects of this strain of the influenza virus. It spread following
the path of its human carriers, along trade routes and shipping lines.
Outbreaks swept through North America, Europe, Asia, Africa, Brazil
and the South Pacific (Taubenberger). In India the mortality rate was
extremely high at around 50 deaths from influenza per 1,000
people (Brown). The Great War, with its mass movements of men
in armies and aboard ships, probably aided in its rapid diffusion
and attack. The origins of the deadly flu disease were unknown but
widely speculated upon. Some of the allies thought of the epidemic as a
biological warfare tool of the Germans. Many thought it was a result of
the trench warfare, the use of mustard gases and the generated “smoke
and fumes” of the war. A national campaign began using the ready
rhetoric of war to fight the new enemy of microscopic proportions. A
study attempted to reason why the disease had been so devastating
in certain localized regions, looking at the climate, the weather and
the racial composition of cities. It found humidity to be linked with
more severe epidemics as it “fosters the dissemination of the bacteria,”
(Committee on Atmosphere and Man, 1923). Meanwhile, the new
sciences of infectious agents and immunology were
racing to come up with a vaccine or therapy to stop the epidemics.

The experiences of people in military camps encountering the
influenza pandemic are preserved in first-hand accounts: an excerpt
from the memoirs of a survivor of the pandemic at Camp Funston, a
letter to a fellow physician describing conditions during the influenza
epidemic at Camp Devens, and a collection of letters of a soldier
stationed at Camp Funston.

The origin of this influenza variant is not precisely known. It is thought
to have originated in China in a rare genetic shift of the influenza virus.
The recombination of its surface proteins created a virus novel to
almost everyone and a loss of herd immunity. Recently the virus
has been reconstructed from the tissue of a dead soldier and is
now being genetically characterized.

The name “Spanish Flu” came from the early affliction and large
mortality in Spain (BMJ, 10/19/1918), where it allegedly killed 8
million in May (BMJ, 7/13/1918). However, a first wave of influenza
appeared early in the spring of 1918 in Kansas and in military
camps throughout the US. Few noticed the epidemic in the midst of
the war; Wilson had just given his Fourteen Points address. There was
virtually no response to or acknowledgment of the epidemics in March
and April in the military camps. It was unfortunate that no steps were
taken to prepare for the usual recrudescence of the virulent influenza
strain in the winter. The lack of action was later criticized when the
epidemic could not be ignored in the winter of 1918 (BMJ, 1918).
These first epidemics at training camps were a sign of what was
coming in greater magnitude in the fall and winter of 1918 to the
entire world.

The war brought the virus back into the US for the second wave
of the epidemic. It first arrived in Boston in September of 1918
through the port busy with war shipments of machinery and supplies.
The war also enabled the virus to spread and diffuse. Men across
the nation were mobilizing to join the military and the cause. As they
came together, they brought the virus with them and to those they
contacted. The virus killed almost 200,000 in October of 1918
alone. On November 11, 1918, the end of the war enabled a resurgence.
As people celebrated Armistice Day with parades and large parties, a
complete disaster from the public health standpoint, a rebirth of
the epidemic occurred in some cities. The flu that winter was beyond
imagination as millions were infected and thousands died. Just as
the war had affected the course of influenza, influenza affected
the war. Entire fleets were ill with the disease and men on the front
were too sick to fight. The flu was devastating to both sides, killing
more men than their own weapons could.

With the military patients coming home from the war with battle wounds
and mustard gas burns, hospital facilities and staff were taxed
to the limit. This created a shortage of physicians, especially in the
civilian sector as many had been lost for service with the military.
Since the medical practitioners were away with the troops, only
the medical students were left to care for the sick. Third and fourth
year classes were closed and the students assigned jobs as
interns or nurses (Starr,1976). One article noted that “depletion has
been carried to such an extent that the practitioners are brought
very near the breaking point,” (BMJ, 11/2/1918). The shortage was
further confounded by the added loss of physicians to the epidemic.
In the U.S., the Red Cross had to recruit more volunteers to contribute
to the new cause at home of fighting the influenza epidemic. To respond
with the fullest utilization of nurses, volunteers and medical supplies, the
Red Cross created a National Committee on Influenza. It was involved
in both military and civilian sectors to mobilize all forces to fight Spanish
influenza (Crosby, 1989). In some areas of the US, the nursing shortage
was so acute that the Red Cross had to ask local businesses to
allow workers to have the day off if they volunteered in the hospitals
at night (Deseret News). Emergency hospitals were created to
take in the patients from the US and those arriving sick from overseas.

Chelsea Marine Hospital in 1918

Red Cross public health nurse

The pandemic affected everyone. With one-quarter of the US and
one-fifth of the world infected with influenza, it was impossible
to escape from the illness. Even President Woodrow Wilson suffered
from the flu in early 1919 while negotiating the crucial Treaty of
Versailles to end the World War (Tice). Those who were
lucky enough to avoid infection had to deal with the public health
ordinances to restrain the spread of the disease.

The public health departments distributed gauze masks to be worn
in public. Stores could not hold sales, and funerals were limited
to 15 minutes. Some towns required a signed certificate to
enter and railroads would not accept passengers without
them. Those who ignored the flu ordinances had to pay steep
fines enforced by extra officers (Deseret News). Bodies piled up
as the massive deaths of the epidemic ensued. Besides the
lack of health care workers and medical supplies, there was a shortage
of coffins, morticians and gravediggers (Knox). The conditions in 1918
were not so far removed from the Black Death in the era of the
bubonic plague of the Middle Ages.

Iowa flu

In 1918-19 this deadly influenza pandemic erupted during the final
stages of World War I. Nations were already attempting to deal with
the  effects and costs of the war. Propaganda campaigns and war
restrictions and rations had been implemented by governments.
Nationalism pervaded as people accepted government authority.
This allowed the public health departments to easily step in and
implement their restrictive measures. The war also gave science
greater importance as governments relied on scientists, now armed
with the new germ theory and the development of antiseptic surgery,
to design vaccines and reduce mortalities of disease and battle
wounds. Their new technologies could preserve the men on
the front and ultimately save the world. These conditions
created by World War I, together with the current social attitudes
and ideas, led to the relatively calm response of the public and
application of scientific ideas. People allowed for strict measures
and loss of freedom during the war as they submitted to the
needs of the nation ahead of their personal needs. They had
accepted the limitations that came with rationing and the draft.
The responses of the public health officials reflected the new
allegiance to science and the wartime society. The medical
and scientific communities had developed new theories and
applied them to prevention, diagnostics and treatment of the
influenza patients.

The Medical and Scientific Conceptions of Influenza

Scientific ideas about influenza, the disease and its origins,
shaped the public health and medical responses. In 1918
infectious diseases were beginning to be unraveled. Pasteur
and Koch had solidified the germ theory of disease through
clear experiments and clever science. The bacilli responsible
for many infections, such as tuberculosis and anthrax, had
been visualized, isolated and identified. Koch’s postulates
had been developed to clearly link a disease to a specific
microbial agent.

Robert Koch

The petri dish was widely used to grow pure cultures of bacteria
and investigate bacterial flora. Vaccines had been created for
bacterial infections and even the unseen rabies virus by
serial passage techniques. The immune system was explained by
Paul Ehrlich and his side-chain theory. Tests of antibodies such as
the Wassermann reaction and coagulation experiments were becoming commonplace.
Science and medicine were on their way to their complete entanglement
and fusion as scientific principles and methodologies made their way
into clinical practice, diagnostics and therapy.

The Clinical Descriptions of Influenza

Patients with the influenza disease of the epidemic were generally
characterized by common complaints associated with the flu. They had
body aches, muscle and joint pain, headache, a sore throat and an
unproductive cough with occasional harsh breathing (JAMA, 1/25/1919).

The most common sign of infection was the fever, which ranged from
100 to 104 F and lasted for a few days. The onset of the epidemic influenza
was peculiarly sudden, as people were struck down with dizziness, weakness
and pain while on duty or in the street (BMJ, 7/13/1918). After  the
disease was established the mucous membranes became reddened
with sneezing. In some cases there was a hemorrhage of the
mucous membranes of the nose and bloody noses were commonly
seen. Vomiting occurred on occasion, and also sometimes diarrhea
but more commonly there was constipation (JAMA, 10/3/1918).

The danger of an influenza infection was its tendency to progress into
the often fatal secondary bacterial infection of pneumonia. In the
patients who did not rapidly recover after three or four days of fever, there
was an “irregular pyrexia” due to bronchitis or bronchopneumonia (BMJ,
7/13/1918). The pneumonia would often appear after a period of
normal temperature, with a sharp spike and expectoration of bright
red blood. The lobes of the lung became speckled with “pneumonic
consolidations.” The fatal cases developed toxemia and vasomotor
depression (JAMA, 10/3/1918). It was this tendency for secondary
complications that made this influenza infection so deadly.

pneumonia

A military hospital ward in 1918

In the medical literature characterizing the influenza disease, new
diagnostic techniques are frequently used to describe the clinical
appearance. The most basic clinical guideline was the temperature,
a record of which was kept in a table over time. Also closely
monitored was the pulse rate. One clinical account said that
“the pulse was remarkably slow,” (JAMA, 4/12/1919) while others
noted that the pulse rate did not increase as expected. With the
pulse, the respiration rate was measured and reported to provide
clues of the clinical progression.
Patients were also occasionally “roentgenographed” or chest x-rayed,
(JAMA, 1/25/1919). The discussion of clinical influenza also often
included analysis of the blood. The number of white blood cells was
counted for many patients. Leukopenia was commonly associated
with influenza. The albumin was also measured, since it was noted that
transient albuminuria was frequent in influenza patients. This was
done by urine analysis. The Wassermann reaction was another
added new test of the blood for antibodies (JAMA, 10/3/1918).
These new measurements enabled physicians to have an
image of action and knowledge using scientific instruments. They
could record precisely the progress of the influenza infection and perhaps
were able to forecast its outcome.

The most novel of these tests were the blood and sputum cultures.
Building on the germ theory of disease, the physicians and their
associated research scientists attempted to find the culprit for this
deadly infection. Physicians would commonly order both blood and sputum
cultures of their influenza and pneumonia patients mostly for research
and investigative purposes. At the military training camp
Camp Lewis, during an influenza epidemic, “in all cases of pneumonia,
a sputum study, white blood and differential count, blood culture
and urine examinations were made as routine,” (JAMA, 1/25/1919).

The bacterial flora of the nasopharynx of some patients was also cultured,
since droplet infection was how the disease disseminated. The
collected swabs and specimens were inoculated onto blood agar in
petri dishes. The bacterial colonies that grew were closely studied to
find the causal organism. Commonly found were pneumococcus,
streptococcus, staphylococcus and Bacillus influenzae (JAMA, 4/12/1919).

pneumonia

These new laboratory tests used in the clinical setting brought in a solid
scientific, biological link to the practice of medicine. Medicine had
become fully scientific and technologic in its understanding and
characterization of the influenza epidemic.

Treatment and Therapy

The therapeutic remedies for influenza patients varied from the
newly developed drugs to oils and herbs. The therapy was much less
scientific than the diagnostics, as the drugs had no clear explanatory
theory of action. The treatment was largely symptomatic, aiming to
reduce fever or pain. Aspirin, or acetylsalicylic acid, was a common remedy.
For secondary pneumonia, doses of epinephrine were given. To
combat the cyanosis, physicians gave oxygen by mask, or some
injected it under the skin (JAMA, 10/3/1918). Others used salicin, which
reduced pain, discomfort and fever and was claimed to reduce the infectivity
of the patient. Another popular remedy was cinnamon, in powder or oil form
with milk, to reduce temperature (BMJ, 10/19/1918). Finally, salt of quinine
was suggested as a treatment. Most physicians agreed that the patient should
be kept in bed (BMJ, 7/13/1918). With that came the advice of plenty of
fluids and nourishment. The application of cold to the head, with
warm packs or warm drinks was also advised. Warm baths were used
as a hydrotherapeutic method in hospitals but were discarded for
lack of success (JAMA, 10/3/1918). These treatments, like the
suggested prophylactic measures of the public health officials, seemed to
originate in the common social practices and not in the growing field of
scientific medicine. It seems that as science was entering the medical
field, it served only for explanatory, diagnostic and preventative
measures such as vaccines and technical tests. This science had
little use once a person was ill.

However, a few proposed treatments did incorporate scientific ideas
of germ theory and the immune system. O’Malley and Hartman
suggested treating influenza patients with the serum of convalescent
patients. They utilized the theorized antibodies to boost the immune
system of sick patients. Other treatments were “digitalis,” the
administration of isotonic glucose and sodium bicarbonate intravenously
which was done in military camps (JAMA, 1/4/1919). Ross and
Hund too utilized ideas about the immune system and properties of the
blood to neutralize toxins and circulate white blood cells. They believed
that the best treatment for influenza should aim to: “…neutralize or render
the intoxicant inert…and prevent the blood destruction with its destructive
leukopenia and lessened coagulability,” (JAMA, 3/1/1919). They tried
to create a therapeutic immune serum to fight infection. These therapies
built on current scientific ideas and represented the highest
biomedical, technological treatment like the antitoxin to diphtheria.

influenza

In July, an American soldier said that while influenza caused a heavy
fever, it “usually only confines the patient to bed for a few days.” The
mutation of the virus changed all that. [Credit: National Library of Medicine]

Recovering from influenza

An old cliché maintained that influenza was a wonderful disease as
it killed no one but provided doctors with lots of patients. The 1918
pandemic turned this saying on its head. [Credit: The Etiology of
Influenza in 1918]

During the 1890 influenza epidemic, Pfeiffer found what he
determined to be the microbial agent that caused influenza.
In the sputum and respiratory tract of influenza patients in 1892,
he isolated the bacterium Bacillus influenzae, which was
accepted as the true “virus,” though it was not found in localized
outbreaks (BMJ, 11/2/1918). However, in studies of the 1907-8
epidemic in the US, Lord had found the bacillus in only 3 of 20 cases.
He also found the bacillus in 30% of cultures of sputum from TB patients.
Rosenthal further refuted the finding when he found the bacillus in 1 of 6
healthy people in 1900 (JAMA, 1/18/1919). The bacillus was also
found to be present in all cases of whooping cough and many cases
of measles, chronic bronchitis and scarlet fever (JAMA, 10/5/1918).
The influenza pandemic provided scientists the opportunity to confirm
or refute this contested microbe as the cause of influenza. The sputum
studies from the Camp Lewis epidemic found only a few influenza cases
harboring the influenza bacilli, and mostly type IV pneumococcus. They
concluded that “the recent epidemic at Camp Lewis was an acute
respiratory infection and not an epidemic due to Bacillus influenzae”
(JAMA, 1/25/1919). This finding, along with others, suggested to most
scientists that Pfeiffer’s bacillus was not the cause of influenza.

In the 1918-19 influenza pandemic, there was a great drive to find the
etiological agent responsible for the deadly scourge. Scientists in their
labs were working hard, using the cultures obtained from physician clinics,
to isolate the etiological agent for influenza. As a report early in the
epidemic said, “the ‘influence’ of influenza is still veiled in mystery, ”
(JAMA, 10/5/1918). The nominated bacillus influenzae bacteria
seemed to be incorrect and scientists scrambled to isolate the true cause.
In the journals, many authors speculated on the type of agent- was
it a new microbe, was it a bacteria, was it a virus? One journal offered
that “the severity of the present pandemic, the suddenness of onset…
led to the suggestion that the disease cannot be influenza but some other
and more lethal infection,” (BMJ, 11/2/1918). However, most accepted that
the epidemic disease was influenza based on the familiar symptoms
and known pattern of disease. The respiratory disease of influenza was
understood to give warning in the late spring of its potential effects
upon its recrudescence once the weather turned cold in the winter
(BMJ, 10/19/1918). One article with foresight stated that “there can
be no question that the virus of influenza is a living organism…

flu virus EM

it is possibly beyond the range of microscopic vision,” (BMJ, 11/16/1918). Another
article confirmed the idea of an “undiscovered virus” and noted that pneumococci
and streptococci were responsible for “the gravity of the secondary pulmonary
complications,” (BMJ, 11/2/1918). The article went on to offer the idea of a
symbiosis of virus and secondary bacterial infection combining to make it
such a severe disease.

The investigators, as they attempted to find the agent responsible for the influenza
pandemic, were developing ideas of infectious microbes and the concept of the
virus. The idea of the virus as an infectious agent had been around for years.
The articles of the period refer to the “virus” in their discussion but do not
consistently use it to mean an infectious microbe distinct from bacteria; the
term virus often has the same usage and application as bacillus. In 1918, a virus
was defined scientifically to be a submicroscopic infectious entity which could
be filtered but not grown in vitro. In the 1880s Pasteur had developed an attenuated
vaccine for the rabies virus by serial passage, well ahead of his time. Ivanovsky’s
work on the tobacco mosaic virus in the 1890s led to the discovery of the virus.
He found an infectious agent that acted as a micro-organism as it multiplied
yet which passed through the sterilizing filter as a nonmicrobe. By the 1910s
several viruses, defined as filterable infectious microbes, had been identified
as causing infectious disease (Hughes). However, the scientists were still
conceptually behind in defining a virus; they distinguished it only by size
from bacteria and not as an obligate parasite with a distinct life cycle
dependent on infecting a host cell.

The influenza epidemic afforded the opportunity to research the etiological
agent and develop the idea of the virus. Experiments by Nicolle and Le Bailly in
Paris were the earliest suggestions that influenza was caused by a “filter-passing
virus,” (BMJ, 11/2/1918). They filtered out the bacteria from bronchial expectoration
of an influenza patient and injected the filtrate into the eyes and nose of two monkeys.
The monkeys developed a fever and a marked depression. The filtrate was later
administered subcutaneously to a volunteer, who developed typical signs of influenza.
They reasoned that the inoculated person developed influenza from the filtrate, since
no one else in their quarters developed influenza (JAMA, 12/28/1918). These scientists
followed Koch’s postulates as they isolated the causal agent from patients with the
illness and used it to reproduce the same illness in animals. Through these studies,
the scientists proved that influenza was due to a submicroscopic infectious agent
and not a bacterium, refuting the claims of Pfeiffer and advancing virology. They were
on their way to discerning the virus and characterizing the orthomyxoviruses that
lead to the disease of influenza.

These scientific experiments, which unraveled the cause of influenza, had immediate
preventive applications. They would assist in the effort to create an effective
vaccine to prevent influenza. This was the ultimate goal of most studies, since
vaccines were thought to be the best preventive solution in the early 20th century.
Several experiments attempted to produce vaccines, each with a different
understanding of the etiology of fatal influenza infection. A Dr. Rosenow invented
a vaccine to target the multiple bacterial agents involved from the serum of patients.
He aimed to raise the immunity against the bacteria, the “common causes of death,”
and not against the cause of the initial symptoms, by inoculating with the proportions found
in the lungs and sputum (JAMA, 1/4/1919). The vaccines made for the British forces
took a similar approach and were “mixed vaccines” of pneumococcus and
lethal streptococcus. The vaccine development therefore focused on the culture
results of what could be isolated from the sickest patients and lagged behind the
scientific progress.

Fading of the Pandemic:

In November, two months after the pandemic had erupted, the Public Health Service
began reporting that influenza cases were declining.

Communities slowly lifted their quarantines. Masks were discarded. Schools were
re-opened and citizens flocked to celebrate the end of World War I.

But the disease continued to be a threat throughout the spring of 1919.

By the time the pandemic had ended, in the summer of 1919, nearly 675,000
Americans were dead from influenza. Hundreds of thousands more were orphaned
and widowed.

The Legacy of the Pandemic

No one knows exactly how many people died during the 1918-1919 influenza
pandemic. During the 1920s, researchers estimated that 21.5 million people died
as a result of the 1918-1919 pandemic. More recent work has placed
global mortality from the 1918-1919 pandemic at anywhere between 30 and 50
million. An estimated 675,000 Americans were among the dead.

The Influenza Pandemic occurred in three waves in the United States throughout
1918 and 1919.

More Americans died from influenza than died in World War I. [Credit: National Library of Medicine]

All of these deaths caused a severe disruption in the economy. Claims against life
insurance policies skyrocketed, with one insurance company reporting a 745 percent
rise in the number of claims made. Small businesses, many of which had been unable to operate during the pandemic, went bankrupt.

Joseph Goldberger

Joseph Goldberger, one of the leading researchers in the PHS, studied influenza
during the pandemic. But Goldberger had multiple interests and influenza research
became less important to him in the years following 1918. [Credit: Office of the Public
Health Service Historian]

In the summer and fall of 1919, Americans called for the government to research
both the causes and impact of the pandemic. In response, both the federal government
and private companies, such as Metropolitan Life Insurance, dedicated money
specifically for flu research.

In an attempt to determine the effect influenza had on different communities, the Public
Health Service conducted several small epidemiological studies. These studies,
however, were conducted after the pandemic and most PHS officers
admitted that the data which was collected was probably inaccurate.

PHS scientists continued to search for the causative agent of influenza in their
laboratories as did their fellow scientists in and outside the United States.

But while there was a burst of enthusiasm for funding flu research in
1918-1919, the funds allocated for this research were actually fairly meager.
As time passed, Americans became less interested in the pandemic and its
causes. And even when funding for medical research dramatically increased
after World War II, funding for research on the 1918-1919 pandemic remained
limited.

Forgetting the 1918-1919 Pandemic:

In the years following 1919, Americans seemed eager to forget the pandemic.
Given the devastating impact of the pandemic, the reasons for this forgetfulness
are puzzling.

It is possible, however, that the pandemic’s close association with World War I
may have caused this amnesia. While more people died from the pandemic than
from World War I, the war had lasted longer than the pandemic and caused
greater and more immediate changes in American society.

Influenza also hit communities quickly. Often it disappeared within a few weeks of
its arrival. As one historian put it, “the disease moved too fast, arrived, flourished
and was gone before…many people had time to fully realize just how great
was the danger.” Small wonder, then, that many Americans forgot about the
pandemic in the years which followed.

Scientific Milestones in Understanding and Preventing Influenza:

In the early stages of the pandemic, many scientists believed that the agent
responsible for influenza was Pfeiffer’s bacillus. Autopsies and research conducted
during the pandemic ultimately led many scientists to discard this theory.

In late October of 1918, some researchers began to argue that influenza was
caused by a virus. Although scientists had understood that viruses could cause
diseases for more than two decades, virology was still very much in its infancy at
this time.

It was not until 1933 that the influenza A virus, which causes almost every type
of endemic and pandemic influenza, was isolated. Seven years later, in 1940,
the influenza B virus was isolated. The influenza C virus was finally isolated in 1950.

Influenza vaccine was first introduced as a licensed product in the United States in
1944. Because of the rapid rate of mutation of the influenza virus, the
effectiveness of a given vaccine usually lasts for only a year or two.

By the 1950s, vaccine makers were able to prepare and routinely release vaccines
which could be used in the prevention or control of future pandemics. During the
1960s, increased understanding of the virus enabled scientists to develop both
more potent and purer vaccines.

Mass production of influenza vaccines continued, however, to require several
months' lead time.

Twentieth-Century Influenza Pandemics or Global Epidemics:

The pandemic which occurred in 1918-1919 was not the only influenza pandemic
of the twentieth century. Influenza returned in a pandemic form in 1957-1958
and, again, in 1968-1969.

These two later pandemics were much less severe than the 1918-1919 pandemic.
Estimated deaths within the United States for these two later pandemics
were 70,000 excess deaths (1957-1958) and 33,000 excess deaths (1968-1969).

Tuberculosis

Mycobacterium tuberculosis was first identified in 1882 by Robert Koch and is one of almost 200 mycobacterial species that have been detected by molecular techniques. The genus Mycobacterium (the sole genus of the family Mycobacteriaceae, within the phylum Actinobacteria) includes pathogens known to cause serious diseases in mammals, including tuberculosis (the M. tuberculosis complex, MTBC) and leprosy (M. leprae). Mycobacteria are classified as neither Gram-positive nor Gram-negative bacteria. The MTBC consists of M. tuberculosis, M. bovis, M. bovis BCG (bacillus Calmette-Guérin), M. africanum, M. caprae, M. microti, M. canettii and M. pinnipedii, all of which share a high degree of genetic homology, with very little variation between sequences (∼0.01 to 0.03%), although differences in phenotypes are present. Cells in the genus have a typical rod or slightly curved shape, with dimensions of 0.2 to 0.6 μm by 1 to 10 μm.

Mycobacterium tuberculosis has a waxy mycolic acid lipid complex coating on its cell surface. The cells are impervious to Gram staining, so a common staining procedure used is Ziehl-Neelsen (ZN) staining. The outer compartment of the cell wall contains lipid-linked polysaccharides, is water-soluble, and interacts with the immune system. The inner wall is impermeable. Mycobacteria have some unique qualities that are divergent from members of the Gram-positive group, such as the presence of mycolic acids in the cell wall.

MTBC and M. leprae replicate in the tissues of warm-blooded human hosts. This airborne pathogen is transmitted from a patient with active pulmonary tuberculosis by coughing. Droplet nuclei, approximately 1 to 5 μm in size, “meander” in the air and are transmitted to susceptible individuals by inhalation. Mycobacteria are incapable of replicating in or on inanimate objects. The risk of infection depends on the inhaled bacillary load, the infectiousness of the source case, the closeness and duration of contact, and the immune competence of the potential host. Because of their small size, inhaled droplets bypass the defense systems of the bronchi and reach the terminal alveoli. Invading bacteria are then engulfed by alveolar macrophages and dendritic cells.

The cell-mediated immune response limits the multiplication of M. tuberculosis and halts infection. Infected individuals with strong immune systems are generally able to contain the infection within 2 to 8 weeks post-infection, when the active cell-mediated immune response stops further multiplication of M. tuberculosis. Tuberculosis infection shows several significant clinical manifestations in pulmonary and extra-pulmonary sites. Prolonged coughing, severe weight loss, night sweats, low-grade fever, dyspnoea and chest pain are the clinical symptoms of pulmonary infection.

Fort Bayard, N.M., T.B. service assignment

Fort Bayard, NM Post Hospital circa 1890

U.S. Army, General Hospital, Fort Bayard, New Mexico, General View

Tuberculosis, Private (Pvt.) Richard Johnson said, was “regarded as a much dreaded disease that was easily contracted by association.” In fact, so many hospital corpsmen requested transfers out that the Surgeon General established a policy that no such requests would be considered until after two years of service. Consequently, Johnson noted, “During my time there we had a high percentage of desertions.” For example, all four of the men who arrived with Johnson deserted within a year—“two of them,” he dryly observed, “owing me money.”

Four years later another young man arrived at Fort Bayard. He, too, remarked on the long journey by rail through the “desert waste of New Mexico,” and then the wagon ride over “dry desolate foothills,” to the post. But his reaction was different from Johnson’s. Capt. Earl Bruns moved from being a patient to a physician at the hospital. For Bruns Fort Bayard was “a veritable oasis in the desert, studded with shade trees, green lawns, shrubbery, and flowers.” He credited the hospital commander, Colonel (Col.) George E. Bushnell, writing that, “[i]n this one spot one man had made the desert bloom like a rose.”

Johnson’s and Bruns’ different views from 1904 and 1908, respectively, may reflect the fact that Johnson was healthy and assigned grudgingly to work at the tuberculosis hospital, whereas Bruns had few other options and came in hopes of regaining his health—or it may reflect the improvements Bushnell made during his first years in command. But every week for the more than twenty years that Fort Bayard was an Army tuberculosis hospital, workers and patients arrived with dread and foreboding, or joy and relief—or a mix of them all.

The approach Fort Bayard and George Bushnell took to tuberculosis was similar to how physicians manage the disease today in that it involved isolating the patient, treating the disease, and educating the patient and his family on how to maintain their health. The hospital offered patients sanctuary from the demands, fears, and prejudices regarding tuberculosis in the outside world. Fort Bayard treated tuberculosis patients with prolonged bed rest, fresh air, and a healthy diet, but undertaking this “rest treatment”—confining oneself to bed for months—proved difficult if not impossible for many patients. The hospital also helped patients adapt to new lifestyles as people with tuberculosis. Finally, Fort Bayard managed patients’ transition back to the outside world.

One of the most striking aspects of Fort Bayard was that many of the medical staff had tuberculosis themselves, including George Bushnell. Tuberculosis weakened Bushnell’s lungs and shaped his life in numerous ways. He tired easily, had to carefully monitor his health, and as Earl Bruns observed, “was never a well man.” Bushnell had active tuberculosis five times in his life: the fourth time in 1919 with a breakdown from the strain of wartime work; and the fifth and final illness in 1924 that led to his death at age 70. In 1911 he advised his superiors that “I did not consider myself strong enough to carry on the work of commanding this Hospital and keeping myself in condition for active duty.” The War Department generally required officers in poor physical condition to retire, but the Surgeon General secured a waiver for Bushnell, because “the interests of the service would suffer by his retirement.” After a leave of absence in 1909–10, Bushnell’s annual reports on the competency of his officers included his own name on the list of those competent for hospital duty, but “unfit for active field service.”

“What would our sanatorium movement and our anti-tuberculosis crusade amount to,” wrote tuberculosis expert Adolphus Knopf, “were it not for the labors of tuberculous physicians, or one-time tuberculous physicians, who, because of their infirmity, had become interested in tuberculosis?” Well-known leaders in the antituberculosis movement such as Edward Trudeau and Lawrence Flick established their sanatoriums after they recovered from tuberculosis in order to offer others the treatment. Twenty-one of the first thirty recipients of the Trudeau Medal, established in 1926 for outstanding work in tuberculosis, had the disease. James Waring, a tuberculosis physician who arrived at a Colorado Springs sanatorium on a stretcher in 1908, later wrote, “It has been my good fortune to serve three separate and extended ‘hitches’ as a ‘bed patient,’ the time so spent numbering in all about nine years.” He, like many physicians, saw his personal experience as an asset in his practice. The three key figures in the Army tuberculosis program during World War I were Bushnell, Bruns, and Gerald Webb of Colorado Springs who started a tuberculosis sanatorium after his wife died of the disease.

Bushnell turned tuberculosis into an asset for the Army Medical Department, making Fort Bayard a center of national expertise on the disease. His personal experience with chronic pulmonary tuberculosis gave him good rapport and credibility with many of his patients. Medical officer Earl Bruns wrote that “[H]e went among the patients and talked to them individually” and thereby provided “a living example of a cure due to rational treatment.” Bruns described how Bushnell spent his days attending to patients, carrying out administrative duties, and devoting hours to supervising the work in the gardens and grounds of Fort Bayard.

(Who’s Who in America, 1924-25; E. H. Bruns in American Review of Tuberculosis, June 1925; G. B. Webb in Outdoor Life, Sept. 1924; Lancet, Lond., 1924; Jour. Am. Med. Ass’n., 1924, p. 374.)

General George M. Sternberg

In addition to being an Army surgeon, Sternberg was also a noted bacteriologist who, in 1880, had translated Antoine Magnin’s The Bacteria, which presented the latest research in germ theory. Sternberg’s work helped prepare Americans to understand Robert Koch’s 1882 pronouncement of the existence of the tubercle bacillus (Ott 1996:55). Over the next two decades Koch’s analysis gained converts, leading to the universally accepted belief that tuberculosis was a bacterial infection that could be diagnosed and then monitored by microscopic inspection of a patient’s sputum.

Sternberg was no doubt aware of the efforts of Edward Livingston Trudeau. Beginning in the 1870s, when he undertook his own recovery from consumption by withdrawing to the Adirondack Mountains, Trudeau had become an advocate of extended bed rest in remote, healthful environments. Quickly accepting Koch’s research, Trudeau argued that those afflicted by the tubercle bacillus could best be healed when removed from cities and placed under the care of physicians who carefully monitored their weight and sputum and who prescribed constant bed rest with exposure to fresh air. Preferring the term “sanatorium,” derived from the Latin word “to heal,” to “sanitarium,” derived from the Latin term for health, Trudeau founded his Adirondack Cottage Sanatorium at Saranac, New York, in 1885. This spawned the opening of hundreds of similar institutions throughout the country (Caldwell 1988:70).

In 1899, Fort Bayard remained within the Army under the auspices of the Army Medical Department. The Army’s decision to retain the fort, even after it had outlived its military usefulness, grew from the strong interest that General George M. Sternberg, Surgeon General of the Army, had in pulmonary tuberculosis and its treatment.
Sternberg was also aware of the relatively good health that the Army’s soldiers had enjoyed serving in the higher elevations of the American West. Members of Zebulon Pike’s expedition of 1810 and of Fremont’s exploratory parties of the 1840s had witnessed their health improve while in the Rocky Mountains.

………………………………………………………………………………………………………………………………………………..

Upon assuming command in 1904, Bushnell, who had studied botany for years, immediately began to plant flowers, shrubs, and trees. When President Theodore Roosevelt created the Gila Forest Reserve in 1905, Bushnell ensured that Fort Bayard, which adjoined the Reserve, was part of a government reforestation project. The first year alone the Forest Service gave the hospital 250 seedlings of Himalayan cedar and yellow pine. Bushnell also got approval to fence in land for pasturing dairy cattle and arranged to recultivate long-neglected garden plots. The first year he predicted that the garden would generate “about 1300 dollars worth of produce.” After the quartermaster located an underground water source, Bushnell redoubled his cultivation efforts, planting trees, flowers, and grass to mitigate the wind and dust, and “to beautify the Post.” In later years Bushnell successfully grew beans from ancient cave dwellers (Anasazi beans), and made a less successful effort to grow Giant Sequoia from California.28 By 1910 Fort Bayard had four acres of vegetable gardens, a greenhouse, an orchard of 200 fruit trees, and alfalfa fields and hay fields for the dairy herd of 115 Holsteins, which the Silver City Enterprise proclaimed “one of the finest in the west.” The hospital also raised all of the beef consumed at the hospital (thereby avoiding Daniel Appel’s purchasing problems) and consumed pork at small expense by feeding the pigs the waste food. The hospital laboratory raised its own Belgian hares and guinea pigs for experiments.

Bushnell oversaw years of construction at Fort Bayard. In the wake of Florence Nightingale’s writings, nineteenth-century sanitation practices stressed cleanliness and ventilation, giving rise to pavilion style hospitals, narrow one- or two-story buildings lined with windows to provide patients with ample ventilation. In March 1904, Bushnell sent the Surgeon General plans for an “open court building” in modified pavilion style (Figure 2-1).

Plan for tuberculosis patient ward, as designed by George E. Bushnell, providing fresh air porches for each patient, United States Army Tuberculosis Hospital in New Mexico.

The building consisted of a quadrangle of long, narrow dressing rooms around an open court with porches along both the exterior and interior of the building. The rooms could be used for sleeping in inclement weather and the porches allowed patients to seek sun or shade as they wished. Wide doors enabled the easy movement of beds between the rooms and the porches. “The object of this style of building is to facilitate sleeping out of doors, which is now considered so important in modern sanatoria for the treatment of tuberculosis,” Bushnell explained.

The United States escaped the cauldron of WWI until April 1917. But after years of trying to maintain neutrality, President Woodrow Wilson’s administration mobilized the nation to fight in the most deadly enterprise the world had ever seen. Modern industrialized warfare would kill millions of soldiers, sailors, and civilians and unleash disease and famine across the globe. Typhus flourished in Eastern Europe and a lethal strain of influenza exploded out of the Western Front in 1918, producing one of the worst pandemics in history. Although eclipsed by such fierce epidemics, tuberculosis also fed on the war.

Bushnell was ordered to the office of The Surgeon General on June 2, 1917, and placed in charge of the Division of Internal Medicine, and on June 13 there appeared S. G. O. Circular No. 20, Examinations for pulmonary tuberculosis in the military service, establishing a standard method of examination of the lungs for tuberculosis. Through his efforts a reexamination of all personnel already in the service was made by tuberculosis examiners, and about 24,000 were rejected on that score. He had charge of the location, construction, and administration of all Army tuberculosis hospitals, of which eight were built with a capacity of 8,000 patients.

With his relief from service in 1919 he took up his residence on a small farm at Bedford, Mass., where he prepared his Study of the Epidemiology of Tuberculosis (1920) and later Diseases of the Chest (1925) in collaboration with Dr. Joseph H. Pratt of Boston. As chief delegate of the National Tuberculosis Association he attended the first meeting of the International Union Against Tuberculosis in London in 1921. During the winter of 1922-23 he delivered a series of lectures on military medicine at Harvard University. In the summer of 1923 he moved to California and took up his residence at Pasadena.

………………………………………………………………………………………………………………………………………………..

In eighteen months the Selective Service registered twenty-five million men for the draft, examined ten million for military service, and enlisted more than four million soldiers, sailors, and Marines. To the dismay of many people, medical screening boards across the nation soon discovered that American men were not as strong and healthy as they had assumed. Of those eligible for military service, 30 percent were physically unfit; a number of them deemed ineligible to serve had tuberculosis. Therefore, in 1917 Surgeon General William Gorgas called George Bushnell to Washington, DC, to establish the Office of Tuberculosis in the Division of Internal Medicine, leaving Bushnell’s protégé, Earl Bruns, in charge of Fort Bayard. Given the Medical Department’s mission to maintain a strong and healthy fighting force, Bushnell’s new job was to minimize the incidence of tuberculosis among active-duty soldiers and avoid the high cost of disability pensions for men who incurred the disease during military service. It was a tall order.

Wartime tuberculosis had already received attention in 1916, when reports circulated that the French army had sent home 86,000 men with the disease, raising the specter that life in the trenches would generate hundreds of thousands of cases. One investigator found that tuberculosis rates in the British army were double those in peacetime, reversing the prewar downward trend. The head of the New York City Public Health Department, Hermann Biggs, declared that “tuberculosis
offers a problem of stupendous magnitude in France.” Subsequent studies revealed that only 20 percent or less of the French soldiers sent home with tuberculosis actually had the disease; others were either misdiagnosed or had had tuberculosis prior to entering the military and therefore had not contracted it in the trenches. The reports nevertheless galvanized public health officials to address the tuberculosis problem. The Rockefeller Foundation, for example, in cooperation with the American Red Cross, established a Commission for the Prevention of Tuberculosis in France to help the French and protect any Americans from contracting tuberculosis “over there.”

Bushnell established four “tuberculosis screens” by (1) examining all volunteers and draftees before enlistment, (2) checking recruits again in the training camps, (3) examining soldiers already in the Army for tuberculosis, and (4) screening military personnel at discharge to ensure they returned to civil life in sound condition. To implement these activities, Bushnell developed a protocol under which physicians could quickly examine men for tuberculosis as part of the larger physical examination process. He standardized the procedures for examinations throughout the Army, and crafted a narrow definition of what constituted a tuberculosis diagnosis to enable the Army to enlist as many young men as possible. Despite these efforts, soldiers developed active cases of tuberculosis throughout the war. Bushnell’s office also created eight more tuberculosis hospitals in the United States and designated three hospitals with the American Expeditionary Forces (AEF) in France to care for soldiers who developed active tuberculosis in the camps and trenches. Short of resources and knowledge, however, the Army Medical Department at times struggled just to provide beds for tuberculosis patients, let alone deliver the individual care Bushnell and his staff had provided at Fort Bayard before the war.

Overburdened medical personnel worked long hours, in often poor conditions. Thousands of tuberculosis patients resented the diagnosis and protested the conditions in which at times they were virtually warehoused. The draft, which brought millions of young men into government control and responsibility, also exposed the Army Medical Department to public scrutiny. Congress launched an investigation in 1919. World War I, which so dramatically changed the world, profoundly altered the Army’s tuberculosis program as well. It also challenged George Bushnell’s expertise. The Army’s tuberculosis expert had founded his policies on assumptions that, although widely held at the time, proved to be inaccurate and costly in lives and treasure. Wartime tuberculosis, therefore, shows the power of disease to overwhelm both knowledge and institutions.

Bushnell and his contemporaries were familiar with the concept of immunity and the power of vaccination, and the Army Medical Department vaccinated soldiers for smallpox and typhoid. Extending this concept of immunity to tuberculosis, medical officers differentiated between primary infection in childhood and secondary infection later in life. Observing that tuberculosis was often fatal for infants and young children, they reasoned that for survivors, an early infection of tuberculosis bacilli immunized a person against the disease later in life.
A “primary infection,” wrote Bushnell, gave a person some immunity, which “while not sufficient in many cases to prevent extension of disease [within the body]…is sufficient to counteract new infections from without.”8 In an article on “The Tuberculous Soldier,” the revered physician William Osler agreed. For years autopsies had uncovered healed tuberculosis lesions in people who had died in accidents or of other diseases. Although it was not known how many men between the ages of eighteen and forty harbored the tubercle bacillus, Osler wrote, “We do know that it is exceptional not to find a few [lesions] in the bodies of men between these ages dead of other diseases.” Thus, he argued, “In a majority of cases the germ enlists with the soldier. A few, very few, catch the disease in infected billets or barracks.”9 Bushnell reasoned if adults developed tuberculosis, “they do it on account of failure of their resistance.”

At one point Bushnell told the chief surgeon of the AEF, “Personally I have no fear of the contagion of tuberculosis between adults and see no reason why patients of this kind should not be treated in the ordinary hospital.” He asserted that the “really cruel persecution of the consumptive…through the fear that he will infect others, is based on what I must characterize as highly exaggerated notions of the danger of such infection.” This, too, was the prevailing view. Boston bacteriologist Edward O. Otis, who served as a medical officer during the war, wrote that “Undue fear of the communicability of pulmonary tuberculosis from one adult to another is unwarranted in the present state of our knowledge.”
Bushnell reasoned that if men infected with tuberculosis could indeed easily spread it to others, there would be much more tuberculosis in the Army than there was. British physician Leslie Murry argued that although the crowded and damp conditions of trench warfare would have unfavorable effects on soldiers’ health, living outside with plenty of fresh air and good food and hygienic practices would improve their resistance to tuberculosis. Public health specialist George Thomas Palmer countered that although reactivation may not be higher in the military than in civil life, the United States had enough men without tuberculosis to bar anyone suspected of it from the military and thereby avoid an “added financial burden to the nation.” The challenge was to keep tuberculosis out of the Army and tuberculars off the disability rolls, but not to exclude so many men as to impair the nation’s ability to amass an army.

Bushnell’s views of tuberculosis immunity, contagion, interaction with military life, and the risk of overdiagnosis shaped the Army Medical Department programs for screening recruits. He knew he could not guarantee that all tuberculosis could be eliminated from the Army, but asserted that, “a sufficiently rigid selection of promising material in itself practically excludes tuberculosis.” In addition to enlisting the strongest men, Bushnell believed that a massive screening program would pay for itself by eliminating those who would later cost the government in medical services and disability benefits.

But the nation at war did not have the time or resources for the meticulous one-hour examination practiced at Fort Bayard, so Bushnell developed a protocol for civilian and military physicians to examine volunteers, draftees, trainees, and soldiers for tuberculosis in a matter of minutes. Circular No. 20 detailed how physicians should examine recruits, and became the single most important Army tuberculosis document during the war. The circular explained that the apices, or the tops of lungs, were the most common location for tuberculosis lesions, and that “the only trustworthy sign of activity in apical tuberculosis is the presence of persistent moist rales.” Circular No. 20 directed that “the presence of tubercle bacilli in the sputum is a cause for rejection,” and that “no examination for tuberculosis is complete without auscultation following a cough.” It recommended that a sputum sample “be coughed up in [the examiner’s] presence,” to ensure that it was actually from the examinee.

The last one-third of the document detailed X-ray examinations, summarizing eight different kinds of conditions that might appear, which would be grounds for rejection, and which would not. By 1915, a Fort Bayard medical officer stated that X-ray technology “has become one of the most valued procedures in the diagnosis of pulmonary tuberculosis.” Medical officers F. E. Diemer and R. D. MacRae at Camp Lewis, Washington, argued in the pages of JAMA that X-rays should be the primary diagnostic tool, not an “adjunct.” World War I, however, ultimately did encourage X-ray technology by revealing its power to thousands of physicians, stimulating the search for technical advances, and demonstrating the importance of specialization in reading X-rays. By the end of the war, the Army Medical Department had shipped hundreds of X-ray machines to France for use in Army hospitals and at the bedside, and had developed various types of X-ray equipment, including X-ray ambulances.

Calculating that it would require 600 examiners for the screening process, the Medical Department turned to training general practitioners from civil life who knew little about tuberculosis. Bushnell’s office established a six-week tuberculosis course to prepare physicians. The first course at the Army Medical School in Washington, DC, was so popular that instructors offered it at several other training camps in the country. General Hospital No. 16, operating in conjunction with Yale Medical School, also offered a course on hospital administration to train medical officers to run tuberculosis hospitals.

Public health officials and the National Tuberculosis Association asked to be informed of any tuberculous individuals being sent to their communities, including the name and address of the “party assuming responsibility for such continued treatment and care.” The journal American Medicine published an article by British tuberculosis specialist Halliday Sutherland, who expressed concern that if men declined treatment and returned home they could spread tuberculosis to their families. He suggested that the U.S. Army retain men diagnosed with tuberculosis so that the government could provide treatment and discipline them if they resisted. Members of Congress opposed simply discharging men with tuberculosis. Representative Carl Hayden of Arizona argued that such men had given up their civilian lives upon induction into the Army, only to discover “that they were afflicted with a dread disease which prevents them from earning a livelihood.” He suggested that “some provision should be made for the care of such men until they are able to provide for themselves.”

While Bushnell’s policies succeeded in suppressing tuberculosis rates in the Army, the narrow definition of a tuberculosis diagnosis explicitly allowed men with healed lesions in their lungs to serve, and the rapid screening system caused some examiners to miss cases of active disease. Bushnell recognized that “a standard, though imperfect, is believed to be an indispensable adjunct in Army tuberculosis work not only to support the examiner but also to secure the necessary uniformity of practice in the matter of discharge for tuberculosis.” Nationwide, local draft boards and training camps rejected more than 88,000 men for tuberculosis, about 2.3 percent of the 3.8 million men examined. Postwar assessments calculated that of the more than two million soldiers who went to France to serve in the AEF, only 8,717 were evacuated with a diagnosis of tuberculosis, an incidence of only 0.4 percent.

In early 1918 a strep infection in the training camps in the United States caused medical officers to send hundreds of trainees to Army hospitals misdiagnosed with tuberculosis, crowding hospitals and generating paperwork and confusion. For a time, therefore, the Office of The Surgeon General ordered that no one should be discharged for tuberculosis from the training camps unless he had bacilli in his sputum—meaning the very severe cases. More than 50 percent of the patients being sent back to the United States from France with a diagnosis of tuberculosis did not actually have the disease. Bushnell viewed such overdiagnoses as “evil,” because they took men out of the AEF and overburdened tuberculosis hospitals and naval transports, which had to segregate suspected tuberculosis cases in isolation rooms or on open decks.

Faced with what he called “leaking” of soldiers from the AEF due to erroneous tuberculosis diagnoses, Bushnell turned to a specialist for assistance, Gerald B. Webb (Figure 4-3), from Colorado Springs.61 An Englishman by birth, Webb had married an American, and when she developed tuberculosis the couple traveled to Colorado Springs, Colorado, for treatment. His wife struggled with the disease for ten years until her death in 1903, and afterward Webb stayed on in Colorado Springs, remarrying and building a medical practice specializing in tuberculosis. In addition to his medical practice, Webb pioneered research into the body’s immune function, searched for a tuberculosis vaccine, and was a founder of the American Association of Immunologists (1913). Still somewhat bored in Colorado Springs, Webb volunteered for the Medical Corps soon after the United States declared war and helped organize and run tuberculosis screening boards at Camp Russell, Wyoming, and Camp Bowie, Texas. Bushnell
appointed him senior tuberculosis consultant for the AEF. After meeting with Bushnell in Washington and attending the Army War Course for senior officers at Columbia University, Webb sailed to France in March 1918.

Gerald B. Webb, World War I, Gerald B. Webb Papers.

Photograph courtesy of Special Collections, Tutt Library, Colorado College, Colorado Springs, Colorado.

Immunity in Tuberculosis: Further Experiments (1914)

Webb instituted a screening process similar to that in the United States, distributing Circular No. 20, and preparing an illustrated version for medical officers in the field. He established a policy directing that only patients with sputum positive for tuberculosis should be sent back to the United States. Others would be tagged “tuberculosis observation” and sent to one of three hospitals designated as tuberculosis observation centers. There, specialists—Bushnell’s “good tuberculosis men”—would distinguish tuberculosis signs from other lung problems such as bronchitis and pneumonia, determine whether a patient was actually free of disease, and send back to the United States only those who were indeed positive for tuberculosis.

Webb traveled to field and base hospitals throughout France. He would typically spend three days at a hospital, examining patients, leading conferences, giving lectures, and, according to his biographer, Helen Clapesattle, “preaching his gospel of fresh air and absolute rest.” He recruited a radiologist to teach the proper reading of X-ray plates, and advocated the early detection of tuberculosis, explaining, “Just as the wounded do better if they are got to the surgeons quickly, so the tuberculosis-wounded are more likely to recover if they are spotted and sent to the doctors early.”

In the 1930s, as Webb had concluded in 1919, scientists came to recognize that early tuberculosis infections did not provide protection and that adults could be reinfected with tuberculosis and develop active disease. In the meantime, with his AEF work done, in January 1919 Webb returned to his family and medical practice in Colorado Springs. The National Tuberculosis Association recognized Webb’s war work by electing him president in 1920, and Webb set the Association on a course of tuberculosis research on the immunity question and the standardization of X-ray diagnostics. He did not return to military service, but was a mentor for young physicians Esmond Long and James Waring, who would be leaders in the Army Medical Department’s tuberculosis program during the next war.

In May 1941, as the United States stood on the brink of another world war, Benjamin Goldberg, president of the American College of Chest Physicians, recited some stunning figures at the association’s annual meeting in Cleveland, Ohio. He calculated that from 1919 to 1940 the Veterans Administration had admitted 293,761 tuberculosis patients to its hospitals. These patients had received government care and benefits for a total of 1,085,245 patient-years, at a cost of $1,185,914,489.56. Goldberg’s remarks reveal that although tuberculosis rates in the United States were declining 3 to 4 percent annually during the interwar years, the government’s burden to care for tuberculosis patients remained heavy. The Army was only three-quarters the size it was before World War I (131,000 versus 175,000 strength) and experienced no major epidemics, so that suicide and automobile accidents became the leading causes of death in the peacetime Army. Although hospital admissions of active duty personnel for tuberculosis declined during the decade, tuberculosis admissions at Fitzsimons Hospital in Denver remained constant due to a steady stream of patients who were veterans of the war. Tuberculosis, in fact, became a leading cause of disability discharges from the Army and, with nervous and mental disorders, generated the greatest amount of veterans’ benefits between the wars.

The story of tuberculosis in the Army after World War I, then, is one of increasing demand and decreasing resources, a dynamic that left Fitzsimons financially strapped even before the country entered the Great Depression. An examination of Fitzsimons’ postwar environment—the modern hospital and technology, the ever-changing landscape of veterans’ benefits, and new, invasive treatments for tuberculosis—illuminates these stresses.

President Franklin Delano Roosevelt proclaimed a “limited national emergency” on 8 September 1939, a week after Germany invaded Poland. But due to underfunding during the interwar period, one observer wrote that “to prepare for war the Medical Department had to start almost from scratch.”1 Given the lean years of the 1920s and 1930s and the Army Medical Department’s policy of discharging officers with tuberculosis from duty, Surgeon General James C. Magee had to turn to the civilian sector for a tuberculosis expert. He recruited Esmond R. Long, M.D., Ph.D., director of the Henry Phipps Institute for the Study, Prevention and Treatment of Tuberculosis in Philadelphia. He could not have made a better choice. Long was also professor of pathology at the University of Pennsylvania, director of medical research for the National Tuberculosis Association, and the youngest person to be awarded the Trudeau Medal, at age forty-two (in 1932), for his tuberculosis research.2 He would now become the Army’s point man on the disease and stand at the front lines of the Medical Department’s struggle with tuberculosis from before Pearl Harbor until well after V-J (Victory over Japan) Day.

His mission to reduce the effect of tuberculosis on the Army differed from that of Colonel (Col.) George Bushnell in the previous war because disease was less of a threat. In fact, World War II would be the first war in which more American personnel died of battle wounds than of disease. Of 405,399 recorded fatalities, battle deaths outnumbered those from disease and nonbattle injuries more than two to one: 291,557 to 113,842.3 Malaria, sexually transmitted diseases, and respiratory infections did sicken millions of soldiers, sailors, Marines, and airmen, but most survived. Thanks in part to sulfa drugs and, beginning in 1943, penicillin to treat bacterial infections, the Army Medical Department had only 14,904 deaths of 14,998,369 disease admissions worldwide, a 0.1 percent death rate.4 Tuberculosis declined, too, representing only 1 percent of Army hospital admissions for diseases—1.2 per 1,000 cases per year—a rate much lower than the 12 per 1,000 cases per year during World War I. The Medical Department concluded that “tuberculosis was not a major cause of non-effectiveness during the war.”

But Sir Arthur S. McNalty, chief medical officer of the British Ministry of Health (1935–40), called tuberculosis “one of the camp followers of war.” War abetted tuberculosis, he explained, because of the “lowering of bodily resistance and increased physical or mental strain or both.”6 It also found fertile ground in crowded barracks and camps, and ran rampant in the World War II prison camps and Nazi concentration camps. And just one active case of tuberculosis per thousand in the Army meant thousands of tuberculosis sufferers among the 11 million Americans in uniform, each of whom consumed Medical Department resources: the average hospital stay per case during the war was 113 days.7

But if tuberculosis was a camp follower, Esmond Long (Figure 8-1) was a tuberculosis follower.8 He tracked it down, studied it, and tried to prevent its spread at every stage of American involvement in the war. With war looming in 1940, the National Research Council asked Long to chair the Division of Medical Sciences, Subcommittee for Tuberculosis, to advise the government on preventing and controlling tuberculosis in both civilian and military populations during war mobilization. Once the United States entered the war, Long received a commission as a colonel in the Medical Corps and moved his family from Philadelphia to Washington, DC. Working out of the Office of The Surgeon General, Long set up a screening process with the Selective Service to keep tuberculosis out of the Army and then traveled to more than ninety induction camps to ensure adherence to the procedures. He also oversaw the expansion of tuberculosis treatment facilities in the United States, inspected Fitzsimons and other Army tuberculosis hospitals, advised medical officers on treating patients, kept abreast of research developments in the labs, monitored outbreaks of tuberculosis in the theaters of war, and wrote articles for medical and lay periodicals to publicize the Army’s antituberculosis program.

In 1945 Long traveled to the European theater to inspect hospitals caring for tubercular refugees and liberated prisoners of war (POWs). There he saw the horrors of the concentration camps at Buchenwald and Dachau where Army medical personnel cared for thousands of former prisoners sick and dying of typhus, starvation, and tuberculosis. After the war Long organized the tuberculosis control program for the Allied occupation of Germany, and returned annually in the 1950s to assess its progress. He split his time between the Army Medical Department and the Veterans Administration (VA) to supervise the transition of the federal tuberculosis treatment program from the War Department to the VA. He also helped organize and evaluate the antibiotic trials, which ultimately led to an effective cure for tuberculosis. After returning to civilian life Long continued to study tuberculosis in the Army, and he wrote the key tuberculosis chapters for the Army Medical Department’s official history of the war.

With Long as a guide, this chapter shows how war once again served as handmaiden to disease around the globe. This time the Army Medical Department assumed not only national but international responsibilities for the control of tuberculosis in military and civilian populations, among friend and foe. Long and the Army Medical Department did succeed in demoting tuberculosis from the leading cause of disability discharge for American World War I personnel (13.5 percent of discharges), to thirteenth position during the years 1942–45 (1.9 percent of all discharges), behind conditions such as psychoneuroses, ulcers, respiratory diseases, arthritis, and other diseases.9 But this achievement required continued vigilance, an Army-wide surveillance program, and dedicated personnel and resources. The first step was to keep tuberculosis out of the Army.

After war broke out in Europe, Congress passed the Selective Training and Service Act of 1940, which established the first peacetime military draft in U.S. history, increasing Army strength eightfold from 210,000 in September 1939 to almost 1.7 million (1,686,403) by December 1941. This resulted in a 75 percent rise in the number of patients in military hospitals, straining the Medical Department, which had only seven general hospitals and 119 station hospitals in 1939.

Esmond R. Long, who directed the Army tuberculosis program during World War II. Photograph courtesy of the National Library of Medicine, Image #B017302.

“Good Tuberculosis Men”

With funds soon being appropriated freely and “all of the resources of the country” pledged to meet the crisis, the War Department was constantly readjusting to meet the escalating emergency.

The National Research Council Committee on Medicine, Subcommittee on Tuberculosis, chaired by Long, met for the first time on 24 July 1940 and prioritized its responsibilities: first, develop recommendations on how to screen draft registrants for tuberculosis; second, screen civilians in federal service and wartime industries; third, figure out how to care for people rejected by the draft for the disease; and finally, help civilian and military agencies prepare for tuberculosis in war refugee populations. In its first nine-hour meeting, the subcommittee decided on centralized tuberculosis screening centers at 200 recruiting stations and generated a list of tuberculosis specialists nationwide to evaluate recruits and interpret X-rays at those centers. Subcommittee members stressed the importance of maintaining good records for processing any subsequent benefits claims and, most importantly, called for X-ray screening of all inductees—not just those who looked like they might have tuberculosis.

The War Department leadership initially rejected such comprehensive screening of inductees as expensive and time-consuming. The fact that tuberculosis death rates in the country had fallen two-thirds from 140 per 100,000 people in 1917 to 45 per 100,000 people in 1941, and in the Army from 4.6 per 1,000 in 1922 to 1.4 per 1,000 in 1940, may have led to complacency. But Long, his colleagues, and the national tuberculosis community, mindful of the cost to the nation in sickness, death, and disability benefits in the previous war, persisted. The American College of Chest Physicians asked in July 1940, “Shall We Spread or Eliminate Tuberculosis in the Army?” and its president, Benjamin Goldberg, reported that the VA had spent almost $1.2 billion on tuberculosis patients through 1940. One medical officer calculated that 31 percent of all veterans who died as a result of World War I service, and whose dependents received benefits, had died of tuberculosis. Even the lay press chimed in with a TIME magazine article, “TB Warning,” that stressed the importance of chest X-rays.16 Advocates pointed out that X-ray technology was more available and less expensive than in the previous war, and radiologists were more plentiful and skillful. They were also confident that new technology, such as the development of a lens that allowed the direct and rapid photography of a fluoroscopic image and new 4 x 5 inch films, which made storage and transport easier than that of the 11 x 14 inch films, rendered screening more practical than in 1917–18.

The Army Medical Department agreed with the National Research Council subcommittee. Since 1934 it had required X-rays for all Army personnel assigned overseas, but it had not yet convinced the War Department on universal screening. In June 1941, Brigadier General (Brig. Gen.) Charles Hillman, Chief, Office of The Surgeon General Professional Service Division, told the National Tuberculosis Association chairman, C. M. Hendricks, that “the desirability of routine X-rays had long been recognized by the Surgeon General’s Office,” but “considerations other than medical entered the picture and the character of induction examinations had to be adapted to the limitations of time, place, and available equipment.” When Fitzsimons informed Hillman later that new recruits were arriving at the hospital with tuberculosis, he responded almost plaintively. “I am working with the Adjutant General to devise some method by which every volunteer for enlistment in the Regular Army will have a chest X-ray and serological test before acceptance.” He asked for all available evidence of sick recruits, explaining that “data on Regular Army men of short service now in Fitzsimons with tuberculosis will help me get the thing across.” As the data and advice accumulated, in January 1942, the Adjutant General required that all voluntary applicants and reenlisting men be given chest X-rays. Finally, on 15 March 1942, mobilization regulations made chest X-rays mandatory in all induction physicals.

With universal screening in place, Long, as chief of the tuberculosis branch in the Office of The Surgeon General, oversaw the screening process and faced a task similar to that of George Bushnell in 1917–18: finding the fine line that excluded as much tuberculosis as possible from the Army without rejecting either too few or too many men. Conscious of his predecessor’s miscalculations, Long was careful not to criticize Bushnell’s tuberculosis program, at one point noting that World War I medical officers were “not to be reproached for not having knowledge that came into existence only later, any more than the chief of the Army air service in 1917 is to be reproached because more efficient airplanes are available now than then.”

The wartime emergency produced a public health campaign regarding tuberculosis and other disease threats. A War Department pamphlet, What Every Citizen Should Know about Wartime Medicine, presented the issue as one of maintaining troop health and limiting public costs. “The strenuous activity of soldiering is likely to cause extension of an incipient (early) tuberculous invasion of the lungs, or to precipitate the breakdown and reactivation of arrested cases,” it explained. Such illness could result in disability “and the necessity of providing long care of these patients in military hospitals where they must remain isolated from nontuberculous patients.” The Public Health Service also created a tuberculosis office to handle the expected increase in tuberculosis, and, as the National Research Council Subcommittee recommended, gave war industry workers chest examinations.

As military and civilian screening boards found thousands of people with active tuberculosis and sent many of them to tuberculosis sanatoriums and hospitals, they generated what a public health nurse referred to as “potentially the greatest case finding program that workers in tuberculosis control have ever known.” At the same time, however, war mobilization drew civilian medical personnel into the military, reducing staffing in home front institutions. Army medical personnel ultimately numbered more than 688,000, including 48,000 physicians in the Medical Corps, 14,000 dentists in the Dental Corps, and 56,000 nurses in the Army Nurse Corps—a large portion of the nation’s medical professionals.27 To maintain his nursing staff, VA Director Frank Hynes even asked the Army Nurse Corps in May 1942 not to hire VA nurses away from his hospitals.

Army tuberculosis rates during World War II, while lower than during World War I, did show a similar “U” curve, with high rates at the beginning of the war as the Selective Service built up the military forces and cases that had eluded screening became active during training or combat (Figure 8-2). Tuberculosis rates fell as radiologists became more proficient at identifying tuberculosis infections, and then rose sharply again—to a higher peak—at the end of the war as discharge examinations found people who had developed active tuberculosis during their service. Postwar studies also revealed a seemingly paradoxical phenomenon: during the war, military personnel serving overseas had lower tuberculosis rates than those serving in the United States, yet higher rates when they returned home.

Chart comparing the incidence curves of tuberculosis in the Army during World War I and World War II. From Esmond R. Long, “Tuberculosis,” in John Boyd Coates, Robert S. Anderson, and W. Paul Havens, eds., Internal Medicine in World War II, Medical Department, U.S. Army in World War II, vol. 2, Infectious Diseases (Washington, DC: Office of The Surgeon General, Department of the Army, 1961), chart 17, p. 335. Available at http://history.amedd.army.mil/booksdocs/wwii/infectiousdisvolii/chapter11chart17.pdf.

The Medical Department of the United States Army in the World War. Communicable and Other Diseases. Washington: U.S. Government Printing Office, 1928, vol. IX, pp. 171-202.
Letter, The Adjutant General, to Commanding Generals of all Corps Areas and Departments, 25 Oct. 1940, subject: Chest X-rays on Induction Examinations.
M.R. No. 1-9, Standards of Physical Examination During Mobilization, 31 Aug. 1940 and 15 Mar. 1942.
Long, E. R.: Exclusion of Tuberculosis. Physical Standards for Induction and Appointment. [Official record.]
Long, E. R., and Stearns, W. H.: Physical Examination at Induction; Standards With Respect to Tuberculosis Induction and Their Application as Illustrated by a Review of 53,400 X-ray Films of Men in the Army of the United States. Radiology 41: 144-150, August 1943.
Long, Esmond R., and Jablon, Seymour: Tuberculosis in the Army of the United States in World War II. An Epidemiological Study with an Evaluation of X-ray Screening. Washington: U.S. Government Printing Office, 1955.

It is estimated that, before roentgen examination became mandatory (MR No. 1-9, 15 March 1942), one million men had been accepted without this form of examination. Where roentgen examination was practiced, it resulted in a rejection rate of about 1 percent for tuberculosis. Applying this figure, it can be estimated that some 10,000 men were accepted who would have been rejected if they had been subjected to chest roentgen-ray study. Various studies have shown that approximately one-half of these would have been cases of active tuberculosis.

http://history.amedd.army.mil/booksdocs/wwii/PM4/CH14.Tuberculosis.htm
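The arithmetic behind this estimate is straightforward; the short sketch below is only an illustration of the back-of-envelope calculation implied above (the figures are those quoted in the passage, and the one-half fraction for active disease is the assumption cited there), not part of the official record.

```python
# Back-of-envelope reproduction of the estimate quoted above.

accepted_without_xray = 1_000_000  # men inducted before roentgen exams became mandatory (15 March 1942)
tb_rejection_rate = 0.01           # ~1 percent rejection rate for tuberculosis where X-rays were used
active_fraction = 0.5              # assumption cited above: about half of missed cases were active

missed_cases = accepted_without_xray * tb_rejection_rate   # men accepted who would have been rejected
active_cases = missed_cases * active_fraction               # of these, estimated active tuberculosis

print(f"Men accepted who would likely have been rejected: {missed_cases:,.0f}")   # ~10,000
print(f"Estimated active tuberculosis cases among them:   {active_cases:,.0f}")   # ~5,000
```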

Troops who developed tuberculosis were not discovered until their separation examinations, conducted when they were once again in the United States.

In the end, the screening process rejected 171,300 men for tuberculosis as the primary cause (thousands more had tuberculosis in addition to the disqualifying condition), and Long calculated that this saved the government millions of dollars in hospitalization costs. After the war, however, Long identified two factors that allowed tuberculous men into the Army: the failure to screen all inductees until March 1942, and the 4 x 5 inch stereoscopic (fluorographic) films, which were used in the interest of speed but which Long believed caused examiners to miss about 10 percent of minimal tuberculosis lesions in recruits. To better understand the latter problem he had two radiologists read the same X-rays and found substantial disagreement between their findings. Long therefore concluded that “if the induction films had each been read by two different radiologists, undoubtedly many more of the men who had tuberculosis at entry could have been excluded from service.” The Army ultimately discharged 15,387 enlisted men for tuberculosis during the war, which earned it thirteenth position as a cause of disability discharge.

American military forces fought in nine theaters of war—five in the Pacific and Asia, the other four in North Africa, the Mediterranean, Europe, and the Middle East. The Allies gave priority to defeating Germany and Italy in Europe beginning with operations in North Africa and the Mediterranean. After fighting in Tunisia in 1942–43, the Allies invaded Sicily on 10 July 1943, and moved up the Italian peninsula. By April 1944—in preparation for the D-Day invasion on 6 June 1944—the United States had more than 3 million soldiers in Europe, supported by 258,000 medical personnel managing a total of 318 hospitals with 252,050 beds. The war against Japan got off to a slower start as U.S. military forces developed the means to execute an island war across vast expanses of ocean. After fighting began in the Southwest Pacific, military forces grew from 62,500 troops in March 1942 to 670,000 in the summer of 1944 with 60,140 medical personnel. Even though military personnel developed tuberculosis in all of the nine theaters, the numbers were not high and tuberculosis was not a major military problem. In the Southwest Pacific theater, for example, only sixty-four of more than 40,000 hospital admissions were for the disease.

Tuberculosis was of the greatest consequence in the North Africa and Mediterranean theaters, in part due to poor screening early in the war, but also because, according to historian Charles Wiltse, it was the theater “in which the lessons of ground combat were learned by the Medical Department as much as by the line troops.” In general, medical personnel learned the importance of treating battle casualties as promptly as possible and keeping hospitals and clearing stations mobile and far forward to shorten evacuation and turnaround times. With regard to tuberculosis, the Medical Department had to relearn the World War I lesson of the importance of having skilled practitioners—or “good tuberculosis men”—in theater. They also ascertained which treatments were appropriate close to the battle lines and which were not, and when and how best to evacuate tubercular patients to the United States.

When soldiers with tuberculosis began to appear at Army medical stations in North Africa in late 1942, Major General (Maj. Gen.) Paul R. Hawley, chief of medical services for the European theater of operations, called for a tuberculosis specialist. On Long’s recommendation, Hawley appointed Col. Theodore Badger (Figure 8-3) as a senior consultant in tuberculosis on 2 January 1943. A professor of medicine at the Harvard School of Medicine, Badger had served in the Navy during World War I, and then attended Yale and Harvard where he earned his medical degree. Chief of medical service of the 5th General Hospital (GH), organized out of Harvard, Badger would play a role similar to that played by Gerald Webb during World War I—medical specialist, teacher, and troubleshooter.

Assessing the tuberculosis situation in the Mediterranean theater, Badger identified five hazards: (1) the development of active disease in American troops who had not been X-rayed upon induction; (2) association with British troops and civilians who had not been screened for tuberculosis; (3) drinking of nonpasteurized and possibly infected milk that could transmit tuberculosis; (4) battlefield conditions that could activate soldiers’ latent infections; and (5) the undetermined effects of other respiratory infections.41 Badger soon got the Army to use pasteurized milk and to establish X-ray centers with the proper equipment and trained staff, but he was not able to examine the thousands of American soldiers in the war zone. To gauge the extent of the tuberculosis problem he therefore arranged for a mobile X-ray unit to conduct spot surveys of troops in the field. Three examinations of some 3,000 troops each found only about 1 percent with signs of tuberculosis. To avoid losing manpower, Badger reported in mid-1943 that “up to the present time no individual has been removed from duty because of X-ray findings, and follow-up study has, so far, not indicated the necessity for it.” Badger planned to recheck those with suspicious films every few months to see if the signs had advanced. Badger recommended that patients with pleural effusion, the accumulation of fluid between the layers of the membranes that line the lungs and chest cavity that often indicates tuberculosis, be evacuated back to the United States. He also ended the practice of transporting some tuberculosis patients sitting up

As the first true air war, World War II saw the introduction of air evacuation when Army aeromedical squadrons deployed in early 1943. After successful trials in the Pacific and North Africa, air evacuation increased so that during the Battle of the Bulge (1944–45), some patients arrived in U.S. hospitals within three days of being wounded. Some medical officers were concerned about the effects of transporting tuberculosis patients by air where they would be exposed to high speeds, jolting, and reduced air pressure. Tuberculosis specialists in New Mexico and Colorado therefore studied 143 white male military patients, twenty-two to twenty-eight years old, with active tuberculosis who were flown to Army hospitals in nonpressurized air ambulances, watching for any signs of trouble. Fearing the worst, they instead found that “severe discomfort, pulmonary hemorrhage, and spontaneous pneumothorax did not occur in the series either during or following the flight,” and concluded that air transport up to 10,000 feet was safe and preferable to time-consuming travel by water. By the end of the war the consensus was that rapid air evacuation to the United States also reduced the need to give a tuberculosis patient a pneumothorax in the field.

From the roof of Fitzsimons’ new building in April 1943, Rocky Mountain News reporter John Stephenson could see the Rocky Mountain Arsenal, the Denver Ordnance Plant, and Lowry Field, “places where the Army studies how to kill people.” But, he wrote, “The Army is merciful. It lets the right-hand of justice know what the left hand of mercy is doing at Fitzsimons General Hospital.” The largest Army hospital in the world, Fitzsimons had 322 buildings on 600 acres, paved streets with traffic lights, a post office, barbershop, pharmacy school, dental school, print shop, bakery, fire department, and chapel. It was, wrote Stephenson, “a city of 10,000.”61 No longer a liability, Fitzsimons was the pride of the Army Medical Department. One Army inspector reported that “it is apparent that no expense has been spared in this extraordinary building or in the general equipment and maintenance of the whole hospital plant.”62 As Congressman Lawrence Lewis had hoped, Fitzsimons’ mission now extended beyond caring for tuberculosis patients to meeting the general medical and surgical needs of the wider military community in the Denver region.

During the war the hospital maintained about 3,500 beds, reaching its highest daily patient population after the war—3,719 on 3 February 1946. Annual occupancy, measured in patient days, increased from 603,683 in 1942 to a high of 1,097,760 in 1945, about 85 percent of capacity.

With the reduction of tuberculosis in the Army over the years, the percentage of tuberculosis patients among all those at Fitzsimons had declined from 80 to 90 percent in the 1920s to 40 to 50 percent in the late 1930s. As the Army grew, it rose again. During the war Fitzsimons admitted more than 8,100 patients with tuberculosis. In fact, in 1943, only eighteen patients had battle injuries; the rest were in the hospital for illness and noncombat injuries. Unlike during the previous war, however, the Medical Department had a network of more than fifty veterans’ hospitals to which it could transfer patients too disabled by tuberculosis or other disease or injury to return to duty. Now, instead of allowing patients to stay in the service and receive the benefit of hospitalization with the hope that they would recover and return to duty, the Medical Department discharged patients to VA hospitals as soon as they were determined to be unfit for military service, thereby reserving capacity for active-duty personnel. Maj. D. P. Greenlee had returned from a training course in penicillin therapy at Bushnell General Hospital in Utah to supervise the administration of the new drug for a variety of infections. He soon reported a cure rate of 93 percent. There were fewer victories in tuberculosis treatment.

During the war about one-quarter of all tuberculosis patients were treated with pneumothorax. Fitzsimons surgeon Col. John B. Grow and other surgeons also tried lung resection to treat tuberculosis, with few patient deaths. In 1946, however, when Grow’s staff contacted thirty patients who had had such surgery, they found that half of them were doing well, but three had died, seven were seriously ill, and the rest were still under treatment. “It was felt that pulmonary resection in the presence of positive sputum was extremely hazardous and the indications were consequently narrowed down.”

Outside the operating rooms, the “City of 10,000” had a rich social life with people arriving at the post from all corners of the country. With Congressman Lewis’s acquisition of the School for Medical Technicians, Fitzsimons assumed the role of medical trainer, offering six- to twelve-week courses in technical training for dental, laboratory, X-ray, surgical, clinical, and pharmacy assistants. By 1946 the School had graduated more than 28,000 such technicians to serve around the world. The Women’s Army Corps arrived at Fitzsimons in February 1944 when 165 women attended the medical technicians school as part of the first coeducational class.74 Members of the Women’s Army Corps, rehabilitation aides, Education Department staff, dietitians, as well as nurses increased the female presence at Fitzsimons, as did activities of welfare organizations such as the Gold Star Mothers, the Red Cross, and the Junior League. Fitzsimons’ patients and staff also enjoyed visits from celebrities, including Jack Benny, Miss America, Gary Cooper, Dorothy Lamour, and other entertainers such as the big band leader Fred Waring and his Pennsylvanians, the Denver Symphony Orchestra, and an African American Methodist Church children’s choir from Denver. Like communities across the country, the hospital participated in war bond campaigns and had a huge war garden that produced thousands of ears of sweet corn and bushels of other vegetables.

Despite national mobilization and generous congressional funding, the Army could not escape the strain on its hospitals. By July 1944, Fitzsimons had reached capacity so the Medical Department designated two more hospitals as specialty centers for tuberculosis. Earl Bruns’ widow Caroline, who lived in Denver at the time, was no doubt pleased when the department named Bruns General Hospital in Santa Fe, New Mexico, in honor of her husband. Bruns along with Moore General Hospital in Swannanoa, North Carolina, cared for enlisted patients with minimal or suspected tuberculosis.

As Allied troops liberated France in 1944 and crossed into Germany they encountered thousands of refugees or “displaced persons”—escaped prisoners from Nazi concentration camps, exhausted and terrified Jews, slave laborers, political prisoners, Allied POWs, and other victims. The Nazi camps that held these people served as incubators for diseases such as tuberculosis and typhus, and the frightened, sick, and starved refugees inundated Army hospitals in late 1944 and early 1945. Theodore Badger reported one of the first waves that arrived on 18 December 1944 when 304 men, most of them Russians, came to the 50th GH in Commercy, France. They had been in the Nazi labor camps for the mines and heavy industries, where thousands died and survivors were malnourished and sick. All of the 304 had tuberculosis, 90 percent with moderate or advanced disease. Four were dead on arrival, eight more died in the first week, and one-third of the patients would die by May.96 Alarmed, Gen. Hawley, Chief Surgeon of the European Theater of Operations, ordered that all displaced civilians and recovered military personnel be examined for signs of tuberculosis “to establish the gravity of the situation.” The situation was dire. At one time the 46th GH had more than 1,000 tuberculosis patients, all recovered Allied POWs, causing Esmond Long to remark that the hospital “had the largest number of tuberculosis patients of any Army hospital in the world.”

The 46th GH from Portland, Oregon, which had cared for tuberculosis patients in the Mediterranean theater, also stood on the front lines of the tuberculosis problem in Europe. Serving at Besancon, France, the hospital would receive the Meritorious Service Unit Plaque and Col. J. G. Strohm, the commanding officer, the Bronze Star Medal for service during the liberation of France. During the spring of 1945, the 46th GH admitted 2,472 Russians, forty-one Poles, and 128 Yugoslav POWs and former slave laborers freed by American forces. The influx began on 12 March and within four days the 46th GH had admitted 1,200 such patients.

“The hospital staff was agast [sic] at the terrible physical condition of these people,” reported the hospital commander.99 When Badger visited the 46th GH in March 1945 he said the patients “constitute one of the most seriously affected groups with tuberculosis and malnutrition that I have ever seen,” explaining that most of them suffered “acute fulminating, rapidly fatal disease, mixed with chronic, slowly progressive, fibrotic tuberculosis.” Medical personnel (Figure 8-4) cared for these patients as best they could, comforting many of them as they died. They began the rest treatment with some men but, as Badger reported, convincing Allied POWs to submit to absolute bed rest after months of confinement was “practically impossible.” Badger was able to report that after a month “those men who did not die of acute tuberculosis showed marked improvement.”

Figure 8-4. 46th General Hospital nurses who cared for former prisoners of war. Photograph courtesy of Oregon Health Sciences University, Historical Collections and Archives, Portland, Oregon.

26th Gen Hospital WWII, North Africa

In late 1944 Hawley requested 100,000 additional hospital beds for the displaced persons and POWs he expected to encounter after the German surrender, but Gen. George Marshall and Secretary of War Henry L. Stimson denied the request, believing they could not spare resources of that magnitude. The European Theater, they decided, must use German medical personnel and hospitals to care for the prisoners. Only after the war did American hospital units transfer their equipment and supplies to German civilians and Allies for their use.

The liberation of Europe also freed American POWs, who, not surprisingly, had higher rates of tuberculosis than other American military personnel. Captured British medical officer Capt. A. L. Cochrane cared for some of them in the prison where he was confined and noted sardonically that imprisonment was “an excellent place to study tuberculosis; [and] to learn the vast importance of food in human health and happiness.” German prison guards gave POWs only 1,000 to 1,500 calories per day, so Red Cross food parcels, which provided an additional 1,500 daily calories per person, were critical to preventing malnutrition and physical breakdown. Cochrane observed that the American and British POWs received the most parcels and had the lowest tuberculosis rates in the camp, while the Russians received nothing at all and had the highest rates. During the eighteen months that French POWs received the Red Cross parcels, he noted, just two men of 1,200 developed tuberculosis but when parcels for the French ceased to arrive in 1945, their tuberculosis rate rose to equal that of the Russians. The situation, he concluded, showed the “vast importance of nutrition in the incidence of tuberculosis.” Not all Americans got their parcels, though. William H. Balzer, with an American artillery unit, was captured in February 1943, and remembered how German guards stole the Americans’ packages.
Balzer survived imprisonment but never recovered from the ordeal. Severely disabled (70 percent), he died in 1960 on his forty-sixth birthday.

Exact tuberculosis rates among American POWs are not known because the rush of events surrounding the liberation of prisoners from German and Japanese control prevented a systematic X-ray survey. Rates did appear to be higher, though, for prisoners of the Japanese than for prisoners of the Germans. Long reported that about 0.6 percent of recovered troops from European POW camps had tuberculosis, whereas data from the Pacific theater suggested that 1 percent of recovered prisoners had tuberculosis. Moreover, an analysis of the chest X-rays done at West Coast debarkation hospitals revealed that 101 (or 2.7 percent) of 3,742 former POWs of the Japanese showed evidence of active tuberculosis. John R. Bumgarner was a tuberculosis ward officer at Sternberg General Hospital in Manila, the Philippines, before the war. A POW for forty-two months after the Japanese invasion, he described his experience in Parade of the Dead. Bumgarner did what he could to care for many of the 13,000 prisoners in the camp, but knew that “my patients were poorly diagnosed and poorly treated.” The narrow cots were so close together, he wrote, “the crowding and the breathing of air loaded with this bacilliary miasma from coughing ensured that those mistakenly segregated would be infected.”

Bumgarner was able to stay relatively healthy throughout his imprisonment. His luck ended, however, because “on my way home across the Pacific I had the first symptoms of tuberculosis.” Severe chest pain and subsequent X-rays at Letterman Hospital in San Francisco revealed active disease. “I had gone through more than four years of hell—now this!” Discharged on disability for tuberculosis in September 1946 he began to work at the Medical College of Virginia but soon had a lung hemorrhage. This time it took eight years of rest, with surgery and new antibiotic treatment for him to recover. By 1956, however, Bumgarner had married his sweetheart, Evelyn, and begun a medical career in cardiology that lasted for thirty years.

Tuberculosis continued to take its toll on POWs for years after the war. The VA followed POWs as a special group because, explained Long, of “the hardships that many of these men endured, and the notorious tendency for tuberculosis to make its appearance years after the acquisition of infection.” A follow-up study published in 1954 reported that for American POWs during the six years after liberation tuberculosis was the second highest cause of death, after accidents.

If the challenges Army medical personnel faced in caring for sick and starving POWs and refugees were unprecedented, the scale of disease and suffering they encountered in the Nazi concentration camps was almost unimaginable. Allied troops had heard about secret and deadly camps but were not prepared for what they found. As the Allies converged on Berlin from the East and the West, the Nazis evacuated thousands of prisoners—most of them Jews seized from across Europe, as well as POWs—to interior camps to hide their crimes and prevent the inmates from falling into Allied hands. These evacuations became death marches as SS (abbreviation of Schutzstaffel, which stood for “defense squadron”) guards beat and murdered people, and failed to feed them for days on end. Survivors were crowded into camps such as Buchenwald and Dachau making them even more chaotic and deadly. Americans, therefore, liberated camps that were riven with disease, especially typhus, tuberculosis, and malnutrition.

The Allies liberated Buchenwald on 11 April 1945. The following day the world learned that Franklin Roosevelt had died. Americans then liberated Dachau on 29 April, the day Italian partisans executed Mussolini in Milan, and the next day Hitler killed himself in his bunker. Dachau (Figure 8-5) had been the first of hundreds of concentration camps in the German Reich to which the Nazis sent political enemies, the disabled, people accused of socially deviant behavior, and, increasingly after the Kristallnacht pogroms of 1938, Jewish men, women, and children. In January 1945 Dachau held 67,000 prisoners, but with troops of the Seventh U.S. Army approaching, the SS began evacuating and killing prisoners. Capt. Marcus J. Smith, a medical officer in his thirties, arrived at Dachau on 30 April 1945, the day after liberation, as part of a small team trained to treat persons displaced by the war. Horror greeted him outside the camp in a train of forty boxcars loaded with more than two thousand corpses. Smith called the frost that had formed on the bodies in the intense cold “Nature’s shroud.” Inside Dachau he encountered more grotesque piles of naked, skeletal bodies of prisoners and scattered, mutilated bodies of German guards.

Figure 8-5. Dachau survivors gather by the moat to greet American liberators, 29 April 1945. Photograph courtesy of the United States Holocaust Memorial Museum, Washington, DC.
Smith found more than 30,000 prisoners, mostly Jews of forty nationalities, and all men except for about 300 women the SS had kept in a brothel. They were in desperate condition. Typhus and dysentery raged, at least half of the prisoners were starving, and hundreds had advanced tuberculosis. “The well, the sick, the dying, and the dead lie next to each other in these poorly ventilated, unheated, dark, stinking buildings,” Smith told his wife. The men were “malnourished and emaciated, their diseases in all stages of development: early, late, and terminal.” He wondered, “What am I going to write in my notebook?” and then started a list of needed supplies: clothes, shoes, socks, towels, bedding, beds, soap, toilet paper, more latrines, and new quarters. He almost despaired. “What are we going to do with the starving patients? How will we care for them without sterile bandages, gloves, bedpans, urinals, thermometers, and all the basic material? How do we manage without an organization? No interns, no nursing staff, no ambulances, no bathtubs, no laboratories, no charts, and no orderlies, no administrator, and no doctors.… I feel helpless and empty. I cannot think of anything like this in modern medical history.”

American efforts did prevent a deadly typhus epidemic from sweeping postwar Europe and helped contain tuberculosis rates in Germany, but the Nazis had created a human catastrophe so immense that even the most dedicated efforts would at times fall short.

Faced with horror on such a scale, Smith and other Army Medical Department personnel assigned to the concentration camps threw themselves into the work of cleansing, comforting, treating, and nurturing their patients. American commanders called in at least six Army evacuation hospitals (EH) to care for the sick and dying in the liberated camps. EH No. 116 and EH No. 127 began arriving at Dachau on 2 May with some forty medical officers, forty nurses, and 220 enlisted men. Consulting with Smith and his team, the units set up in the former SS guard barracks. They tore out partitions to create larger wards, scrubbed the walls and floors with Cresol solution, sprayed them with dichloro-diphenyl-trichloroethane (DDT), and then set up cots to create two hospitals of 1,200 beds each. Medical staff also discovered physician-prisoners who had cared for the sick and injured as well as they could, and could now advise and assist, and in some cases translate for the medical staff. In two days the hospitals were ready to admit patients by triage, segregating them by disease and prognosis. Laurence Ball, the EH No. 116 commander, noted that more than 900 patients had “two or more diseases, such as malnutrition, typhus, diarrhea, and tuberculosis.” Staff bathed and deloused them, gave them clean pajamas, and put them to bed.

Death by overeating was but one of the dangers that the prisoners faced. During May 1945, American hospitals at Dachau had more than 4,000 typhus patients and lost 2,226 to typhus and other diseases. Typhus, a rickettsial disease transmitted by body lice, had a mortality rate as high as 40 percent. With no medical cure, treatment consisted of supportive care—keeping patients clean and nourished—to mitigate effects of prolonged fever, such as the breakdown of tissue into gangrene. The Americans knew that typhus had taken three million lives in Eastern Europe after World War I, but now they had a means of prevention and better weapons—a typhus vaccine and DDT. On 2 May, the day the evacuation hospitals arrived, the commander of the Seventh Army imposed quarantines for typhus and tuberculosis, and summoned the U.S. Typhus Commission, which had controlled a typhus outbreak in Naples, Italy. A typhus team arrived the next day and began to immunize American personnel and dust them with DDT. On 7 May staff began to vaccinate inmates but kept typhus patients isolated for at least twenty-one days from the onset of illness to prevent transmission to others. This meant that the Americans did not immediately enter the inner camp barracks—the worst, most typhus-infested part of the camp—nor did they quickly relieve crowding there for fear of spreading typhus-bearing lice. It took over a week for personnel to prepare more spacious and clean quarters.

Smith wrote his lists, reported to his wife, and kept track of the daily death toll, finding comfort as the number of people who died daily fell from 200 during the first week to twenty by the end of May. Another medical officer performed autopsies. He chose ten of the dead bodies, five from the death train and five from the camp yard, to see what had caused their deaths. All had typhus and extreme malnutrition, eight had advanced tuberculosis, and some bodies had signs of fractures and head injuries.

Survivors in Dachau, 1 May 1945

By the end of May, conditions at Dachau had improved. Typhus was abating and American officials began to release groups of inmates by nationality. Beyond Dachau, the U.S. Typhus Commission tracked down new cases of typhus in civilian and military populations, deloused one million people, sprayed fifteen tons of DDT, and created a cordon sanitaire on the Rhine requiring all who crossed from Germany to be vaccinated and dusted to prevent the spread of disease. Thus the Army averted a broader typhus epidemic.138 The tuberculosis situation was more complicated and presented the Americans with a conundrum. What to do with thousands of people suffering from a long-term, infectious, and deadly disease?

As with the American POWs, tuberculosis continued to follow Dachau survivors into their new lives. Thousands of Jewish survivors emigrated to what would become the state of Israel. Fifteen years after liberation, the Israeli Minister of Health reported that although concentration camp survivors comprised only 25 percent of the population, they accounted for 65 percent of the tuberculosis cases in the country. Tuberculosis continued to thrive in Europe as well.

Historian Albert Cowdrey has credited the American actions with preventing a number of postwar scourges: “No one can prove that a great typhus epidemic, mass deaths of prisoners of war, or widespread outbreaks of disease among the German population would have taken place without the efforts of Army doctors of the field forces and the military government.” But, he continued, “conditions were ripe for such tragedies to occur, and Army medics brought both professional knowledge and military discipline to forestalling what might have been the last calamities of the war in Europe.” Thus, as usual, in public health the good news is no news at all.

Thousands of men survived the Vietnam War because of the quality of their hospital care. US hospitals in Vietnam were the best that could be deployed, incorporating several improvements from previous field hospitals. Army doctors were better trained, and they had good facilities at the semi-permanent base camps. As a result, more advanced surgical procedures were possible: more laparotomies, thoracotomies, vascular repairs (including even aortic and carotid repairs), advanced neurosurgery for head wounds, and other medical procedures. Blood transfusions were performed, with massive quantities of blood available for seriously wounded patients; some patients received as many as 50 units of blood. Advances in equipment resulted in the development of intensive care units with mechanical ventilators. There were far more medications available for particular diseases than in earlier conflicts.

With about 30 physicians assigned, the 12th Evacuation Hospital could keep four or five operating tables going all day, and two or three all night. A common practice was delayed primary closure for wounds with a high likelihood of infection. Instead of stitching the wound closed immediately, dirt and contaminants were flushed out, bleeding was controlled, dead flesh was removed (debrided), the wound was packed with sterile gauze, and antibiotics were administered. For a few days the patient healed, while nurses changed the bandages and made sure the wound did not get worse. Then doctors removed any remaining contaminants or dead flesh and stitched up the wound. This procedure reduced the incidence of infection compared to immediate wound closure, at the risk of a larger scar.

In any given year in Vietnam, about one soldier in three was hospitalized for disease. The main causes for hospitalization were malaria, psychiatric problems, and ordinary fevers. Although many men fell sick, competent care was available and most recovered quickly and returned to duty.

The war spurred advances in surgery and medical trauma research. New surgical techniques allowed limbs that previously would have been amputated to remain functional. Nurse anesthetist Rosemary Sue Smith recalled the development of new blood-handling procedures:

We started separating blood into its components, because we were getting a lot of aggregates that were causing a lot of disseminated intravascular coagulopathy in patients, and causing a lot of blood clots, and pulmonary thrombosis, and a lot of ARDS, Adult Respiratory Distress Syndrome, which started in Da Nang and was called Da Nang Lung initially. It has developed into today being called Adult Respiratory Distress Syndrome, and they did a lot of research on this, and they were having us separate our blood into its components, into fresh frozen plasma and into platelets, and then we started doing blood tests to see which the patients would need. If their platelets were low, or if their blood clotting factors were low, we would just give them the particular products. We actually started breaking these products down and administering them in the Vietnam War, and it’s carried over into civilian life now. They’re used today in acute trauma to prevent disseminated intravascular coagulopathy and prevent Adult Respiratory Distress Syndrome on massive traumas that have to be naturally resuscitated with blood and blood products.

In the 1960s, intensive care was still quite new and the 12th had only one (later two) intensive care wards fully equipped and staffed. A key piece of equipment was the ventilator, then called “respirator.” Ventilators worked on pure oxygen until 1969, when research revealed physiological problems from prolonged breathing of pure oxygen. Early ventilators required considerable maintenance; valves needed frequent cleaning or the machines broke down.

Antibiotics were important because of the wide variety of bacteria and large number of penetrating wounds; in the face of a possible systemic infection (the development of sepsis), antibiotics were delivered through an IV. Nurse Rosie Parmeter recalled having to prepare antibiotics to be delivered through an IV several times a day for each patient, a necessary but time-consuming task.

About two-thirds of patients cared for by the 12th were US military; the other third were mainly Vietnamese but also included nonmilitary Americans and Free World Military Assistance Forces personnel. Staff regularly dealt with the Vietnamese, both military and civilian, enemy and friendly. There were wards set aside for enemy prisoners (who were stabilized, then transferred to hospitals at POW camps) and civilians. Wounded South Vietnamese Army soldiers were also stabilized and transferred to hospitals run by the Army of the Republic of Vietnam (ARVN). Civilian patients often stayed longer because the war swamped the available hospitals for Vietnamese civilians.

Through the years of the Vietnam War, US forces sustained 313,616 wounded in action; at peak strength, there were 26 American hospitals. The 12th Evacuation Hospital was at Cu Chi for 4 years and treated just over 37,000 patients. Records for the 12th are incomplete, but the average died-of-wounds rate in Vietnam was about 2.8% of patients who reached a hospital alive. Applied to the 12th, that rate amounted to about 1,036 patients, including prisoners and Vietnamese as well as Americans. But over 36,000 people survived and could return home because of the treatment they received at the 12th Evac.

Sources:

Fort Bayard,  by David Kammer, Establishment of Fort Bayard Army Post
http://newmexicohistory.org/places/fort-bayard
George Ensign Bushnell, Colonel, Medical Corps, U. S. Army
THE ARMY MEDICAL BULLETIN, NUMBER 50 (OCTOBER 1939)
http://history.amedd.army.mil/biographies/bushnell
Chapter One, The Early Years: Fort Bayard, New Mexico
http://www.cs.amedd.army.mil/borden/FileDownloadpublic.aspx?docid=332041d7-dbd2-4edf-823f-29a66c0b65ef
Dachau concentration camp (Wikipedia)
http://en.wikipedia.org/wiki/Dachau_concentration_camp
Office of Medical History – United States Army
Esmond R. Long, M. D., TUBERCULOSIS IN WORLD WAR I
Chapter 14 – Tuberculosis
http://history.amedd.army.mil/booksdocs/wwii/PM4/CH14.Tuberculosis.htm

Chapter Four, Tuberculosis in World War I
Chapter Five, “A Gigantic Task”: Treating and Paying for Tuberculosis in the Interwar Period
Chapter Six, “Good Tuberculosis Women”: Tuberculosis Nursing during the Interwar Period
Chapter Seven, Surviving the Great Depression: Fitzsimons and the New Deal
Chapter Eight, Camp Follower: Tuberculosis in World War II
http://www.cs.amedd.army.mil/FileDownload aspx?
“Good Tuberculosis Men”: The Army Medical Department’s Struggle with Tuberculosis, by Carol R. Byerly
http://www.cs.amedd.army.mil/borden/FileDownloadpublic.aspx?docid=986faf8a-b833-46a8-a251-00f72c91da2f

The Global Distribution of Yellow Fever and Dengue
D.J. Rogers, A.J. Wilson, S.I. Hay, and A.J. Graham
Adv Parasitol. 2006; 62: 181–220. http://dx.doi.org/10.1016/S0065-308X(05)62006-4

http://www.historyofvaccines.org/content/timelines/yellow-fever

History of yellow fever
http://en.wikipedia.org/wiki/History_of_yellow_fever

Additional Reading:

Open Wound: The Tragic Obsession of William Beaumont.
Jason Karlawish.
Univ Mich Press. 2011.

The Great Influenza.
John M. Barry.
Penguin. 2004.

Flu. The story of the great influenza pandemic of 1918 and
the search for the virus that caused it.
Gina Kolata.
Touchstone. 1999

Pestilence. A Medieval Tale of Plague.
Jeani Rector.
The Horror Zine. 2012

Knife Man: The extraordinary life of John Hunter, Father of Modern Surgery
Wendy Moore.
Broadway Books. 2005

Hospital.
Julie Salamon.
Penguin Press. 2008.

Overdosed America.

John Abramson.
Harper. 2004.

Sick.
Jonathan Cohn.
Harper Collins. 2007.


The History of Hematology and Related Sciences

Curator: Larry H. Bernstein, MD, FCAP

 

The History of Hematology and Related Sciences: A Historical Review of Hematological Diagnosis from 1880–1980

 

Blood Description: The Analysis of Blood Elements as a Window into Diseases

Diagnosing bacterial infection (BI) remains a challenge for the attending physician. An ex vivo infection model based on human fixed polymorphonuclear neutrophils (PMNs) gives an autofluorescence signal that differs significantly between stimulated and unstimulated cells. We took advantage of this property for use in an in vivo pneumonia mouse model and in patients hospitalized with bacterial pneumonia. A 2-fold decrease was observed in autofluorescence intensity for cytospined PMNs from broncho-alveolar lavage (BAL) in the pneumonia mouse model and a 2.7-fold decrease was observed in patients with pneumonia when compared with control mice or patients without pneumonia, respectively. This optical method provided an autofluorescence mean intensity cut-off, allowing for easy diagnosis of BI. Originally set up on a confocal microscope, the assay was also effective using a standard epifluorescence microscope. Assessing the autofluorescence of PMNs provides a fast, simple, cheap and reliable method optimizing the efficiency and the time needed for early diagnosis of severe infections. Rationalized therapeutic decisions supported by the results from this method can improve the outcome of patients suspected of having an infection.

Monsel A, Lécart S, Roquilly A, Broquet A, Jacqueline C, et al. (2014) Analysis of Autofluorescence in Polymorphonuclear Neutrophils: A New Tool for Early Infection Diagnosis. PLoS ONE 9(3): e92564.
http://dx.doi.org/10.1371/journal.pone.0092564
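
The idea reported above reduces to comparing the mean autofluorescence intensity of a patient's PMNs against a cutoff derived from ROC analysis. The minimal sketch below illustrates that comparison only; the cutoff value, variable names, and intensity units are placeholders, not values taken from the paper.

```python
# Illustrative sketch (not the authors' code): classify a patient's PMN
# autofluorescence measurements against a cutoff derived from ROC analysis.
# The numeric cutoff below is a hypothetical placeholder.
from statistics import mean

def mean_autofluorescence(intensities):
    """Mean autofluorescence intensity of cytospined PMNs (arbitrary units)."""
    return mean(intensities)

def flag_bacterial_infection(intensities, cutoff_au=100.0):
    """Return True when the mean intensity falls below the cutoff.

    The study reports roughly 2- to 2.7-fold LOWER autofluorescence in PMNs
    from infected patients, so values below the cutoff suggest bacterial
    infection. The cutoff here is illustrative, not the published threshold.
    """
    return mean_autofluorescence(intensities) < cutoff_au

# Example: per-cell intensities from one BAL cytospin
patient_cells = [42.0, 55.5, 61.2, 48.9, 50.3]
print(flag_bacterial_infection(patient_cells))  # True -> consistent with infection
```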

This study was designed to validate or refute the reliability of total lymphocyte count (TLC) and other hematological parameters as a substitute for CD4 cell counts. Participants consisted of two groups, including 416 antiretroviral-naive (G1) and 328 antiretroviral-experienced (G2) patients. CD4+ T cell counts were performed using a Cyflow machine. Hematological parameters were analyzed using a hematology analyzer. The median ± SEM CD4 count (range) of participants in G1 was 199 ± 10.9 (5–1840 cells/μL) and the median ± SEM TLC (range) was 1.61 ± 0.05 (0.07–6.63 × 10³/μL). The corresponding values among G2 were 421 ± 15.8 (13–1801) and 2.13 ± 0.04 (0.06–5.58), respectively. Using a threshold value of 1.2 × 10³/μL for TLC alone, the sensitivity in G1 was 88.4% (specificity [SP] 67.4%, positive predictive value [PPV] 53.5%, and negative predictive value [NPV] 93.2%) for CD4 < 200 cells/μL; the sensitivity in G2 was 83.3% (SP 85.3%, PPV 23.8%, NPV 93.2%). Using multiple parameters, including TLC < 1.2 × 10³/μL, hemoglobin < 10 g/dL, and platelets < 150 × 10³/μL, the sensitivity increased to 96.0% (SP 82.7%; PPV 80%; NPV 96.7%) among G1, while no change was observed in the G2 cohort. TLC < 1.2 × 10³/μL alone is an insensitive predictor of a CD4 count < 200 cells/μL. Incorporating hemoglobin < 10 g/dL and platelets < 150 × 10³/μL enhances the ability of TLC < 1.2 × 10³/μL to predict a CD4 count < 200 cells/μL among the antiretroviral-naïve cohort. We recommend the use of multiple, inexpensively measured hematological parameters in the form of an algorithm for predicting CD4 count level.

Evaluating Total Lymphocyte Counts and Other Hematological Parameters as a Substitute for CD4 Counts in the Management of HIV Patients in Northeastern Nigeria. BA Denue, AU Abja, IM Kida, AH Gabdo, AA Bukar and CB Akawu.
Retrovirology: Research and Treatment 2013; 5: 9–16. http://dx.doi.org/10.4137/RRT.S11562
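
A hedged sketch of the screening idea described in this abstract is shown below. The thresholds (TLC < 1.2 × 10³/μL, hemoglobin < 10 g/dL, platelets < 150 × 10³/μL) come from the abstract, but how the parameters are combined is an assumption here (any criterion met flags a probable CD4 < 200 cells/μL); the published algorithm is not reproduced in this excerpt.

```python
# Sketch of the multi-parameter rule; combination logic ("any criterion met")
# is an assumption, not necessarily the authors' exact algorithm.

def predict_cd4_below_200(tlc_per_ul, hemoglobin_g_dl, platelets_per_ul):
    """Predict CD4 < 200 cells/uL from inexpensive haematological parameters.

    tlc_per_ul       : total lymphocyte count, cells/uL
    hemoglobin_g_dl  : haemoglobin, g/dL
    platelets_per_ul : platelet count, cells/uL
    """
    tlc_low = tlc_per_ul < 1200           # TLC < 1.2 x 10^3/uL
    hb_low = hemoglobin_g_dl < 10.0       # Hb < 10 g/dL
    plt_low = platelets_per_ul < 150_000  # platelets < 150 x 10^3/uL
    return tlc_low or hb_low or plt_low

# Example: an antiretroviral-naive patient
print(predict_cd4_below_200(tlc_per_ul=1100, hemoglobin_g_dl=11.2,
                            platelets_per_ul=210_000))  # True (TLC criterion met)
```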

Sepsis is a syndrome that results in high morbidity and mortality. We investigated the delta neutrophil index (DN) as a predictive marker of early mortality in patients with gram-negative bacteremia. This was a retrospective study. The DN was measured at onset of bacteremia and 24 hours and 72 hours later. The DN was calculated using an automatic hematology analyzer. Factors associated with 10-day mortality were assessed using logistic regression. A total of 172 patients with gram-negative bacteremia were included in the analysis; of these, 17 patients died within 10 days of bacteremia onset. In multivariate analysis, Sequential Organ Failure Assessment (SOFA) scores (odds ratio [OR]: 2.24, 95% confidence interval [CI]: 1.31 to 3.84; P = 0.003), DN-day 1 ≥ 7.6% (OR: 305.18, 95% CI: 1.73 to 53983.52; P = 0.030) and DN-day 3 ≥ DN-day 1 (OR: 77.77, 95% CI: 1.90 to 3188.05; P = 0.022) were independent factors associated with early mortality in gram-negative bacteremia. Of four multivariate models developed and tested using various factors, the model using both DN-day 1 ≥ 7.6% and DN-day 3 ≥ DN-day 1 was the most predictive of early mortality. DN may be a useful marker of early mortality in patients with gram-negative bacteremia. We found both DN-day 1 and DN trend to be significantly associated with early mortality.

Delta Neutrophil Index as a Prognostic Marker of Early Mortality in Gram Negative Bacteremia. HW Kim, JH Yoon, SJ Jin, SB Kim, NS Ku, SJ Jeong,
et al. Infect Chemother 2014;46(2):94-102. pISSN 2093-2340·eISSN 2092-6448
http://dx.doi.org/10.3947/ic.2014.46.2.94
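
The two DN-based markers reported above (DN-day 1 ≥ 7.6% and a rising DN between day 1 and day 3) can be expressed as a simple screen, sketched below under stated assumptions: flagging a patient when either marker is present is my reading, and the published multivariable model also incorporated SOFA scores, which are omitted here.

```python
# Minimal sketch of the two delta neutrophil index (DN) risk markers.
# Treating "either marker present" as high risk is an assumption.

def dn_risk_flags(dn_day1_pct, dn_day3_pct):
    """Return the two DN risk markers described in the abstract.

    dn_day1_pct : DN at bacteremia onset (%)
    dn_day3_pct : DN 72 hours later (%)
    """
    return {
        "dn_day1_ge_7_6": dn_day1_pct >= 7.6,
        "dn_rising": dn_day3_pct >= dn_day1_pct,
    }

def high_early_mortality_risk(dn_day1_pct, dn_day3_pct):
    return any(dn_risk_flags(dn_day1_pct, dn_day3_pct).values())

print(high_early_mortality_risk(dn_day1_pct=9.2, dn_day3_pct=11.0))  # True
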
Various indices derived from red blood cell (RBC) parameters have been described for distinguishing thalassemia and iron deficiency. We studied the microcytic-to-hypochromic RBC ratio as a discriminant index in microcytic anemia, compared it to traditional indices in a learning set, and confirmed our findings in a validation set. The learning set comprised samples from 371 patients with microcytic anemia (mean cell volume, MCV < 80 fL), which were measured on a CELL-DYN Sapphire analyzer, and various discriminant functions were calculated. Optimal cutoff values were established using ROC analysis. These values were used in the validation set of 338 patients. In the learning set, a microcytic-to-hypochromic RBC ratio >6.4 was strongly indicative of thalassemia (area under the curve 0.948). Green-King and England-Fraser indices showed comparable area under the ROC curve. However, the microcytic-to-hypochromic ratio had the highest sensitivity (0.964). In the validation set, 91.1% of microcytic patients were correctly classified using the M/H ratio. Overall, the microcytic-to-hypochromic ratio as measured in CELL-DYN Sapphire performed equally well as the Green-King index in identifying thalassemia carriers, but with higher sensitivity, making it a quick and inexpensive screening tool.
Differential diagnosis of microcytic anemia: the role of microcytic and hypochromic erythrocytes. E. Urrechaga, J.J.M.L. Hoffmann, S. Izquierdo, J.F. Escanero. Int J Lab Hematol. Aug 2014. http://dx.doi.org/10.1111/ijlh.12290
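
A minimal sketch of the M/H discriminant is given below. The >6.4 cutoff is the learning-set value quoted in the abstract; the inputs are assumed to be the analyzer-reported percentages of microcytic and hypochromic red cells for a microcytic (MCV < 80 fL) sample, and the function names are my own.

```python
# Sketch of the microcytic-to-hypochromic (M/H) ratio as a screening discriminant.
# Cutoff of 6.4 is from the abstract's learning set; inputs are assumed to be
# the analyzer's %microcytic and %hypochromic RBC values.

def mh_ratio(percent_microcytic, percent_hypochromic):
    """Microcytic-to-hypochromic RBC ratio."""
    if percent_hypochromic == 0:
        return float("inf")
    return percent_microcytic / percent_hypochromic

def suggests_thalassemia(percent_microcytic, percent_hypochromic, cutoff=6.4):
    """An M/H ratio above the cutoff points toward thalassemia trait rather
    than iron deficiency in a microcytic sample."""
    return mh_ratio(percent_microcytic, percent_hypochromic) > cutoff

print(suggests_thalassemia(percent_microcytic=22.5, percent_hypochromic=2.1))  # True
```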

Achievement of complete response (CR) to therapy in chronic lymphocytic leukemia (CLL) has become a feasible goal, directly correlating with prolonged survival. It has been established that the classic definition of CR actually encompasses a variety of disease loads, and more sensitive multiparameter flow cytometry [and polymerase chain reaction methods] can detect the disease burden with a much higher sensitivity. Detection of malignant cells with a sensitivity of 1 tumor cell in 10,000 cells (10⁻⁴), using the above-mentioned sophisticated techniques, is the current cutoff for minimal residual disease (MRD). Tumor burdens lower than 10⁻⁴ are defined as MRD-negative. Several studies in CLL have determined the achievement of MRD negativity as an independent favorable prognostic factor, leading to prolonged disease-free and overall survival, regardless of the treatment protocol or the presence of other pre-existing prognostic indicators. Minimal residual disease evaluation using flow cytometry is a sensitive and applicable approach which is expected to become an integral part of future prospective trials in CLL designed to assess the role of MRD surveillance in treatment tailoring.

Minimal Residual Disease Surveillance in Chronic Lymphocytic Leukemia by Fluorescence-Activated Cell Sorting. S Ringelstein-Harlev, R Fineman.
Rambam Maimonides Med J. Oct 2014; 5(4): e0027. http://dx.doi.org/10.5041/RMMJ.10161
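
The MRD definition used above amounts to dividing the number of clonal CLL events by the total number of evaluated leukocyte events and comparing against 10⁻⁴. The sketch below shows only that arithmetic; the event counts are hypothetical, and real MRD assays also require adequate total events and validated gating, which are not modeled here.

```python
# Sketch of the MRD-level calculation implied above:
# MRD = clonal CLL events / total evaluated leukocyte events,
# with MRD negativity defined as < 1 tumour cell per 10,000 (10^-4).
# Event counts below are hypothetical.

MRD_NEGATIVE_THRESHOLD = 1e-4

def mrd_level(clonal_events, total_events):
    return clonal_events / total_events

def is_mrd_negative(clonal_events, total_events):
    return mrd_level(clonal_events, total_events) < MRD_NEGATIVE_THRESHOLD

# 35 clonal events among 500,000 analysed cells -> 7 x 10^-5 -> MRD-negative
print(mrd_level(35, 500_000))        # 7e-05
print(is_mrd_negative(35, 500_000))  # True
```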

Natural killer (NK) cells (CD3-CD16+CD56+) are major players in innate immunity, both as direct cytotoxic effectors and as regulators of other innate immune cell types. We have shown that, using the FlowCellect™ human NK cell characterization kit, one can achieve accurate phenotyping on a variety of sample types, including whole blood samples. Using the same kit to perform an NK cell cytotoxicity test, we demonstrate that unbound K562 target cells can be clearly distinguished from those that have been engaged by CD56+ NK cells, and each of these populations can be further investigated for viability using the eFluor 660® dye.

Analysis of NK cell subpopulations in whole blood (panel A)

Proportion of K562 target cells bound to NK cells

In a 5:1 effector cell:target cell population, 8% of the K562 cells were bound to NK cells (Figure 3B). Of the bound K562 cells, 84% were viable (Figure 3C, stained with fixable viability dye), while 96% of the unbound K562 cells were viable (Figure 3D). (Panels B, C, and D not shown.)

Characterization of Natural Killer Cells Using Flow Cytometry.
EMD Millipore is a division of Merck KGaA, Darmstadt, Germany.
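
The percentages quoted above are simple ratios of gated flow-cytometry event counts. The sketch below reproduces only that arithmetic with hypothetical event counts chosen to match the quoted 8%, 84%, and 96% figures; it is not the kit's analysis workflow.

```python
# Illustrative calculation of bound fraction and per-population viability
# from gated event counts. The raw counts are hypothetical.

def percent(part, whole):
    return 100.0 * part / whole if whole else 0.0

# Hypothetical gated events from a 5:1 effector:target co-culture
k562_total       = 10_000   # all K562 target cells
k562_bound_to_nk = 800      # K562 events co-gated with CD56+ NK cells
bound_viable     = 672      # bound K562 negative for the viability dye
unbound_viable   = 8_832    # unbound K562 negative for the viability dye

print(percent(k562_bound_to_nk, k562_total))                   # 8.0  -> % of K562 bound
print(percent(bound_viable, k562_bound_to_nk))                 # 84.0 -> viable among bound
print(percent(unbound_viable, k562_total - k562_bound_to_nk))  # 96.0 -> viable among unbound
```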

Red blood cell distribution width (RDW) is increased in liver disease. Its clinical significance, however, remains largely unknown. The aim of this study was to identify whether RDW is a prognostic index for liver disease. This retrospective study included 33 patients with non-cirrhotic chronic HBV hepatitis, 125 patients with liver cirrhosis after HBV infection, 81 newly diagnosed primary hepatocellular carcinoma (pHCC) patients, 17 alcoholic liver cirrhosis patients, and 42 patients with primary biliary cirrhosis (PBC). Sixty-six healthy individuals represented the control cohort. The relationship between RDW on admission and clinical features was assessed, and the association between RDW and hospitalization outcome was estimated by receiver operating characteristic (ROC) curve analysis and a multivariable logistic regression model. Increased RDW was observed in liver disease patients. RDW was positively correlated with serum bilirubin and creatinine levels and prothrombin time, and negatively correlated with platelet counts and serum albumin concentration. A subgroup analysis, considering the different etiologies, revealed similar findings. Among the patients with liver cirrhosis, RDW increased with worsening of Child-Pugh grade. In patients with PBC, RDW positively correlated with the Mayo risk score. Increased RDW was associated with worse hospital outcome, as shown by the AUC [95% confidence interval (CI)] of 0.76 (0.67–0.84). RDW above 15.15% was independently associated with poor hospital outcome after adjustment for serum bilirubin, platelet count, prothrombin time, albumin, and age, with an odds ratio (95% CI) of 13.29 (1.67–105.68). RDW is a potential prognostic index for liver disease.

Red blood cell distribution width is a potential prognostic index for liver disease.
Z Hu, Y Sun, Q Wang, Z Han, Y Huang, X Liu, C Ding, et al.
Clin Chem Lab Med 2013; 51(7): 1403–1408.
http://dx.doi.org/10.1515/cclm-2012-0704
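
At its simplest, the prognostic cutoff reported above can be applied as a single-rule flag, as sketched below. This ignores the multivariable adjustment (bilirubin, platelets, prothrombin time, albumin, age) behind the published odds ratio of 13.29, so it should be read only as an illustration of the 15.15% threshold.

```python
# Simple sketch of the univariate RDW rule reported above.
# The published association came from a multivariable logistic model that is
# not reproduced here.

RDW_CUTOFF_PCT = 15.15

def rdw_flags_poor_outcome(rdw_pct):
    """Return True when RDW exceeds the reported prognostic cutoff."""
    return rdw_pct > RDW_CUTOFF_PCT

for rdw in (13.8, 15.2, 17.6):
    print(rdw, rdw_flags_poor_outcome(rdw))  # False, True, True
```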

Blood Plasma and Red Blood Cells

Whole blood consists of red and white blood cells, as well as platelets, suspended in a liquid referred to as blood plasma. According to the American Red Cross, plasma is 92% water and makes up 55% of blood volume. The relative magnetic permeability of blood plasma is essentially equal to 1.

Red blood cells make up a slightly lower share of blood volume than plasma, about 45% of whole blood. These cells contain hemoglobin, which in turn contains iron that helps transport oxygen throughout the body. The relative magnetic permeability of red blood cells is slightly less than 1 (approximately 1 − 3.9 × 10⁻⁶); in other words, red blood cells are diamagnetic.

Due to their magnetic properties, red blood cells may be separated from the plasma via a magnetophoretic approach. If the blood flows in a channel subject to a magnetophoretic force, we can control where the red blood cells and the plasma go within the channel. In other words, because the red blood cells have a different permeability than the surrounding plasma, they can be deflected and separated within the flow channel. Such methodology, however, postdates 1980, the end of the period reviewed here.
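
To give a sense of scale, the sketch below estimates the magnetophoretic force on one red cell using the standard small-particle expression F = V·Δχ·B·(dB/dx)/μ₀, taking the susceptibility difference implied by the permeability figure quoted above. The field strength, field gradient, and cell volume are illustrative assumptions, not values from the text.

```python
# Hedged order-of-magnitude estimate of the magnetophoretic force on a red cell.
# Field (1 T), gradient (100 T/m) and cell volume (~90 fL) are illustrative.
import math

MU_0 = 4 * math.pi * 1e-7        # vacuum permeability, T*m/A

def magnetophoretic_force(volume_m3, delta_chi, b_tesla, db_dx_t_per_m):
    """Magnitude (newtons) of F = V * |delta_chi| * B * (dB/dx) / mu_0."""
    return volume_m3 * abs(delta_chi) * b_tesla * db_dx_t_per_m / MU_0

rbc_volume = 90e-18              # ~90 fL mean cell volume, in m^3
delta_chi = 3.9e-6               # |susceptibility difference| quoted in the text
force = magnetophoretic_force(rbc_volume, delta_chi, b_tesla=1.0, db_dx_t_per_m=100.0)
print(f"{force:.2e} N")          # ~2.8e-14 N: tiny, hence the need for strong gradients
```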

Timeline of Major Hematology Landmarks

1877 Paul Ehrlich develops techniques to stain blood cells to improve microscopic visualization.

1897 The Diseases of Infancy and Childhood contains a 20-page chapter on diseases of the blood and is the first American pediatric medical textbook to provide significant hematologic information.

1821–1902 Rudolph Virchow, during a long and illustrious career, demonstrates the importance of fibrin in the blood coagulation process, coins the terms embolism and thrombosis, identifies the disease leukemia, and theorizes that leukocytes are made in response to inflammation.

1901 Karl Landsteiner and colleagues identify blood groups of A, B, AB, and O.

1907 Ludvig Hektoen suggests that the safety of transfusion might be improved by crossmatching blood between donors and patients to exclude incompatible mixtures. Reuben Ottenberg performs the first blood transfusion using blood typing and crossmatching in New York. Ottenberg also observes the Mendelian inheritance of blood groups and recognizes the “universal” utility of group O donors.

1910 The first clinical description of sickle cell disease is published in the medical literature.

1914 Sodium citrate is found to prevent blood from clotting, allowing blood to be stored between collection and transfusion.

1924 Pediatrics is the first comprehensive American publication on pediatric hematology.

1925 Alfred P. Hart performs the first exchange transfusion.

1925 Thomas Cooley describes a Mediterranean hematologic syndrome of anemia, erythroblastosis, skeletal disorders, and splenomegaly that is later called Cooley’s anemia and now thalassemia.

1936 Chicago’s Cook County Hospital establishes the first true “blood bank” in the United States.

1938 Dr. Louis Diamond (known as the “father of American pediatric hematology”) along with Dr. Kenneth Blackfan describes the anemia still known as Diamond-Blackfan anemia.

1941 The Atlas of the Blood of Children is published by Blackfan, Diamond, and Leister.

1945 Coombs, Mourant, and Race describe the use of antihuman globulin (later known as the “Coombs Test”) to identify “incomplete” antibodies.

1954 The blood product cryoprecipitate is developed to treat bleeds in people with hemophilia.

1950s The “butterfly” needle and intercath are developed, making IV access easier and safer.

1961 The role of platelet concentrates in reducing mortality from hemorrhage in cancer patients is recognized.

1962 The first antihemophilic factor concentrate to treat coagulation disorders in hemophilia patients is developed through fractionation.

1969 S. Murphy and F. Gardner demonstrate the feasibility of storing platelets at room temperature, revolutionizing platelet transfusion therapy.

1971 Hepatitis B surface antigen testing of blood begins in the United States.

1972 Apheresis is used to extract one cellular component, returning the rest of the blood to the donor.

1974 Hematology of Infancy and Childhood is published by Nathan and Oski.

As I write today my hospital celebrates its 150th anniversary. Great Ormond Street Children’s Hospital was founded on 14 February 1852 by the visionary Dr Charles West, who believed that hospital care allied to research in children’s diseases would reduce child mortality, then above 50% by the age of 15 years. It is foolish to believe that we can progress in medicine without a knowledge of the past and that much of life is based upon experience. When putting together a series of articles on the history of haematology, initially published in BJH, this was the main raison d’être, along with the belief that the practice of medicine has become increasingly serious but should also be fun and interesting and even occasionally uplifting to the spirit.

The central problem of any survey of the history of haematology is usually the question of balance. Achieving a degree of balance among themes and topics that will be satisfactory to practicing haematologists/physicians with an interest in blood diseases is essentially impossible. Our preference has been for themes of general interest rather than those of a purely scientific view into a field that has led the way in understanding the molecular basis of human disease.

  1. M. Hann, London, 2002; O. P. Smith, Dublin, 2002.

Origins of the Discipline `Neonatal Haematology’, 1925-75

In every modern neonatal intensive care unit (NICU), haematological problems are encountered daily. Many of these problems involve varieties of anaemia, neutropenia or thrombocytopenia that are unique to NICU patients. A characteristic aspect of these unique problems is that, if the neonate survives, the haematological problem will remit and will not recur later in life, nor will it evolve into a chronic illness (although the problem might occur in a future newborn sibling). This characteristic comes about because the common haematological problems of NICU patients are not genetic defects but are environmental stresses (such as infection, alloimmunization or a variety of maternal illnesses) that are imposed on a developmentally immature haematopoietic system.

In the USA, and in some parts of Europe, the unique haematological problems that occur among NICU patients are diagnosed and treated by neonatologists, not by paediatric haematologists. Although these haematological conditions were generally first described by haematologists, the conditions occur, obviously, in neonates. Thus, the neonatologist, who is familiar with intensive care management of neonates, has also become familiar with the diagnosis and management of the neonate’s common haematological disorders. A growing number of neonatologists have sought specific additional training in haematology, with the goals of discovering the mechanisms underlying the unique haematological problems of NICU patients and improving the management and outcome of the patients who have these conditions. These physicians have remained as neonatologists and they do not practice paediatric haematology, although their research contributions certainly come under the purview of haematology, or more precisely under the discipline of `neonatal haematology’. In many places in Europe, it is the haematologists rather than the neonatologists who have an academic and clinical interest in neonatal haematology.

The roots of the discipline of neonatal haematology can be traced to the early application of haematological methods to animal and human embryos and fetuses, such as found in the reports of Maximow (1924) and Wintrobe & Schumacker (1936). The clinical underpinnings of this discipline include reports of anaemia (Finkelstein, 1911) and jaundice (Blomfeld, 1901; Ylppö, 1913) among neonates.

Before the 1930s, very few studies and very few published clinical case reports originated from premature nurseries. Such nurseries had dubious beginnings, which were criticized by some physicians as more resembling circus exhibitions than medical care wards (Bonar, 1932). These units generally had mortality rates greatly exceeding 50% on the day of admission, with the majority of the first-day survivors having late deaths or serious long-term morbidity.

It was not until publication of the review of premature nursery care at the Children’s Hospital of Michigan, in 1932, that it was clear that some units had instituted systematic attempts to monitor and improve outcomes. A special care nursery had been established at the Children’s Hospital in 1926 and, in 1932, Drs Marsh Poole and Thomas Cooley reported their experience in that unit (Poole & Cooley, 1932). The report included  incubator design with temperature and humidity control, growth curves of patients on various feeding practices, mortality statistics and attempts to determine causes of death.

At the time premature nursery care was beginning to merit academic credentials, reports were published of haematological problems that were unique to the neonate. These papers included the seminal publication on erythroblastosis fetalis by Drs Diamond (Fig 1), Blackfan and Baty (Diamond et al, 1932), and the report of sepsis neonatorum at the Yale New Haven Hospital by Ethel C. Dunham (Fig 2) (Dunham, 1933).

The first major textbook devoted to clinical haematology, as well as the first textbook of neonatology, contained very little information about what are today’s common NICU haematological problems. For instance, in the first edition of Clinical Hematology by Dr Maxwell M. Wintrobe (Fig 3), of the Johns Hopkins University Hospital (Wintrobe, 1942), several topics related to paediatric haematology were reviewed, but discussions of the haematological problems of neonates were limited to three – erythroblastosis fetalis, haemorrhagic disease of the newborn and the `anaemia of prematurity’. Similarly, Premature Infants: A Manual for Physicians, the original neonatology textbook, published in 1948 by Dr Ethel C. Dunham (Fig 2; Dunham, 1948), had only a few pages devoted to haematological problems – the same three discussed by Dr Wintrobe. Also, the classic neonatology textbook, `The Physiology of the Newborn Infant’, published in 1945 by Dr Clement A. Smith, contained almost no discussion of haematological problems (Smith, 1945). Thrombocytopenia, which is now diagnosed among 25-30% of NICU patients, and neutropenia, now diagnosed in 8-10% of NICU patients, were not mentioned.

The first article published in Paediatrics (1948) dealing with a neonatal haematological problem was in volume two, in which Dr Diamond detailed his technique for performing a replacement transfusion (which later became known as an `exchange’ transfusion) as a treatment for erythroblastosis fetalis (Diamond, 1949). The second paper published by Paediatrics containing aspects of neonatal haematology came 1 year later, when Silverman & Homan (1949) described leucopenia among neonates with sepsis. Most of the 25 infants they described, who were treated at Babies Hospital in New York over an 11-year period, had `late-onset’ sepsis, beginning after 3 days of life. They reported 14 neonates with Escherichia coli sepsis and four with streptococcal or staphylococcal sepsis, and observed that leucopenia occurred occasionally among these patients but was uncommon. (Indeed, today neutropenia remains uncommon in `late-onset’ sepsis, but common in congenital or `early onset’ sepsis.)

Louis K. Diamond, MD, at Children’s Hospital, Boston, MA, date unknown (obtained with the kind assistance of Charles F. Simmons, MD, Harvard University).

Diagnosing neutropenia, anaemia or thrombocytopenia in a neonate obviously requires knowledge of the expected normal range for neutrophil concentration, haematocrit and platelet concentration in the appropriate reference population. Early contributions to neonatal haematology included the publications of these reference ranges. The landmark studies included the range of blood leucocyte and neutrophil concentrations in neonates published in 1935 by Dr Katsuji Kato from the Department of Paediatrics at the University of Chicago (Kato, 1935). He tabulated the leucocyte concentrations and differential counts of 1081 children, ranging from birth to 15 years of age. A striking finding of his report (Fig 4) was the very high neutrophil counts during the first hours and days of life. Blood neutrophil concentrations among neonates with infections were published during the early and mid-1970s by Dr Marietta Xanthou (Fig 5) at the Hammersmith Hospital in London (Xanthou, 1970, 1972), and by Drs Barbara Manroe and Charles Rosenfeld (Fig 6) at the University of Texas Southwestern Medical Center in Dallas, Texas (Manroe et al, 1977).

Normal values for haemoglobin, haematocrit, erythrocyte indices and leucocyte concentrations were refined by DeMarsh et al (1942, 1948), and in a series of publications in the early 1950s in Archives of Diseases of Children by Gairdner et al (1952a, b). These were followed by observations on human fetal haematopoiesis by Thomas and Yoffey in the British Journal of Haematology (Thomas & Yoffey, 1962, 1964), and by the work on blood volume during the 1960s (Usher et al, 1963, Usher & Lind, 1965; Yao et al, 1967, 1968). Normal ranges for blood platelet counts in ill and well preterm and term infants were published in the early 1970s (Sell et al, 1973; Corrigan, 1974).

The first publication addressing the problem of neutropenia accompanying fatal early onset bacterial sepsis was that of Tygstrup et al (1968). This was a report of a near-term male with congenital Listeria sepsis who lived for only 4 h. The platelet count was 80 × 10⁹/l and the leucocyte count was 13.7 × 10⁹/l, but no granulocytes were observed on the differential count, which consisted of 84% lymphocytes, 8% monocytes and 8% leucocyte precursors. A sternal marrow aspirate taken shortly before death revealed myeloblasts, promyelocytes and myelocytes, but no band or segmented neutrophils.

An important advance in understanding the blood neutrophil count during neonatal sepsis came with back-to-back papers in Archives of Disease in Childhood in 1972 by Dr Marietta Xanthou of Hammersmith Hospital, London (Xanthou, 1972), and Drs Gregory and Hey of Babies' Hospital, Newcastle upon Tyne (Gregory & Hey, 1972). Both papers reported that neonates who had life-threatening (or indeed fatal) infections became neutropenic prior to death. Dr Xanthou reported on 35 ill preterm and term babies within their first 28 d of life. Twenty-four were ill but not infected, and these had normal blood neutrophil concentrations and morphology. However, among the 11 who were ill with a bacterial infection, neutrophilia was observed in the survivors, but neutropenia, a `left shift' and toxic granulation were observed in the non-survivors. Consistent with this observation, Gregory and Hey reported three neonates who died with overwhelming bacterial sepsis and noted that all had profound neutropenia. Neutrophilia was common among the survivors, whereas neutropenia, a `left shift' and specific neutrophil morphological changes were seen among those who subsequently died.

A pivotal publication that launched the search for mechanistic information and successful treatments was that of Dr Barbara Manroe, a fellow in Neonatal Medicine, and her mentor Dr Charles Rosenfeld (Fig 6) of the University of Texas Southwestern, Parkland Hospital, Dallas, Texas (Manroe et al, 1977). They evaluated 45 neonates who had culture-proven group B streptococcal infection and found that 39 had abnormal leucocyte counts (25 with neutrophilia and 14 with neutropenia) and that 41 had a `left shift'. This paper was the first to quantify the `left shift' using a method that has since become popular in neonatology: the ratio of immature neutrophils to total neutrophils on the differential cell count.
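
(An illustrative aside, not part of Manroe's paper: the counts below are invented and the published reference ranges are not reproduced here.) The immature-to-total (I:T) neutrophil ratio they introduced is computed from the manual differential roughly as in this minimal Python sketch:

def immature_to_total_ratio(segmented, bands, metamyelocytes=0, myelocytes=0, promyelocytes=0):
    # I:T ratio = immature neutrophil forms / all neutrophil forms on the differential count.
    immature = bands + metamyelocytes + myelocytes + promyelocytes
    total = segmented + immature
    return immature / total if total else 0.0

# Hypothetical 100-cell differential (illustrative numbers only).
ratio = immature_to_total_ratio(segmented=30, bands=8, metamyelocytes=2)
print(f"I:T ratio = {ratio:.2f}")   # -> 0.25; higher values indicate a more pronounced 'left shift'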

From these beginnings, hundreds of studies using experimental models, clinical observations and trials were published, detailing the kinetic and molecular mechanisms accounting for this common variety of neutropenia. Marked improvements in the survival of neonates with this condition have come about through combined efforts, including early maternal screening for GBS carriage, early antimicrobial administration to ill neonates, non-specific antibody administration and a variety of measures to improve supportive care of neonates with early onset sepsis.

In the early 1930s, Dr Helen Mackay worked as a paediatrician at the Mothers' Hospital, a maternity hospital in north-east London. Acting on the observation of Lichtenstein (1921) that infants of subnormal birth weight regularly became anaemic in the first months of life, she measured and reported serial heel-stick haemoglobin levels in 150 infants during their first 6 months. Thirty-nine of these infants weighed under five pounds at birth (six were under four pounds), 52 weighed five to six pounds, and 59 weighed six pounds or more. She showed that babies of the lightest birth weights had the most rapid fall in haemoglobin and that their levels fell lower than those of babies of heavier birth weight (Mackay et al, 1935). Figure 7 contrasts this fall in babies weighing `3-4 lbs odd at birth' with those weighing `5 lbs odd at birth'.

Her attempts to prevent the anaemia of prematurity failed, but her work constituted the first clear definition of the `anaemia of prematurity' and showed that iron administration did not prevent this condition. In the early 1950s, Douglas Gairdner, John Marks and Janet D. Roscoe, of the Department of Pathology of the Cambridge Maternity Hospital, published pioneering studies of blood formation in infancy (Gairdner et al, 1952a, b). Studying 105 blood samples and 102 bone marrow samples, they concluded that erythropoiesis ceases when the oxygen saturation increases from about 65% in the umbilical vein to >95% just after birth. Publications by Dr Irving Schulman in the mid- to late 1950s defined three phases of the anaemia of prematurity and provided a mechanistic explanation for the anaemia (Schulman & Smith, 1954; Schulman, 1959). His work illustrated that the early and intermediate phases of this anaemia occur in the face of relative iron excess and are unaffected by prophylactic iron administration.

Haemoglobin levels during the first 25 weeks of life among neonates in London [by permission; Archives of Disease in Childhood (Mackay, 1935)].

In 1963, Dr Sverre Halvorsen of the Department of Paediatrics at Rikshospitalet in Oslo, Norway (Fig 9), provided an underlying explanation for the observations made by Mackay, Gairdner and Schulman (Halvorsen, 1963). He observed that, compared with the blood of healthy adults, umbilical cord blood of healthy neonates had a high erythropoietin concentration, and the concentration was considerably higher still in the plasma of severely erythroblastotic, anaemic infants. Among the healthy infants, erythropoietin levels fell to unmeasurably low concentrations after delivery, but levels remained elevated in hypoxic and cyanotic infants. Dr Per Haavardsholm Finne, also of the Children's Department, Paediatric Research Institute and Department of Obstetrics and Gynaecology at Rikshospitalet in Oslo, observed high concentrations of erythropoietin in the amniotic fluid and the umbilical cord blood after fetal hypoxia (Finne, 1964, 1967).

In subsequent studies, Dr Halvorsen observed lower plasma erythropoietin concentrations in the cord blood of preterm infants at delivery than in term neonates at delivery (Halvorsen & Finne, 1968). These observations supported the concept of Gairdner et al (1952a, b) that the postnatal fall in erythropoiesis (the `physiologic anaemia' of neonates) is the result of an increase in oxygen delivery to tissues following birth and is mediated by a fall in circulating erythropoietin concentration. The observations gave rise to the postulate that the `anaemia of prematurity' is an exaggeration of this physiological anaemia and involves a limited ability of preterm infants to appropriately increase erythropoietin production.

Many landmark reports of haematological findings of neonates that were published between 1925 and 1975 were not detailed in this review because they were outside the restricted topics selected.

Robert D. Christensen, MD, Gainesville, FL
Brit J Haem 2001; 113: 853-860

Towards Molecular Medicine; Reminiscences of the Haemoglobin Field

When historians of medicine in the twentieth century start to piece together the complex web of events that led to a change of emphasis in medical research, from studies of patients and their organs to disease at the level of cells and molecules, they will undoubtedly have their attention drawn to the haemoglobin field, particularly the years that followed Linus Pauling's seminal paper in 1949 which described sickle-cell anaemia as a `molecular disease'. These are personal reminiscences of some of the highlights of those exciting times, and of those who made them happen.

One of my first patients while serving in the RAMC was a Nepalese Gurkha child who had been kept alive from the first few months of life with regular blood transfusions, without a diagnosis. Henry Kunkel had published a paper which described how, using electrophoresis in slabs of starch, he had found a minor component of human haemoglobin (Hb), Hb A2, the proportion of which was elevated in some carriers of thalassaemia. After several weeks spent knee deep in potato starch, we found that the Gurkha child's parents had increased Hb A2 levels and, hence, that she was likely to be homozygous for thalassaemia. I was hauled up before the Director General of Medical Services for the Far East Land Forces and told that I could be court-martialled for not getting permission from the War House (Office) to publish information about military personnel. `And, in any case', he added, `it is bad form to tell the world that one of our pukka regiments has bad genes; don't do it again'.

Just before the end of my National Service I arranged to go to Johns Hopkins Hospital in Baltimore to train in genetics and haematology. I was told that I was wasting my time working on haemoglobin because there was `nothing left to do'. `Start exploring red cell enzymes', it was suggested. On arriving in Baltimore in 1960, however, I found that human genetics, and the haemoglobin field in particular, were bubbling with excitement and potential. The only lesson for those contemplating careers in medical research from this chapter of academic and military gaffes is that, regardless of the working conditions, when there are sick people there are always interesting research questions to be asked.

The excitement of the haemoglobin field in 1960 reflected the chance amalgamation of several disciplines in the 1950s, particularly X-ray crystallography, protein chemistry, human genetics and haematology.

From the early 1930s the structure of proteins became one of the central problems of biochemistry. At that time, the only way of tackling this problem was by X-ray crystallography. In 1937 Felix Haurowitz suggested to Max Perutz (Fig 1) that an X-ray study of haemoglobin might be a good subject for his doctoral thesis. Perutz was given some large crystals of horse methaemoglobin which gave excellent X-ray diffraction patterns.

Max Perutz

However, there was a major snag: an X-ray diffraction pattern provided only half the information required to solve the structure of a protein, that is, the amplitudes of the diffracted rays, while the other half, their phases, could not be determined. In 1953, however, Perutz and his colleagues discovered that the phase problem could be solved in two dimensions by comparing the diffraction pattern of a crystal of native haemoglobin with that of haemoglobin reacted with mercuribenzoate, which combines with its two reactive sulphydryl groups. In short, solving the structure in three dimensions required the comparison of the diffraction patterns of at least three crystals, one native and two with heavy atoms combined with different sites on the haemoglobin molecule. In 1959 this approach yielded the first three-dimensional model of haemoglobin, at 5.5 Å resolution.

Protein chemistry evolved side-by-side with X-ray crystallography during the 1950s. In 1951 Fred Sanger solved the structure of insulin, a remarkable tour de force which showed that proteins have unique chemical structures and amino acid sequences. Sanger had perfected methods for the fractionation and characterization of small peptides by paper chromatography or electrophoresis. In 1956 Vernon Ingram (Fig 2), who, like Max Perutz, had come to Britain as a refugee, was set the task of studying the structure of haemoglobin from patients with sickle-cell anaemia. Ingram separated the peptides produced after globin had been hydrolysed with the enzyme trypsin, which cuts only at lysine and arginine residues. Although these amino acids accounted for 60 residues per mol of haemoglobin, so that trypsin cleavage should have yielded roughly 60 peptides, only 30 tryptic peptides were obtained, indicating that haemoglobin consists of two identical half molecules. Re-examination of the amino-terminal sequences of haemoglobin by groups in the United States and Germany showed 2 mols of valine-leucine and 2 mols of valine-histidine-leucine per mol of globin. These findings, which were in perfect agreement with the X-ray crystallographic results, suggested that haemoglobin is a tetramer composed of two pairs of unlike peptide chains, which were called α and β.

A seminal advance, and one which was to mark the beginning of molecular medicine, was the chance result of an overnight conversation on a train journey between Denver and Chicago. Linus Pauling, the protein chemist, and William Castle (Fig 3), one of the founding fathers of experimental haematology, were returning from a meeting in Denver and Castle mentioned to Pauling that he and his colleagues had noticed that when red cells from patients with sickle-cell anaemia are deoxygenated and sickle they show birefringence in polarized light.

Five generations of Boston haematology. Seated is William Castle. Standing (left to right) are Stuart Orkin, David Nathan and Alan Michelson. The picture on the left is of Dean David Edsall of Harvard Medical School, who established the Thorndike Laboratory at the Boston City Hospital. He was succeeded by Dean Peabody, who recruited both George Minot, who won the Nobel Prize for his work on pernicious anaemia, and William Castle, who should also have received it.

Pauling guessed that this might reflect a structural difference between normal and sickle-cell haemoglobin which could be detected by a change in charge. He gave this problem to one of his postdoctoral students, a young medical graduate called Harvey Itano. At that time they knew that a Swede, Arne Tiselius, had invented a machine for separating proteins according to their charge by electrophoresis. As there was no machine of this kind in Pauling’s laboratory, Itano and his colleagues set to and built one. Eventually they found that the haemoglobin of patients with sickle-cell anaemia behaves differently to that of normal people in an electric field, indicating that it must have a different amino acid composition. Even better, the haemoglobin of sickle-cell carriers was a mixture of both types of haemoglobin. This work was published in Science in 1949, under the title `Sickle-cell anaemia: a molecular disease’.

Perutz and Crick suggested to Ingram that he should apply Sanger’s techniques of peptide analysis to see if he could find any difference between normal and sickle cell haemoglobin. After digesting haemoglobin with trypsin, Ingram separated the peptides by electrophoresis and chromatography in two dimensions to produce what he later called `fingerprints’. He recalls that his first efforts looked like a watercolour that had been left out in the rain. But gradually things improved and he was able to show that the fingerprints of Hbs A and S were identical except for the position of one peptide. Using a method that had been developed a few years earlier by Pehr Edman, which allowed a peptide to be degraded one amino acid at a time in a stepwise fashion, Ingram found that this difference was due to the substitution of valine for glutamic acid at position 6 in the β chain of Hb S.

As well as demonstrating how a crippling disease can result from only a single amino acid difference in the haemoglobin molecule, this beautiful work had broader implications for molecular genetics. Although nothing was known about the nature of the genetic code at the time, the findings were compatible with the notion that the primary product of the β-globin gene is a peptide chain, a further development of the one-gene-one-enzyme concept, suggested earlier by Beadle and Tatum from their studies of Neurospora, and a prelude to the later studies of Yanofsky on Escherichia coli, which were to confirm this principle.

With the advent of simple filter paper electrophoresis, haemoglobin analysis became the province of clinical research laboratories during the 1950s and `new’ abnormal haemoglobins appeared almost by the week. Although many scientists were involved it was Hermann Lehmann (Fig 4) who became the father figure. Like Handel, Hermann was born in Halle and, also like the composer, made his home in Great Britain. He came to England as a refugee and at the beginning of the Second World War had a short period of internment as a `friendly alien’ at Huyton, close to Liverpool, an experience shared with many others, including Max Perutz. He travelled widely during his later war service in the RAMC and developed a wide international network which enabled him to discover 81 haemoglobin variants during his career.

Harvey Itano and Elizabeth Robinson showed that Hb Hopkins 2 is an α chain variant. Hence, it was now clear that there must be at least two unlinked loci involved in regulating haemoglobin production, α and β. The discovery of the γ and δ chains of Hbs F and A2, respectively, meant that there must be at least four loci involved. Subsequent family studies and analyses of unusual variants resulting from the production of δβ or γβ fusion chains led to the ordering of the non-α globin genes.

It had been known for some years that children with severe forms of thalassaemia might have persistent production of Hb F, and it was found later that some carriers might have elevated levels of Hb A2. The seminal observation in favour of this notion came from the study of patients who had inherited the sickle-cell gene from one parent and thalassaemia from the other. Sickle-cell thalassaemia was first described by Ezio Silvestroni and his wife Ida Bianco in 1946, although at the time they could not have known the full significance of their finding. Phillip Sturgeon and his colleagues in the USA found that the pattern of haemoglobin production in patients with sickle-cell thalassaemia is quite different from that of heterozygotes for the sickle-cell gene; the effect of the thalassaemia gene is to reduce the amount of Hb A to below that of Hb S, i.e. exactly the opposite of the ratio observed in sickle-cell carriers. As it was known that the sickle-cell mutation occurs in the β globin gene, it could be inferred that the action of the thalassaemia gene was to reduce β globin production from the normal allele. Indeed, from the few family studies available in 1960 there was a hint that this form of thalassaemia might be an allele of the β globin gene. Another major observation, made in the mid-1950s, was the association of unusual tetramer haemoglobins, β4 (Hb H) and γ4 (Hb Bart's), with a thalassaemia phenotype. In 1959 Vernon Ingram and Tony Stretton proposed in a seminal article that there are two major classes of thalassaemia, α and β, just as there are two major types of structural haemoglobin variants. They extended the ideas of Linus Pauling and Harvey Itano, who had suggested that defective globin synthesis in thalassaemia might be due to `silent' mutations of the β globin genes, and postulated that the defects might lie outside the structural gene, in the area of DNA in the connecting unit. Work on the interactions of thalassaemia and haemoglobin variants in the late 1950s had moved the field to a considerably higher level of understanding than is apparent in the earlier papers of Pauling and Itano. In any case, in their paper Ingram and Stretton generously acknowledged the ideas of other workers, including Lehmann, Gerald, Neel and Ceppellini, that had allowed them to develop their conceptual framework of the general nature of thalassaemia. This interpretation of events, and the input of scientists from many different disciplines into these concepts, is supported by the published discussions of several conferences on haemoglobin held in the late 1950s.

Historical Review. Towards Molecular Medicine; Reminiscences of the Haemoglobin Field. D. J. Weatherall, Weatherall Institute of Molecular Medicine, University of Oxford. Brit J Haem 2001; 115: 729-738.

The Emerging Understanding of Sickle Cell Disease

The first indisputable case of sickle cell disease in the literature was described in a dental student studying in Chicago between 1904 and 1907 (Herrick, 1910). Coming from the north of the island of Grenada in the eastern Caribbean, he was first admitted to the Presbyterian Hospital, Chicago, in late December 1904, and a blood test showed the features characteristic of homozygous sickle cell (SS) disease. It was a happy coincidence that he was under the care of Dr James Herrick (Fig 1) and his intern Dr Ernest Irons, because both had an interest in laboratory investigation and Herrick had previously presented a paper on the value of blood examination in reaching a diagnosis (Herrick, 1904-05). The resulting blood test report by Dr Irons described the abnormal red cells and contained drawings (Fig 2) and photomicrographs showing irreversibly sickled cells.

People with positive sickle tests were divided into asymptomatic cases, `latent sicklers’, and those with features of the disease, `active sicklers’, and it was Dr Lemuel Diggs of Memphis who first clearly distinguished symptomatic cases called sickle cell anaemia from the latent asymptomatic cases which were termed the sickle cell trait (Diggs et al, 1933).

Prospective data collection in 29 cases of the disease showed sickling in all 42 parents tested (Neel, 1949), providing strong support for the theory of homozygous inheritance. A Colonial Medical Officer working in Northern Rhodesia (Beet, 1949) reached similar conclusions at the same time with a study of one large family (the Kapokoso-Chuni pedigree). The implication that sickle cell anaemia should occur in all communities in which the sickle cell trait was common and that its frequency would be determined by the prevalence of the trait did not appear to fit the observations from Africa. Despite a sickle cell trait prevalence of 27% in Angola, Texeira (1944) noted the active form of the disease to be `extremely rare’ and similar observations were made from East Africa. Lehmann and Raper (1949, 1956) found a positive sickling test in 45% of one community, from which homozygous inheritance would have predicted that nearly 10% of children had SS disease, yet not a single case was found. The discrepancy led to a hypothesis that some factor inherited from non-black ancestors in America might be necessary for expression of the disease (Raper, 1950).
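
As a rough illustration of the arithmetic behind this discrepancy (a back-of-the-envelope sketch, not taken from the cited surveys): under Hardy-Weinberg assumptions, a heterozygote (trait) frequency of 45% implies a sickle allele frequency of about 0.34 and hence an expected SS birth frequency of roughly 10-12%, the prediction that the East African observations failed to confirm.

import math

def allele_freq_from_trait(trait_freq):
    # Smaller root of 2q(1 - q) = trait_freq: the sickle allele frequency q,
    # assuming Hardy-Weinberg proportions (random mating, no selection).
    return (1 - math.sqrt(1 - 2 * trait_freq)) / 2

q = allele_freq_from_trait(0.45)   # 45% sickle cell trait, as reported by Lehmann and Raper
expected_ss = q ** 2               # expected frequency of SS (homozygous) births
print(f"allele frequency q = {q:.3f}, expected SS births = {expected_ss:.1%}")
# -> allele frequency q = 0.342, expected SS births = 11.7%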

The explanation for this apparent discrepancy gradually emerged. Working with the Jaluo tribe in Kenya, Foy et al (1951) found five cases of sickle cell anaemia among very young children and suggested that cases might be dying at an age before those sampled in surveys. A similar hypothesis was advanced by Jelliffe (1952) and was supported by data from the then Belgian Congo (Lambotte-Legrand & Lambotte-Legrand, 1951; Lambotte-Legrand, 1952; Vandepitte, 1952). Although most cases were consistent with the concept of homozygous inheritance, exceptions continued to occur. Patients with a non-sickling parent of Mediterranean ancestry were later recognized to have sickle cell-β thalassaemia (Powell et al, 1950; Silvestroni & Bianco, 1952; Sturgeon et al, 1952; Neel et al, 1953a), a condition also widespread in African and Indian subjects that presents a variable syndrome depending on the molecular basis of the β thalassaemia mutation and the amount of HbA produced.

Phenotypically, there are two major groups in subjects of African origin: sickle cell-β+ thalassaemia, with 20-30% HbA and mutations at −29 (A→G) or −88 (C→T), and sickle cell-β0 thalassaemia, with no HbA and mutations at IVS-II-849 (A→G) or IVS-II-1 (G→A). In Indian subjects, a more severe β thalassaemia mutation, IVS-I-5 (G→C), results in a sickle cell-β+ thalassaemia condition with 3-5% HbA and a relatively severe clinical course.

Other double heterozygote conditions causing sickle cell disease include sickle cell-haemoglobin C (SC) disease (Kaplan et al, 1951; Neel et al, 1953b), sickle cell-haemoglobin O Arab (Ramot et al, 1960), sickle cell-haemoglobin Lepore Boston (Stamatoyannopoulos & Fessas, 1963) and sickle cell-haemoglobin D Punjab (Cooke & Mack, 1934). The latter condition was first described in siblings in 1934; the family was reinvestigated for confirmation of HbD (Itano, 1951), the clinical features were reported (Sturgeon et al, 1955) and the variant was finally identified as HbD Punjab (Babin et al, 1964), a remarkable example of longitudinal observation and investigation of the same family over 30 years.

The maintenance of high frequencies of the sickle cell trait, in the presence of almost obligatory losses of homozygotes in Equatorial Africa, implied either a very high frequency of HbS arising by fresh mutation or that the sickle cell trait conveyed a survival advantage in the African environment. There followed a remarkable period in the 1950s when three prominent scientists were each addressing this problem in East Africa: Dr Alan Raper and Dr Hermann Lehmann in Uganda and Dr Anthony Allison in Kenya. It was quickly calculated that mutation rates were far too low to balance the loss of HbS genes from deaths of homozygotes (Allison, 1954a). An increased fertility of heterozygotes was proposed (Foy et al, 1954; Allison, 1956a) but never convincingly demonstrated. Raper (1949) was the first to suggest that the sickle cell trait might have a survival advantage against some adverse condition in the tropics, and Mackey & Vivarelli (1952) suggested that this factor might be malaria. The close geographical association between the distribution of malaria and the sickle cell gene supported this concept (Allison, 1954b) and led to an exciting period in the history of research in sickle cell disease.

The first observations on malaria and the sickle cell trait were from Northern Rhodesia, where Beet (1946, 1947) noted that malarial parasites were less frequent in blood films from subjects with the sickle cell trait. Allison (1954c) drew attention to this association, concluding that persons with the sickle cell trait developed malaria less frequently and less severely than those without the trait. This communication marked the beginning of a considerable controversy. Two studies failed to document differences in parasite densities between `sicklers' and `non-sicklers' (Moore et al, 1954; Archibald & Bruce-Chwatt, 1955), and Beutler et al (1955) were unable to reproduce the inoculation experiments of Allison (1954c). Raper (1955) speculated that some feature of Allison's observations had accentuated a difference of lesser magnitude and postulated that the sickle cell trait might inhibit the establishment of malaria in non-immune subjects. The conflicting results in these and other studies appear to have occurred because the protective effect of the sickle cell trait was overshadowed by the role of acquired immunity. Examination of young children before the development of acquired immunity confirmed both lower parasite rates and densities in children with the sickle cell trait (Colbourne & Edington, 1956; Edington & Laing, 1957; Gilles et al, 1967), and it is now generally accepted that the sickle cell trait confers some protection against falciparum malaria during a critical period of early childhood between the loss of passively acquired immunity and the development of active immunity (Allison, 1957; Rucknagel & Neel, 1961; Motulsky, 1964). The mechanism of such an effect is still debated, although possible factors include selective sickling of parasitized red cells (Miller et al, 1956; Luzzatto et al, 1970), resulting in their more effective removal by the reticulo-endothelial system, inhibition of parasite growth by the greater potassium loss and low pH of sickled red cells (Friedman et al, 1979), and greater endothelial adherence of parasitized red cells (Kaul et al, 1994).

The occurrence of the sickle cell mutation and the survival advantage conferred by malaria together determine the primary distribution of the sickle cell gene. Equatorial Africa is highly malarial and the sickle cell mutation appears to have arisen independently on at least three and probably four separate occasions in the African continent, and the mutations were subsequently named after the areas where they were first described and designated the Senegal, Benin, Bantu and Cameroon haplotypes of the disease (Kulozik et al, 1986; Chebloune et al, 1988; Lapoumeroulie et al, 1992). The disease seen in North and South America, the Caribbean and the UK is predominantly of African origin and mostly of the Benin haplotype, although the Bantu is proportionately more frequent in Brazil (Zago et al, 1992). It is therefore easy to understand the common misconception held in these areas that the disease is of African origin.

However, the sickle cell gene is widespread around the Mediterranean, occurring in Sicily, southern Italy, northern Greece and the south coast of Turkey, although these are all of the Benin haplotype and so, ultimately, of African origin. In the Eastern Province of Saudi Arabia and in central India, there is a separate, independent occurrence of the HbS gene, the Asian haplotype. The Shiite population of the Eastern Province traditionally marry first cousins, tending to increase the prevalence of SS disease above that expected from the gene frequency (Al-Awamy et al, 1984). Furthermore, extensive surveys performed by the Anthropological Survey of India estimate an average sickle cell trait frequency of 15% across the states of Orissa, Madhya Pradesh and Maharashtra which, with an estimated population of 300 million people, implies that there may be more cases of sickle cell disease born in India than in Africa. The Asian haplotype of sickle cell disease is generally associated with very high frequencies of α thalassaemia and high levels of fetal haemoglobin, both factors believed to ameliorate the severity of the disease.
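
A similar back-of-the-envelope calculation suggests why a 15% trait frequency in a population of this size implies very large numbers of affected births; note that the birth rate used below is an assumed, purely illustrative figure, and random mating is assumed.

import math

def allele_freq_from_trait(trait_freq):
    # Smaller root of 2q(1 - q) = trait_freq (Hardy-Weinberg, random mating assumed).
    return (1 - math.sqrt(1 - 2 * trait_freq)) / 2

population = 300_000_000       # combined population quoted for Orissa, Madhya Pradesh and Maharashtra
trait_freq = 0.15              # average sickle cell trait frequency from the surveys
crude_birth_rate = 25 / 1000   # assumed annual births per head of population (illustrative only)

q = allele_freq_from_trait(trait_freq)   # ~0.08
ss_birth_freq = q ** 2                   # ~0.7% of births expected to be SS
annual_births = population * crude_birth_rate
print(f"expected SS births per year ~ {ss_birth_freq * annual_births:,.0f}")
# -> roughly 50,000 affected births per year under these assumptions; consanguinity would raise the figure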

The promotion of sickling by low oxygen tension and acid conditions was first recognized by Hahn & Gillespie (1927) and further investigated by others (Lange et al, 1951; Allison, 1956b; Harris et al, 1956). The morphological and some functional characteristics of irreversibly sickled cells were described (Diggs & Bibb, 1939; Shen et al, 1949), but the essential features of the polymerization of deoxygenated (reduced) HbS molecules had to await the development of electron microscopy (Murayama, 1966; Dobler & Bertles, 1968; Bertles & Dobler, 1969; White & Heagan, 1970) and X-ray diffraction (Perutz & Mitchison, 1950; Perutz et al, 1951). The early observations on the induction of sickling by hypoxia led to the first diagnostic tests, utilizing sealed chambers in which oxygen was removed by white cells (Emmel, 1917), reducing agents such as sodium metabisulphite (Daland & Castle, 1948) or bacteria such as Escherichia coli (Raper, 1969). These slide sickling tests are very reliable with careful sealing and the use of positive controls, but require a microscope and some expertise in its use. An alternative method of detecting HbS utilizes its relative insolubility in high-molarity phosphate buffers (Huntsman et al, 1970), known as the solubility test. Both the slide sickle test and the solubility test detect the presence of HbS but fail to make the vital distinction between the sickle cell trait and forms of sickle cell disease. This distinction requires haemoglobin electrophoresis, which detects the abnormal mobility of HbS, HbC and many other abnormal haemoglobins within an electric field.

The contributions of several workers on the determinants of sickling (Daland & Castle, 1948), the birefringence of deoxygenated sickled cells (Sherman, 1940) and the lesser degree of sickling in very young children, which implied that sickling was a feature of adult haemoglobin (Watson, 1948), led Pauling to perform Tiselius moving-boundary electrophoresis on haemoglobin solutions from subjects with sickle cell anaemia and the sickle cell trait. The demonstration of electrophoretic and, hence, implied chemical differences between normal, sickle cell trait and sickle cell disease haemoglobins led to the proposal that sickle cell anaemia was a molecular disease (Pauling et al, 1949). The chance encounter between Castle and Pauling, who shared a train compartment returning from a meeting in Denver in 1945, its background and implications, has passed into the folklore of medical research (Conley, 1980; Feldman & Tauber, 1997).

The nature of this difference was soon elucidated. The haem groups appeared identical, suggesting that the difference resided in the globin, but early chemical analyses revealed no distinctive differences (Schroeder et al, 1950; Huisman et al, 1955). Analyses of terminal amino acids also failed to reveal differences, although an excess of valine in HbS was noted but considered an experimental error (Havinga, 1953). The development of more sensitive methods of fingerprinting, combining high-voltage electrophoresis and chromatography, allowed the identification of the essential difference between HbA and HbS. This method enabled the separation of the constituent peptides and demonstrated that a peptide in HbS was more positively charged than in HbA (Ingram, 1956). This peptide was found to contain less glutamic acid and more valine, suggesting that valine had replaced glutamic acid (Ingram, 1957). The sequence of this peptide was shown to be Val-His-Leu-Thr-Pro-Val-Glu-Lys in HbS instead of Val-His-Leu-Thr-Pro-Glu-Glu-Lys in HbA (Hunt & Ingram, 1958), a sequence which was subsequently identified as the amino-terminus of the β chain (Hunt & Ingram, 1959). This amino acid substitution was consistent with the genetic code and was subsequently found to be attributable to the nucleotide change from GAG to GTG (Marotta et al, 1977).
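
The substitution can be checked against the standard genetic code (an illustrative sketch only; the peptide sequences are those quoted above, and the codon assignments are the standard ones rather than data from the cited papers).

# Standard genetic code entries relevant to the beta-6 position (DNA sense-strand codons).
CODON_TO_AA = {"GAG": "Glu", "GTG": "Val"}

hba_peptide = ["Val", "His", "Leu", "Thr", "Pro", "Glu", "Glu", "Lys"]   # amino-terminal tryptic peptide of the HbA beta chain
hbs_peptide = ["Val", "His", "Leu", "Thr", "Pro", "Val", "Glu", "Lys"]   # the same peptide in HbS

# Position 6 of the beta chain (index 5) differs by a single residue,
# consistent with a single-base change GAG (Glu) -> GTG (Val).
assert CODON_TO_AA["GAG"] == hba_peptide[5]
assert CODON_TO_AA["GTG"] == hbs_peptide[5]
print("beta-6:", hba_peptide[5], "->", hbs_peptide[5], "via GAG -> GTG (one nucleotide, A -> T)")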

Haemolysis and anaemia. The presence of anaemia and jaundice in the first four cases suggested accelerated haemolysis, which was supported by elevated reticulocyte counts (Sydenstricker et al, 1923) and expansion of the bone marrow (Sydenstricker et al, 1923; Graham, 1924). The bone changes of medullary expansion and cortical thinning were noted in early radiological reports (Vogt & Diamond, 1930; LeWald, 1932; Grinnan, 1935). Drawing on a comparison of sickle cell disease and hereditary spherocytosis, Sydenstricker (1924) introduced the term `haemolytic crisis', which has persisted in the literature to this day despite the lack of evidence for such an entity in sickle cell disease. The increased requirement for folic acid, and the consequence of a deficiency leading to megaloblastic change, were not noted until much later (Zuelzer & Rutzky, 1953; Jonsson et al, 1959; MacIver & Went, 1960).

The haemoglobin level in SS disease of African origin is typically between 6 and 9 g/dl and is well tolerated, partly because of a marked rightward shift in the oxygen dissociation curve (Scriver & Waugh, 1930; Seakins et al, 1973), so that HbS within the red cell behaves as a low-oxygen-affinity haemoglobin. This explains why patients at their steady-state haemoglobin levels rarely show classic symptoms of anaemia and fail to benefit clinically from blood transfusions intended to improve oxygen delivery.

Graham R. Serjeant
Sickle Cell Trust, Kingston, Jamaica
Brit J Haem 2001; 112: 3-18

The Immune Haemolytic Anaemias

The growth in knowledge of the scientific basis of the haemolytic anaemias, which have been a main interest of the author, has been remarkable, as have the consequent advances in the practice of medicine since the mid-1930s. At that time, the cause and mechanism of important disorders such as the acquired antibody-determined (immune) haemolytic anaemias, haemolytic disease of the newborn, hereditary spherocytosis and paroxysmal nocturnal haemoglobinuria were unknown or only partially understood.

According to Crosby (1952), William Hunter of London, in an article on pernicious anaemia published in 1888, was the first to use the term `haemolytic' to denote an anaemia caused by excessive blood destruction. By the turn of the century, the term was being widely used in the clinical literature. Peyton Rous, in his comprehensive review `Destruction of the red blood corpuscles in health and disease' (Rous, 1923), concluded that about one-fifteenth of the erythrocyte mass was destroyed daily, a view still generally held in the early 1930s. Rous was aware of the pioneer work of Winifred Ashby (1919), who, by following the survival of serologically distinct but compatible transfused erythrocytes, had found that normal erythrocytes might live for up to 100 d in the recipients' circulation. Subsequent work using radioactive chromium (51Cr) as an erythrocyte label showed that Ashby's data and conclusions were in fact correct, i.e. that normal erythrocytes in health circulate in the peripheral blood for approximately 110 d. Erythrocyte labelling with 51Cr had a further advantage over the Ashby method: in addition to enabling the life-span of the patients' erythrocytes to be assessed, it made it possible, by surface counting, to detect and measure the accumulation of radioactivity in the spleen and liver, and thereby to assess these organs' role in haemolysis.

In the first decade of the twentieth century, Widal et al (1908a) and Le Gendre & Brulea (1909) reported that autohaemagglutination was a striking finding in some cases of ictère hémolytique acquis (acquired haemolytic jaundice), and Chauffard & Troisier (1908) and Chauffard & Vincent (1909) described the presence of haemolysins in the serum of patients suffering from intense haemolysis. The conclusion was that abnormal immune processes, i.e. the development of auto-antibodies damaging the patients' own erythrocytes, might play a part in the genesis of some cases of acquired haemolytic anaemia. This was indeed antedated by the classic observations of Donath & Landsteiner (1904) and Eason (1906) on the mechanism of haemolysis in paroxysmal cold haemoglobinuria.

That blood might auto-agglutinate when chilled had been described by Landsteiner (1903), and that an unusual degree of the phenomenon might complicate some types of respiratory disease was reported by Clough & Richter (1918) and later by Wheeler et al (1939). A few years later, Peterson et al (1943) and Horstmann & Tatlock (1943) reported that cold auto-agglutinins at high titres were frequently found in the serum of patients who had suffered from the then so-called primary atypical pneumonia.

Stats & Wasserman’s (1943) review on cold haemagglutination was a valuable contribution to contemporary knowledge. They listed in a table as many as 94 references to papers published between 1890 and 1943 in which cold haemagglutination had been described. In 32 of the papers the patients referred to had suffered from increased haemolysis

Recognition that cold auto-antibodies played an important role in the pathogenesis of some cases of haemolytic anaemia led to the concept that auto-immune haemolytic anaemia (AIHA) might usefully be classified into warm-antibody or cold-antibody types, according to whether the patient is forming (warm) antibodies which react (perhaps optimally) at body temperature or (cold) antibodies which react strongly at low temperatures (e.g. 4°C) but progressively less well as the temperature is raised and are perhaps inactive at 37°C. The clinical syndrome suffered by the patient would depend not only on the amount of antibody produced but also on its temperature requirement. Another important advance in understanding has been the realization that both types of AIHA could develop in association with a wide range of underlying disorders (secondary AIHA) as well as `idiopathically', i.e. for no obvious cause (primary AIHA). The author's own experience was summarized in a review (Dacie & Worlledge, 1969): 99 out of 210 cases of warm AIHA were judged to be secondary, as were 39 out of 85 cases of cold AIHA. Petz & Garratty (1980) summarized the data from six centres: 55% of a total of 656 cases had been reported as secondary. They listed the disorders with which warm-antibody AIHA had been associated as chronic lymphocytic leukaemia, Hodgkin's disease, non-Hodgkin's lymphomas, thymomas, multiple myeloma, Waldenström's macroglobulinaemia, systemic lupus erythematosus, scleroderma, rheumatoid arthritis, infectious disease/childhood viral disorders, hypogammaglobulinaemia, dysglobulinaemias, other immune deficiency syndromes, and ulcerative colitis.

Conley (1981), in an interesting review of warm-antibody AIHA patients seen at the Johns Hopkins Hospital, emphasized how important it was to carry out a careful enquiry into the patient's past history and also to undertake a prolonged follow-up. He stated that a retrospective review of 33 patients whose illnesses had in the past been designated `idiopathic' had revealed an associated immunologically related disorder in 19 of them. An additional three patients had developed a lymphoma 2-10 years after they had developed AIHA. As already referred to, warm-antibody AIHA is now known to complicate a wide range of underlying diseases, particularly malignant lymphoproliferative disorders, other auto-immune disorders and immune deficiency syndromes. What proportion of patients suffering from a lymphoproliferative disorder develop AIHA is an interesting question. Duehrsen et al (1987) stated that this had occurred in 12 out of 637 patients. Early data on the incidence of a positive DAT in SLE were provided by Harvey et al (1954): in six out of 34 patients tested, the DAT had been positive. Later, Mongan et al (1967), who had studied a large number of patients suffering from a variety of connective tissue disorders, reported that the DAT had been positive in 15 out of 23 patients with SLE, none of whom, however, had suffered from overt haemolytic anaemia. It has also been realized since the 1960s that warm-antibody AIHA may develop in patients suffering from a variety of immune deficiency syndromes, both congenital and acquired.

It was in the mid-1960s that it was realized that, in a significant proportion of patients thought to have `idiopathic' warm-antibody AIHA, the development of the causal auto-antibodies had been triggered in some way by a drug the patient was taking. The first drug implicated was the antihypertensive drug α-methyldopa (Aldomet) (Carstairs et al, 1966a, b). Following the finding that treating hypertensive patients with α-methyldopa led to the formation of anti-erythrocyte auto-antibodies in a significant percentage of patients, renewed interest was taken in the possibility that other drugs might have the same effect. Two main hypotheses have been advanced in relation to how certain drugs in some patients appear to have caused the development of anti-erythrocyte auto-antibodies. One hypothesis was that the drug or its metabolites act on the immune system so as to impair immune tolerance; the other was that the drug affects antigens at the erythrocyte surface in such a way that a normally active immune system responds by developing anti-erythrocyte antibodies. Clearly, too, the patient's individuality must be an important factor, for only a proportion of patients receiving the same dosage of the offending drug for the same period of time develop a positive DAT and only a small percentage develop overt AIHA.

An interesting development in the history of the immune haemolytic anaemias was the realization in the mid-1950s that, rather rarely, haemolysis was brought about by the patient developing antibodies that were directed against a drug the patient had been taking and that the erythrocytes were in some way secondarily involved. The first drug to be implicated was Fuadin (stibophen), which had been used to treat a patient with schistosomiasis (Harris, 1954, 1956). The patient’s serum contained an antibody that agglutinated his own or normal erythrocytes and/or sensitized them to agglutination by antiglobulin sera; however, this occurred only in the presence of the drug.

In the late 1940s, several accounts of patients with AIHA who had persistently low platelet counts were published, e.g. Fisher (1947) and Evans & Duane (1949), and it was suggested that the patients might have been forming auto-antibodies directed against platelets. This concept was further developed by Evans et al (1951). Eight out of their 18 patients with AIHA were thrombocytopenic; four had clinically obvious purpura. Evans et al (1951) suggested that there exists `a spectrum-like relationship between acquired haemolytic anaemia and thrombocytopenic purpura'; also that `on the one hand, acquired haemolytic anaemia with sensitization of the red cells is often accompanied with thrombocytopenia, while, on the other hand, primary thrombocytopenic purpura is frequently accompanied with red cell sensitization with or without haemolytic anaemia'. Many further case reports of AIHA accompanied by severe thrombocytopenia have since been published.

There are two features in the blood film of a patient with an acquired haemolytic anaemia which indicate that he or she is suffering from AIHA; one is auto-agglutination, the other is erythrophagocytosis. Spherocytosis, although often present to a marked degree, is of course found in other types of haemolytic anaemia.

The pioneer French observations on auto-agglutination already referred to were generally overlooked until the late 1930s, and serological studies seem seldom to have been undertaken until the publication of Dameshek & Schwartz’s (1938b) report in which they described the presence of `haemolysins’ in cases of acute apparently acquired haemolytic anaemia. Dameshek & Schwartz (1940) summarized contemporary knowledge in an extensive review. They concluded that it was not improbable that haemolysins of various types and `dosages’ were in fact responsible for many cases of human haemolytic anaemias, including congenital haemolytic anaemia, which they suggested might be caused by the `more or less continued action of an haemolysin’.

Six years were to pass before the concept that an abnormal immune mechanism played a decisive role in some cases of acquired haemolytic anaemia was clearly demonstrated by Boorman et al (1946), who reported that the erythrocytes of five patients with acquired acholuric jaundice had been agglutinated by an antiglobulin serum, i.e. that the newly described antiglobulin reaction or Coombs test (Coombs et al, 1945) was positive, while the test had been negative in 28 patients suffering from congenital acholuric jaundice. This work aroused great interest and was soon confirmed.

Until the 1950s, the auto-antibodies responsible for AIHA were generally concluded to be `non-specific’. According to Wiener et al (1953), `Red cell auto-antibodies react not only with the individual’s own red cells but also with the erythrocytes of all other human beings. The substances on the red blood cell envelope with which the auto-antibodies combine are agglutinogens like the ABO, MN and RhHr systems, except that, in the former case, the blood factors with which the auto-antibodies react are not type specific but are shared by all human beings.’ They suggested that the auto-antibodies might be directed to the `nucleus of the RhHr substance’. Earlier work had, however, indicated that the sensitivity of normal group-compatible erythrocytes to a patient’s auto-antibody might vary considerably (Denys & van den Broucke, 1947; Kuhns & Wagley, 1949). That auto-antibodies might have a clearly defined Rh specificity, e.g. anti-e, was described by Race & Sanger (1954) in the second edition of their book. Referring to Wiener et al (1953), they wrote: `This beautifully clear investigation made the present authors realize that a curious result obtained by one of them (Ruth Sanger) in 1953 in Australia had after all been true; the serum of a man who had died of a haemolytic anaemia 3000 miles away contained anti-e; his cells were clearly CDe-cde’. A similar finding, i.e. an auto-anti-e, was described by Weiner et al (1953).

A further development in the unravelling of a complicated story was the realization that some of the antibodies which appeared to be specific were reacting with more basic antigens, although showing a preference for specific antigens, i.e. some specific auto-antibodies appeared to be less specific than their allo-antibody counterparts. Moreover, some antibodies, reacting with specific antigens, have been shown to be partially or completely absorbable by antigen negative cells.

Many apparently `non-specific’ antidl antibodies have been shown to be not strictly `nonspecific’ but to react with antigens of very high frequency, e.g. to be anti-Wrb, anti-Ena, anti-LW or anti-U. Issitt et al (1980)) listed six additional very common antigens that had been identified as targets for anti-dl auto-antibodies, i.e. Hr, Hro, Rh34, Rh29, Kpb and K13.

In relation to human acquired haemolytic anaemia, the discovery in the late 1940s and 1950s that many cases were apparently brought about by the development of damaging anti-erythrocyte antibodies led to intense interest and speculation into the why and how of auto-antibody formation. Of seminal importance at the time were the experiments and theoretical arguments of Burnet (Burnet & Fenner, 1949; Burnet, 1957, 1959, 1972) and the studies on transplantation immunity of Medawar (Billingham et al, 1953; Medawar, 1961). Of particular interest, too, was the report by Bielschowsky et al (1959) of the occurrence of AIHA in an inbred strain of mice, the NZB/BL strain. Remarkably, by the time the mice were 9 months old the DAT was positive in almost every mouse. Burnet (1963) referred to the gift of the mice to the Walter and Eliza Hall Institute of Medical Research, Melbourne, as `the finest gift the Institute has ever received'.

Exactly how is it that auto-antibodies reacting with an erythrocyte surface antigen result in the cell's premature destruction? The possible role of auto-agglutination in bringing about haemolysis was emphasized by Castle and colleagues as the result of a series of studies carried out in the 1940s and 1950s. As summarized by Castle et al (1950), an antibody which appears to be incapable of causing lysis in vitro might bring about the following sequence of events in vivo: `(1) Red cell agglutination in the peripheral blood; (2) red cell sequestration and separation from plasma in tissue capillaries; (3) ischaemic injury of tissue cells with release of substances that increase the osmotic and mechanical fragilities of red cells locally; (4) local osmotic lysis of red cells or subsequent escape of mechanically fragile red cells into the blood stream where the traumatic motion of the circulation causes their destruction'.

We can expect, as the years pass, that more and more will be known as to the intricate mechanisms that bring about self-tolerance and the mechanisms underlying the occurrence of auto-immune disorders in general, including the role of infectious agents, drugs and genetic factors. Patients with immune haemolytic anaemias can be expected to benefit from the new knowledge; for in parallel with a better understanding as to how immune self-tolerance breaks down will hopefully be the development of more effective drugs and therapies aimed at controlling the breakdown.

The Immune Haemolytic Anaemias: A Century of Exciting Progress in Understanding.  Sir John Dacie, Emeritus Professor of Haematology.
Brit J Haem 2001; 114: 770-785.

A History of Pernicious Anaemia

This is a review of the ideas and observations that have led to our current understanding of pernicious anaemia (PA). PA is a megaloblastic anaemia (MA) due to atrophy of the mucosa of the body of the stomach which, in turn, is brought about by autoimmune factors.

A case report by Osler & Gardner (1877) in Montreal may well have been one of PA. This anaemic patient had numbness of the fingers, hands and forearms; the red blood cells were large; at autopsy the gastric mucosa appeared atrophic and the marrow had large numbers of erythroblasts with finely granular nuclei. The increased marrow cellularity had also been noted by Cohnheim (1876).

Ehrlich (1880) (Fig 1) distinguished the cells he termed megaloblasts, present in the blood in PA, from the normoblasts present in anaemia resulting from blood loss. Not only were large red blood cells noted in PA, but irregular red cells (? poikilocytes) were reported in wet blood preparations by Quincke (1877). Megaloblasts in the marrow during life were first noted by Zadek (1921). Hypersegmented neutrophils in the peripheral blood in PA were described by Naegeli (1923) and came to be widely recognized after Cooke's study (Cooke, 1927). The giant metamyelocytes in the marrow were described by Tempka & Braun (1932).

Fig 1. Paul Ehrlich (Wellcome Institute Library, London).

The association between PA and spinal cord lesions was described by Lichtheim (1887), and a full account was published by Russell et al (1900), who coined the term `subacute combined degeneration of the spinal cord' (SCDC), although they were not convinced of its relation to PA. Arthur Hurst at Guy's Hospital, London, confirmed the association of the neuropathy with PA and added the association with loss of hydrochloric acid from the gastric juice (Hurst & Bell, 1922). Cabot (1908) found that numbness and tingling of the extremities were present in almost all of his 1200 patients and that 10% had ataxia. William Hunter (1901) noted the prevalence of a sore tongue in PA, which was present in 40% of Cabot's series.

In 1934, the Nobel Prize in medicine and physiology was awarded to Whipple, Minot and Murphy. Was there ever an award more deserved? They saved the lives of their patients and pointed the way forward for further research. What was there in liver that was lacking in patients with PA? The effect of liver in correcting the anaemia of Whipple's iron-deficient dogs came from its supplying iron, which is abundant in liver.

Liver given by mouth also provides Cbl and folic acid. But patients with PA cannot absorb Cbl, although some 1% of an oral dose can cross the intestinal mucosa by passive diffusion; this, presumably, is what happened when large amounts of liver were eaten. Beef liver contains about 110 μg of Cbl per 100 g and about 140 μg of folate per 100 g. Cbl is stable and generally resistant to heat; folate is labile unless preserved with reducing agents. The daily requirement of Cbl by man is 1-2 μg. The liver diet, if consumed, had enough of these haematinics to provide a response in most MAs.
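
Combining the figures quoted here with the 200-300 g daily liver ration described below gives a plausible, if rough, account of why the liver diet worked; this is an illustrative back-of-the-envelope calculation, not one taken from the original papers.

cbl_per_100g_liver_ug = 110       # Cbl content of beef liver quoted above (micrograms per 100 g)
daily_liver_g = (200, 300)        # daily liver intake used by Minot & Murphy (see below)
passive_absorption = 0.01         # ~1% of an oral dose crosses the mucosa without intrinsic factor
daily_requirement_ug = (1, 2)     # adult daily Cbl requirement quoted above (micrograms)

for grams in daily_liver_g:
    ingested = grams / 100 * cbl_per_100g_liver_ug
    absorbed = ingested * passive_absorption
    print(f"{grams} g liver/day -> ~{ingested:.0f} ug Cbl ingested, ~{absorbed:.1f} ug absorbed passively "
          f"(daily requirement {daily_requirement_ug[0]}-{daily_requirement_ug[1]} ug)")
# 200-300 g/day supplies roughly 220-330 ug of Cbl, of which ~2-3 ug is absorbed by passive diffusion,
# i.e. enough to meet the daily requirement even in the absence of intrinsic factor.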

George Richard Minot (Wellcome Institute Library, London).

The availability of liver extracts brought about interest in the nature of the haematological response. An optimal response required a peak rise of reticulocytes 5-7 d after the injection of liver extract, and the height of the peak was greatest in those with severe anaemia; the flood of reticulocytes was the result of the synchronous maturation of a vast number of megaloblasts into red cells. There is a steady rise in the red cell count, reaching 3 × 10¹²/l in the 3rd week (Minot & Castle, 1935). Many liver extracts did not have enough antianaemic factor to achieve this, and some assayed by the author had only 1-2 μg of Cbl. It took another 22 years for a pure antianaemic factor to be isolated, although, admittedly, the Second World War intervened; in 1948, an American group led by Karl Folkers and an English group led by E. Lester-Smith published, within weeks of each other, the isolation of a red crystalline substance termed vitamin B12, subsequently renamed cobalamin.

The structure of this red crystalline compound was studied by the nature of its degradation products and by X-ray crystallography. It soon became apparent that there was a cobalt atom at the heart of the structure and this heavy atom was of great aid to the crystallographers, so much so that, with additional information from the chemists, they were the first to come up with the complete structure. To quote Dorothy Hodgkin: `To be able to write down a chemical structure very largely from purely crystallographic evidence on the arrangement of atoms in space – and the chemical structure of a quite formidably large molecule at that – is for any crystallographer, something of a dream-like situation’. As Lester-Smith (1965) pointed out, it also required some 10 million calculations. In 1964, Dorothy Hodgkin was awarded the Nobel Prize for chemistry.

Barker et al (1958) published an account of the metabolism of glutamate by a Clostridium. The glutamate underwent an isomerization and an orange-coloured co-enzyme was involved that turned out to be Cbl with a deoxyadenosyl group attached to the cobalt.

This Cbl co-enzyme, deoxyadenosylCbl, is the major form of Cbl in tissues; it is also extremely sensitive to light, being changed rapidly to hydroxoCbl. DeoxyadenosylCbl is concerned with the metabolism of methylmalonic acid in man (Flavin & Ochoa, 1957). The other functional form of Cbl is methylCbl involved in conversion of homocysteine to methionine (Sakami & Welch, 1950). Both these pathways are impaired in PA in relapse.

Cbl consists of a ring of four pyrrole units very similar to that present in haem. This ring, however, has a cobalt atom at its centre instead of iron and is called the corrin nucleus. The cobalamins have a further structure, a base termed benzimidazole, set at right angles to the corrin nucleus; this base may form a link to the cobalt atom (the `base-on' position).

By the time Cbl had been isolated from liver it was already known that it was also present in fermentation flasks growing bacteria such as Streptomyces species. Other organisms gave higher yields, so that kilogram quantities of pure Cbl were obtained; these sources have replaced liver in the production of Cbl. By adding a radioactive form of cobalt to the fermentation flasks instead of ordinary cobalt, labelled Cbl became available (Chaiet et al, 1950). The importance of labelled Cbl is that it made it possible to carry out Cbl absorption tests in patients, to design isotope dilution assays for serum Cbl, to design ways of assaying intrinsic factor (IF), to detect antibodies to IF and even to measure the glomerular filtration rate, as free Cbl is excreted by the glomerulus without any reabsorption by the renal tubules.

William Castle at the Thorndike Memorial Laboratory, Boston City Hospital, devised experiments to explore the relationship between gastric juice, the anti-anaemic factor that Castle assumed, correctly, was also present in beef, and the response in PA. The question Castle asked was `Was it possible that the stomach of the normal person could derive something from ordinary food that for him was equivalent to eating liver?’.

The experiment in untreated patients with PA consisted of two consecutive periods of 10 d or more during which daily reticulocyte counts were made. During the first period of 10 d, the PA patient received 200 g of lean beef muscle (steak) each day. There was no reticulocyte response. During the second period, the contents of the stomach of a healthy man were recovered 1 h after the ingestion of 300 g of steak; about 100 g could not be recovered. The gastric contents were incubated for a few hours until liquefied and then given to the PA patient through a tube. This was done daily. On day 6 there was a rise in reticulocytes reaching a peak on day 10, followed by a rise in the red cell count. The response was similar to that obtained with large amounts of oral liver.

Thus, Castle concluded that a reaction was taking place between an unknown intrinsic factor (IF) in the gastric juice and an unknown extrinsic factor in beef muscle. Whereas Minot & Murphy (1926) found that 200-300 g of liver daily was needed to get a response in PA, 10 g liver was adequate when incubated with 10-20 ml normal gastric juice (Reiman & Fritsch, 1934). Castle’s extrinsic factor is the same as the anti-anaemic factor that is Cbl, and IF is needed for its absorption. Presumably the gastric juice in PA lacks IF.

The elegant studies of Hoedemaeker et al (1964) in Holland using autoradiography of frozen sections of human stomach incubated with [57Co]-Cbl showed that IF was produced in the gastric parietal cell. The binding of Cbl to the parietal cell was abolished by first incubating the section with a serum containing antibodies to IF. The parietal cell in man is thus the source of both hydrochloric acid and IF. The parietal cell is the only source of IF in man, as a total gastrectomy is invariably followed by a MA due to Cbl deficiency. IF is a glycoprotein with a molecular weight of 45 000.

Assay of protein fractions of serum after electrophoresis showed that endogenous Cbl is in the position of α-1 globulin. Chromatography of serum after addition of [57Co]-Cbl on Sephadex G-200 showed that Cbl was attached to two proteins, one eluting before the albumin, termed transcobalamin I (TCI), and the other after the albumin, termed transcobalamin II (TCII). Charles Hall showed that, when labelled Cbl given by mouth is absorbed, it first appears in the position of TCII and later in the position of TCI as well (Hall & Finkler, 1965). They concluded that TCII is the prime Cbl transport protein, carrying Cbl from the gut into the blood and then to the liver, from where it is redistributed by both new TCII as well as TCI. Congenital absence of a functional TCII causes a severe MA in the first few months of life owing to an inability to transport Cbl. Most of the Cbl in serum is on TCI because it has a relatively long half-life of 9-10 d, whereas the half-life of TCII is about 1.5 h. Thus, in assaying the serum Cbl level, it is mainly TCI-Cbl that is being assayed.

With the availability of labelled Cbl, Cbl absorption tests began to be widely used in the 1950s. The commonest method was the urinary excretion test described by Schilling (1953). Here, an oral dose of radioactive Cbl is followed by an injection of 1000 µg of non-radioactive cyano-Cbl. The free cyano-Cbl is largely excreted into the urine over the next 24 h and carries with it about one third of the absorbed labelled Cbl.
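As a rough worked example (the numbers below are assumed for illustration and are not from the original review): if a fraction $a$ of the oral radioactive dose $D$ is absorbed, the flushing injection carries roughly one third of the absorbed label into the 24-h urine, i.e.

$$\text{urinary excretion} \approx \frac{a\,D}{3}.$$

With normal absorption of, say, $a \approx 0.5$, about 17% of the oral dose appears in the urine; in untreated PA the absorbed fraction, and hence the excretion, is far lower, and the test can be repeated with added IF to show that the defect is gastric.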

Parietal cell antibodies (Taylor et al, 1962) are present in serum in 76-93% of different series of PAs and in the serum of 36% of the relatives of PA patients. The antibody is present in sera from 32% of patients with myxoedema, 28% of patients with Graves’ disease, 20% of relatives of thyroid patients and 23% of patients with Addison’s disease. Parietal cell antibodies are found in 2-16% of controls, the high 16% figure being in elderly women. There is a higher frequency of PA in women, the female to male ratio being 1.7 to 1.0. The parietal cell antibody is probably important in the production of gastric atrophy. Thyroid antibodies are present in sera from 55% of PAs, in sera from 50% of PA relatives, in 87% of sera from myxoedema patients, in 53% of sera in Graves’ disease and in 46% of relatives of patients with thyroid disease.

There is a high frequency of PA among those disorders that have antibodies against the target organ. Thus, among 286 patients with myxoedema, 9.0% also had PA (Chanarin, 1979), as compared with a frequency of PA of about 1 per 1000 (0.1%) in the general population. Of 102 consecutive patients with vitiligo, eight also had PA.

Patients with acquired hypogammaglobulinaemia are unable to make humoral antibodies; nevertheless, one third have PA as well. This cannot be the result of the action of IF antibodies and must be due to specific cell-mediated immunity. Tai & McGuigan (1969) demonstrated lymphocyte transformation in the presence of IF in six out of 16 PA patients and Chanarin & James (1974) found 10 out of 51 tests were positive.

Twenty-five patients with PA were tested for the presence of humoral IF antibody in serum and gastric juice and for cell-mediated immunity against IF. All but one gave positive results in one or more tests. It was concluded that these findings establish the autoimmune nature of PA and that the immunity is not merely an interesting byproduct.

Patients with PA treated with steroids show a reversal of the abnormal findings characterizing the disease. If they are still megaloblastic, the anaemia will respond in the first instance (Doig et al, 1957), but in the longer term Cbl neuropathy may be precipitated. The absorption of Cbl improves and may become `normal’ (Frost & Goldwein, 1958). There is a return of IF in the gastric juice (Kristensen and Friis, 1960) and a decline in the amount of IF antibody in serum (Taylor, 1959). In some patients there is return of acid in the gastric juice. Gastric biopsy shows a return of parietal and chief cells (Ardeman & Chanarin, 1965b; Jeffries, 1965). All this is the result of suppression of cell-mediated immunity against the parietal cell and against IF. Withdrawal of steroids leads to a slow return to the status quo.

The author has dipped freely into two volumes by the late M. M. Wintrobe, including Wintrobe, M.M. (1985) Hematology, the Blossoming of a Science. Lea & Febiger.

A History of Pernicious Anaemia
I. Chanarin, Richmond, Surrey
Brit J Haem 111: 407-415
History of Folic Acid

1928 Lucy Wills studied macrocytic anaemia in pregnancy in Bombay, India

1932 Janet Vaughan studied macrocytic anaemia associated with coeliac disease and idiopathic steatorrhoea and showed a response to Marmite

1941 Folic acid was extracted from spinach and shown to be a growth factor for S. faecalis

1941 Pteroylglutamic acid (PGA) synthesized at American Cyanamid – pteridine ring, para-aminobenzoic acid, glutamic acid; PGA differed from the natural compound in some respects

1945 PGA resolved the macrocytic anemia, but not the neuropathy

1979 Stokstad and associates at Berkeley obtained the first purified mammalian enzymes involved in synthesis

Folate antagonists inhibit tumor growth (Hitchings and Elion; Nobel Prize)

  • Misincorporation of uracil instead of thymine into DNA

Sidney Farber introduced aminopterin and later methotrexate for the treatment of childhood lymphoblastic leukemia

  • MTX inhibits the enzyme DHFR (dihydrofolate reductase), which is necessary for regenerating THF (tetrahydrofolate)

Wellcome introduced trimethoprim (antibacterial) and pyrimethamine (antimalarial)

Homocysteine was isolated by du Vigneaud, but its significance was not appreciated at the time

Finkelstein and Mudd demonstrated the importance of remethylation of homocysteine and worked out the transsulfuration pathway

  1. The function of methyl-THF is remethylation of homocysteine
  2. Methyl-THF is synthesized by MTHFR (methylenetetrahydrofolate reductase)
Metabolism of folate

Allosterically regulated by S-adenosyl methionine (Stokstad)

Methyl-THF also inhibits glycine methyltransferase, thereby controlling excess SAM available for transmethylation

James D. Finkelstein

  • Homocystinuria – mental retardation, skeletal malformation, thromboembolic disease; deficiency of cystathionine synthase (which controls trans-sulfuration)
  • Neural tube defects (NTDs) in pregnancy
  • Hyperhomocysteinemia and vascular disease

AV Hoffbrand and DG Weir
Brit J Haem 2001; 113: 579-589

The History of Haemophilia in the Royal Families of Europe

Queen Victoria

On 17 July 1998 a historic ceremony of mourning and commemoration took place in the ancestral church of the Peter and Paul Fortress in St Petersburg. President Boris Yeltsin, in a dramatic eleventh-hour change of heart, decided to represent his country when the bones of the last emperor, Tsar Nicholas II, and his family were laid to rest 80 years to the day after their assassination in Yekaterinburg (Binyon, 1998). He described it as ‘ironic that the Orthodox Church, for so long the bedrock of the people’s faith, should find it difficult to give this blessing the country had expected’. ‘I have studied the results of DNA testing carried out in England and abroad and am convinced that the remains are those of the Tsar and his family’ (The Times, 1998a). Unfortunately, politicians and the hierarchy of the Russian Orthodox Church had argued about what to do with the bones previously stored in plastic bags in a provincial city mortuary. Politics, ecclesiastical intrigue, secular ambition, and emotions had fuelled the debate. Yeltsin and the Church wanted to honour a man many consider to be a saint, but many of the older generation are opposed to the rehabilitation of a family which symbolizes the old autocracy.

Our story starts, almost inevitably, with Queen Victoria of England who had nine children by Albert, Prince of Saxe-Coburg-Gotha. Victoria was certainly an obligate carrier for haemophilia as over 20 individuals subsequently inherited the condition (Figs 1 and 2). Princess Alice (1843–78) was Victoria’s third child and second daughter. Having married the Duke of Hesse at an early age, Alice went on to have seven children, one of whom, Frederick (‘Frittie’) was a haemophiliac who died at the age of 3 following a fall from a window.

Prince Leopold with Sir William Jenner at Balmoral in 1877. (Hulton Deutsch Collection Ltd.)

Alexandra was the sixth child and was only 6 years old when her mother and youngest sister died. ‘Sunny’, as she became known, was a favourite of Queen Victoria, who as far as possible directed her upbringing from across the channel: Alexandra (Alix) was forced to eat her baked apples and rice pudding with the same regularity as her English cousins. Alix visited her older sister Elizabeth (Ella) on her marriage to Grand Duke Serge and met Tsarevich Nicholas for the first time: she was 12 and not impressed. Five years later they met again and Alix fell in love, but by now she had been confirmed in the Lutheran Church and religion became the solemn core of her life.

Victoria had other aspirations for Alix. She hoped that Alix would marry her grandson Albert Victor (the Duke of Clarence), the eldest son of the Prince of Wales (later Edward VII). The Duke was an unimpressive young man who was somewhat deaf and had limited intellectual abilities. If this arrangement had proceeded then Alix’s haemophilia carrier status would have been introduced into the British Royal Family and the possibility of a British monarch with haemophilia might have become a reality; however, the Duke died in 1892.

Nicholas and Alexandra

Alix and Nicholas were married in 1894, one week after the death of Nicholas’s father (Alexander III). In the same way that Victoria, with her personal aspirations of a marriage between Alix and the Duke of Clarence, had not considered the possibility of haemophilia, neither did the St Petersburg hierarchy consider a marriage to Nicholas undesirable. Haemophilia was already well recognized in Victoria’s descendants. Her youngest son, Leopold, had already died, as had Frittie her grandson. The inheritance of haemophilia had been known for some time since its description by John Conrad Otto (Otto, 1803). However, it was as late as 1913 before the first royal marriage was declined because of the risk of haemophilia, when the Queen of Rumania decided against an association between her son, Crown Prince Ferdinand, and Olga, the eldest daughter of Nicholas and Alexandra. The Queen of Rumania was herself a granddaughter of Queen Victoria and therefore a potential haemophilia carrier!

Alix was received into the Russian Orthodox Church, taking the name of Alexandra Fedorova. The first duty of a Tsarina was to maintain the dynasty and produce a male heir, but between 1895 and 1901 Alix produced four princesses, Olga, Tatiana, Maria and Anastasia. Failure to produce a son made Alix increasingly neurotic and she had at least one false pregnancy. However, in early 1904 she was definitely pregnant.

For a month or so all seemed well with little Alexis, but it was then noticed that the Tsarevitch was bleeding excessively from the umbilicus (a relatively uncommon feature of haemophilia). At first the diagnosis was not admitted by the parents, but eventually the truth had to be faced although even then only by the doctors and immediate family. Alix was grief stricken: ‘she hardly knew a day’s happiness after she realized her boy’s fate’. As a newly diagnosed haemophilia carrier she dwelt morbidly on the fact that she had transmitted the disease. These feelings are well known to some haemophiliac mothers but the situation was different in Russia in the early twentieth century. The people regarded any defect as divine intervention. The Tsar, as head of the Church and leader of the people, must be free of any physical defect, so the Tsarevich’s haemophilia was concealed. The family retreated into greater isolation and were increasingly dominated by the young heir’s affliction (Fig 3).

Up to a third of haemophiliac males do not have a family history of the condition. This is usually thought to be the result of a relatively high mutation rate occurring in either affected males or female carriers. None of Queen Victoria’s ancestors, for many generations, showed any evidence of haemophilia. Victoria was therefore either a victim of a mutation, or the Duke of Kent was not her father. The mutation is unlikely to have been in her mother, Victoire, who had a son and daughter by her first marriage, and there is no sign of haemophilia in their numerous descendants.

Victoire was under considerable pressure to produce an heir. The year before Victoria was born, Princess Charlotte, the only close heir to the throne, had died and the Duke of Kent had somewhat reluctantly agreed to marry Victoire with the aim of producing an heir. The postulate that the Queen’s gardener had a limp has not been substantiated!

The Duke of Kent had no evidence of haemophilia (he was 51 when Victoria was born) but did inherit another condition from his father (George III): porphyria. While a young man in Gibraltar he suffered bilious attacks which were recognized as being similar to his father’s complaint.

Had Queen Victoria carried the gene for porphyria we might expect that she would have at least as many descendants with this condition as had haemophilia. Until recently only two possible cases of porphyria have been suggested amongst Victoria’s descendants: Kaiser Wilhelm’s sister and niece (MacAlpine & Hunter, 1969), but they could have inherited it from their Hohenzollern ancestor, Frederick the Great. A recent television programme (Secret History, 1998) claims to have identified two more cases in Victoria’s descendants, Princess Victoria, the Queen’s eldest daughter, and Prince William of Gloucester, grandson of George V. If these two cases are correct then they would tend to confirm that Victoria was indeed the daughter of the Duke of Kent, but the apparent lack of more cases in Victoria’s extended family is difficult to understand. The gene for acute intermittent porphyria has been isolated on chromosome 11. There is still plenty of scope for further genetic analysis on the European Royal Families!

We can only speculate as to the impact on European events over the last 150 years if the marriages within the Royal houses had been different. What is evident is the dramatic effect of haemophilia on the Royal Princes and their families.

Empress Alexandra at the Tsarevich’s bedside during a haemophiliac crisis in 1912. (Radio Times Hulton Picture Library.)

Richard F. Stevens
Royal Manchester Children’s Hospital
Brit J Haem 1999, 105, 25–32

`The longer you can look back, the further you can look forward’: Winston Churchill in an address to The Royal College of Physicians, London, 1944. At the time that Churchill was speaking in 1944, leukaemia was a fatal disease that had been identified 100 years before. The leukaemias were then described as dreaded diseases, sinister and poorly understood.

Thomas Hodgkin chose a career in medicine and enrolled as a pupil at Guy’s Hospital in London. Being a Quaker, however, he could not enter the English universities of Oxford and Cambridge and decided to follow the medical courses at Edinburgh. At that time, Aristotelian and Hippocratic medicine were greatly influencing British physicians. Hodgkin, still a medical student, wrote a paper `On the Uses of the Spleen’ in which he set out his beliefs about the purposes of the spleen: to regulate fluid volume, to clear impurities from the body and to supply expandability to the portal system. The subject was a presage of the disease that bears his name.

Hodgkin interrupted his studies at Edinburgh to spend a year in Paris where he met many people who had a great influence in his life and future activities. Among them, were Laennec (Hodgkin played an important role in bringing the stethoscope to Great Britain); Baron von Humboldt who introduced Hodgkin to the field of anthropology; Baron Cuvier, a distinguished anatomist and palaeontologist; and Thomas A. Bowditch, whose expeditions to Africa had a great impact on Hodgkin’s future activities.

In 1825, Thomas Hodgkin returned to London to join the staff at Guy’s Hospital, and in 1826 he was made `Inspector of the Dead’ and `Curator of the Museum of Morbid Anatomy’. In developing the museum he had accumulated, by 1829, over 1600 specimens demonstrating the effects of disease. The correlation of clinical disease to pathological material was quite new: from analyses of pathological specimens Hodgkin was able to describe appendicitis with perforation and peritonitis, the local spread of cancer to draining lymph nodes, noting that the tumour had similar characteristics at both sites, and features of other diseases.

In his historic paper `On Some Morbid Appearances of the Absorbent Glands and Spleen’ (Hodgkin, 1832), he briefly described the clinical histories and gross postmortem findings on six patients from the experience at Guy’s Hospital and included another case sent to him in a detailed drawing by his friend Carswell (Fig 2). In the very first paragraph he wrote: `The morbid alterations of structure which I am about to describe are probably familiar to many practical morbid anatomists, since they can scarcely have failed to have fallen under their observation in the course of cadaveric inspection’. Hodgkin’s studies had convinced him that he was dealing with a primary disease of the absorbent (lymphatic) glands. `This enlargement of the glands appeared to be a primitive affection of those bodies, rather than the result of an irritation propagated to them from some ulcerated surface or other inflamed texture – Unless the word inflammation be allowed to have a more indefinite and loose meaning, this affection – can hardly be attributed to that cause’ was stated on pages 85 and 86 of his 1832 paper. Hodgkin also mentioned that the first reference that he could find to this or similar disease was in fact by Malpighi in 1666.

Wilks (1865) described the disease in detail and, made aware by Bright that the first observations were done by Hodgkin, linked his name permanently to this new entity in a paper entitled `Cases of Enlargement of the Lymphatic Glands and Spleen (or Hodgkin’s Disease) with Remarks’ (Fig 3).

In 1837 Thomas Hodgkin was the outstanding candidate for the position of Assistant Physician at Guy’s Hospital in succession to Thomas Addison who had been promoted to Physician. After 10 years spent as Inspector of the Dead, he had published a great deal, including a two-volume work entitled The Morbid Anatomy of Serous and Mucous Membranes.

Hodgkin, acting in his other capacity, had sent Benjamin Harrison a report on the terrible consequences to native Indians of monopoly trading and on the inhuman treatment they received from officials of the Hudson’s Bay Company, of which Harrison was the financier. When the opportunity to appoint an Assistant Physician occurred, Harrison, who exercised an autocratic rule over the hospital, presided at the appointment made by the General Court. Thomas Hodgkin did not get the job and the next day he resigned all his appointments at Guy’s Hospital. Social medicine, medical problems associated with poverty, antislavery, concern for underprivileged groups such as American Indians and Africans, as well as a strong sense of responsibility defined his life after this separation.

Sternberg (1898) and Reed (1902) are generally credited with the first definitive and thorough descriptions of the histopathology of Hodgkin’s disease. Based on the findings observed in her case series, Dorothy Reed concluded `We believe then, from the descriptions in the literature and the findings in 8 cases examined, that Hodgkin’s disease has a peculiar and typical histological picture and could thus rightly be considered a histopathological disease entity’.

During the successive decades, pathologists began to describe a broader spectrum of histological features. However, it was Jackson and Parker who, in scientific papers and in their well-known book Hodgkin’s Disease and Allied Disorders (Jackson & Parker, 1947), presented the first serious effort at a histopathological classification. They assigned the name `Hodgkin’s granuloma’ to the main body of typical cases. A much more malignant variant, usually characterized by a great abundance of pleomorphic and anaplastic Reed-Sternberg cells and seen in a relatively small number of cases, was named `Hodgkin’s sarcoma’. A third, similarly infrequent, variant characterized by an extremely slow clinical evolution, a relative paucity of Reed-Sternberg cells and a great abundance of lymphocytes was termed `Hodgkin’s paragranuloma’. It was only approximately 20 years later that Lukes & Butler (1966) reported a characteristic subtype of the heterogeneous `granuloma’ category, to which they assigned the name `nodular sclerosis’. They also proposed a new histopathological classification, still in use to date, with an appreciably greater prognostic relevance and usefulness than the previous Jackson-Parker classification.

The first human bone marrow transfusion was given to a patient with aplastic anemia in 1939 [9]. This patient received daily blood transfusions, and an attempt to raise her leukocyte and platelet counts was made using intravenous injection of bone marrow. After World War II and the use of the atomic bomb, researchers tried to find ways to restore bone marrow function in aplasia caused by radiation exposure. In the 1950s, it was proven in a mouse model that marrow aplasia secondary to radiation can be overcome by syngeneic marrow graft [10]. In 1956, Barnes and colleagues published their experiment on two groups of mice with acute leukemia: both groups were irradiated as anti-leukemic therapy and both were salvaged from marrow aplasia by bone marrow transplantation.

The topics of leukemias and lymphomas will not be discussed further here.

The related references are:

Leukaemia – A Brief Historical Review from Ancient Times to 1950
British Journal of Haematology, 2001, 112, 282-292

The Story of Chronic Myeloid Leukaemia
British Journal of Haematology, 2000, 110, 2-11

Historical Review of Lymphomas
British Journal of Haematology 2000, 109, 466-476

Historical Review of Hodgkin’s Disease
British Journal of Haematology, 2000, 110, 504-511

Multiple Myeloma: an Odyssey of Discovery
British Journal of Haematology, 2000, 111, 1035-1044

The History of Blood Transfusion
British Journal of Haematology, 2000, 110, 758-767

Hematopoietic Stem Cell Transplantation—50 Years of Evolution and Future Perspectives. Henig I, Zuckerman T.
Rambam Maimonides Med J 2014;5 (4):e0028.
http://dx.doi.org/10.5041/RMMJ.10162

Landmarks in the history of blood transfusion.

1666 Richard Lower (Oxford) conducts experiments involving transfusion of blood from one animal to another

1667 Jean Denis (Paris) transfuses blood from animals to humans

1818 James Blundell (London) is credited with being the first person to transfuse blood from one human to another

1901 Karl Landsteiner (Vienna) discovers ABO blood groups. Awarded Nobel Prize for Medicine in 1930

1908 Alexis Carrel (New York) develops a surgical technique for transfusion, involving anastomosis of vein in the recipient with artery in the donor. Awarded Nobel Prize for Medicine in 1912

1915 Richard Lewisohn (New York) develops 0.2% sodium citrate as an anticoagulant

1921 The first blood donor service in the world was established in London by Percy Oliver

1937 Blood bank established in a Chicago hospital by Bernard Fantus

1940 Landsteiner and Wiener (New York) identify Rhesus antigens in man

1940 Edwin Cohn (Boston) develops a method for fractionation of plasma proteins. The following year, albumin produced by this method was used for the first time to treat victims of the Japanese attack on Pearl Harbour

1945 Antiglobulin test devised by Coombs (Cambridge), which also facilitated identification of several other antigenic systems such as Kell (Coombs et al, 1946), Duffy (Cutbush et al, 1950) and Kidd (Cutbush et al, 1950)

1948 National Blood Transfusion Service (NBTS) established in the UK

1951 Edwin Cohn (Boston) and colleagues develop the first blood cell separator

1964 Judith Pool (Palo Alto, California) develops cryoprecipitate for the treatment of haemophilia

1966 Cyril Clarke (Liverpool) reports the use of anti-Rh antibody to prevent haemolytic disease of the newborn


Selected Contributions to Chemistry from 1880 to 1980

Curator: Larry H. Bernstein, MD, FCAP

 

FUNDAMENTALS OF CHEMISTRY – Vol. I  The Contribution of Nobel Laureates to Chemistry

– Ferruccio Trifiro

http://www.eolss.net/sample-chapters/c06/e6-11-01-04.pdf

This chapter deals with the contribution to the development of chemistry of all the Nobel Prize winners in chemistry up to the end of the twentieth century, together with some in physics and medicine or physiology whose work has had particular relevance for the advances achieved in chemistry. The contributions of the various Nobel laureates cited are briefly summarized. The Nobel laureates in physics dealt with in this chapter are those who made important contributions toward the understanding of the properties of atoms, the development of theoretical tools to treat the chemical bond, or the development of new analytical instrumentation. The Nobel laureates in medicine or physiology cited here are those whose contributions have been in the area of using chemistry to understand natural processes, such as the physiological aspects of living organisms through electron and ion exchange processes, enzymatic catalysis, and DNA-based chemistry. Eight thematic areas were chosen into which the contributions of the Nobel laureates to chemistry can be subdivided.

  4. The Properties of Molecules

4.1. The Discovery of Coordination and Metallorganic Compounds

4.2. The Discovery of New Organic Molecules

4.3. The Emergence of Quantum Chemistry

  6. The Dynamics of Chemical Reactions

6.1. Kinetics of Heterogeneous and Homogeneous Processes

6.2. The Identification of the Activated State

  8. The Understanding of Natural Processes

8.1. From Ferments to Enzymes

8.2. Understanding the Mechanism of Action of Enzymes

8.3. Mechanisms of Important Natural Processes

8.4. Characterization of Biologically Important Molecules

  9. The Identification of Chemical Entities

9.1. Analytical Methods

9.2. New Separation Techniques

9.3. The Development of New Instrumentation for Structure Analysis

The Nobel Prize in Chemistry: The Development of Modern Chemistry

by Bo G. Malmström and Bertil Andersson*

http://www.nobelprize.org/nobel_prizes/themes/chemistry/malmstrom/

Introduction

1.1 Chemistry at the Borders to Physics and Biology

The turn of the century 1900 was also a turning point in the history of chemistry. A survey of the Nobel Prizes in Chemistry during this century provides a view toward important trends in the development of Chemistry at the center of the sciences, bordering onto physics, which provides its theoretical foundation, on one side, and onto biology on the other. The fact that chemistry flourished during the beginning of the 20th century is intimately connected with fundamental developments in physics.

In 1897 Sir Joseph John Thomson of Cambridge announced his discovery of the electron, for which he was awarded the Nobel Prize for Physics in 1906. It took a number of years before its relevance to chemistry was seen. In 1911 Ernest Rutherford, who had worked in Thomson’s laboratory in the 1890s, formulated an atomic model, which depicted a cloud of electrons circling around the nucleus. Rutherford had received the Nobel Prize for Chemistry in 1908 for his work on radioactivity.

In Rutherford’s atomic model the stability of atoms was at variance with the laws of classical physics. Niels Bohr from Copenhagen found the clue to this dilemma in the distinct lines observed in the spectra of atoms, the regularities of which had been discovered in 1890 by the physics professor Johannes (Janne) Rydberg at Lund University. This was the basis for Bohr’s formulation (1913) of an alternative atomic model, in which only certain circular orbits of the electrons are allowed. In this model light is emitted (or absorbed) when an electron makes a transition from one orbit to another. For this, Bohr received the Nobel Prize for Physics in 1922.

Gilbert Newton Lewis next suggested in 1916 that strong (covalent) bonds between atoms involve a sharing of two electrons between these atoms (electron-pair bond). Lewis also contributed fundamental work in chemical thermodynamics, and his brilliant textbook, Thermodynamics (1923), written together with Merle Randall, is counted as one of the masterworks in the chemical literature. Lewis never received a Nobel Prize.

However, important work published in the 1890s was considered by the first Nobel Committee for Chemistry (see Section 2). Three of the Laureates during the first decade, Jacobus Henricus van’t Hoff, Svante Arrhenius and Wilhelm Ostwald, are generally regarded as the founders of a new branch of chemistry, physical chemistry. Fundamental work was also recognized in organic chemistry and in the chemistry of natural products, which is clearly reflected in the early prizes. Further, the Nobel Committee recognized the border towards biology in 1907, with the prize to Eduard Buchner “for his biochemical researches and his discovery of cell-free fermentation”.

  2. The First Decade of Nobel Prizes for Chemistry

So much fundamental work in chemistry had been carried out during the last two decades of the 19th century that a decision for the first several prizes was not easy.  In 1901 the Academy had to consider 20 nominations, but no less than 11 of these named van’t Hoff, who was selected. van’t Hoff had already established the four valences for the carbon atom in his PhD thesis in Utrecht in 1874, foundation work for  modern organic chemistry. But the Nobel Prize was awarded for his later work on chemical kinetics and equilibria and on the osmotic pressure in solution, published in 1884 and 1886.

In his 1886 work van’t Hoff showed that most dissolved chemical compounds give an osmotic pressure equal to the gas pressure they would have exerted in the absence of the solvent. An apparent exception was aqueous solutions of electrolytes (acids, bases and their salts), but in the following year Arrhenius showed that this anomaly could be explained if it is assumed that electrolytes in water dissociate into ions. Arrhenius had already presented the rudiments of his dissociation theory in his doctoral thesis, which was defended in Uppsala in 1884 and was not entirely well received by the faculty. It was, however, strongly supported by Ostwald in Riga, who, in fact, travelled to Uppsala to initiate a collaboration with Arrhenius. In 1886-1890 Arrhenius did work with Ostwald, first in Riga and then in Leipzig, and also with van’t Hoff in Berlin. Arrhenius was awarded the Nobel Prize for Chemistry in 1903, and he was also nominated for the Prize for Physics (see Section 1).
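As an added aside (not part of the original essay), van’t Hoff’s osmotic-pressure law and Arrhenius’s correction for electrolytes can be written compactly as

$$\Pi = cRT \quad \text{(non-electrolytes)}, \qquad \Pi = i\,cRT \quad \text{(electrolytes)},$$

where $c$ is the molar concentration, $R$ the gas constant, $T$ the absolute temperature and $i$ the van’t Hoff factor, which exceeds 1 because dissociation into ions increases the number of dissolved particles.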

The award of the Nobel Prize for Chemistry in 1909 to Ostwald was chiefly in recognition of his work on catalysis and the rates of chemical reactions. Ostwald had in his investigations, following up observations in his thesis in 1878, shown that the rate of acid-catalyzed reactions is proportional to the square of the strength of the acid, as measured by titration with base. His work offered support not only to Arrhenius’ theory of dissociation but also to van’t Hoff’s theory for osmotic pressure. Ostwald was founder and editor of Zeitschrift für Physikalische Chemie, the publication of which is generally regarded as the birth of this new branch of chemistry.

Three of the Nobel Prizes for Chemistry during the first decade were awarded for pioneering work in organic chemistry. In 1902 Emil Fischer, then in Berlin, was given the prize for “his work on sugar and purine syntheses”. Fischer’s work is an example of the growing interest in biologically important substances, and was a foundation for the development of biochemistry. Another major influence from organic chemistry was the development of chemical industry, and a chief contributor here was Fischer’s teacher, Adolf von Baeyer in Munich, who was awarded the prize in 1905 “in recognition of his services in the advancement of organic chemistry and the chemical industry, … ” His contributions include, in particular, the structure determination and synthesis of organic dyes, notably indigo.

Ernest Rutherford [Lord Rutherford since 1931], professor of physics in Manchester, was awarded the Nobel Prize for Chemistry in 1908. In his studies of uranium disintegration he found two types of radiation, named α- and β-rays, and by their deviation in electric and magnetic fields he could show that α-rays consist of positively charged particles. He had received many nominations for the Nobel Prize for Physics (see Section 1).

In 1897 Eduard Buchner, at the time professor in Tübingen, published results demonstrating that the fermentation of sugar to alcohol and carbon dioxide can take place in the absence of yeast cells. Louis Pasteur had earlier maintained that alcoholic fermentation can only occur in the presence of living yeast cells. Buchner’s experiments showed unequivocally that fermentation is a catalytic process caused by the action of enzymes, as had been suggested by Berzelius for all life processes. Because of Buchner’s experiment, 1897 is generally regarded as the birth date for biochemistry proper. Buchner was awarded the Nobel Prize for Chemistry in 1907, when he was professor at the agricultural college in Berlin. This confirmed the prediction of his former teacher, Adolf von Baeyer: “This will make him famous, in spite of the fact that he lacks talent as a chemist.”

  3. The Nobel Prizes for Chemistry 1911-2000

3.1 General and Physical Chemistry

The Nobel Prize for Chemistry in 1914 was awarded to Theodore William Richards of Harvard University for “his accurate determinations of the atomic weight of a large number of chemical elements”. In 1913 Richards had discovered that the atomic weight of natural lead and of that formed in radioactive decay of uranium minerals differ. This pointed to the existence of isotopes, i.e. atoms of the same element with different atomic weights, which was accurately demonstrated by Francis William Aston at Cambridge University, with the aid of an instrument developed by him, the mass spectrograph. For his achievements Aston received the Nobel Prize for Chemistry in 1922.

One branch of physical chemistry deals with chemical events at the interface of two phases, for example, solid and liquid, and phenomena at such interfaces have important applications all the way from technical to physiological processes. Detailed studies of adsorption on surfaces were carried out by Irving Langmuir at the research laboratory of the General Electric Company, who was awarded the Nobel Prize for Chemistry in 1932, the first industrial scientist to receive this distinction.

Two of the Prizes for Chemistry in more recent decades have been given for fundamental work in the application of spectroscopic methods (Prizes for Physics in 1952, 1955 and 1961) to chemical problems. Gerhard Herzberg, a physicist at the University of Saskatchewan, received the Nobel Prize for Chemistry in 1971 for his molecular spectroscopy studies “of the electronic structure and geometry of molecules, particularly free radicals”. The most used spectroscopic method in chemistry is undoubtedly NMR (nuclear magnetic resonance), and Richard R. Ernst at ETH in Zürich was given the Nobel Prize for Chemistry in 1991 for “the development of the methodology of high resolution nuclear magnetic resonance (NMR) spectroscopy”. Ernst’s methodology has now made it possible to determine the structure in solution (in contrast to crystals; cf. Section 3.5) of large molecules, such as proteins.

3.2 Chemical Thermodynamics

The Nobel Prize for Chemistry to van’t Hoff was in part for work in chemical thermodynamics, and many later contributions in this area have also been recognized with Nobel Prizes. Walther Hermann Nernst of Berlin received this award in 1920 for work in thermochemistry, despite a 16-year opposition to this recognition from Arrhenius. Nernst had shown that it is possible to determine the equilibrium constant for a chemical reaction from thermal data, and in so doing he formulated what he himself called the third law of thermodynamics. This states that the entropy, a thermodynamic quantity, which is a measure of the disorder in the system, approaches zero as the temperature goes towards absolute zero. van’t Hoff had derived the mass action equation in 1886, with the aid of the second law, which says that the entropy increases in all spontaneous processes [this had already been done in 1876 by J. Willard Gibbs at Yale, who certainly had deserved a Nobel Prize]. Nernst showed in 1906 that it is possible, with the aid of the third law, to derive the necessary parameters from the temperature dependence of thermochemical quantities. Nernst carried out thermo-chemical measurements at very low temperatures to prove his heat theorem. G.N. Lewis (see Section 1.1) in Berkeley extended these studies in the 1920s and his new formulation of the third law was confirmed by his student, William Francis Giauque, who extended the temperature range experimentally accessible by introducing the method of adiabatic demagnetization in 1933. He managed to reach temperatures a few thousandths of a degree above absolute zero and could thereby provide extremely accurate entropy estimates. He also showed that it is possible to determine entropies from spectroscopic data. Giauque was awarded the Nobel Prize for Chemistry in 1949 for his contributions to chemical thermodynamics.
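For readers who want the formulas behind this paragraph (an added sketch, not part of the original essay), the key relations are

$$\Delta G^{\circ} = \Delta H^{\circ} - T\Delta S^{\circ}, \qquad \Delta G^{\circ} = -RT\ln K, \qquad \lim_{T \to 0} S = 0 \ \ \text{(third law)}.$$

Because the third law fixes the zero of entropy, heat capacities and heats of reaction measured down to low temperature suffice, in principle, to evaluate ΔG° and hence the equilibrium constant K from thermal data alone, which is what Nernst’s heat theorem made possible.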

The next Nobel Prize given for work in thermodynamics went to Lars Onsager of Yale University in 1968 for contributions to the thermodynamics of irreversible processes. Classical thermodynamics deals with systems at equilibrium, in which the chemical reactions are said to be reversible, but many chemical systems, for example, the most complex of all, living organisms, are far from equilibrium and their reactions are said to be irreversible. Onsager developed his so-called reciprocal relations in 1931, describing the flow of matter and energy in such systems, but the importance of his work was not recognized until the end of the 1940s. A further step forward in the development of non-equilibrium thermodynamics was taken by Ilya Prigogine in Brussels, whose theory of dissipative structures was awarded the Nobel Prize for Chemistry in 1977.

3.3 Chemical Change

The chief method to get information about the mechanism of chemical reactions is chemical kinetics, i.e. measurements of the rate of the reaction as a function of reactant concentrations as well as its dependence on temperature, pressure and reaction medium. Important work in this area had been done already in the 1880s by two of the early Laureates, van’t Hoff and Arrhenius, who showed that it is not enough for molecules to collide for a reaction to take place. Only molecules with sufficient kinetic energy in the collision do, in fact, react, and Arrhenius derived an equation in 1889 allowing the calculation of this activation energy from the temperature dependence of the reaction rate. With the advent of quantum mechanics in the 1920s (see Section 3.4), Henry Eyring developed his transition-state theory in 1935, which showed that the activation entropy is also important. Strangely, Eyring never received a Nobel Prize (see Section 1.2).
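For concreteness (an added aside, not text from the original essay), the Arrhenius equation referred to above is

$$k = A\,e^{-E_a/RT}, \qquad \ln k = \ln A - \frac{E_a}{RT},$$

so a plot of ln k against 1/T is a straight line of slope −E_a/R, which is how the activation energy E_a is extracted from the measured temperature dependence of the rate constant k.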

In 1956 Sir Cyril Norman Hinshelwood of Oxford and Nikolay Nikolaevich Semenov from Moscow shared the Nobel Prize for Chemistry “for their researches into the mechanism of chemical reactions”. A limit in investigating reaction rates is set by the speed with which the reaction can be initiated. If this is done by rapid mixing of the reactants, the time limit is about one thousandth of a second (millisecond). In the 1950s Manfred Eigen from Göttingen developed chemical relaxation methods that allow measurements in times as short as a thousandth or a millionth of a millisecond (microseconds or nanoseconds). The methods involve disturbing an equilibrium by rapid changes in temperature or pressure and then following the passage to a new equilibrium. Another way to initiate some reactions rapidly is flash photolysis, i.e. by short light flashes, a method developed by Ronald G.W. Norrish at Cambridge and George Porter (Lord Porter since 1990) in London. Eigen received one-half and Norrish and Porter shared the other half of the Nobel Prize for Chemistry in 1967. The milli- to picosecond time scales gave important information on chemical reactions. However, it was not until it was possible to generate femtosecond laser pulses (10⁻¹⁵ s) that it became possible to reveal when chemical bonds are broken and formed. Ahmed Zewail (born 1946 in Egypt) at California Institute of Technology received the Nobel Prize for Chemistry in 1999 for his development of “femtochemistry” and in particular for being the first to experimentally demonstrate a transition state during a chemical reaction. His experiments relate back to 1889 when Arrhenius (Nobel Prize, 1903) made the important prediction that there must exist intermediates (transition states) in the transformation from reactants to products.

Henry Taube of Stanford University was awarded the Nobel Prize for Chemistry in 1983 “for his work on the mechanism of electron transfer reactions, especially in metal complexes”. Even if Taube’s work was on inorganic reactions, electron transfer is important in many catalytic processes used in industry and also in biological systems, for example, in respiration and photosynthesis.

3.4 Theoretical Chemistry and Chemical Bonding

Quantum mechanics, developed in the 1920s, offered a tool towards a more basic understanding of chemical bonds. In 1927 Walter Heitler and Fritz London showed that it is possible to treat quantum mechanically the bond in the hydrogen molecule, i.e. two hydrogen nuclei sharing a pair of electrons, and thereby calculate the attraction between the atoms. A pioneer in developing such methods was Linus Pauling at California Institute of Technology, who was awarded the Nobel Prize for Chemistry in 1954 “for his research into the nature of the chemical bond …” Pauling’s valence-bond (VB) method is rigorously described in his 1935 book Introduction to Quantum Mechanics (written together with E. Bright Wilson, Jr., at Harvard). A few years later (1939) he published an extensive non-mathematical treatment in The Nature of the Chemical Bond, a book which is one of the most read and influential in the entire history of chemistry. Pauling was not only a theoretician, but he also carried out extensive investigations of chemical structure by X-ray diffraction (see Section 3.5). On the basis of results with small peptides, which are building blocks of proteins, he suggested the α-helix as an important structural element. Pauling was awarded the Nobel Peace Prize for 1962, and he is the only person to date to have won two unshared Nobel Prizes.

Pauling’s α-helix

α-carbon atoms are black, other carbon atoms grey, nitrogen atoms blue, oxygen atoms red and hydrogen atoms white; R designates amino-acid side chains. The dotted red lines are hydrogen bonds between amide and carbonyl groups in the peptide bonds.

Pauling’s VB method cannot give an adequate description of chemical bonding in many complicated molecules, and a more comprehensive treatment, the molecular-orbital (MO) method, was introduced already in 1927 by Robert S. Mulliken from Chicago and later developed further. MO theory considers, in quantum-mechanical terms, the interaction between all atomic nuclei and electrons in a molecule. Mulliken also showed that a combination of MO calculations with experimental (spectroscopic) results provides a powerful tool for describing bonding in large molecules. Mulliken received the Nobel Prize for Chemistry in 1966.

Theoretical chemistry has also contributed significantly to our understanding of chemical reaction mechanisms. In 1981 the Nobel Prize for Chemistry was shared between Kenichi Fukui in Kyoto and Roald Hoffmann of Cornell University “for their theories, developed independently, concerning the course of chemical reactions”. Fukui introduced in 1952 the frontier-orbital theory, according to which the occupied MO with the highest energy and the unoccupied one with the lowest energy have a dominant influence on the reactivity of a molecule. Hoffmann formulated in 1965, together with Robert B. Woodward (see Section 3.8), rules based on the conservation of orbital symmetry, for the reactivity and stereochemistry in chemical reactions.
3.5 Chemical Structure

The most commonly used method to determine the structure of molecules in three dimensions is X-ray crystallography. The diffraction of X-rays was discovered by Max von Laue in 1912, and this gave him the Nobel Prize for Physics in 1914. Its use for the determination of crystal structure was developed by Sir William Bragg and his son, Sir Lawrence Bragg, and they shared the Nobel Prize for Physics in 1915. The first Nobel Prize for Chemistry for the use of X-ray diffraction went to Petrus (Peter) Debye, then of Berlin, in 1936. Debye did not study crystals, however, but gases, which give less distinct diffraction patterns.

Many Nobel Prizes have been awarded for the determination of the structure of biological macromolecules (proteins and nucleic acids). Proteins are long chains of amino-acids, as shown by Emil Fischer (see Section 2), and the first step in the determination of their structure is to determine the order (sequence) of these building blocks. An ingenious method for this tedious task was developed by Frederick Sanger of Cambridge, and he reported the amino-acid sequence for a protein, insulin, in 1955. For this achievement he was awarded the Nobel Prize for Chemistry in 1958. Sanger later received part of a second Nobel Prize for Chemistry for a method to determine the nucleotide sequence in nucleic acids (see Section 3.12), and he is the only scientist so far who has won two Nobel Prizes for Chemistry.

The first protein crystal structures were reported by Max Perutz and Sir John Kendrew in 1960, and these two investigators shared the Nobel Prize for Chemistry in 1962. Perutz had started studying the oxygen-carrying blood pigment, hemoglobin, with Sir Lawrence Bragg in Cambridge already in 1937, and ten years later he was joined by Kendrew, who looked at crystals of the related muscle pigment, myoglobin. These proteins are both rich in Pauling’s α-helix (see Section 3.4), and this made it possible to discern the main features of the structures at the relatively low resolution first used. The same year that Perutz and Kendrew won their prize, the Nobel Prize for Physiology or Medicine went to Francis Crick, James Watson and Maurice Wilkins “for their discoveries concerning the molecular structure of nucleic acids … .” Two years later (1964) Dorothy Crowfoot Hodgkin received the Nobel Prize for Chemistry for determining the crystal structures of penicillin and vitamin B12.

Crystallographic electron microscopy was developed by Sir Aaron Klug in Cambridge, who was awarded the Nobel Prize for Chemistry in 1982. Attempts to prepare crystals of membrane proteins for structural studies had long been unsuccessful, but in 1982 Hartmut Michel managed to crystallize a photosynthetic reaction center after a painstaking series of experiments. He then proceeded to determine the three-dimensional structure of this protein complex in collaboration with Johann Deisenhofer and Robert Huber, and this was published in 1985. Deisenhofer, Huber and Michel shared the Nobel Prize for Chemistry in 1988. Michel later also crystallized and determined the structure of the terminal enzyme in respiration, and his two structures have allowed detailed studies of electron transfer (cf. Sections 3.3 and 3.4) and its coupling to proton pumping, key features of the chemiosmotic mechanism for which Peter Mitchell had already received the Nobel Prize for Chemistry in 1978 (see Section 3.12). Functional and structural studies on the enzyme ATP synthase, connected to this proton pumping mechanism, were awarded one-half of the Nobel Prize for Chemistry in 1997, shared between Paul D. Boyer and John Walker (see Section 3.12).

3.6 Inorganic and Nuclear Chemistry

Much of the progress in inorganic chemistry during the 20th century has been associated with investigations of coordination compounds, i.e., a central metal ion surrounded by a number of coordinating groups, called ligands. In 1893 Alfred Werner in Zürich presented his coordination theory, and in 1905 he summarized his investigations in this new field in a book (Neuere Anschauungen auf dem Gebiete der anorganischen Chemie), which appeared in no less than five editions from 1905 to 1923. Werner showed that, in compounds in which a metal ion binds several other molecules (ligands), all the ligand molecules are bound directly to the metal ion. Werner was awarded the Nobel Prize for Chemistry in 1913. Taube’s investigations of electron transfer, awarded in 1983 (see Section 3.3), were mainly carried out with coordination compounds, and vitamin B12 as well as the proteins hemoglobin and myoglobin, investigated by the Laureates Hodgkin, Perutz and Kendrew (see Section 3.5), also belong to this category.

Much inorganic chemistry in the early 1900s was a consequence of the discovery of radioactivity in 1896, for which Henri Becquerel from Paris was awarded the Nobel Prize for Physics in 1903, together with Pierre and Marie Curie. In 1911 Marie Curie received the Nobel Prize for Chemistry for her discovery of the elements radium and polonium and for the isolation of radium and studies of its compounds, and this made her the first investigator to be awarded two Nobel Prizes. The prize in 1921 went to Frederick Soddy of Oxford for his work on the chemistry of radioactive substances and on the origin of isotopes. In 1934 Frédéric Joliot and his wife Irène Joliot-Curie, the daughter of the Curies, discovered artificial radioactivity, i.e., new radioactive elements produced by the bombardment of non-radioactive elements with α-particles or neutrons. They were awarded the Nobel Prize for Chemistry in 1935 for “their synthesis of new radioactive elements”.

Many elements are mixtures of non-radioactive isotopes (see Section 3.1), and in 1934 Harold Urey of Columbia University had been given the Nobel Prize for Chemistry for his isolation of heavy hydrogen (deuterium). Urey had also separated uranium isotopes, and his work was an important basis for the investigations by Otto Hahn from Berlin. In attempts to make transuranium elements, i.e., elements with a higher atomic number than 92 (uranium), by irradiating uranium atoms with neutrons, Hahn discovered that one of the products was barium, a lighter element. Lise Meitner, at the time a refugee from Nazism in Sweden, who had earlier worked with Hahn and taken the initiative for the uranium bombardment experiments, provided the explanation, namely, that the uranium atom was cleaved and that barium was one of the products. Hahn was awarded the Nobel Prize for Chemistry in 1944 “for his discovery of the fission of heavy nuclei”, and one may wonder why Meitner was not included. Hahn’s original intention with his experiments was later achieved by Edwin M. McMillan and Glenn T. Seaborg of Berkeley, who were given the Nobel Prize for Chemistry in 1951 for “discoveries in the chemistry of transuranium elements”.

The use of stable as well as radioactive isotopes has important applications, not only in chemistry, but also in fields as far apart as biology, geology and archeology. In 1943 George de Hevesy from Stockholm received the Nobel Prize for Chemistry for his work on the use of isotopes as tracers, involving studies in inorganic chemistry and geochemistry as well as on the metabolism in living organisms. The prize in 1960 was given to Willard F. Libby of the University of California, Los Angeles (UCLA), for his method to determine the age of various objects (of geological or archeological origin) by measurements of the radioactive isotope carbon-14.
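As an added illustration of Libby’s method (standard radioactive-decay arithmetic, not text from the original essay), the age t of a sample follows from the surviving fraction of carbon-14:

$$N(t) = N_0\,e^{-\lambda t}, \qquad t = \frac{1}{\lambda}\ln\frac{N_0}{N(t)}, \qquad \lambda = \frac{\ln 2}{t_{1/2}},$$

where the half-life of carbon-14 is about 5730 years (the modern value), so a sample retaining half of its original carbon-14 activity is roughly 5730 years old.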

3.7 General Organic Chemistry

Contributions in organic chemistry have led to more Nobel Prizes for Chemistry than work in any other of the traditional branches of chemistry. Like the first prize in this area, that to Emil Fischer in 1902 (see Section 2), most of them have, however, been awarded for advances in the chemistry of natural products and will be treated separately (Section 3.9). Another large group, preparative organic chemistry, has also been given its own section (Section 3.8), and here only the prizes for more general contributions to organic chemistry will be discussed.

In 1969 the Nobel Prize for Chemistry went to Sir Derek H. R. Barton from London and Odd Hassel from Oslo for developing the concept of conformation, i.e. the spatial arrangements of atoms in a molecule that differ only by rotation of chemical groups around a single bond. This stereochemical concept rests on the original suggestion by van’t Hoff of the tetrahedral arrangement of the four valences of the carbon atom (see Section 2), and most organic molecules exist in two or more stable conformations.

The Nobel Prize for Chemistry in 1975 to Sir John Warcup Cornforth of the University of Sussex and Vladimir Prelog of ETH in Zürich was also based on research in stereochemistry. Not only can a compound have more than one geometric form, but chemical reactions can also have specificity in their stereochemistry, thereby forming a product with a particular three-dimensional arrangement of the atoms. This is especially true of reactions in living organisms, and Cornforth has mainly studied enzyme-catalyzed reactions, so his work borders onto biochemistry (Section 3.12). One of Prelog’s main contributions concerns chiral molecules, i.e. molecules that have two forms differing from one another as the right hand does from the left. Stereochemically specific reactions have great practical importance, as many drugs, for example, are active only in one particular geometric form.

Organometallic compounds constitute a group of organic molecules containing one or more carbon–metal bonds, and they are thus the organic counterpart of Werner’s inorganic coordination compounds. In 1952 Ernst Otto Fischer and Sir Geoffrey Wilkinson independently described a completely new group of organometallic molecules, called sandwich compounds, in which a metal ion is not bound to a single carbon atom but is “sandwiched” between two aromatic organic molecules. Fischer and Wilkinson shared the Nobel Prize for Chemistry in 1973.

3.8 Preparative Organic Chemistry

One of the chief goals of the organic chemist is to be able to synthesize increasingly complex compounds of carbon in combination with various other elements. The first Nobel Prize for Chemistry recognizing pioneering work in preparative organic chemistry was that to Victor Grignard from Nancy and Paul Sabatier from Toulouse in 1912. Grignard had discovered that organic halides can form compounds with magnesium. Sabatier was given the prize for developing a method to hydrogenate organic compounds in the presence of metallic catalysts. The prize in 1950 was presented to Otto Diels from Kiel and Kurt Alder from Cologne “for their discovery and development of the diene synthesis”, developed in 1928, by which organic compounds containing two double bonds (“dienes”) can effect the syntheses of many cyclic organic substances.

The German organic chemist Hans Fischer from Munich had already done significant work on the structure of hemin, the organic pigment in hemoglobin, when he synthesized it from simpler organic molecules in 1928. He also contributed much to the elucidation of the structure of chlorophyll, and for these important achievements he was awarded the Nobel Prize for Chemistry in 1930 (cf. Section 3.5). He finished his determination of the structure of chlorophyll in 1935, and by the time of his death he had almost completed its synthesis as well.

Robert Burns Woodward from Harvard is rightly considered the founder of the most advanced, modern art of organic synthesis. He designed methods for the total synthesis of a large number of complicated natural products, for example, cholesterol, chlorophyll and vitamin B12. He received the Nobel Prize for Chemistry in 1965, and he would probably have received a second chemistry prize in 1981 for his part in the formulation of the Woodward-Hoffmann rules (see Section 3.4), had it not been for his early death.

The Nobel Prize for Chemistry in 1984 was given to Robert Bruce Merrifield of Rockefeller University “for his development of methodology for chemical synthesis on a solid matrix”, specifically the synthesis of large peptides and small proteins.

3.9 Chemistry of Natural Products

The synthesis of complex organic molecules must be based on detailed knowledge of their structure. Early work on plant pigments was carried out by Richard Willstätter, a student of Adolf von Baeyer from Munich (see Section 2). Willstätter showed a structural relatedness between chlorophyll and hemin, and he demonstrated that chlorophyll contains magnesium as an integral component. He also carried out pioneering investigations on other plant pigments, such as the carotenoids, and he was awarded the Nobel Prize for Chemistry in 1915 for these achievements. Willstätter’s work laid the ground for the synthetic accomplishments of Hans Fischer (see Section 3.8). In addition, Willstätter contributed to the understanding of enzyme reactions.

The prizes for 1927 and 1928 were both presented to Heinrich Otto Wieland from Munich and Adolf Windaus from Göttingen, respectively, at the Nobel ceremony in 1928. These two chemists had done closely related work on the structure of steroids. The award to Wieland was primarily for his investigations of bile acids, whereas Windaus was recognized mainly for his work on cholesterol and his demonstration of the steroid nature of vitamin D. Wieland had already in 1912, before his prize-winning work, formulated a theory for biological oxidation, according to which removal of hydrogen (dehydrogenation) rather than reaction with oxygen is the dominating process.

Investigations on vitamins were recognized in 1937 and 1938 with the prizes to Sir Norman Haworth from Birmingham and Paul Karrer from Zürich and to Richard Kuhn from Heidelberg. Haworth did outstanding work in carbohydrate chemistry, establishing the ring structure of glucose. He was the first chemist to synthesize vitamin C, and this is the basis for the present large-scale production of this nutrient. Haworth shared the prize with Karrer, who determined the structure of carotene and of vitamin A. Kuhn also worked on carotenoids, and he published the structure of vitamin B2 at the same time as Karrer. He also isolated vitamin B6. In 1939 the Nobel Prize for Chemistry was shared between Adolf Butenandt from Berlin and Leopold Ruzicka (1887-1976) of ETH, Zurich. Butenandt was recognized “for his work on sex hormones”, having isolated estrone, progesterone and androsterone. Ruzicka synthesized androsterone and also testosterone.

The awards for outstanding work in natural-product chemistry continued after World War II. In 1947 Sir Robert Robinson from Oxford received the prize for his studies on plant substances, particularly alkaloids, such as morphine. Robinson also synthesized steroid hormones, and he elucidated the structure of penicillin. Many hormones are of a polypeptide nature, and in 1955 Vincent du Vigneaud of Cornell University was given the prize for his synthesis of two such hormones, vasopressin and oxytocin. Finally, in this area, Alexander R. Todd (Lord Todd since 1962) was recognized in 1957 “for his work on nucleotides and nucleotide co-enzymes”. Todd had synthesized ATP (adenosine triphosphate) and ADP (adenosine diphosphate), the main energy carriers in living cells, and he determined the structure of vitamin B12 (cf. Section 3.5) and of FAD (flavin-adenine dinucleotide).

3.10 Analytical Chemistry and Separation Science

A prize in analytical chemistry was given to Jaroslav Heyrovsky from Prague in 1959 for his development of polarographic methods of analysis. In these a dropping mercury electrode is employed to determine current-voltage curves for electrolytes. A given ion reacts at a specific voltage, and the current is a measure of the concentration of this ion.

The analysis of macromolecular constituents in living organisms requires specialized methods of separation. Ultracentrifugation was developed by The Svedberg from Uppsala a few years before he was awarded the Nobel Prize for Chemistry in 1926 “for his work on disperse systems” (see Section 3.11). Svedberg’s student, Arne Tiselius, studied the migration of protein molecules in an electric field, and with this method, named electrophoresis, he demonstrated the complex nature of blood proteins. Tiselius also refined adsorption analysis, a method first used by the Russian botanist Michail Tswett for the separation of plant pigments and named chromatography by him. In 1948 Tiselius was given the prize for these achievements. A few years later (1952) Archer J.P. Martin from London and Richard L.M. Synge from Bucksburn (Scotland) shared the prize “for their invention of partition chromatography”, and this method became a major tool in many biochemical investigations later recognized with Nobel Prizes (see Section 3.12).

3.11 Polymers and Colloids

The Svedberg, who received the Nobel Prize for Chemistry in 1926, also investigated gold sols. He used Zsigmondy’s ultramicroscope to study the Brownian movement of colloidal particles, so named after the Scottish botanist Robert Brown, and confirmed a theory developed by Albert Einstein in 1905 and, independently, by M. Smoluchowski. His greatest achievement was, however, the construction of the ultracentrifuge, with which he not only studied the particle size distribution in gold sols but also determined the molecular weight of proteins, for example, hemoglobin. In the same year that Svedberg received his prize, the Nobel Prize for Physics was awarded to Jean Baptiste Perrin of the Sorbonne for developing equilibrium sedimentation in colloidal solutions, a method which Svedberg later perfected in his ultracentrifuge. Svedberg’s investigations with the ultracentrifuge and Tiselius’s electrophoresis studies (see Section 3.10) were instrumental in establishing that protein molecules have a unique size and structure, and this was a prerequisite for Sanger’s determination of their amino-acid sequence and the crystallographic work of Kendrew and Perutz (see Section 3.5).

3.12 Biochemistry

The second Nobel Prize for discoveries in biochemistry came in 1929, when Sir Arthur Harden from London and Hans von Euler-Chelpin from Stockholm shared the prize for investigations of sugar fermentation, which formed a direct continuation of Buchner’s work awarded in 1907. With his young co-worker, William John Young, Harden had shown in 1906 that fermentation requires a dialysable substance, called co-zymase, which is not destroyed by heat. Harden and Young also demonstrated that the process stops before all sugar (glucose) has been used up, but it starts again on addition of inorganic phosphate, and they suggested that hexose phosphates are formed in the early steps of fermentation. Von Euler had done important work on the structure of co-zymase, shown to be nicotinamide adenine dinucleotide (NAD, earlier called DPN). Since a prize can be shared by up to three Laureates, it might seem that Young could have been included in the award, but von Euler’s key work had been published together with Karl Myrbäck, and including both collaborators would have exceeded that limit.

The next biochemical Nobel Prize was given in 1946 for work in the protein field. James B. Sumner of Cornell University received half the prize “for his discovery that enzymes can be crystallized”, and John H. Northrop together with Wendell M. Stanley, both of the Rockefeller Institute, shared the other half “for their preparation of enzymes and virus proteins in a pure form”. Sumner had in 1926 crystallized an enzyme, urease, from jack beans and suggested that the crystals were the pure protein. His claim was, however, greeted with great scepticism, and the crystals were suggested to be inorganic salts with the enzyme adsorbed or occluded. Just a few years after Sumner’s discovery, however, Northrop managed to crystallize three digestive enzymes, pepsin, trypsin and chymotrypsin, and by painstaking experiments showed them to be pure proteins. Stanley started his attempts to purify virus proteins in the 1930s, but not until 1945 did he obtain virus crystals, and this then made it possible to show that viruses are complexes of protein and nucleic acid. The pioneering studies of these three investigators form the basis for the enormous number of new crystal structures of biological macromolecules published in the second half of the 20th century (cf. Section 3.5).

Several Nobel Prizes for Chemistry have been awarded for work in photosynthesis and respiration, the two main processes in the energy metabolism of living organisms (cf. Section 3.5). In 1961 Melvin Calvin of Berkeley received the prize for elucidating the carbon dioxide assimilation in plants. With the aid of carbon-14 (cf. Section 3.6) Calvin had shown that carbon dioxide is fixed in a cyclic process involving several enzymes. Peter Mitchell of the Glynn Research Laboratories in England was awarded the prize in 1978 for his formulation of the chemiosmotic theory. According to this theory, electron transfer (cf. Sections 3.3 and 3.4) in the membrane-bound enzyme complexes in both respiration and photosynthesis is coupled to proton translocation across the membranes, and the electrochemical gradient thus created is used to drive the synthesis of ATP (adenosine triphosphate), the energy storage molecule in all living cells. Paul D. Boyer of UCLA and John E. Walker of the MRC Laboratory in Cambridge shared one-half of the 1997 prize for their elucidation of the mechanism of ATP synthesis; the other half of the prize went to Jens C. Skou in Aarhus for the first discovery of an ion-transporting enzyme. Walker had determined the crystal structure of ATP synthase, and this structure confirmed a mechanism earlier proposed by Boyer, mainly on the basis of isotopic studies.

Luis F. Leloir from Buenos Aires was awarded the prize in 1970 “for the discovery of sugar nucleotides and their role in the biosynthesis of carbohydrates”. In particular, Leloir had elucidated the biosynthesis of glycogen, the chief sugar reserve in animals and many microorganisms. Two years later one half of the prize went to Christian B. Anfinsen of NIH, and the other half was shared by Stanford Moore and William H. Stein, both from Rockefeller University, for fundamental work in protein chemistry. Anfinsen had shown, with the enzyme ribonuclease, that the information needed for a protein to assume a specific three-dimensional structure is inherent in its amino-acid sequence, and this discovery was the starting point for studies of the mechanism of protein folding, one of the major areas of present-day biochemical research. Moore and Stein had determined the amino-acid sequence of ribonuclease, but they received the prize for discovering anomalous properties of functional groups in the enzyme’s active site, which are a result of the protein fold.

Naturally a number of Nobel Prizes for Chemistry have been given for work in the nucleic acid field. In 1980 Paul Berg of Stanford received one half of the prize for studies of recombinant DNA, i.e. a molecule containing parts of DNA from different species, and the other half was shared by Walter Gilbert from Harvard and Frederick Sanger (see Section 3.5) for developing methods for the determination of the base sequences of nucleic acids. Berg’s work provides the basis of genetic engineering, which has led to the large biotechnology industry. Base sequence determinations are essential steps in recombinant-DNA technology, which is the rationale for Gilbert and Sanger sharing the prize with Berg.

Sidney Altman of Yale and Thomas R. Cech of the University of Colorado shared the prize in 1989 “for their discovery of the catalytic properties of RNA”. The central dogma of molecular biology is: DNA –> RNA –> enzyme. The discovery that not only enzymes but also RNA possesses catalytic properties has led to new ideas about the origin of life. The 1993 prize was shared by Kary B. Mullis from La Jolla and Michael Smith from Vancouver, both of whom have made important contributions to DNA technology. Mullis developed the PCR (“polymerase chain reaction”) technique, which makes it possible to amplify a specific DNA segment millions of times from a complex genetic material. Smith’s work forms the basis for site-directed mutagenesis, a technique by which it is possible to change a specific amino acid in a protein and thereby illuminate its functional role.

4. Concluding Remarks

The first eighty years of Nobel Prizes for Chemistry outline the development of modern chemistry. The prizes cover a broad spectrum of the basic chemical sciences, from theoretical chemistry to biochemistry, as well as a number of contributions to applied chemistry. Organic chemistry dominates, with no fewer than 25 awards. This is not surprising, since the special valence properties of carbon result in an almost infinite variation in the structure of organic compounds. A large number of the prizes in organic chemistry were given for investigations of the chemistry of natural products of increasing complexity, and these have led to pharmaceutical development.

As many as 11 prizes have been awarded for biochemical discoveries. The first biochemical prize was already given in 1907 (Buchner), but only three awards in this area came in the first half of the century, illustrating the explosive growth of biochemistry in recent decades (8 prizes in 1970-1997). At the other end of the chemical spectrum, physical chemistry, including chemical thermodynamics and kinetics, dominates with 14 prizes, but there have also been 6 prizes in theoretical chemistry. Chemical structure is a large area with 8 prizes, including awards for methodological developments as well as for the determination of the structure of large biological molecules or molecular complexes. Industrial chemistry was first recognized in 1931 (Bergius, Bosch), but many more recent prizes for basic contributions lie close to industrial applications.


Summary and Perspectives: Impairments in Pathological States: Endocrine Disorders, Stress Hypermetabolism and Cancer

Author and Curator: Larry H. Bernstein, MD, FCAP

Article ID #160: Summary and Perspectives: Impairments in Pathological States: Endocrine Disorders, Stress Hypermetabolism and Cancer. Published on 11/9/2014

WordCloud Image Produced by Adam Tubman

This summary is the last of a series on the impact of transcriptomics, proteomics, and metabolomics on disease investigation, on the sorting and integration of genomic and metabolic signatures to explain the variability and individuality of response in disease expression, and on how this leads to pharmaceutical discovery and personalized medicine. We unquestionably have better tools at our disposal than have ever existed in the history of mankind, and an enormous knowledge base that has to be accessed. I shall conclude these discussions with the powerful contributions to, and current knowledge of, biochemistry, metabolism, protein interactions, signaling, and the application of the -OMICS to diseases and drug discovery at this time.

The Ever-Transcendent Cell

Deriving physiologic first principles By John S. Torday | The Scientist Nov 1, 2014
http://www.the-scientist.com/?articles.view/articleNo/41282/title/The-Ever-Transcendent-Cell/

Both the developmental and phylogenetic histories of an organism describe the evolution of physiology—the complex of metabolic pathways that govern the function of an organism as a whole. The necessity of establishing and maintaining homeostatic mechanisms began at the cellular level, with the very first cells, and homeostasis provides the underlying selection pressure fueling evolution.

While the events leading to the formation of the first functioning cell are debatable, a critical one was certainly the formation of simple lipid-enclosed vesicles, which provided a protected space for the evolution of metabolic pathways. Protocells evolved from a common ancestor that experienced environmental stresses early in the history of cellular development, such as acidic ocean conditions and low atmospheric oxygen levels, which shaped the evolution of metabolism.

The reduction of evolution to cell biology may answer the perennially unresolved question of why organisms return to their unicellular origins during the life cycle.

As primitive protocells evolved to form prokaryotes and, much later, eukaryotes, changes to the cell membrane occurred that were critical to the maintenance of chemiosmosis, the generation of bioenergy through the partitioning of ions. The incorporation of cholesterol into the plasma membrane surrounding primitive eukaryotic cells marked the beginning of their differentiation from prokaryotes. Cholesterol imparted more fluidity to eukaryotic cell membranes, enhancing functionality by increasing motility and endocytosis. Membrane deformability also allowed for increased gas exchange.

Acidification of the oceans by atmospheric carbon dioxide generated high intracellular calcium ion concentrations in primitive aquatic eukaryotes, which had to be lowered to prevent toxic effects, namely the aggregation of nucleotides, proteins, and lipids. The early cells achieved this by the evolution of calcium channels composed of cholesterol embedded within the cell’s plasma membrane, and of internal membranes, such as that of the endoplasmic reticulum, peroxisomes, and other cytoplasmic organelles, which hosted intracellular chemiosmosis and helped regulate calcium.

As eukaryotes thrived, they experienced increasingly competitive pressure for metabolic efficiency. Engulfed bacteria, assimilated as mitochondria, provided more bioenergy. As the evolution of eukaryotic organisms progressed, metabolic cooperation evolved, perhaps to enable competition with biofilm-forming, quorum-sensing prokaryotes. The subsequent appearance of multicellular eukaryotes expressing cellular growth factors and their respective receptors facilitated cell-cell signaling, forming the basis for an explosion of multicellular eukaryote evolution, culminating in the metazoans.

Casting a cellular perspective on evolution highlights the integration of genotype and phenotype. Starting from the protocell membrane, the functional homolog for all complex metazoan organs, it offers a way of experimentally determining the role of genes that fostered evolution based on the ontogeny and phylogeny of cellular processes that can be traced back, in some cases, to our last universal common ancestor.  ….

Given that the unicellular toolkit is complete with all the traits necessary for forming multicellular organisms (Science, 301:361-63, 2003), it is distinctly possible that metazoans are merely permutations of the unicellular body plan. That scenario would clarify a lot of puzzling biology: molecular commonalities between the skin, lung, gut, and brain that affect physiology and pathophysiology exist because the cell membranes of unicellular organisms perform the equivalents of these tissue functions, and the existence of pleiotropy—one gene affecting many phenotypes—may be a consequence of the common unicellular source for all complex biologic traits.  …

The cell-molecular homeostatic model for evolution and stability addresses how the external environment generates homeostasis developmentally at the cellular level. It also determines homeostatic set points in adaptation to the environment through specific effectors, such as growth factors and their receptors, second messengers, inflammatory mediators, crossover mutations, and gene duplications. This is a highly mechanistic, heritable, plastic process that lends itself to understanding evolution at the cellular, tissue, organ, system, and population levels, mediated by physiologically linked mechanisms throughout, without having to invoke random, chance mechanisms to bridge different scales of evolutionary change. In other words, it is an integrated mechanism that can often be traced all the way back to its unicellular origins.

The switch from swim bladder to lung as vertebrates moved from water to land is proof of principle that stress-induced evolution in metazoans can be understood from changes at the cellular level.

http://www.the-scientist.com/Nov2014/TE_21.jpg

A MECHANISTIC BASIS FOR LUNG DEVELOPMENT: Stress from periodic atmospheric hypoxia (1) during vertebrate adaptation to land enhances positive selection of the stretch-regulated parathyroid hormone-related protein (PTHrP) in the pituitary and adrenal glands. In the pituitary (2), PTHrP signaling upregulates the release of adrenocorticotropic hormone (ACTH) (3), which stimulates the release of glucocorticoids (GC) by the adrenal gland (4). In the adrenal gland, PTHrP signaling also stimulates glucocorticoid production of adrenaline (5), which in turn affects the secretion of lung surfactant, the distension of alveoli, and the perfusion of alveolar capillaries (6). PTHrP signaling integrates the inflation and deflation of the alveoli with surfactant production and capillary perfusion.  THE SCIENTIST STAFF

From a cell-cell signaling perspective, two critical duplications in genes coding for cell-surface receptors occurred during this period of water-to-land transition—in the stretch-regulated parathyroid hormone-related protein (PTHrP) receptor gene and the β adrenergic (βA) receptor gene. These gene duplications can be disassembled by following their effects on vertebrate physiology backwards over phylogeny. PTHrP signaling is necessary for traits specifically relevant to land adaptation: calcification of bone, skin barrier formation, and the inflation and distention of lung alveoli. Microvascular shear stress in PTHrP-expressing organs such as bone, skin, kidney, and lung would have favored duplication of the PTHrP receptor, since shear stress generates reactive oxygen species (ROS) known to have this effect and PTHrP is a potent vasodilator, acting as an epistatic balancing selection for this constraint.

Positive selection for PTHrP signaling also evolved in the pituitary and adrenal cortex (see figure on this page), stimulating the secretion of ACTH and corticoids, respectively, in response to the stress of land adaptation. This cascade amplified adrenaline production by the adrenal medulla, since corticoids passing through it enzymatically stimulate adrenaline synthesis. Positive selection for this functional trait may have resulted from hypoxic stress that arose during global episodes of atmospheric hypoxia over geologic time. Since hypoxia is the most potent physiologic stressor, such transient oxygen deficiencies would have been acutely alleviated by increasing adrenaline levels, which would have stimulated alveolar surfactant production, increasing gas exchange by facilitating the distension of the alveoli. Over time, increased alveolar distension would have generated more alveoli by stimulating PTHrP secretion, impelling evolution of the alveolar bed of the lung.

This scenario similarly explains βA receptor gene duplication, since increased density of the βA receptor within the alveolar walls was necessary for relieving another constraint during the evolution of the lung in adaptation to land: the bottleneck created by the existence of a common mechanism for blood pressure control in both the lung alveoli and the systemic blood pressure. The pulmonary vasculature was constrained by its ability to withstand the swings in pressure caused by the systemic perfusion necessary to sustain all the other vital organs. PTHrP is a potent vasodilator, subserving the blood pressure constraint, but eventually the βA receptors evolved to coordinate blood pressure in both the lung and the periphery.

Gut Microbiome Heritability

Analyzing data from a large twin study, researchers have homed in on how host genetics can shape the gut microbiome.
By Tracy Vence | The Scientist Nov 6, 2014

Previous research suggested host genetic variation can influence microbial phenotype, but an analysis of data from a large twin study published in Cell today (November 6) solidifies the connection between human genotype and the composition of the gut microbiome. Studying more than 1,000 fecal samples from 416 monozygotic and dizygotic twin pairs, Cornell University’s Ruth Ley and her colleagues have homed in on one bacterial taxon, the family Christensenellaceae, as the most highly heritable group of microbes in the human gut. The researchers also found that Christensenellaceae—which was first described just two years ago—is central to a network of co-occurring heritable microbes that is associated with lean body mass index (BMI).  …

Of particular interest was the family Christensenellaceae, which was the most heritable taxon among those identified in the team’s analysis of fecal samples obtained from the TwinsUK study population.

While microbiologists had previously detected 16S rRNA sequences belonging to Christensenellaceae in the human microbiome, the family wasn’t named until 2012. “People hadn’t looked into it, partly because it didn’t have a name . . . it sort of flew under the radar,” said Ley.

Ley and her colleagues discovered that Christensenellaceae appears to be the hub in a network of co-occurring heritable taxa, which—among TwinsUK participants—was associated with low BMI. The researchers also found that Christensenellaceae had been found at greater abundance in low-BMI twins in older studies.

To interrogate the effects of Christensenellaceae on host metabolic phenotype, Ley’s team introduced lean and obese human fecal samples into germ-free mice. They found that animals receiving lean fecal samples containing more Christensenellaceae showed reduced weight gain compared with their counterparts. And treatment of mice that had obesity-associated microbiomes with one member of the Christensenellaceae family, Christensenella minuta, led to reduced weight gain.   …

Ley and her colleagues are now focusing on the host alleles underlying the heritability of the gut microbiome. “We’re running a genome-wide association analysis to try to find genes—particular variants of genes—that might associate with higher levels of these highly heritable microbiota.  . . . Hopefully that will point us to possible reasons they’re heritable,” she said. “The genes will guide us toward understanding how these relationships are maintained between host genotype and microbiome composition.”

J.K. Goodrich et al., “Human genetics shape the gut microbiome,” Cell, http://dx.doi.org/10.1016/j.cell.2014.09.053, 2014.
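The heritability language above comes from classical twin comparisons: monozygotic pairs share essentially all segregating genetic variation, dizygotic pairs about half, so a taxon whose abundance correlates more strongly within monozygotic than within dizygotic pairs is, to that extent, heritable. Goodrich et al. used formal variance-components (ACE-type) models; the snippet below is only a minimal sketch of the idea using Falconer's approximation, with invented correlations.

```python
# Minimal sketch: Falconer-style heritability estimate from twin correlations.
# The correlations below are invented for illustration; the study itself used
# ACE variance-components models on taxon abundances.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Broad-sense heritability approximated as H^2 = 2 * (r_MZ - r_DZ)."""
    return 2.0 * (r_mz - r_dz)

r_mz = 0.62   # hypothetical intraclass correlation in monozygotic pairs
r_dz = 0.38   # hypothetical intraclass correlation in dizygotic pairs

print(f"Estimated heritability H^2 = {falconer_heritability(r_mz, r_dz):.2f}")  # 0.48
```

A taxon driven mainly by shared environment would show nearly equal correlations in the two twin types and an estimate near zero, which is why Christensenellaceae stood out in the twin comparison.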

Light-Operated Drugs

Scientists create a photosensitive pharmaceutical to target a glutamate receptor.
By Ruth Williams | The Scientist Nov 1, 2014
http://www.the-scientist.com/?articles.view/articleNo/41279/title/Light-Operated-Drugs/

light operated drugs MO1

http://www.the-scientist.com/Nov2014/MO1.jpg

The desire for temporal and spatial control of medications to minimize side effects and maximize benefits has inspired the development of light-controllable drugs, or optopharmacology. Early versions of such drugs have manipulated ion channels or protein-protein interactions, “but never, to my knowledge, G protein–coupled receptors [GPCRs], which are one of the most important pharmacological targets,” says Pau Gorostiza of the Institute for Bioengineering of Catalonia, in Barcelona.

Gorostiza has taken the first step toward filling that gap, creating a photosensitive inhibitor of the metabotropic glutamate 5 (mGlu5) receptor—a GPCR expressed in neurons and implicated in a number of neurological and psychiatric disorders. The new mGlu5 inhibitor—called alloswitch-1—is based on a known mGlu receptor inhibitor, but the simple addition of a light-responsive appendage, as had been done for other photosensitive drugs, wasn’t an option. The binding site on mGlu5 is “extremely tight,” explains Gorostiza, and would not accommodate a differently shaped molecule. Instead, alloswitch-1 has an intrinsic light-responsive element.

In a human cell line, the drug was active under dim light conditions, switched off by exposure to violet light, and switched back on by green light. When Gorostiza’s team administered alloswitch-1 to tadpoles, switching between violet and green light made the animals stop and start swimming, respectively.

The fact that alloswitch-1 is constitutively active and switched off by light is not ideal, says Gorostiza. “If you are thinking of therapy, then in principle you would prefer the opposite,” an “on” switch. Indeed, tweaks are required before alloswitch-1 could be a useful drug or research tool, says Stefan Herlitze, who studies ion channels at Ruhr-Universität Bochum in Germany. But, he adds, “as a proof of principle it is great.” (Nat Chem Biol, http://dx.doi.org/10.1038/nchembio.1612, 2014)

Enhanced Enhancers

The recent discovery of super-enhancers may offer new drug targets for a range of diseases.
By Eric Olson | The Scientist Nov 1, 2014
http://www.the-scientist.com/?articles.view/articleNo/41281/title/Enhanced-Enhancers/

To understand disease processes, scientists often focus on unraveling how gene expression in disease-associated cells is altered. Increases or decreases in transcription—as dictated by a regulatory stretch of DNA called an enhancer, which serves as a binding site for transcription factors and associated proteins—can produce an aberrant composition of proteins, metabolites, and signaling molecules that drives pathologic states. Identifying the root causes of these changes may lead to new therapeutic approaches for many different diseases.

Although few therapies for human diseases aim to alter gene expression, the outstanding examples—including antiestrogens for hormone-positive breast cancer, antiandrogens for prostate cancer, and PPAR-γ agonists for type 2 diabetes—demonstrate the benefits that can be achieved through targeting gene-control mechanisms.  Now, thanks to recent papers from laboratories at MIT, Harvard, and the National Institutes of Health, researchers have a new, much bigger transcriptional target: large DNA regions known as super-enhancers or stretch-enhancers. Already, work on super-enhancers is providing insights into how gene-expression programs are established and maintained, and how they may go awry in disease.  Such research promises to open new avenues for discovering medicines for diseases where novel approaches are sorely needed.

Super-enhancers cover stretches of DNA that are 10- to 100-fold longer and about 10-fold less abundant in the genome than typical enhancer regions (Cell, 153:307-19, 2013). They also appear to bind a large percentage of the transcriptional machinery compared to typical enhancers, allowing them to better establish and enforce cell-type specific transcriptional programs (Cell, 153:320-34, 2013).

Super-enhancers are closely associated with genes that dictate cell identity, including those for cell-type–specific master regulatory transcription factors. This observation led to the intriguing hypothesis that cells with a pathologic identity, such as cancer cells, have an altered gene expression program driven by the loss, gain, or altered function of super-enhancers.

Sure enough, by mapping the genome-wide location of super-enhancers in several cancer cell lines and from patients’ tumor cells, we and others have demonstrated that genes located near super-enhancers are involved in processes that underlie tumorigenesis, such as cell proliferation, signaling, and apoptosis.

Super-enhancers cover stretches of DNA that are 10- to 100-fold longer and about 10-fold less abundant in the genome than typical enhancer regions.

Genome-wide association studies (GWAS) have found that disease- and trait-associated genetic variants often occur in greater numbers in super-enhancers (compared to typical enhancers) in cell types involved in the disease or trait of interest (Cell, 155:934-47, 2013). For example, an enrichment of fasting glucose–associated single nucleotide polymorphisms (SNPs) was found in the stretch-enhancers of pancreatic islet cells (PNAS, 110:17921-26, 2013). Given that some 90 percent of reported disease-associated SNPs are located in noncoding regions, super-enhancer maps may be extremely valuable in assigning functional significance to GWAS variants and identifying target pathways.
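The enrichment statements above are, at bottom, contingency-table comparisons: count how often trait-associated SNPs versus background SNPs fall inside super-/stretch-enhancer intervals for the relevant cell type. The snippet below is a minimal, hypothetical sketch of such a test using Fisher's exact test; the counts are invented, and the cited studies used their own, more elaborate permutation-based procedures.

```python
# Hypothetical enrichment test: are trait-associated SNPs over-represented
# inside super-enhancers relative to background SNPs? (counts are invented)
from scipy.stats import fisher_exact

#                        in super-enhancer   outside
# trait-associated SNPs         40              160
# background SNPs              500             9300
table = [[40, 160],
         [500, 9300]]

odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided P = {p_value:.3g}")
```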

Because only 1 to 2 percent of active genes are physically linked to a super-enhancer, mapping the locations of super-enhancers can be used to pinpoint the small number of genes that may drive the biology of that cell. Differential super-enhancer maps that compare normal cells to diseased cells can be used to unravel the gene-control circuitry and identify new molecular targets, in much the same way that somatic mutations in tumor cells can point to oncogenic drivers in cancer. This approach is especially attractive in diseases for which an incomplete understanding of the pathogenic mechanisms has been a barrier to discovering effective new therapies.

Another therapeutic approach could be to disrupt the formation or function of super-enhancers by interfering with their associated protein components. This strategy could make it possible to downregulate multiple disease-associated genes through a single molecular intervention. A group of Boston-area researchers recently published support for this concept when they described inhibited expression of cancer-specific genes, leading to a decrease in cancer cell growth, by using a small molecule inhibitor to knock down a super-enhancer component called BRD4 (Cancer Cell, 24:777-90, 2013).  More recently, another group showed that expression of the RUNX1 transcription factor, involved in a form of T-cell leukemia, can be diminished by treating cells with an inhibitor of a transcriptional kinase that is present at the RUNX1 super-enhancer (Nature, 511:616-20, 2014).

Fungal effector Ecp6 outcompetes host immune receptor for chitin binding through intrachain LysM dimerization 
Andrea Sánchez-Vallet, et al.   eLife 2013;2:e00790 http://elifesciences.org/content/2/e00790

LysM effector

http://img.scoop.it/ZniCRKQSvJOG18fHbb4p0Tl72eJkfbmt4t8yenImKBVvK0kTmF0xjctABnaLJIm9

While host immune receptors

  • detect pathogen-associated molecular patterns to activate immunity,
  • pathogens attempt to deregulate host immunity through secreted effectors.

Fungi employ LysM effectors to prevent

  • recognition of cell wall-derived chitin by host immune receptors

Structural analysis of the LysM effector Ecp6 of

  • the fungal tomato pathogen Cladosporium fulvum reveals
  • a novel mechanism for chitin binding,
  • mediated by intrachain LysM dimerization,

leading to a chitin-binding groove that is deeply buried in the effector protein.

This composite binding site involves

  • two of the three LysMs of Ecp6 and
  • mediates chitin binding with ultra-high (pM) affinity.

The remaining singular LysM domain of Ecp6 binds chitin with

  • low micromolar affinity but can nevertheless still perturb chitin-triggered immunity.

Conceivably, the perturbation by this LysM domain is not established through chitin sequestration but possibly through interference with the host immune receptor complex.

Mutated Genes in Schizophrenia Map to Brain Networks
From www.nih.gov –  Sep 3, 2013

Previous studies have shown that many people with schizophrenia have de novo, or new, genetic mutations. These misspellings in a gene’s DNA sequence

  • occur spontaneously and so aren’t shared by their close relatives.

Dr. Mary-Claire King of the University of Washington in Seattle and colleagues set out to

  • identify spontaneous genetic mutations in people with schizophrenia and
  • to assess where and when in the brain these misspelled genes are turned on, or expressed.

The study was funded in part by NIH’s National Institute of Mental Health (NIMH). The results were published in the August 1, 2013, issue of Cell.

The researchers sequenced the exomes (protein-coding DNA regions) of 399 people—105 with schizophrenia plus their unaffected parents and siblings. Gene variations
that were found in a person with schizophrenia but not in either parent were considered spontaneous.

The likelihood of having a spontaneous mutation was associated with

  • the age of the father in both affected and unaffected siblings.

Significantly more mutations were found in people

  • whose fathers were 33-45 years at the time of conception compared to 19-28 years.

Among people with schizophrenia, the scientists identified

  • 54 genes with spontaneous mutations
  • predicted to cause damage to the function of the protein they encode.

The researchers used newly available database resources that show

  • where in the brain and when during development genes are expressed.

The genes form an interconnected expression network with many more connections than

  • that of the genes with spontaneous damaging mutations in unaffected siblings.

The spontaneously mutated genes in people with schizophrenia

  • were expressed in the prefrontal cortex, a region in the front of the brain.

The genes are known to be involved in important pathways in brain development. Fifty of these genes were active

  • mainly during the period of fetal development.

“Processes critical for the brain’s development can be revealed by the mutations that disrupt them,” King says. “Mutations can lead to loss of integrity of a whole pathway,
not just of a single gene.”

These findings support the concept that schizophrenia may result, in part, from

  • disruptions in development in the prefrontal cortex during fetal development.

James E. Darnell’s “Reflections”

A brief history of the discovery of RNA and its role in transcription — peppered with career advice
By Joseph P. Tiano

James Darnell begins his Journal of Biological Chemistry “Reflections” article by saying, “graduate students these days

  • have to swim in a sea virtually turgid with the daily avalanche of new information and
  • may be momentarily too overwhelmed to listen to the aging.

I firmly believe how we learned what we know can provide useful guidance for how and what a newcomer will learn.” Considering his remarkable discoveries in

  • RNA processing and eukaryotic transcriptional regulation

spanning 60 years of research, Darnell’s advice should be cherished. In his second year at medical school at Washington University School of Medicine in St. Louis, while
studying streptococcal disease in Robert J. Glaser’s laboratory, Darnell realized he “loved doing the experiments” and had his first “career advancement event.”
He and technician Barbara Pesch discovered that in vivo penicillin treatment killed streptococci only in the exponential growth phase and not in the stationary phase. These
results were published in the Journal of Clinical Investigation and earned Darnell an interview with Harry Eagle at the National Institutes of Health.

Darnell arrived at the NIH in 1956, shortly after Eagle shifted his research interest to developing his minimal essential cell culture medium, which is still in use. Eagle, then studying cell metabolism, suggested that Darnell take up a side project on poliovirus replication in mammalian cells in collaboration with Robert I. DeMars. DeMars’ Ph.D. adviser was also James Watson’s mentor, so Darnell met Watson, who invited him to give a talk at Harvard University, which led to an assistant professor position at MIT under Salvador Luria. A take-home message is to embrace side projects, because you never know where they may lead: this project helped to shape his career.

Darnell arrived in Boston in 1961. Following the discovery of DNA’s structure in 1953, the world of molecular biology was turning to RNA in an effort to understand how
proteins are made. Darnell’s background in virology (it was discovered in 1960 that viruses used RNA to replicate) was ideal for the aim of his first independent lab:
exploring mRNA in animal cells grown in culture. While at MIT, he developed a new technique for purifying RNA along with making other observations

  • suggesting that nonribosomal cytoplasmic RNA may be involved in protein synthesis.

When Darnell moved to Albert Einstein College of Medicine for a full professorship in 1964, it was hypothesized that heterogeneous nuclear RNA was a precursor to mRNA.
At Einstein, Darnell discovered RNA processing of pre-tRNAs and demonstrated for the first time

  • that a specific nuclear RNA could represent a possible specific mRNA precursor.

In 1967 Darnell took a position at Columbia University, and it was there that he discovered (simultaneously with two other labs) that

  • mRNA contained a polyadenosine tail.

The three groups all published their results together in the Proceedings of the National Academy of Sciences in 1971. Shortly afterward, Darnell made his final career move
four short miles down the street to Rockefeller University in 1974.

Over the next 35-plus years at Rockefeller, Darnell never strayed from his original research question: How do mammalian cells make and control the making of different
mRNAs? His work was instrumental in the collaborative discovery of

  • splicing in the late 1970s and
  • in identifying and cloning many transcriptional activators.

Perhaps his greatest contribution during this time, with the help of Ernest Knight, was

  • the discovery and cloning of the signal transducers and activators of transcription (STAT) proteins.

And with George Stark, Andy Wilks and John Krowlewski, he described

  • cytokine signaling via the JAK-STAT pathway.

Darnell closes his “Reflections” with perhaps his best advice: Do not get too wrapped up in your own work, because “we are all needed and we are all in this together.”

Darnell Reflections – James_Darnell

http://www.asbmb.org/assets/0/366/418/428/85528/85529/85530/8758cb87-84ff-42d6-8aea-96fda4031a1b.jpg

Recent findings on presenilins and signal peptide peptidase

By Dinu-Valantin Bălănescu

γ-secretase and SPP

Fig. 1 from the minireview shows a schematic depiction of γ-secretase and SPP

http://www.asbmb.org/assets/0/366/418/428/85528/85529/85530/c2de032a-daad-41e5-ba19-87a17bd26362.png

GxGD proteases are a family of intramembranous enzymes capable of hydrolyzing

  • the transmembrane domain of some integral membrane proteins.

The GxGD family is one of the three families of

  • intramembrane-cleaving proteases discovered so far (along with the rhomboid and site-2 protease) and
  • includes the γ-secretase and the signal peptide peptidase.

Although only recently discovered, a number of functions in human pathology and in numerous other biological processes

  • have been attributed to γ-secretase and SPP.

Taisuke Tomita and Takeshi Iwatsubo of the University of Tokyo highlighted the latest findings on the structure and function of γ-secretase and SPP
in a recent minireview in The Journal of Biological Chemistry.

  • γ-secretase is involved in cleaving the amyloid-β precursor protein, thus producing amyloid-β peptide,

the main component of senile plaques in Alzheimer’s disease patients’ brains. The complete structure of mammalian γ-secretase is not yet known; however,
Tomita and Iwatsubo note that biochemical analyses have revealed it to be a multisubunit protein complex.

  • Its catalytic subunit is presenilin, an aspartyl protease.

In vitro and in vivo functional and chemical biology analyses have revealed that

  • presenilin is a modulator and mandatory component of the γ-secretase–mediated cleavage of APP.

Genetic studies have identified three other components required for γ-secretase activity:

  1. nicastrin,
  2. anterior pharynx defective 1 and
  3. presenilin enhancer 2.

By coexpression of presenilin with the other three components, the authors managed to

  • reconstitute γ-secretase activity.

Using the substituted cysteine accessibility method and topological analyses, Tomita and Iwatsubo determined that

  • the catalytic aspartates are located at the center of the nine transmembrane domains of presenilin,
  • thereby revealing the exact location of the enzyme’s catalytic site.

The minireview also describes in detail the formerly enigmatic mechanism of γ-secretase mediated cleavage.

SPP, an enzyme that cleaves remnant signal peptides in the membrane

  • during the biogenesis of membrane proteins and
  • signal peptides from major histocompatibility complex type I,
  • also is involved in the maturation of proteins of the hepatitis C virus and GB virus B.

Bioinformatics methods have revealed in fruit flies and mammals four SPP-like proteins,

  • two of which are involved in immunological processes.

By using γ-secretase inhibitors and modulators, it has been confirmed

  • that SPP shares a similar GxGD active site and proteolytic activity with γ-secretase.

Upon purification of the human SPP protein with the baculovirus/Sf9 cell system,

  • single-particle analysis revealed further structural and functional details.

HLA targeting efficiency correlates with human T-cell response magnitude and with mortality from influenza A infection

From www.pnas.org –  Sep 3, 2013 4:24 PM

Experimental and computational evidence suggests that

  • HLAs preferentially bind conserved regions of viral proteins, a concept we term “targeting efficiency,” and that
  • this preference may provide improved clearance of infection in several viral systems.

To test this hypothesis, T-cell responses to A/H1N1 (2009) were measured from peripheral blood mononuclear cells obtained from a household cohort study
performed during the 2009–2010 influenza season. We found that HLA targeting efficiency scores significantly correlated with

  • IFN-γ enzyme-linked immunosorbent spot responses (P = 0.042, multiple regression).

A further population-based analysis found that the carriage frequencies of the alleles with the lowest targeting efficiencies, A*24,

  • were associated with pH1N1 mortality (r = 0.37, P = 0.031) and
  • are common in certain indigenous populations in which increased pH1N1 morbidity has been reported.

HLA efficiency scores and HLA use are associated with CD8 T-cell magnitude in humans after influenza infection.
The computational tools used in this study may be useful predictors of potential morbidity and

  • identify immunologic differences of new variant influenza strains
  • more accurately than evolutionary sequence comparisons.

Population-based studies of the relative frequency of these alleles in severe vs. mild influenza cases

  • might advance clinical practices for severe H1N1 infections among genetically susceptible populations.

Metabolomics in drug target discovery

J D Rabinowitz et al.

Lewis-Sigler Institute for Integrative Genomics, Princeton University, Princeton, NJ.
Cold Spring Harbor Symposia on Quantitative Biology 11/2011; 76:235-46.
http://dx.doi.org/10.1101/sqb.2011.76.010694

Most diseases result in metabolic changes. In many cases, these changes play a causative role in disease progression. By identifying pathological metabolic changes,

  • metabolomics can point to potential new sites for therapeutic intervention.

Particularly promising enzymatic targets are those that

  • carry increased flux in the disease state.

Definitive assessment of flux requires the use of isotope tracers. Here we present techniques for

  • finding new drug targets using metabolomics and isotope tracers.

The utility of these methods is exemplified in the study of three different viral pathogens. For influenza A and herpes simplex virus,

  • metabolomic analysis of infected versus mock-infected cells revealed
  • dramatic concentration changes around the current antiviral target enzymes.

Similar analysis of human-cytomegalovirus-infected cells, however, found the greatest changes

  • in a region of metabolism unrelated to the current antiviral target.

Instead, it pointed to the tricarboxylic acid (TCA) cycle and

  • its efflux to feed fatty acid biosynthesis as a potential preferred target.

Isotope tracer studies revealed that cytomegalovirus greatly increases flux through

  • the key fatty acid metabolic enzyme acetyl-coenzyme A carboxylase.
  • Inhibition of this enzyme blocks human cytomegalovirus replication.

Examples where metabolomics has contributed to identification of anticancer drug targets are also discussed. Eventual proof of the value of

  • metabolomics as a drug target discovery strategy will be
  • successful clinical development of therapeutics hitting these new targets.
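The workflow sketched in this abstract — profile infected versus mock-infected cells, then rank the largest concentration changes as candidate regions for intervention — reduces, in its simplest form, to computing fold changes per metabolite. The snippet below is a minimal illustration with invented numbers; real analyses add replicates, statistics, and isotope-tracer flux measurements on top of this.

```python
# Minimal sketch: rank metabolites by concentration change between infected
# and mock-infected cells. All values are invented for illustration.
import numpy as np
import pandas as pd

data = pd.DataFrame({
    "metabolite": ["malonyl-CoA", "citrate", "UMP", "glutamine"],
    "mock":       [1.0, 8.0, 2.0, 5.0],     # arbitrary concentration units
    "infected":   [6.5, 20.0, 2.2, 4.0],
})

data["log2_fold_change"] = np.log2(data["infected"] / data["mock"])
ranked = data.loc[data["log2_fold_change"].abs().sort_values(ascending=False).index]
print(ranked[["metabolite", "log2_fold_change"]])
```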

 Related References

Use of metabolic pathway flux information in targeted cancer drug design. Drug Discovery Today: Therapeutic Strategies 1:435-443, 2004.

Detection of resistance to imatinib by metabolic profiling: clinical and drug development implications. Am J Pharmacogenomics. 2005;5(5):293-302. Review. PMID: 16196499

Medicinal chemistry, metabolic profiling and drug target discovery: a role for metabolic profiling in reverse pharmacology and chemical genetics.
Mini Rev Med Chem.  2005 Jan;5(1):13-20. Review. PMID: 15638788 [PubMed – indexed for MEDLINE] Related citations

Development of Tracer-Based Metabolomics and its Implications for the Pharmaceutical Industry. Int J Pharm Med 2007; 21 (3): 217-224.

Use of metabolic pathway flux information in anticancer drug design. Ernst Schering Found Symp Proc. 2007;(4):189-203. Review. PMID: 18811058

Pharmacological targeting of glucagon and glucagon-like peptide 1 receptors has different effects on energy state and glucose homeostasis in diet-induced obese mice. J Pharmacol Exp Ther. 2011 Jul;338(1):70-81. http://dx.doi.org/10.1124/jpet.111.179986. PMID: 21471191

Single valproic acid treatment inhibits glycogen and RNA ribose turnover while disrupting glucose-derived cholesterol synthesis in liver as revealed by the
[U-C(6)]-d-glucose tracer in mice. Metabolomics. 2009 Sep;5(3):336-345. PMID: 19718458

Metabolic Pathways as Targets for Drug Screening, Metabolomics, Dr Ute Roessner (Ed.), ISBN: 978-953-51-0046-1, InTech, Available from: http://www.intechopen.com/books/metabolomics/metabolic-pathways-as-targets-for-drug-screening

Iron regulates glucose homeostasis in liver and muscle via AMP-activated protein kinase in mice. FASEB J. 2013 Jul;27(7):2845-54.
http://dx.doi.org/10.1096/fj.12-216929. PMID: 23515442

Metabolomics and systems pharmacology: why and how to model the human metabolic network for drug discovery

Drug Discov. Today 19 (2014), 171–182     http://dx.doi.org/10.1016/j.drudis.2013.07.014

Highlights

  • We now have metabolic network models; the metabolome is represented by their nodes.
  • Metabolite levels are sensitive to changes in enzyme activities.
  • Drugs hitchhike on metabolite transporters to get into and out of cells.
  • The consensus network Recon2 represents the present state of the art, and has predictive power.
  • Constraint-based modelling relates network structure to metabolic fluxes (see the sketch after this list).
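The last highlight deserves a concrete illustration. In constraint-based (flux balance) modelling, the network is a stoichiometric matrix S, steady state imposes S·v = 0, each flux is bounded, and an objective flux is maximized by linear programming. The toy three-reaction example below is only a sketch of that idea; genome-scale reconstructions such as Recon2 are treated the same way, in practice with COBRA-style toolboxes rather than raw linear programming.

```python
# Toy flux balance analysis: maximize a "biomass" flux subject to
# steady-state mass balance (S·v = 0) and flux bounds.
import numpy as np
from scipy.optimize import linprog

# Reactions: v1 = uptake of A, v2 = A -> B, v3 = B -> biomass (objective)
S = np.array([[1, -1,  0],    # metabolite A balance
              [0,  1, -1]])   # metabolite B balance

bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake of A limited to 10 units
c = [0, 0, -1]                             # linprog minimizes, so negate v3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)            # expected ~ [10, 10, 10]
print("maximal objective flux:", -res.fun)
```

Changing a bound (for example, restricting v2 to mimic an enzyme inhibitor) and re-solving shows directly how network structure propagates a perturbation to the objective flux, which is the sense in which such models have predictive power.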

Metabolism represents the ‘sharp end’ of systems biology, because changes in metabolite concentrations are

  • necessarily amplified relative to changes in the transcriptome, proteome and enzyme activities, which can be modulated by drugs.

To understand such behaviour, we therefore need (and increasingly have) reliable consensus (community) models of

  • the human metabolic network that include the important transporters.

Small molecule ‘drug’ transporters are in fact metabolite transporters, because

  • drugs bear structural similarities to metabolites known from the network reconstructions and
  • from measurements of the metabolome.

Recon2 represents the present state-of-the-art human metabolic network reconstruction; it can predict inter alia:

(i) the effects of inborn errors of metabolism;

(ii) which metabolites are exometabolites, and

(iii) how metabolism varies between tissues and cellular compartments.

However, even these qualitative network models are not yet complete. As our understanding improves

  • so do we recognise more clearly the need for a systems (poly)pharmacology.

Introduction – a systems biology approach to drug discovery

It is clearly not news that the productivity of the pharmaceutical industry has declined significantly during recent years

  • following an ‘inverse Moore’s Law’ (Eroom’s Law), and
  • that many commentators consider the main cause of this to be
  • an excessive focus on individual molecular target discovery rather than a more sensible strategy
  • based on a systems-level approach (Fig. 1).
drug discovery science

Figure 1.

The change in drug discovery strategy from ‘classical’ function-first approaches (in which the assay of drug function was at the tissue or organism level),
with mechanistic studies potentially coming later, to more-recent target-based approaches where initial assays usually involve assessing the interactions
of drugs with specified (and often cloned, recombinant) proteins in vitro. In the latter cases, effects in vivo are assessed later, with concomitantly high levels of attrition.

Arguably the two chief hallmarks of the systems biology approach are:

(i) that we seek to make mathematical models of our systems iteratively or in parallel with well-designed ‘wet’ experiments, and
(ii) that we do not necessarily start with a hypothesis but measure as many things as possible (the ’omes) and

  • let the data tell us the hypothesis that best fits and describes them.

Although metabolism was once seen as something of a Cinderella subject,

  • there are fundamental reasons, to do with the organisation of biochemical networks,
  • why the metabol(om)ic level – now in fact seen as the ‘apogee’ of the ’omics trilogy –
  • is indeed likely to be far more discriminating than
  • changes in the transcriptome or proteome.

The next two subsections deal with these points and Fig. 2 summarises the paper in the form of a Mind Map.

Figure 2. Mind Map summary of the paper: metabolomics and systems pharmacology.
http://ars.els-cdn.com/content/image/1-s2.0-S1359644613002481-gr2.jpg

Metabolic Disease Drug Discovery— “Hitting the Target” Is Easier Said Than Done

David E. Moller, et al.   http://dx.doi.org/10.1016/j.cmet.2011.10.012

Despite the advent of new drug classes, the global epidemic of cardiometabolic disease has not abated. Continuing

  • unmet medical needs remain a major driver for new research.

Drug discovery approaches in this field have mirrored industry trends, leading to a recent

  • increase in the number of molecules entering development.

However, worrisome trends and newer hurdles are also apparent. The history of two newer drug classes—

  1. glucagon-like peptide-1 receptor agonists and
  2. dipeptidyl peptidase-4 inhibitors—

illustrates both progress and challenges. Future success requires that researchers learn from these experiences and

  • continue to explore and apply new technology platforms and research paradigms.

The global epidemic of obesity and diabetes continues to progress relentlessly. The International Diabetes Federation predicts an even greater diabetes burden (>430 million people afflicted) by 2030, which will disproportionately affect developing nations (International Diabetes Federation, 2011). Yet

  • existing drug classes for diabetes, obesity, and comorbid cardiovascular (CV) conditions have substantial limitations.

Currently available prescription drugs for treatment of hyperglycemia in patients with type 2 diabetes (Table 1) have notable shortcomings. In general, the glucose-lowering efficacy of any single agent is limited and often not durable.

Therefore, clinicians must often use combination therapy, adding additional agents over time. Ultimately many patients will need to use insulin—a therapeutic class first introduced in 1922. Most existing agents also have

  • issues around safety and tolerability as well as dosing convenience (which can impact patient compliance).

Pharmacometabolomics, also known as pharmacometabonomics, is a field which stems from metabolomics,

  • the quantification and analysis of metabolites produced by the body.

It refers to the direct measurement of metabolites in an individual’s bodily fluids, in order to

  • predict or evaluate the metabolism of pharmaceutical compounds, and
  • to better understand the pharmacokinetic profile of a drug.

Alternatively, pharmacometabolomics can be applied to measure metabolite levels

  • following the administration of a pharmaceutical compound, in order to
  • monitor the effects of the compound on certain metabolic pathways (pharmacodynamics).

This provides detailed mapping of drug effects on metabolism and

  • the pathways that are implicated in mechanism of variation of response to treatment.

In addition, the metabolic profile of an individual at baseline (metabotype) provides information about

  • how individuals respond to treatment and highlights heterogeneity within a disease state.

All three approaches require the quantification of metabolites found in an individual’s bodily fluids.

Relationship between the -omics levels (diagram):
http://upload.wikimedia.org/wikipedia/commons/thumb/e/eb/OMICS.png/350px-OMICS.png

Pharmacometabolomics is thought to provide information that complements the genomic, transcriptomic, and proteomic levels described below.

Looking at the characteristics of an individual down through these different levels of detail, there is an

  • increasingly accurate prediction of a person’s ability to respond to a pharmaceutical compound.
  1. the genome, made up of 25,000 genes, can indicate possible errors in drug metabolism;
  2. the transcriptome, made up of 85,000 transcripts, can provide information about which genes important in metabolism are being actively transcribed;
  3. and the proteome, with more than 10,000,000 members, depicts which proteins are active in the body to carry out these functions.

Pharmacometabolomics complements the omics with

  • direct measurement of the products of all of these reactions, but with perhaps a relatively
  • smaller number of members: initially projected to be approximately 2,200 metabolites,

though the number could be larger once gut-derived metabolites and xenobiotics are added to the list. Overall, the goal of pharmacometabolomics is

  • to more closely predict or assess the response of an individual to a pharmaceutical compound,
  • permitting continued treatment with the right drug or dosage
  • depending on the variations in their metabolism and ability to respond to treatment.

Pharmacometabolomic analyses, through the use of a metabolomics approach,

  • can provide a comprehensive and detailed metabolic profile or “metabolic fingerprint” for an individual patient.

Such metabolic profiles can provide a complete overview of individual metabolite or pathway alterations.

This approach can then be applied to the prediction of response to a pharmaceutical compound

  • by patients with a particular metabolic profile.
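
As a purely illustrative sketch of how a baseline metabotype might be used to anticipate response, the following Python snippet fits a regularised logistic regression to synthetic baseline metabolite concentrations and responder labels. The metabolite panel, patient data, and labels are all invented, and scikit-learn is assumed to be available; nothing here reproduces any published pharmacometabolomic model.

```python
# Illustrative sketch only: predicting drug response from a baseline
# metabolic profile ("metabotype") with a regularised logistic regression.
# The metabolite names and data below are synthetic, not from any study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
metabolites = ["glycine", "serotonin", "tryptophan", "lactate"]   # hypothetical panel

# Synthetic baseline concentrations: 60 patients (rows) x 4 metabolites (columns)
X = rng.lognormal(mean=0.0, sigma=0.5, size=(60, len(metabolites)))
# Synthetic responder labels loosely tied to the first metabolite, for illustration
y = (X[:, 0] + rng.normal(scale=0.3, size=60) > np.median(X[:, 0])).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print("cross-validated accuracy:", acc.mean().round(2))

model.fit(X, y)
coefs = model[-1].coef_.ravel()
for name, w in sorted(zip(metabolites, coefs), key=lambda t: -abs(t[1])):
    print(f"{name:>12s}  weight = {w:+.2f}")   # which metabolites drive the prediction
```

In practice the model would be trained on measured baseline profiles with known clinical outcomes, and the weighted metabolites would point back to the pathways that distinguish responders from non-responders.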

Pharmacometabolomic analyses of drug response are often coupled with pharmacogenetic analyses.

Pharmacogenetics focuses on the identification of genetic variations (e.g. single-nucleotide polymorphisms)

  • within patients that may contribute to altered drug responses and overall outcome of a certain treatment.

The results of pharmacometabolomics analyses can act to “inform” or “direct”

  • pharmacogenetic analyses by correlating aberrant metabolite concentrations or metabolic pathways to potential alterations at the genetic level.

This concept has been established with two seminal publications from studies of antidepressant selective serotonin reuptake inhibitors (SSRIs),

  • where metabolic signatures were able to define a pathway implicated in response to the antidepressant and
  • led to identification of genetic variants within a key gene
  • within the highlighted pathway as being implicated in variation in response.

These genetic variants were not identified through genetic analysis alone and hence

  • illustrated how metabolomics can guide and inform genetic data.
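
The “metabolomics informing genetics” workflow described above can be sketched in a few lines: a metabolite implicated in response is tested for association with candidate variants in the highlighted pathway, and variants showing association become targets for focused pharmacogenetic follow-up. In the Python sketch below, the SNP identifiers, genotypes, and concentrations are synthetic, and the per-variant one-way ANOVA is only one of several tests one might use.

```python
# Hedged sketch of "metabolomics informing genetics": test whether a metabolite
# implicated in drug response differs by genotype at candidate SNPs in the
# highlighted pathway. SNP ids, genotypes and concentrations are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 120
snps = ["rs_hypothetical_1", "rs_hypothetical_2", "rs_hypothetical_3"]

# Genotypes coded as minor-allele counts 0/1/2 for each patient and SNP
genotypes = rng.integers(0, 3, size=(n, len(snps)))
# Synthetic metabolite level with an additive effect of the first SNP only
metabolite = 1.0 + 0.4 * genotypes[:, 0] + rng.normal(scale=0.5, size=n)

for j, snp in enumerate(snps):
    groups = [metabolite[genotypes[:, j] == g] for g in (0, 1, 2)]
    f, p = stats.f_oneway(*groups)         # one-way ANOVA across genotype groups
    print(f"{snp}: F = {f:.2f}, p = {p:.3g}")
# SNPs with small p-values become candidates for targeted pharmacogenetic follow-up.
```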

en.wikipedia.org/wiki/Pharmacometabolomics

Benznidazole Biotransformation and Multiple Targets in Trypanosoma cruzi Revealed by Metabolomics

Andrea Trochine, Darren J. Creek, Paula Faral-Tello, Michael P. Barrett, Carlos Robello
Published: May 22, 2014   http://dx.doi.org/10.1371/journal.pntd.0002844

The first line treatment for Chagas disease, a neglected tropical disease caused by the protozoan parasite Trypanosoma cruzi,

  • involves administration of benznidazole (Bzn).

Bzn is a 2-nitroimidazole pro-drug which requires nitroreduction to become active. We used a

  • non-targeted MS-based metabolomics approach to study the metabolic response of T. cruzi to Bzn.

Parasites treated with Bzn were minimally altered compared to untreated trypanosomes, although the redox active thiols

  1. trypanothione,
  2. homotrypanothione and
  3. cysteine

were significantly diminished in abundance post-treatment. In addition, multiple Bzn-derived metabolites were detected after treatment.

These metabolites included reduction products, fragments and covalent adducts of reduced Bzn

  • linked to each of the major low molecular weight thiols:
  1. trypanothione,
  2. glutathione,
  3. g-glutamylcysteine,
  4. glutathionylspermidine,
  5. cysteine and
  6. ovothiol A.

Bzn products known to be generated in vitro by the unusual trypanosomal nitroreductase, TcNTRI,

  • were found within the parasites,
  • but low molecular weight adducts of glyoxal, a proposed toxic end-product of NTRI Bzn metabolism, were not detected.

Our data are indicative of a major role of the

  • thiol binding capacity of Bzn reduction products
  • in the mechanism of Bzn toxicity against T. cruzi.
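
For readers unfamiliar with how such treated-versus-untreated comparisons are typically quantified in untargeted metabolomics, the sketch below computes log2 fold changes and Welch’s t-tests with Benjamini–Hochberg correction over synthetic peak intensities. It is a generic illustration of the analysis style, not the actual data or pipeline of the benznidazole study.

```python
# Generic sketch of a treated-vs-untreated comparison as used in untargeted
# metabolomics: log2 fold change plus Welch's t-test with Benjamini-Hochberg
# correction. Intensities below are synthetic, not data from the Bzn study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
metabolites = ["trypanothione", "homotrypanothione", "cysteine", "unrelated_metabolite"]

# Simulated peak intensities: 6 untreated and 6 treated replicates per metabolite;
# the first three are given a genuine decrease after treatment for illustration.
untreated = rng.lognormal(mean=10, sigma=0.1, size=(len(metabolites), 6))
treated = (untreated
           * np.array([0.4, 0.5, 0.6, 1.0])[:, None]
           * rng.lognormal(mean=0.0, sigma=0.1, size=(len(metabolites), 6)))

log2fc = np.log2(treated.mean(axis=1) / untreated.mean(axis=1))
pvals = np.array([stats.ttest_ind(t, u, equal_var=False).pvalue
                  for t, u in zip(treated, untreated)])

# Benjamini-Hochberg step-up adjustment of the p-values
m = len(pvals)
order = np.argsort(pvals)
ranked = pvals[order] * m / np.arange(1, m + 1)
adj = np.empty(m)
adj[order] = np.minimum(1.0, np.minimum.accumulate(ranked[::-1])[::-1])

for name, fc, q in zip(metabolites, log2fc, adj):
    print(f"{name:>22s}  log2FC = {fc:+.2f}  q = {q:.3g}")
```

Metabolites with large negative log2 fold changes and small adjusted p-values correspond to the "significantly diminished in abundance post-treatment" category reported in the abstract above.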

 

 
