
Pharmaceutical Companies Racing Together to Find a Cure for COVID-19

Reporter: Irina Robu, PhD

The global outbreak has put pressure on companies and the Food and Drug Administration (FDA) to act quickly to make medications available to patients. Several companies are working together to find solutions to treat those infected by the virus and prevent it from spreading.

AstraZeneca is responding to the COVID-19 (novel coronavirus) outbreak by accelerating the development of its diagnostic testing capabilities to scale up screening, and it is also working in partnership with governments to supplement existing screening programs. In addition, AstraZeneca is working to identify monoclonal antibodies to advance toward clinical trial evaluation as a treatment to prevent COVID-19.

Bayer, the German multinational pharmaceutical and life sciences company, is donating its malaria drug Resochin to the US government for possible use in treating COVID-19. Resochin, made of chloroquine phosphate, is a currently approved treatment for malaria. China is evaluating it for potential use against COVID-19, and the drug showed reasonable activity against the first SARS virus in 2003. Doctors consider it a promising treatment for seriously ill coronavirus patients.

AbbVie is a research-driven biopharmaceutical company dedicated to developing innovative advanced therapies in four primary therapeutic areas: immunology, oncology, virology, and neuroscience. The company announced plans to evaluate its HIV medicine as a COVID-19 treatment and to enter into partnerships with health authorities in various countries to explore the efficacy and antiviral activity of the medication.

Boehringer Ingelheim, a research-driven company, is collaborating with the German Center for Infection Research to develop therapies and diagnostic tools for COVID-19. Its research teams are screening the company's entire molecule library, comprising more than one million compounds, to identify novel small molecules with activity against the virus.

EMD Serono, the biopharmaceutical business of Merck KGaA, Darmstadt, Germany, donated interferon beta-1a to the French National Institute of Health and Medical Research (INSERM) for use in a clinical trial. Interferon beta-1a is presently used to treat multiple sclerosis and is under investigation as a potential treatment for people with COVID-19, the coronavirus disease caused by the SARS-CoV-2 virus. When confronted with the virus, each cell shoots an emergency flare of interferon to tell the immune system to strengthen its defenses. The interferon beta-1a cytokine activates macrophages, which engulf antigens, as well as natural killer cells, which are integral to the innate immune system. The trial is funded by INSERM, and its start was announced by the French health authorities on March 11. To date, Merck's interferon beta-1a is not approved by any regulatory authority for the treatment of COVID-19 or for use as an antiviral agent.

GLAXOSMITHKLINE (GSK) has been working to make its established pandemic vaccine adjuvant platform technology available for COVID-19 vaccine development. Sanofi and GSK announced on April 14, 2020 that they will collaborate to develop an adjuvanted vaccine for COVID-19, using innovative technology from both companies. Sanofi will contribute its S-protein COVID-19 antigen, which is based on recombinant DNA technology. This technology produces an exact genetic match to proteins found on the surface of the virus, and the DNA sequence encoding this antigen has been combined into the DNA of the baculovirus expression platform, the basis of Sanofi's licensed recombinant influenza product in the US. GSK will contribute its proven pandemic adjuvant technology to the collaboration; because an adjuvant may reduce the amount of vaccine protein required per dose, it allows more vaccine doses to be produced and consequently helps protect more people.

JOHNSON & JOHNSON has started research into a vaccine, leveraging the same innovative technology used for its Ebola vaccine. Janssen, the pharmaceutical arm of J&J, has donated medicines for use in laboratory-based investigations to support efforts to find a resolution against COVID-19.

Eli Lilly entered into an agreement with AbCellera to co-develop antibody products for the treatment and prevention of COVID-19. The collaboration will leverage AbCellera's rapid pandemic response platform, established under the DARPA Pandemic Prevention Platform Program, along with Lilly's global capabilities for rapid development, manufacturing, and distribution of therapeutic antibodies. Eli Lilly has also entered an agreement with NIH/NIAID to study baricitinib as an arm in NIAID's Adaptive COVID-19 Treatment Trial. Baricitinib, an oral JAK1/JAK2 inhibitor, is approved in more than 65 countries as a treatment for adults with moderately to severely active rheumatoid arthritis. Because of the inflammatory cascade seen in COVID-19, baricitinib's anti-inflammatory activity has been hypothesized to have a potential beneficial effect and needs further study in patients with this infection. Eli Lilly is also advancing an investigational selective monoclonal antibody against Angiopoietin-2 into Phase 2 testing in pneumonia patients hospitalized with COVID-19 who are at higher risk of developing acute respiratory distress syndrome. The company will examine whether inhibiting the effects of Angiopoietin-2 with a monoclonal antibody can reduce the progression of acute respiratory distress syndrome. The trial will start in April 2020.

Pfizer and BioNTech are working together to develop a potential COVID-19 vaccine, aiming to accelerate development of BioNTech's potential first-in-class COVID-19 mRNA vaccine program, BNT162. A clinical study is expected to start by the end of April 2020. The collaboration is a continuation of the original 2019 agreement between the two companies to develop mRNA-based vaccines for the prevention of influenza.

Roche Canada has been designated as a participant in a Phase III clinical trial evaluating the safety and efficacy of one of Roche's portfolio medicines in hospitalized adult patients with severe COVID-19 pneumonia. The company announced the upcoming launch of its Elecsys Anti-SARS-CoV-2 serology test to detect antibodies in people who have been exposed to SARS-CoV-2, the virus that causes COVID-19. Antibody testing is vital for identifying people who have been infected by the virus, particularly those who may have been infected but did not display symptoms. Furthermore, the test can support priority screening of high-risk groups who may by now have developed a certain level of immunity and can continue serving and/or return to work.

Takeda Pharmaceutical Company is initiating the development of an anti-SARS-CoV-2 polyclonal hyperimmune globulin (H-IG) to treat high-risk individuals with COVID-19, while also investigating whether Takeda's currently marketed products may be effective treatments for infected patients. Hyperimmune globulins are plasma-derived therapies that have been effective in the treatment of severe acute viral respiratory infections and could be a treatment option for COVID-19. Takeda has the research expertise to develop and manufacture a potential anti-SARS-CoV-2 polyclonal H-IG.

Takeda is presently in discussions with multiple national health and regulatory agencies and health care partners in the US, Asia, and Europe to expeditiously move the research into anti-SARS-CoV-2 polyclonal H-IG forward. The research requires access to source plasma from people who have successfully recovered from COVID-19. These donors have developed antibodies to the virus that could possibly alleviate the severity of illness in COVID-19 patients and perhaps prevent it. Transferring the antibodies to a new patient may help that person's immune system respond to the infection and increase the chance of recovery. These efforts are at an early stage but are nevertheless being given a high priority within the company.

SOURCE

https://www.marketwatch.com/story/these-nine-companies-are-working-on-coronavirus-treatments-or-vaccines-heres-where-things-stand-2020-03-06


Clinical Biomarkers Overview

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Paving the Road for Clinical Biomarkers

Where Trackless Terrain Once Challenged Biomarker Development, Clearer Paths Are Emerging

http://www.genengnews.com/gen-articles/paving-the-road-for-clinical-biomarkers/5757/
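
Kate Marusina, Ph.D.

GEN May 1, 2016 (Vol. 36, No. 9)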

http://www.genengnews.com/Media/images/Article/thumb_ArcherDX_AnalyticalSensitivity2362411344.jpg

Fusion detection can be carried out with traditional opposing primer-based library preparation methods, which require target- and fusion-specific primers that define the region to be sequenced. With these methods, primers are needed that flank the target region and the fusion partner, so only known fusions can be detected. An alternative method, ArcherDX’ Anchored Multiplex PCR (AMP), can be used to detect the target of interest, plus any known and unknown fusion partners. This is because AMP uses target-specific unidirectional primers along with reverse primers that hybridize to the sequencing adapter ligated to each fragment prior to amplification.

 

  • In time, the narrow, tortuous paths followed by pioneers become wider and straighter, whether the pioneers are looking to settle new land or bring new biomarkers to the clinic.

In the case of biomarkers, we’re still at the stage where pioneers need to consult guides and outfitters or, in modern parlance, consultants and technology providers. These hardy souls tend to congregate at events like the Biomarker Conference, which was held recently in San Diego.

At this event, biomarker experts discussed ways to avoid unfortunate detours on the trail from discovery and development to clinical application and regulatory approval. Of particular interest were topics such as the identification of accurate biomarkers, the explication of disease mechanisms, the stratification of patient groups, and the development of standard protocols and assay platforms. In each of these areas, presenters reported progress.

Another crucial subject is the integration of techniques such as next-generation sequencing (NGS). This particular technique has been instrumental in advancing clinical cancer genomics and continues to be the most feasible way of simultaneously interrogating multiple genes for driver mutations.

Enriching nucleic acid libraries for target genes of interest prior to NGS greatly enhances the sensitivity of detecting mutations, as the enriched regions are sequenced multiple times. This is particularly useful when analyzing clinical samples, which often yield low amounts of poor-quality nucleic acids.

However, NGS has been limited in its ability to identify gene fusions and translocations, which underlie oncogenesis in a variety of cancers. “These challenges are largely related to the enrichment chemistry used to produce sequencing libraries,” commented Joshua Stahl, chief scientific officer and general manager, ArcherDX.

Most target-enrichment strategies require prior knowledge of both ends of the target region to be sequenced. Therefore, only gene fusions with known partners can be amplified for downstream NGS assays.

Archer’s Anchored Multiplex PCR (AMP™) technology overcomes this limitation, as it can enrich for novel fusions, while only requiring knowledge of one end of the fusion pair. At the heart of the AMP chemistry are unique Molecular Barcode (MBC) adapters, ligated to the 5′ ends of DNA fragments prior to amplification. The MBCs contain universal primer binding sites for PCR and a molecular barcode for identifying unique molecules. When combined with 3′ gene-specific primers, MBCs enable amplification of target regions with unknown 5′ ends.

“AMP is ideal for identifying gene fusions and other driver mutations from formalin-fixed, paraffin-embedded (FFPE) samples,” asserted Mr. Stahl. “Its robust utility was demonstrated for detection of gene fusions, point mutations, insertions, deletions, and copy number changes from low amounts of clinical FFPE RNA and DNA samples.

“Tagging each molecule of input nucleic acid with a unique molecular barcode allows for de-duplication, error correction, and quantitative analysis, resulting in high sequencing consensus. With its low error rate and low limits of detection, AMP is revolutionizing the field of cancer genomics.”
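
The tagging and de-duplication Mr. Stahl describes is easy to picture in code. The following Python toy is a sketch of the general idea only, not Archer's software; the barcode scheme and the majority-vote consensus rule are illustrative assumptions. Reads are grouped by molecular barcode, each group is collapsed to a consensus sequence, and the number of groups gives the de-duplicated molecule count.

from collections import Counter, defaultdict

def consensus(reads):
    # Majority vote at each base position across reads sharing one barcode;
    # this suppresses isolated polymerase/sequencer errors.
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*reads))

def deduplicate(tagged_reads):
    # tagged_reads: iterable of (molecular_barcode, read_sequence) pairs.
    groups = defaultdict(list)
    for barcode, seq in tagged_reads:
        groups[barcode].append(seq)
    # len(result) is the count of unique input molecules used for
    # quantitative analysis; values are error-corrected consensus reads.
    return {bc: consensus(reads) for bc, reads in groups.items()}

reads = [("AAGT", "ACGTACGT"), ("AAGT", "ACGTACGA"),  # one molecule, one errored read
         ("AAGT", "ACGTACGT"), ("CCTA", "ACGTTCGT")]  # a second unique molecule
print(deduplicate(reads))  # {'AAGT': 'ACGTACGT', 'CCTA': 'ACGTTCGT'}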

In a proof-of-concept study, a single-tube 23-plex panel was designed to amplify the kinase domains of ALK, RET, ROS1, and MUSK genes by AMP. This enrichment strategy enabled identification of gene fusions with multiple partners and alternative splicing events in lung cancer, thyroid cancer, and glioblastoma specimens by NGS.

Ignyta, a precision medicine company, adopted Archer’s AMP technology in Trailblaze Pharos™, a multiplex assay employed in their STARTRK-2 trial for identifying actionable NTRK, ROS1, and ALK gene rearrangements in solid tumors that can be treated with the novel kinase inhibitor, entrectinib. “Gene fusions are incredibly important in personalized medicine right now,” stated Mr. Stahl. “Archer’s FusionPlex assays are quickly becoming the new gold standard.”

Reading Cancer Signatures

This image, from the Massachusetts General Hospital Cancer Center, shows multicolor fluorescence in situ hybridization (FISH) analysis of cells from a patient with esophagogastric cancer. Remarkably, the FISH analysis revealed that co-amplification of the MET gene (red signal) and the EGFR gene (green signal) existed simultaneously in the same tumor cells. A chromosome 7 control probe is shown in blue.
“Tumor biomarkers are critical for predicting and following patient responses to today’s cancer therapies,” said Darrell Borger, Ph.D., co-director of the Translational Research Laboratory and director of the Biomarker Laboratory, Massachusetts General Hospital (MGH) Cancer Center, Harvard Medical School. “If we understand what drives the malignancy in any given patient, we are able to match existing therapies to the patient’s genotype.”

Over the last decade, the Biomarker/Translational Research Laboratory has focused on developing clinical genotyping and fluorescent in situ hybridization (FISH) assays for rapid personalized genomic testing.

“Initially, we analyzed the most prevalent hotspot mutations, about 160 in 25 cancer genes,” continued Dr. Borger. “However, this approach revealed mutations in only half of our patients. With the advent of NGS, we are able to sequence 190 exons in 39 cancer genes and obtain significantly richer genetic fingerprints, finding genetic aberrations in 92% of our cancer patients.”

Using multiplexed approaches, Dr. Borger’s team within the larger Center for Integrated Diagnostics (CID) program at MGH has established a high-throughput genotyping service as an important component of routine care. While only a few targetable molecular alterations may currently have a corresponding drug, the NGS-driven analysis may supply new information that supports enrolling patients in ongoing clinical trials, or the results can be banked for future research and development.

“A significant impediment to discovery of clinically relevant genomic signatures is our current inability to interconnect the data,” explained Dr. Borger. “On the local level, we are striving to compile the data from clinical observations, including responses to therapy and genotyping. Globally, it is imperative that comprehensive public databases become available to the research community.”

Tumor profiling at MGH has already yielded significant discoveries. Dr. Borger’s lab, in collaboration with oncologists at the MGH Cancer Center, found significant correlations between mutations in the genes encoding the metabolic enzymes isocitrate dehydrogenase 1 and 2 (IDH1 and IDH2) and certain types of cancers, such as cholangiocarcinoma and acute myelogenous leukemia (AML).

Historically, cancer signatures have largely focused on signaling proteins. The discovery of a correlative metabolic enzyme offered the promise of diagnostics based on metabolic byproducts that may be easily identified in blood. Indeed, the metabolite 2-hydroxyglutarate accumulates to high levels in the tissues of patients carrying IDH1 and IDH2 mutations. The researchers have reported that circulating 2-hydroxyglutarate, as measured in the blood, correlates with tumor burden and could serve as an important surrogate marker of treatment response.

Tuning Immunosuppression, Preventing Chronic Rejection

“Each year 23,000 kidneys are transplanted, and over 175,000 kidney transplants are functional today,” noted Daniel R. Salomon, M.D., medical program director, Scripps Center for Organ Transplantation, Scripps Research Institute. “However, in just 5 years, 3 out of every 10 patients will be back on dialysis, and in 15 years, at least 75% of all patients will lose their kidney grafts. We believe that this is caused by chronic immune-mediated rejection. Failure of effective immunosuppression reduces the functional life of these patients and adds $9–13 billion in yearly healthcare costs.” Dr. Salomon emphasized that ineffective use of immunosuppressive drugs is partially due to the lack of an objective biomarker that could provide decision support for just-in-time adjustment of therapeutic regimens.

“Our research aims to provide that objective measure to clinicians,” explained Dr. Salomon.

To date, kidney transplant biopsies remain the gold standard, even though they are not suitable for continuous monitoring and carry both costs and risks. Dr. Salomon’s team developed a minimally invasive diagnostic approach based on unbiased whole-genome expression profiling of blood samples. Using Affymetrix Human Genome U133 Plus 2.0 GeneChips, the team analyzed 275 blood samples from kidney transplant patients with biopsy-proven acute rejection, acute dysfunction without rejection, or a transplant-excellent phenotype.

The data were passed through several machine-learning algorithms to identify a group of about 250 classifiers that predict subacute or acute rejection with 80% accuracy. This signature is locked while the team continues to expand the core dataset, aiming to reach a thousand samples by the end of this year.
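
For readers who want a feel for this kind of analysis, here is a minimal, hypothetical Python sketch of cross-validating a classifier on a blood gene-expression matrix. The random data, the 250-gene feature count, and the logistic-regression model are all illustrative assumptions, not the team's actual (locked) pipeline; with informative real-world data, this is the step that would produce an accuracy figure like the 80% quoted above.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(275, 250))    # 275 blood samples x 250 classifier genes
y = rng.integers(0, 2, size=275)   # 1 = biopsy-proven rejection, 0 = otherwise

# Cross-validation keeps the accuracy estimate honest on a modest dataset.
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f}")  # ~0.5 on random data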

“As opposed to classical approaches to biomarker discovery limited to just a few classifiers, our methodology provides for the first use of unbiased whole-genome profiling in the identification of multiple molecular predictors,” declared Dr. Salomon. “We can use this molecular diagnostic strategy to reveal subacute rejection before significant tissue injury leads to transplant dysfunction. Continuous monitoring would inform physicians on the balance between over-suppression and effective/optimal therapy.”

Dr. Salomon is a chief scientific advisor for Transplant Genomics (TGI), a start-up company created to translate the blood-based molecular diagnostics into clinical tests. In late 2016, TGI will begin providing its TruGraf blood tests for kidney transplant recipients for use by four to six U.S. transplant centers through an early-access program (EAP).

Additional tests, designed to be used serially to diagnose and treat subclinical episodes of rejection, including biopsy gene profiling, are in the final stages of development and validation, and will be made available through the EAP in the upcoming months.

http://www.genengnews.com/Media/images/Article/BioAgilytix_MultiMuscleAnalysis5413927931.jpg

BioAgilytix’ MultiMuscle Analysis is a process that can split sample analysis into multiple parallel tracks to minimize antibody cross-reactivity and allow for use of the best-fit platform or kit for each biomarker analysis. The process may require only one tube of sample with only one freeze/thaw (F/T) cycle.

Focusing on Large Molecules 

BioAgilytix, a specialized bioanalytical laboratory, is a global leader in large molecule bioanalysis. The company’s business encompasses pharmacokinetic/pharmacodynamic (PK/PD) studies of large biomolecules, in addition to immunogenicity, biomarkers, and cell-based assays. In less than 10 years, BioAgilytix has grown from a start-up to an international powerhouse with over 100 employees—more than half possessing advanced scientific degrees—because of its team’s expertise in the complexities of large molecule drug development.

“In contrast to small molecule analysis, which has become more of a commodity due to its semiautomated and process-oriented nature, large molecule analysis is inherently challenging,” said Afshin Safavi, Ph.D., founder and chief science officer of BioAgilytix. “In large molecule bioanalysis, we rely heavily on analytical reagents, such as antibodies and recombinant proteins, which are known to show considerable variability from lot to lot.

“Therefore, designing an effective analytical process for large biomolecules requires scientific personnel with years of experience. It also requires careful management of critical reagents, and a deep understanding of the capabilities and limitations of the platforms selected for use.”

Dr. Safavi explains that the biomarker field has been trending away from the shotgun approach traditionally favored by large pharma toward more focused analyses of a few key biomarkers.

“Unlike several years ago, most biotech and pharma companies now perform careful due diligence and literature research before approaching us, to narrow down their investigation to just a handful of biomarkers,” he explained. Limited samples may drive the desire to multiplex as many biomarkers as possible, but a multiplex approach may often result in low-quality data due to reagent cross-reactivity.

A recent process innovation developed by BioAgilytix, called MultiMuscle Analysis™, uses a customized parallel process to drastically reduce analytical process time and increase data quality. MultiMuscle Analysis splits the sample analysis into multiple parallel tracks, each performed on specialized equipment by scientists experienced in that particular platform.

“Say, for example, a customer requests measurements of 10 biomarkers,” ventured Dr. Safavi. “If we know some of the antibodies may cross-react, then we may, for example, end up with one heptaplex and three as uniplexes, all done in parallel.”
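
Dr. Safavi's example, ten biomarkers resolving into one heptaplex plus three uniplexes, is at heart a small graph-coloring problem: biomarkers are nodes, known cross-reactive antibody pairs are edges, and each parallel track must be conflict-free. The greedy Python sketch below illustrates the idea only; the marker names, the conflict set, and the greedy strategy are invented for the example and are not BioAgilytix's actual assignment method.

def split_into_tracks(biomarkers, cross_reactive):
    # cross_reactive: set of frozenset pairs that must not share a multiplex.
    tracks = []  # each track is a list of biomarkers assayed together
    for marker in biomarkers:
        for track in tracks:
            if all(frozenset((marker, m)) not in cross_reactive for m in track):
                track.append(marker)  # joins the first conflict-free track
                break
        else:
            tracks.append([marker])   # no compatible track: run as a uniplex
    return tracks

markers = [f"BM{i}" for i in range(1, 11)]
# Hypothetical conflicts: BM2, BM5, and BM9 cross-react with everything else.
conflicts = {frozenset((m, other)) for m in ("BM2", "BM5", "BM9")
             for other in markers if other != m}
print(split_into_tracks(markers, conflicts))
# -> one 7-plex plus three uniplexes, echoing the heptaplex example above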

Using this approach, BioAgilytix is able to perform large biomarker analyses on a very large number of samples in near real-time. “We now receive samples from over 20 countries,” Dr. Safavi stated. “We have used the MultiMuscle approach successfully over and over.”


Predicting Clotting or Hemorrhaging

Venous thromboembolism (VTE) is a disease that includes both deep vein thrombosis (DVT) and pulmonary embolism (PE). It is a common, lethal disorder, symptoms of which are often overlooked. VTE is the third most common cardiovascular illness after acute coronary syndrome and stroke.

Venous thrombi, composed predominantly of red blood cells bound together by fibrin, form at sites of vessel damage and in areas of stagnant blood flow. Once VTE is diagnosed, anticoagulation therapy is indicated.

A novel anticoagulant that reversibly and directly inhibits factor Xa, a key factor in the coagulation system, has been developed by Daiichi Sankyo. “Once on the path of development of an anticoagulant, we recognized the lack of a rapid and sensitive coagulation test that would not be affected by blood traces of anticoagulant therapies,” said Michele Mercuri, M.D., Ph.D., the company’s senior vice president. “An improved diagnostic test would speed up recognition and treatment of thrombosis, and would aid in development of reversing agents that reduce the effect of anticoagulant therapies when needed.”

When Daiichi Sankyo entered into a collaboration with Perosphere to develop a novel broad-spectrum reversing agent, the company also supported development of a point-of-care coagulometer (still under development), a hand-held device designed for broad-spectrum monitoring of the activity of anticoagulants and their corresponding reversing agents, across drug classes. A single test requires only 10 µL of fresh or citrated whole blood from a venous draw or finger stick. It optically measures clotting, from Factor XII activation through fibrin assembly.

Dr. Mercuri explains that none of the existing tests are able to predict whether a patient is at risk for either clotting or hemorrhaging. “Together with Prof. Zahi Fayad’s team at the Icahn School of Medicine at Mount Sinai, we used magnetic resonance imaging with a gadolinium-based contrast agent to detect venous thrombi and follow their dissolution with edoxaban treatment,” reported Dr. Mercuri.

This study, the edoxaban Thrombus Reduction Imaging Study (eTRIS), focused on developing and validating a magnetic resonance venography (MRV) image acquisition and analysis protocol for the quantification of thrombus volume in deep vein thrombosis. The multicenter study demonstrated excellent reproducibility in quantifying thrombus volume.

 

Sequence and Epigenetic Factors Determine Overall DNA Structure

Researchers at Ulsan National Institute of Science and Technology (UNIST) in South Korea found that DNA molecules directly interact with one another in ways that are dependent on the sequence of the DNA and epigenetic factors.

The researchers found evidence for sequence-dependent attractive interactions between double-stranded DNA molecules that neither involve intermolecular strand exchange nor are mediated by DNA-binding proteins.

“DNA molecules tend to repel each other in water, but in the presence of special types of cations, they can attract each other just like nuclei pulling each other by sharing electrons in between,” explained lead study author Hajin Kim, Ph.D., assistant professor of biophysics at UNIST. “Our study suggests that the attractive force strongly depends on the nucleic acid sequence and also the epigenetic modifications.”

The investigators used atomic-level simulations to measure forces between double-stranded DNA helices, proposing that the distribution of methyl groups on DNA was the key to regulating this sequence-dependent attraction.

The findings from this study were published recently in Nature Communications, in an article entitled “Direct evidence for sequence-dependent attraction between double-stranded DNA controlled by methylation.”

The researchers surmised that direct DNA-DNA interactions could play a central role in how chromosomes are organized and packaged, determining the ultimate fate of many cell types.

Dr. Kim concluded by stating that “in our lab, we try to unravel the mysteries within human cells based on the principles of physics and the mechanisms of biology—seeking for ways to prevent chronic illnesses and diseases associated with aging.”


 

Direct evidence for sequence-dependent attraction between double-stranded DNA controlled by methylation

Jejoong Yoo, Hajin Kim, Aleksei Aksimentiev & Taekjip Ha

Nature Communications 22 Mar 2016; 7:11045. http://dx.doi.org/10.1038/ncomms11045

Although proteins mediate highly ordered DNA organization in vivo, theoretical studies suggest that homologous DNA duplexes can preferentially associate with one another even in the absence of proteins. Here we combine molecular dynamics simulations with single-molecule fluorescence resonance energy transfer experiments to examine the interactions between duplex DNA in the presence of spermine, a biological polycation. We find that AT-rich DNA duplexes associate more strongly than GC-rich duplexes, regardless of the sequence homology. Methyl groups of thymine act as a steric block, relocating spermine from major grooves to interhelical regions, thereby increasing DNA–DNA attraction. Indeed, methylation of cytosines makes attraction between GC-rich DNA as strong as that between AT-rich DNA. Recent genome-wide chromosome organization studies showed that remote contact frequencies are higher for AT-rich and methylated DNA, suggesting that the direct DNA–DNA interactions that we report here may play a role in chromosome organization and gene regulation.

Formation of a DNA double helix occurs through Watson–Crick pairing mediated by the complementary hydrogen bond patterns of the two DNA strands and base stacking. Interactions between double-stranded (ds)DNA molecules in typical experimental conditions containing mono- and divalent cations are repulsive [1], but can turn attractive in the presence of high-valence cations [2]. Theoretical studies have identified the ion–ion correlation effect as a possible microscopic mechanism of the DNA condensation phenomena [3–5]. Theoretical investigations have also suggested that sequence-specific attractive forces might exist between two homologous fragments of dsDNA [6], and this ‘homology recognition’ hypothesis was supported by in vitro atomic force microscopy [7] and in vivo point mutation assays [8]. However, the systems used in these measurements were too complex to rule out other possible causes such as Watson–Crick strand exchange between partially melted DNA or protein-mediated association of DNA.

Here we present direct evidence for sequence-dependent attractive interactions between dsDNA molecules that neither involve intermolecular strand exchange nor are mediated by proteins. Further, we find that the sequence-dependent attraction is controlled not by homology—contradictory to the ‘homology recognition’ hypothesis [6]—but by a methylation pattern. Unlike the previous in vitro study that used monovalent (Na+) or divalent (Mg2+) cations [7], we presumed that for the sequence-dependent attractive interactions to operate, polyamines would have to be present. Polyamine is a biological polycation present at millimolar concentrations in most eukaryotic cells and essential for cell growth and proliferation [9,10]. Polyamines are also known to condense DNA in a concentration-dependent manner [2,11]. In this study, we use spermine (Sm4+), which contains four positively charged amine groups per molecule.

 

Methylation determines the strength of DNA–DNA attraction

Analysis of the MD simulations revealed the molecular mechanism of the polyamine-mediated sequence-dependent attraction (Fig. 2). In the case of the AT-rich fragments, the bulky methyl group of the thymine base blocks Sm4+ binding to the N7 nitrogen atom of adenine, which is the cation-binding hotspot [21,22]. As a result, Sm4+ is not found in the major grooves of the AT-rich duplexes and resides mostly near the DNA backbone (Fig. 2a,d). Such relocated Sm4+ molecules bridge the two DNA duplexes better, accounting for the stronger attraction [16,23–25]. In contrast, a significant amount of Sm4+ is adsorbed to the major groove of the GC-rich helices, which lacks the cation-blocking methyl group (Fig. 2b,e).

Figure 2: Molecular mechanism of polyamine-mediated DNA sequence recognition.

(a–c) Representative configurations of Sm4+ molecules at the DNA–DNA distance of 28 Å for the (AT)10–(AT)10 (a), (GC)10–(GC)10 (b), and (GmC)10–(GmC)10 (c) DNA pairs. The backbone and bases of DNA are shown as ribbon and molecular bond, respectively; Sm4+ molecules are shown as molecular bonds. Spheres indicate the locations of the N7 atoms and the methyl groups. (d–f) The average distributions of cations for the three sequence pairs featured in a–c. Top: density of Sm4+ nitrogen atoms (d = 28 Å) averaged over the corresponding MD trajectory and the z axis. White circles (20 Å in diameter) indicate the locations of the DNA helices. Bottom: the average density of Sm4+ nitrogen (blue), DNA phosphate (black), and sodium (red) atoms projected onto the DNA–DNA distance axis (x axis). The plot was obtained by averaging the corresponding heat map data over y = [−10, 10] Å. See Supplementary Figs 4 and 5 for the cation distributions at d = 30, 32, 34 and 36 Å.

Genome-wide investigations of chromosome conformations using the Hi-C technique revealed that AT-rich loci form tight clusters in the human nucleus [27,28]. Gene or chromosome inactivation is often accompanied by increased methylation of DNA [29] and compaction of facultative heterochromatin regions [30]. The consistency between those phenomena and our findings suggests the possibility that the polyamine-mediated sequence-dependent DNA–DNA interaction might play a role in chromosome folding and epigenetic regulation of gene expression.

 

Phenotypic and Biomarker-based Drug Discovery

Organizers: Michael Foley (Tri-Institutional Therapeutics Discovery Institute), Ralph Garippa (Memorial Sloan-Kettering Cancer Center), David Mark (F. Hoffmann-La Roche), Lorenz Mayr (AstraZeneca), John Moffat (Genentech), Marco Prunotto (F. Hoffmann-La Roche), and Sonya Dougal (The New York Academy of Sciences)

Presented by the Biochemical Pharmacology Discussion Group

Reported by Robert Frawley | Posted January 12, 2016

Overview

There are two major methods for designing pharmaceutical drugs. In traditional drug discovery (TDD), or empiric design, researchers target a particular domain or protein after working to understand its mechanisms and molecular biology. In phenotypic drug discovery (PDD), many different compounds are tested on a system until one results in an observable phenotype of success, and the compounds’ mechanisms of action are not considered. The Phenotypic and Biomarker-based Drug Discovery symposium, presented by the Academy’s Biochemical Pharmacology Discussion Group on October 27, 2015, featured current work in PDD and highlighted the need to bridge commercial and academic research to improve phenotypic drug design.

Phenotypic drug discovery—screening of thousands of substances for functional cellular outputs such as gene expression, growth arrest, and cancer cell death—has led to the development of more commercial drugs than TDD, the more common method of discovery. Indeed, as Jonathan A. Lee of Eli Lilly noted, spending on TDD is out of sync with the rate of new drugs reaching approval; the number of new drugs per billion dollars spent dropped sharply in the last few decades. He argued that the need for functionally validated drugs could be met through a renewed focus on PDD.

Bruce A. Posner started the morning session with a discussion of a phenotypic screen conducted at the University of Texas Southwestern Medical Center which identified two chemical scaffolds that are effective in killing non-small cell lung cancer (NSCLC) cells but are harmless to the non-cancer cells tested. In further studies, the group showed that an optimized analog of one scaffold arrested tumor growth in a mouse xenograft model of NSCLC. Both chemical scaffolds appear to work through a novel mechanism targeting stearoyl-CoA desaturase (SCD), which is known to be important in unsaturated fatty acid synthesis. These compounds were found to be specific, effective, and potent in NSCLC cell lines that express elevated levels of Cyp4F11 and/or related Cyp family members. The group also showed that these scaffolds function as prodrugs that are activated only in cancer cells expressing these Cyp isoforms and that the Cyps produce metabolites of the prodrug that bring about cancer-specific cell toxicity. The group is working to improve these scaffolds and to develop a putative biomarker based on Cyp expression.

The Broad Institute’s LINCS (Library of Network-based Cellular Signatures) database is designed to keep track of small-molecule therapeutics, collecting data on cellular responses to “perturbagens” (drugs, factors, and others stimuli). Data are generated using the L1000 assay, which assesses the expression of 1000 genes known to explain 80% of genetic variation in assayed cell lines. Aravind Subramanian explained that the technique can identify the majority of drug effects for a fraction of the cost of RNA sequencing. Although it examines only a subset of molecules and relies on measuring genetic responses, the technique can help predict the likelihood that new compounds will elicit desired effects.
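
The inference step can be pictured as a regression problem: learn, from a reference set with deep expression profiles, a map from the roughly 1,000 landmark genes to the rest of the transcriptome, and then apply that map to new L1000 measurements. The Python sketch below is a deliberately naive least-squares illustration of that idea; the dimensions and toy data are assumptions, and the actual LINCS pipeline is more sophisticated.

import numpy as np

rng = np.random.default_rng(1)
n_ref, n_landmark, n_other = 2000, 1000, 2000  # toy sizes; real target is larger
L_ref = rng.normal(size=(n_ref, n_landmark))   # landmark expression, reference set
true_map = rng.normal(size=(n_landmark, n_other))
T_ref = L_ref @ true_map                       # deep profiles (toy ground truth)

# Fit one linear map from the landmark genes to the non-measured genes.
W, *_ = np.linalg.lstsq(L_ref, T_ref, rcond=None)

L_new = rng.normal(size=(3, n_landmark))       # three new L1000-style profiles
T_hat = L_new @ W                              # imputed expression
print(T_hat.shape)                             # (3, 2000)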

Martin Main of AstraZeneca described phenotypic drug discovery at AstraZeneca. The company’s model for discovery is to check phenotypic markers at every step, as drugs are moved from cell lines to patients. Main’s team identified a molecule that enhances the regenerative function of cardiac myocytes after infarction. Using cells from several donors, the team validated a promising compound that increases proliferation of cardiac myocytes and drives epicardium-derived progenitor cells to assume a myocyte lineage. In another discovery, the team used islet β-cell regeneration as the phenotype, discovering a compound the researchers believe will reach clinical trials for type 2 diabetes.

Andras J. Bauer of Boehringer Ingelheim discussed a method to increase predictive strength in compound selection before phenotypic screening. By cataloging the structures of known target–reference compound binding pairs, the team can compare those structures to untested compounds, and then assess only the most promising compounds. The THICK (Target Hypothesis Information from Curated Knowledge bases) database gives interaction-probability scores to untested compounds on the basis of structure. Bauer also described a method to verify target–compound interaction without labeling the molecules, in which phenotypic results were verified with mass spectrometry.

In the afternoon session, Myles Fennell of Memorial Sloan-Kettering Cancer Center described his work testing small interfering RNA (siRNA) libraries to find siRNAs that alter macropinocytosis (MP), cell-surface ruffling that is seen in prostate cancer cells. The surface phenotype allows TMR-dextran uptake, which the researchers measured in the screen. MP is driven by RAS (a commonly affected gene family in cancers) and the pathways are already popular drug targets. The researchers tested two libraries of siRNAs, which block translation of specific proteins, using TMR as a marker to report MP severity, as well as sensitive single-cell assays to determine siRNA efficacy. The team identified promising target sequences and used a data-analysis pipeline called KNIME to define several hits, which the researchers are pursuing in therapeutic development.

http://www.nyas.org/image.axd?id=0b4496f6-28fb-435c-bd11-06b4d31fc0ad&t=635863102714400000

TMR-dextran is taken up by cells undergoing macropinocytosis, and thus these cells can be separated by phenotype, as seen in the controls above. (Image courtesy of Myles Fennell)

Giulio Superti-Furga of the Austrian Academy of Sciences is a proponent of understanding the mechanisms of action (MOA) of candidate drugs. He began by explaining that the genome is an incomplete indicator of disease; epigenetics, altered protein function, metabolism, and other factors are also important. He then introduced pharmacoscopy and the “thermal shiftome” as methods to phenotypically screen compounds. Pharmacoscopy uses high-power automated microscopy to describe how compounds affect cell populations by using specific stains for different cell types; a computer then counts the cells expressing each stain, yielding results similar to those obtained via fluorescence-activated cell sorting but generated through an automated process. The thermal shiftome catalogs changes in thermal stability after protein binding in known reactions and is used to characterize the stability of new reactions. Superti-Furga offered a perspective that tempered the enthusiasm for pure PDD and advocated a mechanistic approach to drug discovery.

Michael R. Jackson, at one of the largest academic screening facilities, the Sanford Burnham Prebys Medical Discovery Institute, led a reexamination of drug screens performed by pharmaceutical companies. His team conducted millions of assays and accumulated a large data library with few new hits. However, the researchers were able to closely characterize the chemistry of one hit, an undisclosed interaction, and Jackson’s group is proceeding to develop a drug to modulate nuclear receptor signaling. The researchers also have a procedure that can screen for the differentiation of human induced pluripotent stem cells (iPSCs) into neurons for potential neuro-regenerative therapies. They developed high-throughput morphology, endpoint-measurement, and proliferation assays that generate tightly clustered, repeatable data. The team has produced consistent results screening 10 immune modulators and various cytokines to assess the reactivity and stability of the cells, providing reliable compound characterization. This success in human cells shows that a disease-relevant patient-derived screening platform to characterize differentiation and immune response is possible with robust assays.

In the next set of talks, Friedrich Metzger and Susanne Swalley described the parallel work of Hoffmann-La Roche and Novartis, respectively, toward treating spinal muscular atrophy (SMA). A devastating disease that affects motor nerve cells in the spinal cord and leads to loss of motor function, SMA presents a unique drug development opportunity. The condition is caused by the loss of function of a single gene product called survival of motor neuron (SMN1). Humans also carry a nearly homologous gene, SMN2, whose product is unstable.

Metzger explained that the inactive SMN2 variant is largely the same as active SMN1 but, missing exon 7, cannot compensate in its absence. The group from Hoffmann-La Roche aimed to stabilize SMN2 by promoting the inclusion of exon 7. The researchers conducted a phenotypic screen seeking a compound that could change the splicing in patient fibroblasts in vitro and produce a stable, functional SMN2 protein including exon 7. In studies with an SMN2Δ7 mouse model (lacking exon 7), mice drugged with the compound experienced full phenotypic rescue. The compound has been shown to induce alternative splicing of SMN2 to include exon 7 in healthy human volunteers; it was well tolerated and is moving to human patient trials.

Swalley discussed the target identification and MOA of the Novartis compound. After a screening process similar to Roche’s, Novartis moved its compound into animal models while also beginning parallel experimentation to find out why it worked. The group found that U1-snRNP, a spliceosome component required for the splicing process, is bound at two essential nucleotides by the compound. In the SMN2Δ7 mice, the compound improved survival and rescued full SMN2 protein expression. The Novartis compound stabilizes the appropriate spliceosome components to produce SMN2 with exon 7 intact. This novel mechanism demonstrates that a sequence-selective small molecule therapy can alter splicing activity to treat SMA. Together these talks demonstrated the power of PDD and the importance of validating drug mechanisms.

The final talk of the day was given by Hoffmann-La Roche’s Jitao David Zhang, who suggested that pathway reporter genes, which are only modulated when a specific signaling pathway is activated or inhibited, can be used as phenotypic readouts. It is known that gene expression data can predict cell phenotype. Using transcriptomics as a surrogate for downstream phenotypes, for example by using expression data from a gene subset to predict outcomes, would save time and effort. In an iPSC cardiomyocyte model of diabetic stress, machine learning (guided by pathway information) characterizes the response of the iPSCs to a library of compounds, highlighting compounds and pathways worthy of further investigation. This new platform for molecular phenotyping using pathway reporter genes, sequencing, and early analysis speeds compound characterization.


Presentations available from:
Andras J. Bauer, PhD, PharmD (Boehringer Ingelheim)
Myles Fennell, PhD (Memorial Sloan-Kettering Cancer Center)
Jonathan A. Lee, PhD (Eli Lilly)
Martin Main, PhD (AstraZeneca)
Yao Shen, PhD (Columbia University)
Susanne Swalley, PhD (Novartis Institutes for BioMedical Research)
Jitao David Zhang, PhD (F. Hoffmann-La Roche)


Digital PCR

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

GEN Roundup: Digital PCR Advances Partition by Partition  

By Partitioning Samples Digital PCR Is Lowering Detection Limits and Enabling New Applications

GEN  Mar 1, 2016 (Vol. 36, No. 5)       http://www.genengnews.com/gen-articles/gen-roundup-digital-pcr-advances-partition-by-partition/5697

 

  • Digital PCR (dPCR) has generated intense interest because it is showing potential as a clinical diagnostics tool. It has already proven to be a useful technique for any application where extreme sensitivity or precise quantification is essential, such as identifying mutations or copy number variations in tumor cells, or examining gene expression at the single-cell level.

    GEN interviewed several dPCR experts to find out specifically why the technique is increasing in popularity. GEN also asked the experts to envision dPCR’s future capabilities.

  • GEN: What makes dPCR technology such a superior tool for discovery and diagnostic applications?

    Dr. Shelton The high levels of sensitivity, precision, and reproducibility in DNA quantification are the major strengths of dPCR. The technology is robust because quantification is achieved through an end-point amplification reaction, so differences in primer efficiency or the presence of sample-specific PCR inhibitors have little effect on the final quantification.

    This provides value to discovery as a trusted tool for validating potential biomarkers and hypotheses generated by broad profiling techniques such as microarrays or next-generation sequencing (NGS). In diagnostics applications, the reproducibility and rapid results of dPCR are critical for labs around the world to quickly compare and share data, especially for ultra-low detection of DNA where variability is high.

    Dr. Garner Digital PCR provides a precise direct counting approach for single molecule detection, thereby providing a straightforward process for the absolute quantification of nucleic acids in samples. One of the biggest advantages of using a system such as ours is its ability to do real-time reads on digital samples. When samples go through PCR, their results are recorded after each cycle.

    These results build a curve, and customers can analyze the data if something went wrong. If it isn’t a clean read—from either a contamination issue, primer-dimer issue, or off-target issue—the curve isn’t the classic PCR curve.

    Dr. Menezes Digital PCR allows absolute quantification of target concentration in samples without the need for standard curves. Obtaining consistent, precise, and absolute quantification with regular qPCR is dependent on standard curve generation and amplification efficiency calculations, which can introduce errors.

    Ms. Hibbs At MilliporeSigma Cell Design Studio, the implementation of dPCR has improved and accelerated the custom cell engineering workflow. After the application of zinc finger nuclease or CRISPR/Cas to create precise genetic modifications in mammalian cell lines, dPCR is used to characterize the expected frequency of homologous recombination and develop a screening strategy based on this expected frequency.

    In some cell lines, homologous recombination occurs at a low frequency. In such cases, dPCR is used to screen cell pools and subsequently identify rare clones having the desired mutation. Digital PCR is also used to accurately and expeditiously measure target gene copy number. It is used this way, for example, in polyploid cell lines.

    Dr. Price The ability to partition genomic samples to a level that enables robust detection of single target molecules is what sets dPCR apart as an innovative tool. Each partition (droplet in the case of the RainDrop System) operates as an individual PCR reaction, allowing for sensitive, reproducible, and precise quantification of nucleic acid molecules without the need for reference standards or endogenous controls.

    Partitioning also provides greater tolerance to PCR inhibitors compared to quantitative PCR (qPCR). In doing so, dPCR can remedy many shortcomings of qPCR by transforming the analog, exponential nature of PCR into a digital signal.

    Mr. Wakida Digital PCR is an ideal technology for detecting rare targets at concentrations of 0.1% or lower. By partitioning samples prior to PCR, exceptionally rare targets can be isolated into individual partitions and amplified.

    Digital PCR produces absolute quantitative results, so in some respects, it is easier than qPCR because it doesn’t require a standard curve, with the added advantages of being highly tolerant of inhibitors and being able to detect more minute fold changes. Absolute quantification is useful for generating reference standards, detecting viral load, and preparing NGS libraries.
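
Several of the answers above rest on the same arithmetic: partition the sample, count the fraction of positive partitions, and apply a Poisson correction for partitions that happened to capture more than one molecule. Here is a minimal Python sketch of that standard calculation; the droplet count and the 0.85 nL droplet volume are illustrative numbers, not any particular vendor's specification.

import math

def dpcr_copies_per_ul(n_positive, n_total, partition_volume_ul):
    p = n_positive / n_total          # fraction of positive partitions
    lam = -math.log(1.0 - p)          # Poisson correction: mean copies/partition
    return lam / partition_volume_ul  # absolute concentration, no standard curve

# Example: 4,000 of 20,000 droplets positive, 0.85 nL (8.5e-4 uL) droplets.
print(f"{dpcr_copies_per_ul(4000, 20000, 8.5e-4):.0f} copies/uL")  # ~263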

  • GEN: In what field do you think dPCR will have the greatest impact in the future?

    Dr. Shelton dPCR will have a great impact on precision medicine, especially in liquid biopsy analysis. Cell-free DNA from bodily fluids such as urine or blood plasma can be analyzed quickly and cost-effectively using dPCR. For example, a rapid dPCR test can be performed to determine mutations present in a patient’s tumor and help drive treatment decisions.

    Iterative monitoring of disease states can also be achieved due to the relatively low cost of dPCR, providing faster response times when medications are failing. Gene editing will also be greatly impacted by dPCR. Digital PCR enables refinement and optimization of gene-editing tools and conditions. Digital PCR also serves as quality control of therapeutically modified cells and viral transfer vectors used in gene-therapy efforts.

    Dr. Garner The BioMark™ HD system combines dPCR with simultaneous real-time data for counting and validation. This capability is important for applications such as rare mutation detection, GMO quantitation, and aneuploidy detection—where false positives are intolerable and precision is paramount.

    Any field that requires precision and the ability to detect false positives is a likely target for Fluidigm’s dPCR. Suitable applications include detecting and quantifying cancer-causing genes in patients’ cells, viral RNA that infects bacteria, or fetal DNA in an expectant mother’s plasma.

    Dr. Menezes This technology is particularly useful for samples with low frequency sequences as, for example, those containing rare alleles, low levels of pathogen, or low levels of target gene expression. Teasing out fine differences in copy number variants is another area where this technology delivers more precise data.

    Ms. Hibbs Digital PCR overcomes limitations associated with low-abundance template material and quantification of rare mutations in a high background of wild-type DNA sequence. For this reason, dPCR is poised to have significant impacts in diverse clinical applications such as detection and quantification of rare mutations in liquid biopsies, detection of viral pathogens, and detection of copy number variation and mosaicism.

    Dr. Price Due to its high sensitivity, precision, and absolute quantification, the RainDrop dPCR has the potential to extend the range of nucleic acid analysis beyond the reach of other methods in a number of applications that could lend themselves to diagnostic, prognostic, and predictive applications. The precision of dPCR can be extremely useful in applications that require finer measures of fold change and rare variant detection.

    Digital PCR is suitable for addressing varied research and clinical challenges. These include the early detection of cancer, pathogen/viral detection and quantitation, copy number variation, rare mutation detection, fetal genetic screening, and predicting transplant rejection. Additional applications include gene expression analysis, microRNA analysis, and NGS library quantification.

    Mr. Wakida Digital PCR will have an impact on applications for detecting rare targets by enabling investigators to complement and extend their capabilities beyond traditionally employed methods. One such application is using dPCR to monitor rare targets in peripheral blood, as in liquid biopsies.

    The monitoring of peripheral blood by means of dPCR has been described in several peer-reviewed articles. In one such article, investigators considered the clinical value of Thermo’s QuantStudio™ 3D Digital PCR system for the detection of circulating DNA in metastatic colorectal cancer (Dig Liver Dis. 2015 Oct; 47(10): 884–90).

  • GEN: Is there a new technology on the horizon that will increase the speed and/or efficiency of dPCR?

    Dr. Shelton High-throughput sample analysis can be an issue with some dPCR systems. However, Bio-Rad’s Automated Droplet Generator allows labs to process 96 samples simultaneously, a capability that eliminates user-to-user variability and minimizes hands-on time.

    We also want users to get the most information from one sample. Therefore, we are focused on expanding the multiplexing capabilities of our system. In development at Bio-Rad are new technologies that increase the multiplexing capabilities without loss of specificity or accuracy in the downstream workflow.

    Dr. Garner Much of the industry direction seems to be in offering ever-higher resolution, or the ability to run more samples at the same resolution. Thus far, however, customers haven’t found commercial uses for these tools. Also, with increasing resolution and the search for even rarer mutations, the challenge of detecting false positives becomes an even bigger issue.

    Dr. Menezes Use of ZEN™ Double-Quenched Probes by IDT in digital PCR provides increased sensitivity and a lower limit of detection. Due to the second quencher, ZEN probes provide even lower background than traditional single-quenched probes. And this lower background enables increased sensitivity when analyzing samples with low copy number targets, where every droplet matters.

    Ms. Hibbs Quantification relies upon counting the number of positive partitions at the end point of the reaction. Accordingly, precision and resolution can be increased by increasing the number of partitions. We are now capable of analyzing on the order of millions of partitions per run, further extending the lower limit of detection. Additionally, the workflow is amenable to the integration of automation in order to increase throughput and standardize reaction set up.
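    For readers unfamiliar with the arithmetic behind endpoint partition counting, the sketch below shows the standard Poisson correction that dPCR platforms apply to convert positive-partition counts into absolute concentration; the droplet counts and partition volume are illustrative rather than taken from any particular vendor's specification.

```python
import math

def dpcr_concentration(positive, total, partition_volume_ul):
    """Estimate target concentration (copies/uL) from dPCR endpoint counts.

    Targets distribute randomly across partitions, so the fraction of
    negative partitions follows Poisson statistics: P(0) = exp(-lam),
    where lam is the mean number of copies per partition. Inverting gives
    lam = -ln(1 - positive/total), which corrects for partitions that
    happened to receive more than one copy.
    """
    fraction_positive = positive / total
    lam = -math.log(1.0 - fraction_positive)  # mean copies per partition
    return lam / partition_volume_ul          # copies per microliter

# Illustrative numbers: 4,500 positive partitions out of 20,000,
# each holding roughly 0.85 nL (0.00085 uL) of reaction volume.
print(f"{dpcr_concentration(4500, 20000, 0.00085):.0f} copies/uL")  # ~300
```

    Note how the correction matters: naively dividing 4,500 positives by the total partitioned volume would undercount, because some partitions contain more than one copy.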

    Dr. Price Although dPCR is still an emerging technology, there is tremendous interest in its potential clinical diagnostics applications. Enabling adoption of dPCR in the clinical lab requires addressing current gaps in workflow, cost, throughput, and turnaround time.

    Digital PCR technology has the potential to improve significantly along two dimensions. First, the problem of serially detecting positive versus negative partitions can be addressed by leveraging lower-cost imaging detection technologies. Alternatively, one may capitalize on the small partition volumes to dramatically reduce the time required to perform PCR. Ideally, the future will bring both capabilities to bear.

    Mr. Wakida Compared to qPCR, dPCR currently requires more hands-on time to set up experiments. We are investigating methods to address this.

 

PCR Shows Off Its Clinical Chops   

Thanks to Advances in Genomics, PCR Is Becoming More Common in Clinical Applications

  • Last May, Roche Molecular Systems announced that its cobas Liat Strep A assay received a CLIA waiver. This clinic-ready assay can detect Streptococcus pyogenes (group A β-hemolytic streptococcus) DNA in throat swabs by targeting a segment of the S. pyogenes genome.

    Since its invention by Kary B. Mullis in 1983, the polymerase chain reaction (PCR) has become well established, even routine, in research laboratories. And now PCR is becoming more common in clinical applications, thanks to advances in genomics and the evolution of more sensitive quantitative PCR methodologies.

    Examples of clinical applications of PCR include point-of-care (POC) molecular tests for bacterial and viral detection, as well as mutation detection in liquid or tumor biopsies for patient stratification and treatment monitoring.
    Industry leaders recently participated in a CHI conference that was held in San Francisco. This conference—PCR for Molecular Medicine—encompassed research and clinical perspectives and emphasized advanced techniques and tools for effective disease diagnosis.

    To kick off the event, speakers shared their views on POC molecular tests. These tests, the speakers insisted, can provide significant value to healthcare only if they support timely decision making.

    Clinic-ready PCR platforms need to combine speed, ease of use, and accuracy. One such platform, the cobas Liat (“laboratory in a tube”), is manufactured by Roche Molecular Systems. The system employs nucleic acid purification and state-of-the-art PCR-based assay chemistry to enable POC sites to rapidly provide lab-quality results.

    The cobas Liat Strep A Assay detects Streptococcus pyogenes (group A β-hemolytic streptococcus) DNA by targeting a segment of the S. pyogenes genome. The operator transfers an aliquot of a throat swab sample in Amies medium into a cobas Liat Strep A Assay tube, scans the relevant tube and sample identification barcodes, and then inserts the tube into the analyzer for automated processing and result interpretation. No other operator intervention or interpretation is required. Results are ready in approximately 15 minutes.

    According to Shuqi Chen, Ph.D., vice president of Point-of-Care R&D at Roche Molecular Systems, clinical studies of the cobas Liat Strep A Assay demonstrated 97.7% sensitivity when the test was used at CLIA-waived, intended-use sites, such as physicians’ offices. In comparison, rapid antigen tests and diagnostic culture have sensitivities of 70% and 81%, respectively (according to a 2009 study by Tanz et al. in Pediatrics).

    The cobas Liat assay preserved the same ease of use and rapid turnaround as the rapid antigen tests. In addition, it provided significantly faster turnaround than the lab-based culture test, which can take 24–48 hours.

    A CLIA waiver was announced for the cobas Liat Strep A assay in May 2015. CLIA waiver applications have been submitted for the cobas Liat flu assays, and Roche intends to extend the assay menu.

    POC tests are also moving into field applications. Coyote Bioscience has developed a novel method for one-step gene testing without nucleic acid extraction that can be as fast as 10 minutes from blood sample to result. Their portable devices for molecular diagnostics can be used as genetic biosensors to bring complex clinical testing directly to the patient.

    “Instead of sequential steps, reactions happen in parallel, significantly reducing analysis time. Buffer, enzyme, and temperature profiles are optimized to maximize sensitivity,” explained Sabrina Li, CEO, Coyote Bioscience. “Both RNA and DNA can be analyzed simultaneously from a drop of blood in the same reaction.”

    The first-generation Mini-8 system was used for Ebola detection in Africa, where close to 600 samples were tested with 98.8% sensitivity. Recently in China, the Mini-8 system was applied in hospitals and small community clinics for hepatitis B and C and bunyavirus detection. The second-generation InstantGene system is currently being tested internally with clinical samples.

  • Digital PCR

    Conventional real-time PCR technology, while suited to the analysis of high-quality clinical samples, may effectively conceal amplification efficiency changes when sample quality is inconsistent. A more effective alternative, Bio-Rad suggests, is its droplet-digital PCR (ddPCR) technology, which can provide absolute quantification of target DNA or RNA, a critical advantage when samples are limited, degraded, or contain PCR inhibitors. The company says that of the half-dozen clinical trials that are using digital PCR, half rely on the Bio-Rad QX200 ddPCR system.

    Personalized cancer care requires ultra-sensitive detection and monitoring of actionable mutations from patient samples. The high sensitivity and precision of droplet-digital PCR (ddPCR) from Bio-Rad Laboratories offers critical advantages when clinical samples are limited, degraded, or contain PCR inhibitors.

    Typically, formalin-fixed and paraffin-embedded (FFPE) tissue samples are processed. FFPE samples work well for immunohistochemistry and protein analysis; however, the formalin fixation can damage nucleic acids and inhibit the PCR reaction. Samples may yield 100 ng of purified nucleic acid, but the actual amplifiable material is less than 1%, or 1 ng, in most cases.
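    To see why a nanogram of amplifiable material leaves qPCR at the edge of its range, it helps to convert mass into template copies. The sketch below uses the commonly cited figure of roughly 3.3 pg per haploid human genome; the input amounts mirror the example above but are otherwise illustrative.

```python
HAPLOID_HUMAN_GENOME_PG = 3.3  # widely used approximation for the ~3.2 Gb genome

def genome_copies(mass_ng, amplifiable_fraction=1.0):
    """Approximate number of haploid genome equivalents in a DNA mass."""
    return mass_ng * 1000.0 * amplifiable_fraction / HAPLOID_HUMAN_GENOME_PG

nominal = genome_copies(100.0)        # 100 ng on paper: ~30,000 copies
usable = genome_copies(100.0, 0.01)   # <1% amplifiable: ~300 copies
print(f"nominal: {nominal:.0f} copies, amplifiable: {usable:.0f} copies")
```

    Roughly 300 intact templates is a small enough number that partition-counting statistics, rather than a fluorescence growth curve, become the more reliable readout.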

    “Current qPCR technology depends on real-time fluorescence accumulation as the PCR is occurring, which can be an effective means of detecting and quantifying DNA targets in nondegraded samples,” commented Dawne Shelton, Ph.D., staff scientist, Digital Biology Center, Applications Development Group, Bio-Rad Laboratories. “Amplification efficiency is critical; if that amplification efficiency changes because of sample quality it is hidden in the qPCR methodology.”

    “In ddPCR, that is a big red flag,” Dr. Shelton continued. “It changes the format of how the data look immediately so you know the amount of inhibition and which samples are too inhibited to use.”

    Tissue types vary and contain different degrees of fat or other content that can also act as PCR inhibitors. In blood monitoring, the small circulating fragments of DNA are extremely degraded; in addition, food, supplements, or other compounds ingested by the patient may have an inhibitory effect.

    Clinical labs test for these variabilities and clean the blood, but remnant PCR inhibitors can remain. In ddPCR, a single template is partitioned into a droplet. If the droplet contains a good template, it produces a signal; otherwise, it does not—a simple yes or no answer.

    “Even if there is no PCR inhibition, most clinical samples yield very small amounts of nucleic acid,” Dr. Shelton added. “To make a secure decision using qPCR is difficult because you are in a gray zone at the very end of its linear range. ddPCR operates best with small sample amounts and provides good statistics for confidence in your results.”

    Currently, at least a half-dozen clinical trials worldwide are using digital PCR; half of them use the Bio-Rad QX200 Droplet Digital PCR system. Examples of studies include BCR-ABL monitoring in patients with chronic myelogenous leukemia (CML); identification of activating mutations in the epidermal growth factor receptor (EGFR) gene for first-line therapy with new drugs in patients with lung cancer; and monitoring of resistance mutations such as EGFR T790M in patients with non-small cell lung cancer (NSCLC).

    Clovis Oncology used a technology called BEAMing (Beads, Emulsions, Amplification, and Magnetics), a type of digital PCR for blood-based molecular testing, to perform EGFR testing on almost 250 patients in clinical trials. In BEAMing, individual EGFR gene copies from plasma are separated into individual water droplets in a water-in-oil emulsion. The gene copies are then amplified by PCR on magnetic beads.

    The beads are counted by flow cytometry using fluorescently labeled probes to distinguish mutant beads from wild-type. Because each bead can be traced to an individual EGFR molecule in the patient’s plasma, the method is highly quantitative.

    “BEAMing is particularly well-suited for the detection of known mutations in circulating tumor DNA. In this circumstance, the mutation of interest often occurs at low levels, perhaps only 1–2 copies per milliliter or even less, and in a high background of wild-type DNA that comes from normal tissue. BEAMing can detect one mutant molecule in a background of 5,000 wild-type molecules in clinical samples,” stated Andrew Allen, MRCP, Ph.D., chief medical officer, Clovis Oncology.
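    The sensitivity figure in that statement is easier to appreciate as sampling arithmetic: at one mutant per 5,000 wild-type molecules, detection is limited by how many molecules the assay actually counts. A minimal sketch, using the Poisson probability of drawing zero mutant molecules (all inputs illustrative):

```python
import math

def prob_missed(mutant_fraction, molecules_counted):
    """Poisson probability that a sample of `molecules_counted` molecules
    contains zero mutants when the true mutant fraction is `mutant_fraction`."""
    return math.exp(-mutant_fraction * molecules_counted)

maf = 1 / 5000  # one mutant per 5,000 wild-type molecules (0.02%)
for n in (5_000, 15_000, 50_000):
    print(f"counting {n:>6} molecules: P(miss) = {prob_missed(maf, n):.4f}")
```

    Counting 5,000 molecules still misses the mutant about 37% of the time; counting 50,000 drops the miss rate below 0.01%, which is why bead- and droplet-based methods work so hard to interrogate large numbers of individual molecules.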

    In the studies, the EGFR-resistance mutation T790M could be identified in plasma 81% of the time that it was seen in the matched patient tumor biopsy. Additionally, about 10% of patients in the study had a T790M mutation in plasma that was not identified in tissue, presumably because of tumor heterogeneity. Another 5–10% of the patients did not provide an EGFR result, usually because the tissue biopsy had no tumor cells.

    In aggregate, these results suggest that plasma EGFR testing can be a valuable complement to tumor testing in the clinical management of NSCLC patients, and can provide an alternative when a biopsy is not available. Tumor biopsies may provide only limited tissue, if in fact any tissue is available, for molecular analysis. Also, mutations may be missed due to tumor heterogeneity. These mutations may be captured by sampling the blood, which acts as a reservoir for mutations from all parts of a patient’s tumor burden.

    In the last few years, a panoply of clinically actionable driver mutations has been identified for NSCLC, including mutations in EGFR, BRAF, and HER2, as well as ALK, ROS, and RET rearrangements. Over the next few years, these driver mutations will push NSCLC molecular diagnostic testing toward panel testing of relevant cancer genes using various digital technologies, including next-generation sequencing.

     

PCR Has a History of Amplifying Its Game

A GEN 35th Anniversary Retrospective


PCR is a fast and inexpensive technique used to amplify segments of DNA that continues to adapt and evolve for the demanding needs of molecular biology researchers. This diagram shows the basic principles of PCR amplification. [NHGRI]

  • The influence that the polymerase chain reaction (PCR) has had on modern molecular biology is nothing short of remarkable. This technique, which is akin to molecular photocopying, has been the centerpiece of everything from the OJ Simpson Trial to the completion of the Human Genome Project. Clinical laboratories use this DNA amplification method for infectious disease testing and tissue typing in organ transplantation. Most recently, with the explosion of the molecular diagnostics field and meteoric rise in the use of next-generation sequencing platforms, PCR has enhanced its standing as an essential pillar of genomic science.

    Let’s open the door to the past and take a look back around 35 years ago when GEN started reporting on the relatively new disciplines of genetic engineering and molecular biology. At that time, GEN was among the first to hear the buzz surrounding a new method to synthesize and amplify DNA in the laboratory. In reviewing the fascinating history of PCR, we will see how the molecular diagnostics field took shape and where it could be headed in the future.

  • Some Like It Hot

    The biological sciences rarely advance in a vacuum—rather, they rely on previous discoveries to advance our understanding, directly or indirectly. The contributions that supplied the functional pieces of PCR were numerous and spread out over more than two decades.

    It began with H. Gobind Khorana’s advances in understanding the genetic code, which led to the use of synthetic DNA oligonucleotides; continued through Kjell Kleppe’s 1971 vision of a two-primer system for replicating DNA segments; and culminated in Frederick Sanger’s method of DNA sequencing—a process that would win him the Nobel Prize in 1980—which utilized DNA oligo primers, nucleotide precursors, and a DNA synthesis enzyme.

    All of these discoveries were essential to PCR’s birth, yet it would be an egregious mistake to begin a retrospective on PCR without discussing the enzyme upon which the entire reaction hinges—DNA polymerase. In 1956, Nobel laureate Arthur Kornberg and his colleagues discovered DNA polymerase (Pol I) in Escherichia coli. Moreover, the researchers described the fundamental process by which the polymerase enzyme copies the base sequence of a DNA template strand. However, it would take biologists another 20 years to discover a version of DNA polymerase stable enough for meaningful laboratory use.

    That discovery came in 1976, when a team of researchers from the University of Cincinnati described the activity of a DNA polymerase (Taq) they isolated from the extreme thermophile bacterium Thermus aquaticus, which lives in hot springs and hydrothermal vents. The fact that this enzyme could withstand typical protein-denaturing temperatures and function optimally around 75–80°C fortuitously set the stage for the development of PCR.

    By 1983, all of the ingredients to bake the molecular cake were sitting in the biological cupboard, waiting to be assembled in the proper order. At that time, Nobel laureate Kary Mullis was working as a scientist for the Cetus Corporation, trying to perfect oligonucleotide synthesis. Mullis stumbled upon the idea of amplifying segments of DNA using multiple rounds of replication and a two-primer system—essentially modifying and expanding upon Sanger’s sequencing reaction. Mullis realized that the temperatures for each step (melting, annealing, and extension) of the reaction would need to be painstakingly controlled by hand. In addition, because the reactions used a non-thermostable DNA polymerase, fresh enzyme would need to be “spiked in” after each successive cycle.
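    The payoff of all that cycling is exponential: each melt–anneal–extend cycle can at best double every template present. A back-of-the-envelope sketch (assuming idealized efficiency, which real reactions only approach):

```python
def pcr_yield(start_copies, cycles, efficiency=1.0):
    """Idealized PCR yield: each cycle multiplies the copy number by
    (1 + efficiency), where efficiency = 1.0 means perfect doubling."""
    return start_copies * (1 + efficiency) ** cycles

print(f"{pcr_yield(1, 30):.2e}")       # one template, 30 cycles: ~1.1e9 copies
print(f"{pcr_yield(1, 30, 0.9):.2e}")  # at 90% efficiency: ~2.3e8 copies
```

    A single template becomes roughly a billion copies in 30 cycles, which is why even trace amounts of DNA become detectable.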

    Mullis’ hard work and persistence paid off: the reaction successfully amplified a segment of DNA flanked by two opposing nucleotide primers. Two years later, the Cetus team presented their work at the annual meeting of the American Society for Human Genetics, and the first mention of the method was published in Science that same year. That article, however, did not go into detail about the specifics of the newly developed PCR method—the dedicated methods paper would be rejected by roughly 15 journals and would not be published until 1987.

    Although scientists were a bit slow on the uptake for the new method, the researchers at Cetus kept developing ways to improve upon the original assay. In 1986, the scientists replaced the original heat-labile DNA polymerase with the temperature-resistant Taq polymerase, removing the need to spike in enzyme and dramatically reducing errors while increasing sensitivity. A year later, PerkinElmer launched a thermal cycler, allowing scientists to regulate the heating and cooling steps of the PCR reaction with greater efficiency.

    Soon after the introduction of Taq and the launch of the thermal cycler, the use of PCR exploded among research laboratories. The technique not only vaulted molecular biology to the pinnacle of researcher interest, it also launched a molecular diagnostics revolution that continues today and shows no signs of slowing down.

  • Molecular Workhorse

    In the years since PCR first burst onto the scene, there have been a number of significant advancements that have widely improved the overall method. For example, in 1991, a new DNA polymerase from the hyperthermophilic archaeon Pyrococcus furiosus, or Pfu, was introduced as a high-fidelity alternative to Taq. Unlike Taq polymerase, Pfu has built-in 3′ to 5′ exonuclease proofreading activity, which allows the enzyme to correct nucleotide incorporation errors on the fly—dramatically increasing base specificity, albeit at a reduced rate of amplification versus Taq.

    In 1995, two advancements were introduced to PCR users. The first, called antibody “hot-start” PCR, utilized an immunoglobulin molecule directed against the DNA polymerase that inhibits its activity until the first 95°C melt stage denatures the antibody and allows the polymerase to become active. Although this process was effective in increasing the specificity of the PCR reaction, many researchers found that the technique was time-consuming and often caused cross-contamination of samples.

    The second innovation introduced that year began another revolution for molecular biology and the PCR method. Real-time PCR, or quantitative PCR (qPCR), allowed researchers to monitor amplification as it occurs through specifically incorporated fluorescent reporter dyes and—paired with the reverse-transcriptase enzyme—to quantify RNA transcripts. The technique is still widely used by researchers to monitor gene expression with great accuracy. Over the past 20 years, many companies have invested heavily in R&D to create more accurate, higher-throughput, and simpler qPCR machines to meet researcher demand.

    With the advent of next-generation sequencing—and the growing share of researcher attention it commanded—PCR machines and methods needed to evolve and modernize to keep pace. PCR remained the linchpin of almost all the next-generation sequencing workflows that came along, but the traditional technique wasn’t nearly as precise as required.

    Digital PCR (dPCR) was introduced as a refinement of the conventional method, with the first real commercial system emerging around 2006. dPCR can be used to directly quantify and clonally amplify DNA or RNA.

    The apparatus carries out a single reaction on a sample that has been separated into a large number of partitions, with the reaction performed in each partition individually—allowing a more reliable measurement of nucleic acid content. Researchers often use this method for studying gene-sequence variations, such as copy number variants (CNVs), point mutations, rare-sequence detection, and microRNA analysis, as well as for routine amplification of next-generation sequencing samples.

  • Future of PCR: Better, Faster, Stronger!

    It is almost impossible to envision a future laboratory that wouldn’t utilize PCR in some fashion, especially given next-generation sequencing’s heavy reliance on accurately PCR-prepared samples—and, at the very least, the method’s value as a simple amplification tool for creating DNA fragments of interest.

    Yet there is at least one new next-generation sequencing technique that can identify native DNA sequences without an amplification step—nanopore sequencing. Although this technique has performed well in many preliminary trials, it is in its relative infancy and will probably require several more years of development before it approaches large-scale adoption. Even then, PCR has become so ingrained in daily laboratory life that trying to phase out the technique would be like asking molecular biologists to give up their pipettes or restriction enzymes.

    Most PCR equipment manufacturers continue to seek ways to improve the speed and sensitivity of their thermal cyclers, while biologists continue to look toward ways to genetically engineer better DNA polymerase molecules with even greater fidelity than their naturally occurring cousins. Whatever the new advancements are, and wherever they lead the life sciences field, you can count on us at GEN to continue to provide our readers with detailed information for another 35 years … at least!

     

Read Full Post »

Next Generation Sequencing in Clinical Laboratory, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

Next Generation Sequencing in Clinical Laboratory

Curator: Larry H. Bernstein, MD, FCAP

INSIGHTS on Next-Generation Sequencing

Next-generation sequencing (NGS) brings scalability and sensitivity to diagnostics in ways that traditional DNA analysis could not

Enabling Technology for Diagnosis, Prognosis, and Personalized Medicine

Significantly higher speed, lower cost, smaller sample size, and higher accuracy compared with conventional Sanger sequencing make next-generation sequencing (NGS) an attractive platform for medical diagnostics. By practically eliminating cost and time barriers, NGS allows testing of virtually any gene or genetic mutation associated with diseases.

Scalability and Sensitivity

NGS brings scalability and sensitivity to diagnostics in ways that traditional DNA analysis could not. “NGS analyzes hundreds of gene variants or biomarkers simultaneously. Traditional sequencing is better suited for analysis of single genes or fewer than 100 variants,” notes Joseph Bernardo, president of next-generation sequencing and oncology at Thermo Fisher Scientific (Waltham, MA).


Thermo Fisher’s Oncomine Focus Assay for NGS, for example, analyzes close to 1,000 biomarkers—about 1,000 different locations across a 52-gene panel that correlate with the efficacy of certain drugs. The assay allows single-workflow concurrent analysis of DNA and RNA, enabling sequencing of 35 hot-spot genes, 19 genes associated with copy number gain, and 23 fusion genes.

NGS is also better suited to detect lower levels of variants present in heterogeneous material, such as tumor samples. And while both NGS and Sanger sequencing are versatile, NGS can analyze both DNA and RNA, including RNA fusions, at a much more cost-efficient price point.

“When interrogating a limited number of analytes, Sanger sequencing is the standard for many laboratory-developed tests, offering fast turnaround times and lower cost than NGS,” Bernardo says. “We view the two methods as complementary.”

Diagnostic NGS is moving inexorably toward targeted sequencing, particularly for tumor analysis. The targets are specific regions within a tumor’s DNA or individual genes, or specific locations on single genes.

“Targeted sequencing lends itself to diagnostic testing, particularly in oncology, as the goal is to analyze multiple genes associated with cancer using a platform that offers high sensitivity, reliability, and rapid turnaround time,” Bernardo tells Lab Manager. “It is simply more cost-effective.”

That is why the National Cancer Institute (NCI) chose Thermo Fisher’s Ion Torrent sequencing system and the Oncomine reagents for NCI-MATCH, the most ambitious trial to date of NGS oncology diagnostics.

NCI-MATCH will use a 143-gene panel to test submitted tumor samples at four centers (NCI, MD Anderson Cancer Center, Massachusetts General Hospital, and Yale University). The centers then provide sequencing data that helps direct appropriate treatments.

The NCI test protocol ensures consistency across multiple instruments and sites.

Personalized Treatments

Another great opportunity for NGS-based diagnostics is in personalized or precision medicine for both new and existing drugs. Companion diagnostics—co-approved with the relevant drug—drive this entire business. “The only way personalized medicine can succeed commercially is if pharmaceutical companies incorporate a universal assay philosophy in their development programs instead of developing a unique assay for each new drug,” Bernardo explains. For example, in late 2015, Thermo Fisher partnered with Pfizer and Novartis to develop a universal companion diagnostic with the goal of identifying personalized therapy selection from a menu of drugs targeting non-small-cell lung cancer, which annually causes more deaths than breast, colon, and prostate cancer combined.

While advances in sequencing have been remarkable in recent years, the eventual success of NGS-based diagnostics will not depend on instrumentation alone. “What [ensures] ease of use and commonality of results is the cohesiveness of the entire workflow, from sample prep to rapid sequencing systems and bioinformatics,” Bernardo says. “Those components working together will drive NGS into a realizable solution for the clinical market.”

In addition to confirming a disease condition (diagnosis), NGS also provides valuable information on disease susceptibility, prognosis, and the potential effect of drugs on individual patients. The latter idea, known as precision medicine or personalized medicine, uses an individual’s molecular profile to guide treatment. The idea is to differentiate diseases into subtypes based on molecular (usually genetic) characteristics and tailor therapies accordingly.

Precision medicine is still in its infancy, but dozens of pharmaceutical, diagnostics, and genetics firms have bought into the idea.

“We are just at the beginning of connecting genomic and genetic information to target specific therapies for patients,” says T.J. Johnson, president and CEO of HTG Molecular Diagnostics (Tucson, AZ). “Precision medicine will have a bright future as we gain better understanding of the root causes of disease.”

In 2013, HTG commercialized its HTG Edge instrument platform and a portfolio of RNA assays, which fully automate the company’s proprietary nuclease protection chemistry. This chemistry measures mRNA and miRNA gene expression levels from very small quantities of difficult-to-handle samples.

HTG entered the NGS market in 2014 with the launch of the first HTG EdgeSeq product, an assay that targets and digitally measures the expression of more than 2,000 microRNAs. The assay utilizes the HTG Edge for sample and library preparation, and it uses a suitable NGS instrument (from either Illumina or Thermo Fisher) for quantitation. The data is imported back into the HTG EdgeSeq instrument for analytics and reporting.

In 2015, the company launched four additional HTG EdgeSeq panels: immuno-oncology and pan-oncology biomarker panels, a lymphoma profiling panel, and a classifier for subtyping diffuse large B-cell lymphomas (DLBCL).

Eliminating Biopsies?

Traditional biopsies for tumor DNA analysis are invasive, risky, and often impossible to obtain, and they may not uncover the heterogeneity often present in tumors. It was recently discovered that dying tumor cells release small pieces of DNA into the bloodstream. This cell-free circulating tumor DNA (ctDNA) is detectable in samples through NGS.

In September 2015, Memorial Sloan Kettering Cancer Center (MSK) and NGS leader Illumina (San Diego, CA) entered a collaboration to study ctDNA for cancer diagnosis and monitoring. The aim is to establish ctDNA as a relevant cancer biomarker.

Heterogeneity as it pertains to cancer traditionally refers to multiple tissues located within a tumor, as determined histologically. A number of recent studies suggest that tumor heterogeneity occurs at the genetic level as well. “In particular, there appears to be a tremendous variety of sequence variants within the same tumor, even resulting in situations where one tumor can have multiple mutated genes that have been demonstrated to drive cancer,” says John Leite, PhD, vice president, oncology—market development and product marketing at Illumina.

Heterogeneity challenges the search for treatments that target a specific gene product or pathway. Once the patient is treated, biopsies tell very little about how that patient is responding. “Our hope is that ctDNA provides clinicians with a real-time measure of the abundance of those mutated genes and that a decrease in the relative abundance is synonymous with a lower tumor burden,” Leite adds.

Clinical trials are needed to demonstrate that patients whose therapy was selected using ctDNA versus traditional tissue biopsy testing had a significantly improved outcome or that the analysis might be informative for prognosis.

What about cancer cells that do not release DNA? “Studies show that tumors from different organs or tissues release more or less ctDNA into the peripheral blood,” Leite explains, “but in general the possibility that some cells might not release ctDNA is an open area of research.”

For the MSK-Illumina collaboration, the cancer center will collect samples, and Illumina will apply its sequencing technology to detect ctDNA in those samples. The big draw here is the potential to reduce the number of invasive, expensive diagnostic and monitoring procedures with a simple blood test. This would not be possible without deep next-generation sequencing—the genomics vernacular for sequencing at great depths of coverage.

“Whereas sequencing to identify germline variants can be performed at a nominal depth of coverage—for example, reading a DNA strand 30 times—sequencing rare variants such as in ctDNA requires a much higher level of sensitivity, which is driven by increasing depth of coverage [as much] as 25,000 times,” Leite tells Lab Manager.
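Those depth figures translate directly into expected read counts: at a given depth of coverage, a variant present at fraction f contributes about depth × f supporting reads. A quick sketch with illustrative numbers shows why germline-style depths cannot see rare ctDNA variants:

```python
def expected_variant_reads(depth, variant_fraction):
    """Expected number of sequencing reads supporting a variant
    present at `variant_fraction` when sequenced to `depth` coverage."""
    return depth * variant_fraction

for depth in (30, 1000, 25000):
    reads = expected_variant_reads(depth, 0.001)  # 0.1% variant fraction
    print(f"depth {depth:>6}x -> ~{reads:g} supporting reads")
```

At 30x, a 0.1% variant yields about 0.03 supporting reads, indistinguishable from sequencing error; at 25,000x it yields around 25 reads, enough to call with statistical confidence.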

In addition to the Illumina MSK collaboration and the work of Thermo Fisher Scientific described above, many more studies involving research consortia and pharmaceutical companies are under way.

“This is a really exciting time for oncology,” Leite says.

Reducing Sample Size

Similarly, in November 2015, Circulogene Theranostics (Birmingham, AL) launched its cfDNA (cell-free DNA) liquid biopsy products for testing ten tumor types, including breast, lung, and colon cancers. The company claims the ability to enrich circulating cfDNA from a single drop of blood.

“While all liquid biopsy companies are focusing on the downstream novel technologies to selectively enrich or amplify tumor-specific cfDNA from a dominantly normal population, the upstream 40 to 90 percent material loss during cfDNA extraction leads to potential false negative results of cancer mutation detection,” explains Chen Yeh, Circulogene’s chief scientific officer. “This is why 10 to 20 mL of blood [are] generally required for conventional cfDNA liquid biopsies.”


Released cfDNA fragments often complex with proteins and lipids, which shift their densities to values much lower than those of pure DNA or protein while protecting the corresponding cfDNA from attack by circulating nucleases. Circulogene’s cfDNA breakthrough concentrates and enriches these genetic fragments through density fractionation followed by enzyme-based DNA modification and manipulation, eliminating extraction-associated loss. The technology ensures near-full recovery of both small-molecular-weight (apoptotic cell death) and high-molecular-weight (necrotic cell death) cell-free DNA species from droplet volumes of plasma, serum, or other body fluids.

“The 50-gene panel is our first offering,” says Yeh. “We will continue to develop and cover more comprehensive, informative, and actionable genes and tests.”

The current bottleneck in personalized and precision medicine is the severe shortage of anticancer drugs. Yeh provides perspective, saying, “We have about 60 FDA-approved drugs for cancer-targeted therapies on market, while there are approximately 150 cancer driver genes to aim for. If counting all mutations within these 150 genes, the numbers will be overwhelming.”

Circulogene’s cell-free DNA enrichment technology may be followed up with NGS, conventional Sanger sequencing, or any DNA assay based on PCR or mass spectrometry. However, the sensitivity of Sanger sequencing is insufficient for detecting variants present at frequencies below 15 percent. Moreover, the company’s multiplex NGS liquid biopsy test captures and monitors real-time, longitudinal tumor heterogeneity or tumor clonal evolution, which goes well beyond testing a single mutation in a single sample with traditional sequencing.

 

Gene Editing Casts a Wide Net 

With CRISPR, Gene Editing Can Trawl the Murk, Catching Elusive Phenotypes amidst the Epigenetic Ebb and Flow

http://www.genengnews.com/gen-articles/gene-editing-casts-a-wide-net/5713/

  • Genome editing, a much-desired means of accomplishing gene knockout, gene activation, and other tasks, once seemed just beyond the reach of most research scientists and drug developers. But that was before the advent of CRISPR technology, an easy, versatile, and dependable means of implementing genetic modifications. It is in the process of democratizing genome editing.

    CRISPR stands for “clustered, regularly interspaced, short palindromic repeats,” segments of DNA that occur naturally in many types of bacteria. These segments function as part of an ancient immune system. Each segment precedes “spacer DNA,” a short base sequence that is derived from a fragment of foreign DNA. Spacers serve as reminders of past encounters with phages or plasmids.

    The CRISPR-based immune system encompasses several mechanisms, including one in which CRISPR loci are transcribed into small RNAs that may complex with a nuclease called CRISPR-associated protein (Cas). Then the RNA guides Cas, which cleaves invading DNA on the basis of sequence complementarity.

    In the laboratory, CRISPR sequences are combined with a short RNA complementary to a target gene site. The result is a complex in which the RNA guides Cas to a preselected target.

    Cas produces precise site-specific DNA breaks, which, with imperfect repair, cause gene mutagenesis. In more recent applications, Cas can serve as an anchor for other proteins, such as transcriptional factors and epigenetic enzymes. This system, it seems, has almost limitless versatility.
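    As a concrete illustration of how a target site is chosen in practice, the toy scan below finds 20-nucleotide protospacers sitting immediately 5′ of an NGG PAM, the motif required by the widely used S. pyogenes Cas9. The sequence is made up, and a real guide-design tool would additionally score each candidate for off-target matches across the genome.

```python
import re

def find_spcas9_sites(seq):
    """Return (position, protospacer, PAM) for every 20-nt site followed
    by an NGG PAM on the given strand; overlapping sites are reported."""
    hits = []
    # Zero-width lookahead so overlapping candidates are all captured.
    for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", seq.upper()):
        hits.append((m.start(), m.group(1), m.group(2)))
    return hits

demo = "TTGACCTGAAGGCTTACGATCCGATCAGGTTACCGATGCAGGAT"  # made-up sequence
for pos, protospacer, pam in find_spcas9_sites(demo):
    print(f"pos {pos:2d}  protospacer {protospacer}  PAM {pam}")
```

    In the laboratory workflow described above, the 20-nt protospacer sequence becomes the targeting portion of the guide RNA, while the PAM must be present in the target DNA but is not part of the guide itself.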

  • Edited Stem Cells

    The Sanger Institute Mouse Genetic Program, along with other academic institutions around the world, provides access to thousands of genetically modified mouse strains. “Genetic engineering of mouse embryonic stem (ES) cells by homologous recombination is a powerful technique that has been around since the 1980s,” says William Skarnes, Ph.D., senior group leader at the Wellcome Trust Sanger Institute.

    “A significant drawback of the ES technology is the time required to achieve germline transmission of the modified genetic locus,” he continues. “While we have an exhaustive collection of modified ES cells, only about 5,000 knockout mice—roughly a quarter of mouse genes—were derived on the basis of this methodology.”

    The dominant position of mouse ES cell engineering is now effectively challenged by CRISPR technology. Compared with the very low rates of homologous recombination in fertilized eggs, CRISPR generates high levels of mutations, and off-target effects may be so few as to be undetectable.

    “We used whole-genome sequencing to thoroughly assess off-target mutations in the offspring of CRISPR-engineered founder animals,” informs Dr. Skarnes. “A mutated Cas9 nuclease was deployed to increase specificity, resulting in nearly perfect targeting.”

    Dr. Skarnes explains that the major mouse genome centers are now switching to CRISPR to complete the creation of the world-wide repository of mouse knockouts. His own research is now focused on genetically engineered induced pluripotent stem cells (iPSCs). These cells are adult cells that have been reprogrammed to an embryonic stem cell–like state, and are thus devoid of ethical issues associated with research on human embryonic stem cells. The ultimate goal is to establish a world-wide panel of reference iPSCs created by high-throughput genetic editing of every single human gene.

    “We are poised to begin a large-scale phenotypic analysis of human genes,” declares Dr. Skarnes. His lab is releasing the first set of functional data on 100 DNA repair genes. “By knocking out individual proteins involved in DNA repair and sequencing the genomes of mutant cells,” he adds, “we hope to better understand the mutational signatures that occur in cancer.”

  • Pooled CRISPR Libraries

    Researchers hope to gain a better understanding of the mutational signatures found in cancers by using CRISPR techniques to knock out individual proteins involved in DNA repair and then sequencing the genomes of mutant cells. [iStock/zmeel]

    Connecting a phenotype to the underlying genomics requires an unbiased screening of multiple genes at once. “Pooled CRISPR libraries provide an opportunity to cast a wide net at a reasonably low cost,” says Donato Tedesco, Ph.D., lead research scientist at Cellecta. “Screening one gene at a time on genome scale is a significant investment of time and money that not everyone can afford, especially when looking for common genetic drivers across many cell models.”

    Building on years of experience with shRNA libraries, Cellecta is uniquely positioned to prepare pooled CRISPR libraries for genome-wide or targeted screens of gene families. While shRNA interferes with gene translation, CRISPR disrupts a gene at the genomic level, exploiting imperfections in the DNA repair mechanism.

    To determine whether these different mechanisms for inactivating genes affect the results of genetic screens, the team conducted a side-by-side comparison of Cellecta’s Human Genome-Wide Module 1 shRNA Library, which expresses 50,000 shRNAs targeting 6,300 human genes, with a library of 50,000 gRNAs targeting the same gene set. The concordance between the approaches was very high, suggesting that the technologies are complementary and can be used for cross-confirmation of results.

    Also, a recently completed Phase I NIH SBIR grant was used to create and test single guide RNA (sgRNA) structures to drastically improve the efficiency of gene targeting. For this work, Cellecta used a pooled library strategy to simultaneously test multiple sgRNA structures for efficiency and specificity. An early customized Cellecta pooled gRNA library was successfully utilized in a screen for epigenetic genes. This particular screen is highly dependent on a complete loss of function and could not have been accomplished by shRNA inhibition.

    Scientists from Epizyme interrogated 600 genes in a panel of 100 cell lines and, in addition to finding many epigenetic genes required for proliferation in nearly all cell lines, were able to identify and validate several essential epigenetic genes required only in subsets of cells with specific genetic lesions. In other words, pooled cell line screening was able to distinguish targets that are likely to produce toxic side effects in certain types of cancer cells from gene targets that are essential in most cells.

    “A more complicated application of CRISPR technology is to use it for gene activation,” adds Dr. Tedesco. Cellecta plans to optimize this application to bring forth highly efficient, inexpensive, high-throughput genetic screens based on its pooled libraries.

  • Chemically Modified sgRNA

    Scientists based at Agilent Research Laboratories and Stanford University worked together to demonstrate that chemically modified single guide RNA can be used to enhance the genome editing of primary hematopoietic stem cells and T cells. This image, which is from the Stanford laboratory of Matthew Porteus, M.D., Ph.D., shows CD34+ human hematopoietic stem cells that were edited to turn green. Editing involved inserting a construct for green fluorescent protein. About 1,000 cells are pictured here.

    Researchers at Agilent Technologies applied their considerable experience in DNA and RNA synthesis to develop a novel chemical synthesis method that can generate long RNAs of 100 nucleotides or more, such as single guide RNAs (sgRNAs) for CRISPR genome editing. “We have used this capability to design and test numerous chemical modifications at different positions of the RNA molecule,” said Laurakay Bruhn, Ph.D., section manager, biological chemistry, Agilent.

    Agilent Research Laboratories worked closely with the laboratory of Matthew Porteus, M.D., Ph.D., an associate professor of pediatrics and stem cell transplantation at Stanford University. The Agilent and Stanford researchers collaborated to further explore the benefits of chemically modified sgRNAs in genome editing of primary hematopoietic stem cells and T cells.

    Dr. Porteus’ lab chose three key target genes implicated in the development of severe combined immunodeficiency (SCID), sickle cell anemia, and HIV transmission. Editing these genes in the patient-derived cells offers an opportunity for novel precision therapies, as the edited cells can renew, expand, and colonize the donor’s bone marrow.

    Dr. Bruhn emphasized the importance of editing specificity, so that no other cellular function is affected by the change. The collaborators focused on three chemical modifications strategically placed at each end of sgRNAs that Agilent had previously tested to show they maintained sgRNA function. A number of other optimization strategies in cell culturing and transfection were explored to ensure high editing yields.

    “Primary cells are difficult to manipulate and edit in comparison with cell lines already adapted to cell culture,” maintains Dr. Bruhn. Widely varied cellular properties of primary cells may require experimental adaptation of editing techniques for each primary cell type.

    The resulting data showed that chemical modifications can greatly enhance efficiency of gene editing. The next step would translate these findings into animal models. Another advantage of chemical synthesis of RNA is that it can potentially be used to make large enough quantities for therapeutics.

    “We are working with Agilent’s Nucleic Acid Solution Division—a business focused on GMP manufacturing of oligonucleotides for therapeutics—to engage with customers interested in this capability and better understand how we might be able to help them accomplish their goals,” says Dr. Bruhn.

  • Customized Animal Models

    “Given their gene-knockout capabilities, zinc-finger-based technologies and CRISPR-based technologies opened the doors for creation of animal models that more closely resemble human disease than mouse models,” says Myung Shin, Ph.D., senior principal scientist, Merck & Co. Dr. Shin’s team supports Merck’s drug discovery and development program by creating animal models mimicking human genetics.

    For example, Dr. Shin’s team has worked with the Dahl salt-sensitive strain of rats, a widely studied model of hypertension. “We used zinc-finger nucleases to generate a homozygous knockout of a renal outer medullary potassium channel (ROMK) gene,” elaborates Dr. Shin. “The resulting model represents a major advance in elucidating the role of ROMK gene.”

    According to Dr. Shin, the model may also provide a bridge between genetics and physiology, particularly in studies of renal regulation and blood pressure. In one study, the model generated animal data that suggest ROMK plays a key role in kidney development and sodium absorption. Work along these lines may lead to a pharmacological strategy to manage hypertension.

    In another study, the team applied a zinc-finger nuclease strategy to knock out coagulation Factor XII and thoroughly characterized the resulting animals in thrombosis and hemostasis studies. The results confirmed and extended previous literature findings suggesting Factor XII as a potential target for antithrombotic therapies that carry minimal bleeding risk. The model can be further utilized to study the safety profiles and off-target effects of novel Factor XII inhibitors.

    “We use one-cell embryos to conduct genome editing with zinc-fingers and CRISPR,” continues Dr. Shin. “The ease of this genetic manipulation speeds up generation of animal models for testing of various hypotheses.”

    A zinc finger–generated knockout of the multidrug resistance protein MDR 1a P-glycoprotein became an invaluable tool for evaluating drug candidates for targets located in the central nervous system. For example, it demonstrated utility in pharmacological analyses.

    Dr. Shin’s future research is directed toward preclinical animal models that would contain specific nucleotide changes corresponding to those of humans. “CRISPR technology,” insists Dr. Shin, “brings an unprecedented power to manipulate genome at the level of a single nucleotide, to create gain- or loss-of-function genetic alterations, and to deeply understand the biology of a disease.”

  • Transcriptionally Active dCas9

    “Epigenome editing is important for several reasons,” says Charles Gersbach, Ph.D., an associate professor of biomedical engineering at Duke University. “It is a tool that helps us answer fundamental questions about biology. It advances disease modeling and drug screening. And it may, in the future, serve as mode of genetic therapy.”

    “One part of our research focuses on studying the function of epigenetic marks,” Dr. Gersbach continues. “While many of these marks are catalogued, and some have been associated with certain gene-expression states, the exact causal link between these marks and their effect on gene expression is not known. CRISPR technology can potentially allow for targeted, direct manipulation of each epigenetic mark, one at a time.”

    Dr. Gersbach’s team mutated the Cas9 nuclease to create deactivated Cas9 (dCas9), which is devoid of endonuclease activity. Although the dCas9 protein lacks catalytic activity, it may still serve as an anchor for a plethora of other important proteins, such as transcription factors and methyltransferases.

    In an elegant study, Dr. Gersbach and colleagues demonstrated that recruitment of a histone acetyltransferase by dCas9 to a genomic site activates nearby gene expression. Moreover, the activation occurred even when the acetyltransferase domain was targeted to a distal enhancer. Similarly, recruitment of KRAB repressor to a distant site silenced the target gene in a very specific manner. These findings support the important role of three-dimensional chromatin structures in gene activation.

    “Genome regulation by epigenetic markers is not static,” maintains Dr. Gersbach. “It responds to changes in the environment and other stimuli. It also changes during cell differentiation. We designed an inducible system providing us with an ability to execute dynamic control over the target genes.”

    In a light-activated CRISPR-Cas9 effector (LACE) system, blue light may be used to control the recruitment of the transcriptional activator VP64 to target DNA sequences. The system has been used to provide robust activation of four target genes with only minimal background activity. Selective illumination of culture plates created a pattern of gene expression in a population of cells, which could be used to mimic what is observed in natural tissues.

    Together with collaborators at Duke University, Dr. Gersbach intends to carry out the high-throughput analysis of all currently identified regulatory elements in the genome. “Our ultimate goal,” he declares, “is to assign function to all of these elements.”

Read Full Post »

World’s First 3D-printed ‘Sneezeometer’ Will Help Asthma Patients

Reporter: Irina Robu, PhD

Researchers at the University of Surrey have developed the world’s first sneezeometer, an airflow sensor sensitive enough to measure the speed of a sneeze, which could help diagnose diverse respiratory conditions twice as fast. Current devices are expensive and lack sensitivity in difficult diagnostic situations.

Surrey’s sneezeometer is ultra-sensitive and measures the flow rate of air through a patient’s lungs. It is fast and sensitive enough to pick up tiny fluctuations in the breath’s flow rate as the patient breathes through the instrument. Following its development, researchers are now exploring its diagnostic capabilities.

Dr. Birch, of the University of Surrey’s Aerodynamics and Environmental Flow Research Group, explained, “Breathing disorders are highly prevalent in both the developed and developing world.” The diagnosis and monitoring of respiratory diseases is crucial to proper treatment. The sneezeometer is a simple, low-cost, and non-intrusive diagnostic solution that will make doctors’ lives easier.

The device is currently being used in clinical trials at King’s College Hospital in London to help diagnose a wide range of conditions, from neonatal settings through to animal diseases. The sensitivity of its airflow detection makes it very useful from both research and clinical perspectives.

Surrey’s researchers envision that the new device could be in clinical service as soon as 2018 and will have a real impact on the lives of patients living with chronic illnesses. The device will make diagnosis more accurate, faster, and cheaper.

Source

http://www.prnewswire.com/news-releases/worlds-first-3d-printed-sneezeometer-will-help-asthma-patients-breathe-easy-300229900.html

Read Full Post »

Signaling Stiffness Changes

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Sound Waves Levitate Cells to Detect Disease-Signaling Stiffness Changes

Wed, 11/04/2015  Acoustical Society of America (ASA)

http://www.mdtmag.com/news/2015/11/sound-waves-levitate-cells-detect-disease-signaling-stiffness-changes

 

Utah Valley University physicists are literally applying rocket science to the field of medical diagnostics. With a few key changes, the researchers used a noninvasive ultrasonic technique originally developed to detect microscopic flaws in solid fuel rockets, such as space shuttle boosters, to successfully detect cell stiffness changes associated with certain cancers and other diseases.

Brian Patchett, a research assistant and instructor within the Department of Physics at Utah Valley University, will describe the group’s method, which uses sound waves to manipulate and probe cells, during the Acoustical Society of America’s Fall 2015 Meeting, held Nov. 2-6, in Jacksonville, Fla.

The method combines a low-frequency ultrasonic wave to levitate the cells and confine them to a single layer within a fluid and a high-frequency ultrasonic wave to measure the cell’s stiffness.

 

http://www.mdtmag.com/sites/mdtmag.com/files/styles/content_body_image/public/embedded_image/2015/11/levitating-cells-sound.jpg

Basic setup of the group’s acoustic levitation device during normal use, with an ultrahigh-frequency pulser taking readings of the monolayer. (Credit: Brian Patchett/Utah Valley University)

 

“An acoustic wave is a pressure wave so it travels as a wave of high and low pressure. By trapping a sound wave between a transducer — such as a speaker — and a reflective surface, we can create a ‘standing wave’ in the space between,” explained Patchett. “This standing wave has stationary layers of high and low pressure, a.k.a. ‘anti-nodes,’ and areas, ‘the nodes’ where the pressure remains the same.”

This standing wave allowed the group to acoustically levitate the cells and isolate them in a manner similar to their natural state—as they would be within human tissue or the bloodstream. Previous work in this realm relied on “growing the cell cultures in a Petri dish, which tends to deform the structure, as well as create all sorts of interference,” Patchett said.
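The spacing of those stationary layers follows from the relationship between frequency and wavelength: adjacent pressure nodes sit half a wavelength apart, so node spacing = speed of sound / (2 × frequency). The sketch below runs that arithmetic for sound in water; the frequencies are illustrative, since the article does not state the group's operating values.

```python
SPEED_OF_SOUND_WATER = 1480.0  # m/s, approximate value at room temperature

def node_spacing_mm(frequency_hz, sound_speed=SPEED_OF_SOUND_WATER):
    """Distance between adjacent pressure nodes of a standing wave, in mm.
    Nodes are half a wavelength apart: spacing = v / (2 * f)."""
    return sound_speed / (2.0 * frequency_hz) * 1000.0

for f_mhz in (0.5, 1.0, 2.0):
    print(f"{f_mhz:.1f} MHz -> node spacing {node_spacing_mm(f_mhz * 1e6):.2f} mm")
```

Sub-millimeter node spacing at megahertz frequencies is what makes it possible to confine suspended cells to a single thin layer.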

The significance of the group’s work is that it focuses on an unexplored method of measuring the properties of cells and how they change during the process of cancer and disease development. “The stiffness of the cell is the primary change detected with our high-frequency ultrasound; it reveals detailed information about the internal structure of the cell and how it changes in certain diseases,” Patchett said.

The group’s method can also help distinguish between different types of cancer — such as aggressive breast cancer vs. less aggressive forms. “By isolating the cells in a monolayer of fluid via acoustic levitation, we’re providing a better method for the detection of cell stiffness,” Patchett said. “This method can be used to explore the aspect of cells that changes during Alzheimer’s disease, the metastasis of cancer, or during the onset of autoimmune responses to better understand these conditions and provide insight into possible treatment methods.”

 

http://www.mdtmag.com/sites/mdtmag.com/files/styles/content_body_image/public/featured_image/2015/11/levitating-cells-sound2.jpg

Photo of the layers created by sine waves. (Credit: Brian Patchett/Utah Valley University)

 

One of the group’s key findings is that “by manipulating the shape of the wave that we use for levitation in specific ways, we’re able to create more precise, sharply defined layers,” Patchett said. “And, borrowing from previous cell culture work done by our group, our high-frequency ultrasound method detects changes within the stiffness of cells with high accuracy. By isolating the cells in a levitated monolayer, we hope to see these changes more clearly so that we can gain a better understanding of what’s happening within the cell and why.”

What kinds of applications might this method find? “It’s a really fantastic research method for exploring autoimmune disorders,” pointed out Timothy Doyle, lead scientist on the project and an assistant professor of physics at Utah Valley University.

As far as other applications, the group’s method may find use in clinics, hospitals, and surgical centers as a way to immediately detect and characterize cancer or other diseases.

“Our method identifies aggressive types of breast cancer, for example, while in the operating room,” Patchett noted. “Faster than current pathology methods, it will enable doctors to ensure speedier assessments and more effective treatment plans for patients — personalized to their specific needs, which, in turn, will end up being more cost effective in the long term.”

In the near future, the group plans to apply their method to a wide range of biological materials, including white blood cells undergoing activation, which is part of the immune response to an illness.

“We’re collaborating with the Huntsman Cancer Institute — part of the University of Utah healthcare system — to explore various types of breast tissues under levitation to refine our pathology detection methods,” Patchett said. “Our goal is to provide potentially life-saving, personalized medical treatments based on our ability to quickly and effectively detect cancers and diseases in patients.”


Renal (Kidney) Cancer: Connections in Metabolism at the Krebs Cycle and Histone Modulation

Curator: Demet Sag, PhD, CRA, GCP


Renal cell carcinoma accounts for only 3% of total human malignancies, but it is still the most common type of kidney cancer, with a high prevalence in elderly men (>60 years of age).

ICD-10: C64
ICD-9-CM: 189.0
ICD-O: M8312/3
OMIM: 144700, 605074
DiseasesDB: 11245
MedlinePlus: 000516
eMedicine: med/2002

Most kidney cancers are renal cell carcinomas (RCC). RCC lacks early warning signs, and 70% of patients with RCC develop metastases. Among them, the 50% of patients with skeletal metastases face a dismal survival of less than 10% at 5 years.

There are three main histopathological entities:

  1. Clear cell RCC (ccRCC), the dominant histology (65%),
  2. Papillary RCC (15–20%), and
  3. Chromophobe RCC (5%).

Very rare forms of RCC are seen in collecting duct, mucinous tubular and spindle cell, renal medullary, and MiTF-TFE translocation carcinomas.

Subtypes of clear cell and papillary RCC, and a new subtype, clear cell papillary http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3399969/bin/nihms380694f6.jpg

Different subtypes of clear cell RCC can be defined by HIF patterns as well as by transcriptomic expression as defined by ccA and ccB subtypes. Papillary RCC also demonstrates distinct histological subtypes. A recently described variant denoted as clear cell papillary RCC is VHL wildtype (VHL WT), while other clear cell tumors are characterized by VHL mutation, loss, or inactivation (VHL MT).

KEY POINTS

  • Renal cell cancer is a disease in which malignant (cancer) cells form in tubules of the kidney.
  • Smoking and misuse of certain pain medicines can affect the risk of renal cell cancer.
  • Signs of renal cell cancer include:
    – Blood in the urine, which may appear pink, red, or cola colored
    – A lump in the abdomen
    – Back pain just below the ribs that doesn’t go away
    – Weight loss
    – Fatigue
    – Intermittent fever

 

Factors that can increase the risk of kidney cancer include:

  • Older age
  • High blood pressure (hypertension)
  • Treatment for kidney failure (long-term dialysis to treat chronic kidney failure)
  • Certain inherited syndromes, such as von Hippel-Lindau disease

Tests that examine the abdomen and kidneys are used to detect (find) and diagnose renal cell cancer.

The following tests and procedures may be used:

There are 3 treatment approaches for Renal Cancer:

Stages of Renal Cancer:

Stage I: Tumour of a diameter of 7 cm (approx. 2¾ inches) or smaller, and limited to the kidney. No lymph node involvement or metastases to distant organs.
Stage II: Tumour larger than 7.0 cm but still limited to the kidney. No lymph node involvement or metastases to distant organs.
Stage III (any of the following):
  • Tumour of any size with involvement of a nearby lymph node but no metastases to distant organs; tumour of this stage may be with or without spread to fatty tissue around the kidney, and with or without spread into the large veins leading from the kidney to the heart.
  • Tumour with spread to fatty tissue around the kidney and/or spread into the large veins leading from the kidney to the heart, but without spread to any lymph nodes or other organs.
Stage IV (any of the following):
  • Tumour that has spread directly through the fatty tissue and the fascia (ligament-like tissue) that surrounds the kidney.
  • Involvement of more than one lymph node near the kidney.
  • Involvement of any lymph node not near the kidney.
  • Distant metastases, such as in the lungs, bone, or brain.

Grade – Nuclear characteristics:
Grade I: Nuclei appear round and uniform, 10 μm; nucleoli are inconspicuous or absent.
Grade II: Nuclei have an irregular appearance with signs of lobe formation, 15 μm; nucleoli are evident.
Grade III: Nuclei appear very irregular, 20 μm; nucleoli are large and prominent.
Grade IV: Nuclei appear bizarre and multilobated, 20 μm or more; nucleoli are prominent.

 

GENETICS:

90% or more of kidney cancers are believed to be of epithelial cell origin and are referred to as renal cell carcinoma (RCC); these are further subdivided based on histology into clear-cell RCC (75%), papillary RCC (15%), chromophobe tumor (5%), and oncocytoma (5%).

Nephrectomy continues to be the cornerstone of treatment for localized renal cell carcinoma (RCC). Research is still underway to develop targeted agents against the vascular endothelial growth factor (VEGF) molecule and related pathways, as well as inhibitors of the mammalian target of rapamycin (mTOR).

Clear cell RCC (ccRCC) does not respond well to radiation or chemotherapy because of its high radiation resistance. The hallmark genetic features of solid tumors, such as KRAS or TP53 mutations, are also absent. However, there is a well-established association between ccRCC and mutations in the VHL gene.

Hereditary RCC, which accounts for around 4% of cases, has been a relatively dominant area of RCC genetics.

Causative genes have been identified in several familial cancer syndromes that predispose to RCC, including

  • VHL mutations in von Hippel-Lindau disease, which predispose to ccRCC; VHL is somatically mutated in up to 80% of ccRCC.
  • MET mutations in familial papillary renal cancer; a dominantly activating kinase-domain MET mutation is reported in 4–10% of sporadic papillary RCC [2].
  • FH (fumarate hydratase) mutations in hereditary leiomyomatosis and renal cell cancer, which predispose to papillary RCC.
  • FLCN (folliculin) mutations in Birt-Hogg-Dubé syndrome, which predispose primarily to chromophobe RCC.

In addition, there are germline mutations:

  • in the TSC1/2 genes, which predispose to tuberous sclerosis complex, in which approximately 3% of cases develop ccRCC;
  • in SDHB (succinate dehydrogenase type B) in patients with paraganglioma syndrome, who show an elevated risk of developing multiple types of RCC.

A GWAS in almost 6000 RCC cases demonstrated that loci on 2p21 and 11q13.3 play a role in RCC. The 2p21 locus harbors the EPAS1 gene, encoding a transcription factor operative in hypoxia-regulated responses, whereas 11q13.3 has no known coding genes.

There has been, however, comparatively less progress in the elaboration of the somatic genetics of sporadic RCC.

Mutations absent in sporadic RCC:

  • somatic FH mutations
  • somatic mutations of TSC1/2 and SDHB

Mutations present in sporadic ccRCC and chromophobe RCC:

  • TSC1 mutations occur in 5% of ccRCCs, and
  • somatic mutations in FLCN are rare;
  • such mutations may predict extraordinary clinical sensitivity to mTORC1 inhibitors.

The COSMIC database reports somatic point mutations in TP53 in 10% of cases, KRAS/HRAS/NRAS combined ≤1%, CDKN2A 10%, PTEN 3%, RB1 3%, STK11/LKB1 ≤1%, PIK3CA ≤1%, EGFR 1%, and BRAF ≤1% across all histological samples. Further information on RCC somatic genetics can be found at http://www.sanger.ac.uk/genetics/CGP/cosmic/.

HIF- and hypoxia-mediated epigenetic regulation converge on histone modification: HIF activates several chromatin demethylases, including JMJD1A (KDM3A), JMJD2B (KDM4B), JMJD2C (KDM4C), and JARID1B (KDM5B), all of which are directly targeted by HIF.

Overview of Histone 3 modifications implicated in RCC genetics http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3399969/bin/nihms380694f1.jpg

A number of histone-modifying genes are mutated in renal cell carcinoma. These include the H3K36 trimethylase SETD2, the H3K27 demethylase UTX/KDM6A, the H3K4 demethylase JARID1C/KDM5C, and the SWI/SNF complex component PBRM1, shown in this cartoon to represent their relative activities on histone H3.

Hypermethylation is frequently observed at RASSF1 (in ~50% of RCC) and less often at VHL and CDKN2A; methylation-associated silencing is also observed at TIMP3 and secreted frizzled-related protein 2.

RCC is one of the “ciliopathies,” along with polycystic kidney disease (PKD), tuberous sclerosis complex (TSC), and VHL syndrome. The main feature of the cysts is dysfunctional primary cilia.

Mol Cancer Res. 2012 Jul; 10(7): 859–880. Published online 2012 May 25. doi: 10.1158/1541-7786.MCR-12-0117

pVHL mutants are categorized as Class A, B and C depending on the affected step in pVHL protein quality control http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3399969/bin/nihms380694f2.jpg

VHL proteostasis involves the chaperone-mediated translocation of the nascent VHL peptide from the ribosome to the TRiC/CCT chaperonin, where folding occurs in an ATP-dependent process. The VBC complex is formed while VHL is bound to TRiC, and the mature complex is then released. Three different classes of mutation exist: Class A mutations prevent binding of VHL to TRiC and abrogate folding into a mature complex; Class B mutations prevent association of Elongins C and B with VHL; Class C mutations inhibit the interaction between VHL and HIF1α.

OMIM entries for the RCC-predisposing syndromes:

  • #193300 VON HIPPEL-LINDAU SYNDROME; VHL (VON HIPPEL-LINDAU SYNDROME, MODIFIERS OF, INCLUDED). Cytogenetic locations: 3p25.3, 11q13.3
  • #135150 BIRT-HOGG-DUBE SYNDROME; BHD. Cytogenetic location: 17p11.2
  • #191100 TUBEROUS SCLEROSIS 1; TSC1. Cytogenetic location: 9q34.13
  • #144700 RENAL CELL CARCINOMA, NONPAPILLARY; RCC (NONPAPILLARY RENAL CARCINOMA 1 LOCUS, INCLUDED). Cytogenetic locations: 3p25.3, 3q21.1, 8q24.13, 12q24.31, 17p11.2, 17q12

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4358399/bin/467fig3.jpg

Model for the control of the fate of nephron progenitor cells. Eya1 lies genetically upstream of Six2. Six2 labels the nephron progenitor cells, which can either maintain a progenitor state and self-renew or differentiate via the Wnt4-mediated MET. Wnt4 expression is under the direct control of Wt1. β-Catenin is involved in both progenitor cell fates through activation of different transcriptional programs. Active nuclear phosphorylated Yap/Taz shifts the progenitor balance toward the self-renewal fate. Eya1 and Six2 interact directly with Mycn, leading to dephosphorylation of Mycn pT58, stabilization of the protein, increased proliferation, and potentially a shift of the nephron progenitor toward self-renewal. Genes activated in Wilms’ tumors are depicted in green, and inactivated genes are in blue. Deregulation of Yap/Taz in Wilms’ tumors results in phosphorylated Yap not being retained in the cytoplasm as it should, but it translocates to the nucleus and thus shifts the progenitor cell balance toward self-renewal. This model is likely a simplification, as it presumes that all Wilms’ tumors, regardless of causative mutation, are caused by the same mechanism.

Epigenetic aberrations associated with Wilms’ tumor

Chinese Case Study: PMCID: PMC4471788

They undertook this study based on the association of low circulating adiponectin concentrations with a higher risk of several cancers, including renal cell carcinoma. In this case–control study they demonstrated that ADIPOQ rs182052 is significantly associated with ccRCC risk.

They investigated the frequency of three single nucleotide polymorphisms (SNPs), rs182052 G>A, rs266729 C>G, and rs3774262 G>A, in the adiponectin gene (ADIPOQ), comparing 1004 registered patients with clear cell renal cell carcinoma (ccRCC) with 1108 healthy controls (n = 1108).

The first table presents the characteristics of the 1004 patients with clear cell renal cell carcinoma and the 1108 cancer-free controls from a Chinese Han population; the second table shows the SNP results.

Table 1: The characteristics of the examined population.

Variable Cases, n (%) Controls, n (%) P-value
Total 1004 (100) 1108 (100)
Age, years
 ≤44 195 (19.4) 230 (20.8) 0.559
 45–64 580 (57.8) 644 (58.1)
 ≥65 229 (22.8) 234 (21.1)
Sex
 Male 711 (70.8) 815 (73.6) 0.160
 Female 293 (29.2) 293 (26.4)
BMI, kg/m2
 <25 480 (47.8) 589 (53.2) 0.014
 ≥25 524 (52.2) 519 (46.8)
Smoking status
 Never 455 (45.3) 529 (47.7) 0.265
 Ever/current 549 (54.7) 579 (52.3)
Hypertension
 No 639 (63.6) 780 (70.4) 0.001
 Yes 365 (36.4) 328 (29.6)
Fuhrman grade
 I 40 (4.0)
 II 380 (37.8)
 III 347 (34.6)
 IV 175 (17.4)
 Missing 62 (6.2)
Stage at diagnosis
 I 738 (73.5)
 II 71 (7.1)
 III 19 (1.9)
 IV 176 (17.5)

Pearson’s χ2-test.
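For readers who want to check the arithmetic, the P-values in Table 1 can be reproduced with a standard Pearson χ²-test on the row counts. A minimal Python sketch using SciPy, applied to the BMI row above (correction=False gives the uncorrected Pearson statistic the table reports):

```python
from scipy.stats import chi2_contingency

# BMI row of Table 1: cases vs. controls, <25 vs. >=25 kg/m2.
table = [[480, 589],   # BMI < 25
         [524, 519]]   # BMI >= 25

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p ~= 0.014, as reported
```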

Table 2:

Association between ADIPOQ single nucleotide polymorphisms (SNP) and clear cell renal cell carcinoma risk

SNP HWE Cases, n (%) Controls, n (%) Crude OR (95% CI) P-value Adjusted OR (95% CI) P-value
rs182052
 GG 0.636 249 (24.8) 315 (28.4) 1.00 1.00
 AG 485 (48.3) 544 (49.1) 1.13 (0.92–1.39) 0.253 1.11 (0.90–1.37) 0.331
 AA 270 (26.9) 249 (22.5) 1.37 (1.08–1.75) 0.010 1.36 (1.07–1.74) 0.013
 AG/AA versus GG 1.20 (0.99–1.46) 0.060 1.19 (0.98–1.45) 0.086
 AA versus GG/AG 1.28 (1.04–1.57) 0.019 1.27 (1.04–1.56) 0.019
rs266729
 CC 0.143 502 (50.0) 572 (51.6) 1.00 1.00
 CG 398 (39.6) 434 (39.2) 1.05 (0.88–1.25) 0.635 1.05 (0.87–1.26) 0.633
 GG 104 (10.4) 102 (9.2) 1.16 (0.86–1.57) 0.324 1.17 (0.86–1.58) 0.307
 CG/GG versus CC 1.07 (0.91–1.29) 0.456 1.07 (0.90–1.27) 0.445
 GG versus CC/CG 1.19 (0.83–1.59) 0.377 1.15 (0.86–1.54) 0.353
rs3774262
 GG 0.106 482 (48.0) 523 (47.2) 1.00 1.00
 AG 420 (41.8) 459 (41.4) 0.99 (0.83–1.20) 0.938 0.99 (0.82–1.19) 0.905
 AA 102 (10.2) 126 (11.4) 0.88 (0.66–1.17) 0.381 0.90 (0.67–1.20) 0.463
 AG/AA versus GG 0.98 (0.80–1.16) 0.711 0.97 (0.82–1.15) 0.722
 AA versus GG/AG 0.88 (0.67–1.18) 0.372 0.90 (0.68–1.19) 0.465

Bold values indicate significance.

Adjusted for age, sex, BMI, smoking status, and hypertension. CI, confidence interval; OR, odds ratio; HWE, Hardy–Weinberg equilibrium.
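As a quick sanity check on Table 2, the crude odds ratio and its 95% confidence interval for rs182052 AA versus GG can be recomputed directly from the published genotype counts using the standard log-OR (Woolf) approximation; a short Python sketch:

```python
import math

# Genotype counts for rs182052 from Table 2 (cases vs. controls).
aa_cases, aa_controls = 270, 249
gg_cases, gg_controls = 249, 315

# Crude odds ratio for AA versus GG.
odds_ratio = (aa_cases * gg_controls) / (aa_controls * gg_cases)

# 95% CI via the standard log-OR normal approximation (Woolf's method).
se_log_or = math.sqrt(1/aa_cases + 1/aa_controls + 1/gg_cases + 1/gg_controls)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# -> OR = 1.37 (95% CI 1.08-1.74), matching the table's crude 1.37 (1.08-1.75)
#    up to rounding.
```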


Molecular Genetics at the Level of Physiology (Function):

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4503866/bin/10585_2015_9731_Fig6_HTML.jpg

a The protein–protein interaction network for the 8 identified proteins in STRING (10 additional proteins/genes were added into the network so as to find the potential strong connections among them). The red dotted lines circle three main pathways. b The Ingenuity Pathway Analysis (IPA) for all 18 genes, showing that oxidative phosphorylation, mitochondrial dysfunction, and granzyme A signaling are the significantly activated pathways (fold change over 1.5, P < 0.05). c The possible mechanism related to mitochondrial function: nonspecific conditions such as inflammation, carcinogens, radiation (ionizing or ultraviolet), intermittent hypoxia, and viral infections (the carcinogenesis studied here) can damage a cell’s oxidative phosphorylation. Any of these conditions can damage the structure and function of mitochondria, triggering respiratory chain changes (Complexes I, II, III, IV) and cytochrome c release. When the mitochondrial dysfunction persists, it produces genome instability (mtDNA mutation) and further leads to malignant transformation (metastasis) via increased ROS and apoptotic resistance. (Color figure online)

Renal cell carcinoma and metabolism go hand in hand: genes encoding enzymes of the Krebs cycle, notably succinate dehydrogenase (SDH) and fumarate hydratase (FH), suppress tumor formation in kidney cells. The resulting accumulation of succinate or fumarate causes inhibition of a family of 2-oxoglutarate-dependent dioxygenases.

The FH and SDH genes function as two-hit tumor suppressor genes.

SDH is a complex of four different polypeptides (SDHA–D) that functions in electron transfer and catalyzes the conversion of succinate to fumarate. Heterozygous germline mutations in SDH subunits predispose to pheochromocytoma/paraganglioma. FH functions to convert fumarate to malate; heterozygous germline FH mutations predispose to hereditary leiomyomatosis and renal cell cancer (HLRCC). In about 20–50% of HLRCC families the renal tumors are typically papillary type 2 (pRCC-2) and overwhelmingly aggressive. RCC is increasingly being recognized as a metabolic disease, and key lesions in nutrient sensing and processing have been detected.

Regulation of Prolyl Hydroxylases and Keap1 by Krebs cycle http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3399969/bin/nihms380694f4.jpg

Regulation of Prolyl Hydroxylases by Tricarboxylic Acid (TCA) Cycle Intermediates. Prolyl hydroxylases use TCA cycle intermediates to help catalyze the oxygen-, iron-, and ascorbate-dependent addition of a hydroxyl side chain to Pro402 and Pro564 of HIF alpha subunits, leading to VHL binding and degradation. Defects in either fumarate hydratase or succinate dehydrogenase will drive up levels of fumarate and succinate, which competitively bind prolyl hydroxylases and prevent HIF prolyl hydroxylation. This results in higher intracellular HIF levels.

Regulation of mTORC1 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3399969/bin/nihms380694f5.jpg

HIF regulation and mTOR pathway connections. Hypoxia blocks HIF expression in a TSC1/2- and REDD-dependent pathway [155]. HIF1α appears to be both TORC1- and TORC2-dependent, whereas HIF2α is only TORC2-dependent [275]. Signaling via TORC2 appears to upregulate HIF2α in an AKT-dependent manner [69].

TREATMENT:

Based on the types of renal cancers the treatment method may vary but the general scheme is:

 

Drugs Approved for Kidney (Renal Cell) Cancer

The Food and Drug Administration (FDA) has approved a number of drugs for kidney (renal cell) cancer; details are available in NCI’s Cancer Drug Information summaries.

T cell regulation in RCC http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3399969/bin/nihms380694f7.jpg

Immune regulation of renal tumor cells. A: When an antigen-presenting cell (APC) engages a T-cell via a cognate T-cell receptor (TCR) and CD28, T-cell activation occurs. B: Early and late T-cell inhibitory signals are mediated via CTLA-4 and PD-1 receptors, and this occurs via engagement of the APC via B7 and PD-L1, respectively. C: Inhibitory antibodies against CTLA-4 and PD-1 can overcome T-cell downregulation and once again allow cytokine production.

Phase III Trials of Targeted Therapy in Metastatic Renal Cell Carcinoma

VEGF-Targeted Therapy

*AVOREN: Bevacizumab + IFNa vs. IFNa [270]; 649 patients; first-line; RR 31 vs. 12; PFS 10.2 vs. 5.5 months (p<0.001); OS 23.3 vs. 21.3 months (p=0.129)

*CALGB 90206: Bevacizumab + IFNa vs. IFNa [271]; 732 patients; first-line; RR 25.5 vs. 13; PFS 8.4 vs. 4.9 months (p<0.001); OS 18.3 vs. 17.4 months (p=0.069)

Sunitinib vs. IFNa [248]; 750 patients; first-line; RR 47 vs. 12; PFS 11 vs. 5 months (p=0.0001); OS 26.4 vs. 21.8 months (p=0.051)

*TARGET: Sorafenib vs. placebo [272]; 903 patients; second-line (post-cytokine); RR 10 vs. 2; PFS 5.5 vs. 2.8 months (p<0.01); OS 17.8 vs. 15.2 months (p=0.88)

Pazopanib vs. placebo [273]; 435 patients; first-line/second-line (post-cytokine); RR 30 vs. 3; PFS 9.2 vs. 4.2 months (p<0.0001); OS 22.9 vs. 20.5 months (p=0.224)

*AXIS: Axitinib vs. sorafenib [269]; 723 patients; second-line (post-sunitinib, cytokine, bevacizumab, or temsirolimus); RR 19 vs. 9 (p=0.0001); PFS 6.7 vs. 4.7 months (p<0.0001); OS not reported

mTOR-Targeted Therapy

*ARCC: Temsirolimus vs. temsirolimus + IFNa vs. IFNa [249]; 624 patients; first-line, ≥3 poor-risk features (a); RR 9 vs. 5; PFS 3.8 vs. 1.9 months for IFNa monotherapy (p=0.0001); OS 10.9 vs. 7.3 months for IFNa (p=0.008)

*RECORD-1: Everolimus vs. placebo [274]; 410 patients; second-line (post sunitinib and/or sorafenib); RR 2 vs. 0; PFS 4.9 vs. 1.9 months (p<0.0001); OS 14.8 vs. 14.5 months

Abbreviations: RCC, renal cell carcinoma; RR, response rate; OS, overall survival; PFS, progression-free survival; VEGF, vascular endothelial growth factor; IFNa, interferon alpha; mTOR, mammalian target of rapamycin. AVOREN, AVastin fOr RENal cell cancer; CALGB, Cancer and Leukemia Group B; TARGET, Treatment Approaches in Renal Cancer Global Evaluation Trial; AXIS, Axitinib in Second line; ARCC, Advanced Renal-Cell Carcinoma; RECORD-1, REnal Cell cancer treatment with Oral RAD001 given Daily.

(a) Including a serum lactate dehydrogenase level of more than 1.5 times the upper limit of the normal range; a hemoglobin level below the lower limit of the normal range; a corrected serum calcium level of more than 10 mg per deciliter (2.5 mmol per liter); a time from initial diagnosis of renal-cell carcinoma to randomization of less than 1 year; a Karnofsky performance score of 60 or 70; or metastases in multiple organs.

PMC full text: Open Access J Urol. Author manuscript; available in PMC 2013 Jul 8.

Open Access J Urol. 2010 Aug; 2010(2): 125–141. doi: 10.2147/RRU.S7242

Table: RCC-Associated Antigens (RCCAA) Recognized by T Cells.

Antigen | Antigen category | Frequency of expression among RCC tumors (%) | CD8+ T-cell recognition: HLA class I allele(s) | CD4+ T-cell recognition: HLA class II allele(s) | References (found in Open Access J Urol; author manuscript, PMC 2013 Jul 8)
Survivin (a) | ML | 100 | Multiple | Multiple | 114
OFA-iLR | OF | 100 | A2 | NR | 115,116
IGFBP3 (a,b) | ML | 97 | NR | Multiple | 117,118
EphA2 (a) | ML | > 90 | A2 | DR4 | 17,44,119
RU2AS | Antisense transcript | > 90 | B7 | NR | 120
G250 (CA-IX) (a,b) | RCC | 90 | A2, A24 | Multiple | 47,51
EGFR (a,b) | ML | 85 | A2 | NR | 121,122
HIFPH3 (a) | ML | 85 | A24 | NR | 123
c-Met (a) | ML | > 80 | A2 | NR | 124
WT-1 (a) | ML | 80 | A2, A24 | NR | 125,128
MUC1 (a,b) | ML | 76 | A2 | DR3 | 46,129,130
5T4 | ML | 75 | A2, Cw7 | DR4 | 54,131,133
iCE | aORF | 75 | B7 | NR | 134
MMP7 (a) | ML | 75 | A3 | Multiple | 117,135,136
Cyclin D1 (a) | ML | 75 | A2 | Multiple | 117,137,138
HAGE (b) | CT | 75 | A2 | DR4 | 139
hTERT (a,b) | ML | > 70 | Multiple | Multiple | 140,142
FGF-5 | Protein splice variant | > 60 | A3 | NR | 143
mutVHL (a,b) | ML | > 60 | NR | NR | 144
MAGE-A3 (b) | CT | 60 | Multiple | Multiple | 145
SART-3 | ML | 57 | Multiple | NR | 146,149
SART-2 | ML | 56 | A24 | NR | 150
PRAME (b) | CT | 40 | Multiple | NR | 151,154
p53 (a,b) | Mutant/WT ML | 32 | Multiple | Multiple | 155,156
MAGE-A9 (b) | CT | > 30 | A2 | NR | 157
MAGE-A6 (b) | CT | 30 | Multiple | DR4 | 18,158
MAGE-D4 (b) | CT | 30 | A25 | NR | 159
Her2/neu (a) | ML | 10–30 | Multiple | Multiple | 45,160,164
SART-1 (a) | ML | 25 | Multiple | NR | 165,167
RAGE-1 | CT (ORF2/5) | 21 | Multiple | Multiple | 151,157,168,169
TRP-1/gp75 | ML | 11 | A31 | DR4 | 151,170,172

A summary is provided for RCCAA that have been defined at the molecular level. RCCAA are characterized with regard to their antigen category, their prevalence of (over)expression among total RCC specimens evaluated, whether RCCAA expression is modulated by hypoxia or tumor DNA methylation status, and which HLA class I and class II alleles have been reported to serve as presenting molecules for T cell recognition of peptides derived from a given RCCAA.

Abbreviations: CT = Cancer-Testis Antigens; ML = Multi-lineage Antigens; NR = Not Reported; OF = Oncofetal Antigen; aORF = altered open reading frame; ORF = open reading frame; RCC = Renal cell carcinoma; WT = Wild-Type.

(a) Hypoxia-Induced; (b) Hypomethylation-Induced.

PMC full text: Open Access J Urol. Author manuscript; available in PMC 2013 Jul 8.

Open Access J Urol. 2010 Aug; 2010(2): 125–141. doi: 10.2147/RRU.S7242

Expected Impact on Teff versus Suppressor Cells

Columns: Co-Therapeutic Agent | Teff priming | Teff function | Teff survival | Teff (TME) | Treg/MDSC | References (found in Open Access J Urol; author manuscript, PMC 2013 Jul 8)
Cytokines
IL-2 +/− ↑ (Treg) 173175
IL-7 ↑ (Treg) 176178
IL-12 – (Treg), ↓ (MDSC) 179181
IL-15 ↑ (Treg)* 182183
IL-18 ↓ (Treg) 184186
IL-21 ? +/− (Treg) 187190
IFN-α +/− (Treg) 175191194
IFN-γ -? ? ↑ ↑ (Treg); ↑ ?(MDSC) 195197
GM-CSF ? ↑ (Treg); ↑(MDSC) 198202
Coinhibitory Antagonist
CTLA-4 ? ↓ (Treg) 203204
PD1/PD1L ↓ (Treg) 205207
Costimulatory Agonist
CD40/CD40L ↑ (Treg); ↑(MDSC) 208211
GITR/GITRL ↓ (Treg); ↓ (MDSC) 212213
OX40/OX86 ↑↓ (Treg); ↓ (MDSC) 214219
4-1BB/4-1BBL ↑ (Treg) 220224
TLR Agonists
Imiquimod (TLR7) ? 225227
Resiquimod (TLR8) ? ? 228229
CpG (TLR9) ↓ (Treg) 230232
Anti-Angiogenic
VEGF-Trap ? ? 233
Sunitinib ? ↓ (Treg/MDSC) 98100234
Sorafenib ? ↓ (MDSC) 235
Bevacizumab ? ? ↓ (MDSC) 236237
Gefitinib (IRESSA) ? ? ? ? ? 238239
Cetuximab ? ? ? ? 240
mTOR Inhibitors
Temsirolimus/Everolimus ? ↓ (Treg) 241
Treg/MDSC Inhibitors
Ipilimumab (CTLA-4) ? ↓ (Treg) 242243
ONTAK (CD25) +/− +/− ? ? ↓ (Treg) 244
Anti-TGFβ/TGFβR ↓ (Treg) 245247
Anti-IL10/IL10R +/− ↓ (Treg) 248249
Anti-IL35/IL35R ↑? ↑? ↑? ↑? ↓ (Treg) 250
1-methyl tryptophan ? ? ↓ (MDSC) 251
ATRA ? ? ↑ (Treg), ↓ (MDSC) 9093

Agents that are currently or soon-to-be in clinical trials are summarized with regard to their anticipated impact(s) on Type-1 anti-tumor T cell (Teff) activation, function, survival, and recruitment into the TME. Additional anticipated effects of drugs on suppressor cells (Treg and MDSC) are also summarized. Key: ↑, agent is expected to increase parameter; ↓, agent is expected to inhibit parameter; +/−, minimal increase or decrease is expected in parameter as a consequence of treatment with agent; ?, unknown effect of agent on parameter.

Abbreviations: ATRA, all-trans retinoic acid; CTLA-4, cytotoxic T Lymphocyte antigen 4; GITR(L), glucocorticoid-induced TNF receptor (ligand); GM-CSF, granulocyte-macrophage colony stimulating factor; IFN, interferon; IL, interleukin; MDSC, myeloid-derived suppressor cell; PD1/PD1L, programmed cell death 1 (ligand); TGF-β(R), transforming growth factor-β (receptor); TLR, Toll-like receptor; TME, tumor microenvironment; Treg, regulatory T cell; VEGF, vascular endothelial growth factor.

Alternative and Complementary Therapies for Cancer:

  • Art therapy
  • Dance or movement therapy
  • Exercise
  • Meditation
  • Music therapy
  • Relaxation exercises

Mol Cancer Res. 2012 Jul; 10(7): 859–880. Published online 2012 May 25. doi: 10.1158/1541-7786.MCR-12-0117. PMCID: PMC3399969. NIHMSID: NIHMS380694

State-of-the-science: An update on renal cell carcinoma

Eric Jonasch,1 Andrew Futreal,1 Ian Davis,2 Sean Bailey,2 William Y. Kim,2 James Brugarolas,3 Amato Giaccia,4 Ghada Kurban,5 Armin Pause,6 Judith Frydman,4 Amado Zurita,1 Brian I. Rini,7 Pam Sharma,8 Michael Atkins,9 Cheryl Walker,8,* and W. Kimryn Rathmell2,*

REFERENCES

Germline and somatic mutations in the tyrosine kinase domain of the MET proto-oncogene in papillary renal carcinomas.[Nat Genet. 1997]

Germline mutations in FH predispose to dominantly inherited uterine fibroids, skin leiomyomata and papillary renal cell cancer.[Nat Genet. 2002]

Mutations in a novel gene lead to kidney tumors, lung wall defects, and benign tumors of the hair follicle in patients with the Birt-Hogg-Dubé syndrome.[Cancer Cell. 2002]

Tuberous sclerosis-associated renal cell carcinoma. Clinical, pathological, and genetic features.[Am J Pathol. 1996]

Tumor risks and genotype-phenotype-proteotype analysis in 358 patients with germline mutations in SDHB and SDHD.[Hum Mutat. 2010]

Genome-wide association study of renal cell carcinoma identifies two susceptibility loci on 2p21 and 11q13.3.[Nat Genet. 2011]

Mutations of the VHL tumour suppressor gene in renal carcinoma.[Nat Genet. 1994]

Few FH mutations in sporadic counterparts of tumor types observed in hereditary leiomyomatosis and renal cell cancer families.[Cancer Res. 2002]

Interplay between pVHL and mTORC1 pathways in clear-cell renal cell carcinoma.[Mol Cancer Res. 2011]

Intratumor heterogeneity and branched evolution revealed by multiregion sequencing.[N Engl J Med. 2012]

Exome sequencing identifies frequent mutation of the SWI/SNF complex gene PBRM1 in renal carcinoma.[Nature. 2011]

Systematic sequencing of renal carcinoma reveals inactivation of histone modifying genes.[Nature. 2010]

Histone methyltransferase gene SETD2 is a novel tumor suppressor gene in clear cell renal cell carcinoma.[Cancer Res. 2010]

The von Hippel-Lindau tumor suppressor protein regulates gene expression and tumor growth through histone demethylase JARID1C.[Oncogene. 2012]

HIFalpha targeted for VHL-mediated destruction by proline hydroxylation: implications for O2 sensing.[Science. 2001]

Targeting of HIF-alpha to the von Hippel-Lindau ubiquitylation complex by O2-regulated prolyl hydroxylation.[Science. 2001]

Hypoxia induces trimethylated H3 lysine 4 by inhibition of JARID1A demethylase.[Cancer Res. 2010]

Silencing of the VHL tumor-suppressor gene by DNA methylation in renal carcinoma.[Proc Natl Acad Sci U S A. 1994]

Inactivation of the von Hippel-Lindau (VHL) tumour suppressor gene and allelic losses at chromosome arm 3p in primary renal cell carcinoma: evidence for a VHL-independent pathway in clear cell renal tumourigenesis.[Genes Chromosomes Cancer. 1998]

DNA methylation and histone modifications cause silencing of Wnt antagonist gene in human renal cell carcinoma cell lines.[Int J Cancer. 2008]

Global levels of histone modifications predict prognosis in different cancers.[Am J Pathol. 2009]

A phase II trial of panobinostat, a histone deacetylase inhibitor, in the treatment of patients with refractory metastatic renal cell carcinoma.[Cancer Invest. 2011]

Review Oxygen sensing by metazoans: the central role of the HIF hydroxylase pathway.[Mol Cell. 2008]

Review Targeting HIF-1 for cancer therapy.[Nat Rev Cancer. 2003]

Review Hypoxia-inducible factors: central regulators of the tumor phenotype.[Curr Opin Genet Dev. 2007]

Review Role of VHL gene mutation in human cancer.[J Clin Oncol. 2004]

Genetic and functional studies implicate HIF1α as a 14q kidney cancer suppressor gene.[Cancer Discov. 2011]

HIF-alpha effects on c-Myc distinguish two subtypes of sporadic VHL-deficient clear cell renal carcinoma.[Cancer Cell. 2008]

Software and database for the analysis of mutations in the VHL gene.[Nucleic Acids Res. 1998]

Genetic analysis of von Hippel-Lindau disease.[Hum Mutat. 2010]

The von Hippel-Lindau tumor suppressor protein is required for proper assembly of an extracellular fibronectin matrix.[Mol Cell. 1998]

pVHL modification by NEDD8 is required for fibronectin matrix assembly and suppression of tumor development.[Mol Cell Biol. 2004]

Contrasting effects on HIF-1alpha regulation by disease-causing pVHL mutations correlate with patterns of tumourigenesis in von Hippel-Lindau disease.[Hum Mol Genet. 2001]

Characterization of a von Hippel Lindau pathway involved in extracellular matrix remodeling, cell invasion, and angiogenesis.[Cancer Res. 2006]

The von Hippel-Lindau tumor suppressor gene inhibits hepatocyte growth factor/scatter factor-induced invasion and branching morphogenesis in renal carcinoma cells.[Mol Cell Biol. 1999]

A role for mitochondrial enzymes in inherited neoplasia and beyond.[Nat Rev Cancer. 2003]

Mitochondrial tumour suppressors: a genetic and biochemical update.[Nat Rev Cancer. 2005]

Identification of the von Hippel-Lindau disease tumor suppressor gene.[Science. 1993]

Vascular tumors in livers with targeted inactivation of the von Hippel-Lindau tumor suppressor.[Proc Natl Acad Sci U S A. 2001]

mTOR: from growth signal integration to cancer, diabetes and ageing.[Nat Rev Mol Cell Biol. 2011]

Genomic expression and single-nucleotide polymorphism profiling discriminates chromophobe renal cell carcinoma and oncocytoma.[BMC Cancer. 2010]

Classification of renal neoplasms based on molecular signatures.[J Urol. 2006]

Genetic subtyping of renal cell carcinoma by comparative genomic hybridization.[Recent Results Cancer Res. 2003]

Papillary renal cell carcinoma. Prognostic value of morphological subtypes in a clinicopathologic study of 43 cases.[Virchows Arch. 2003]

Prognostic impact of carbonic anhydrase IX expression in human renal cell carcinoma.[BJU Int. 2007]

Survivin expression in renal cell carcinoma.[Cancer Invest. 2008]

High expression levels of survivin protein independently predict a poor outcome for patients who undergo surgery for clear cell renal cell carcinoma.[Cancer. 2006]

Mcm2, Geminin, and KI67 define proliferative state and are prognostic markers in renal cell carcinoma.[Clin Cancer Res. 2005]

Prognostic impacts of cytogenetic findings in clear cell renal cell carcinoma: gain of 5q31-qter predicts a distinct clinical phenotype with favorable prognosis.[Cancer Res. 2001]

Chromosome 14q loss defines a molecular subtype of clear-cell renal cell carcinoma associated with poor prognosis.[Mod Pathol. 2011]

Loss of chromosome 9p is an independent prognostic factor in patients with clear cell renal cell carcinoma.[Mod Pathol. 2008]

Chromosome 9p deletions identify an aggressive phenotype of clear cell renal cell carcinoma.[Cancer. 2010]

Gene expression profiling predicts survival in conventional renal cell carcinoma.[PLoS Med. 2006]

Biomarkers predicting outcome in patients with advanced renal cell carcinoma: Results from sorafenib phase III Treatment Approaches in Renal Cancer Global Evaluation Trial.[Clin Cancer Res. 2010]

Interleukin-8 mediates resistance to antiangiogenic agent sunitinib in renal cell carcinoma.[Cancer Res. 2010]

Therapeutic vaccination against metastatic renal cell carcinoma by autologous dendritic cells: preclinical results and outcome of a first clinical phase I/II trial.[Cancer Immunol Immunother. 2002]

Immunotherapy for metastatic renal cell carcinoma.[BJU Int. 2007]

Results of treatment of 255 patients with metastatic renal cell carcinoma who received high-dose recombinant interleukin-2 therapy.[J Clin Oncol. 1995]

Phase II study of vinorelbine in patients with androgen-independent prostate cancer.[Ann Oncol. 2001]

CD28-mediated signalling co-stimulates murine T cells and prevents induction of anergy in T-cell clones.[Nature. 1992]

CD28 and CTLA-4 have opposing effects on the response of T cells to stimulation.[J Exp Med. 1995]

 Mechanisms of T-cell inhibition: implications for cancer immunotherapy.[Expert Rev Vaccines. 2010]

Enhancement of antitumor immunity by CTLA-4 blockade.[Science. 1996]

Combination immunotherapy of B16 melanoma using anti-cytotoxic T lymphocyte-associated antigen 4 (CTLA-4) and granulocyte/macrophage colony-stimulating factor (GM-CSF)-producing vaccines induces rejection of subcutaneous and metastatic tumors accompanied by autoimmune depigmentation.[J Exp Med. 1999]

Biologic activity of cytotoxic T lymphocyte-associated antigen 4 antibody blockade in previously vaccinated metastatic melanoma and ovarian carcinoma patients.[Proc Natl Acad Sci U S A. 2003]

Preoperative CTLA-4 blockade: tolerability and immune monitoring in the setting of a presurgical clinical trial.[Clin Cancer Res. 2010]

Ipilimumab (anti-CTLA4 antibody) causes regression of metastatic renal cell cancer associated with enteritis and hypophysitis.[J Immunother. 2007]

PD-1 and its ligands in tolerance and immunity.[Annu Rev Immunol. 2008]

New strategies in kidney cancer: therapeutic advances through understanding the molecular basis of response and resistance.[Clin Cancer Res. 2010]

Efficacy of everolimus in advanced renal cell carcinoma: a double-blind, randomised, placebo-controlled phase III trial.[Lancet. 2008]

Comparative effectiveness of axitinib versus sorafenib in advanced renal cell carcinoma (AXIS): a randomised phase 3 trial.[Lancet. 2011]

American Cancer Society. Cancer Facts & Figures 2014. 2014. http://www.cancer.org/research/cancerfactsstatistics/cancerfactsfigures2014/. Accessed Oct. 1, 2014.

Amin A, Plimack ER, Infante JR, Ernstoff MS, Rini BI, Mcdermott DF, Knox JJ, Pal SK, Voss MH, Sharma P, Kollmannsberger CK, Heng DYC, Sprattin JL, Shen Y, Kurland JF, Gagnier P, Hammers HJ. Nivolumab (anti-PD-1; BMS-936558, ONO-4538) in combination with sunitinib or pazopanib in patients (pts) with metastatic renal cell carcinoma (mRCC). J Clin Oncol 32:5s (suppl; abstr 5010), 2014.

Atkins MB, Kudchadkar RR, Sznol M, Mcdermott DF, Lotem M, Schacther J, Wolchok JD, Urba WJ, Kuzel T, Schuchter LM, Slingluff CL, Ernstoff MS, Fay JW, Friedlander PA, Gajewski T, Zarour H, Rotem-Yehudar R, Sosman JA. Phase 2, multicenter, safety and efficacy study of pidilizumab in patients with metastatic melanoma. J Clin Oncol 32:5s (suppl; abstr 9001), 2014.

Beck KE, Blansfield JA, Tran KQ, Feldman AL, Hughes MS, Royal RE, Kammula US, Topalian SL, Sherry RM, Kleiner D, Quezado M, Lowy I, Yellin M, Rosenberg SA, Yang JC. Enterocolitis in patients with cancer after antibody blockade of cytotoxic T-lymphocyte-associated antigen 4. J Clin Oncol 24(15):2283-2289, 2006.

Berger R, Rotem-Yehudar R, Slama G, Landes S, Kneller A, Leiba M, Koren-Michowitz M, Shimoni A, Nagler A. Phase I safety and pharmacokinetic study of CT-011, a humanized antibody interacting with PD-1, in patients with advanced hematologic malignancies. Clin Cancer Res 14(10):3044-3051, 2008.

Brahmer JR, Drake CG, Wollner I, Powderly JD, Picus J, Sharfman WH, Stankevich E, Pons A, Salay TM, Mcmiller TL, Gilson MM, Wang C, Selby M, Taube JM, Anders R, Chen L, Korman AJ, Pardoll DM, Lowy I, Topalian SL. Phase I study of single-agent anti-programmed death-1 (MDX-1106) in refractory solid tumors: safety, clinical activity, pharmacodynamics, and immunologic correlates. J Clin Oncol 28(19):3167-3175, 2010.

Camacho LH, Antonia S, Sosman J, Kirkwood JM, Gajewski TF, Redman B, Pavlov D, Bulanhagui C, Bozon VA, Gomez-Navarro J, Ribas A. Phase I/II trial of tremelimumab in patients with metastatic melanoma. J Clin Oncol 27(7):1075-1081, 2009.

Cho DC, Sosman JA, Sznol M, Gordon MS, Hollebecque A, Mcdermott DF, Delord JP, Rhee IP, Mokatrin A, Kowantez M, Funke RP, Fine GD, Powles T. Clinical activity, safety, and biomarkers of MPDL3280A, an engineered PD-L1 antibody in patients with metastatic renal cell carcinoma (mRCC). J Clin Oncol 31 (suppl; abstr 4505), 2013.

Choueiri TK, Fishman MN, Escudier B, Kim JJ, Kluger H, Stadler WM, Perez-Gracia JL, Mcneel DG, Curti BD, Harrison MR, Plimack ER, Appleman LJ, Fong L, Drake CG, Cohen LJ, Srivastava S, Jure-Kunkel M, Hong Q, Kurland JF, Sznol M. Immunomodulatory activity of nivolumab in previously treated and untreated metastatic renal cell carcinoma (mRCC): Biomarker-based results from a randomized clinical trial. J Clin Oncol 32:5s (suppl; abstr 5012), 2014.

Coppin C, Porzsolt F, Awa A, Kumpf J, Coldman A, Wilt T. Immunotherapy for advanced renal cell cancer. Cochrane Database Syst Rev (1):CD001425, 2005.

Downey SG, Klapper JA, Smith FO, Yang JC, Sherry RM, Royal RE, Kammula US, Hughes MS, Allen TE, Levy CL. Prognostic factors related to clinical response in patients with metastatic melanoma treated by CTL-associated antigen-4 blockade. Clin Cancer Res 13(22):6681-6688, 2007.

Drake CG, Mcdermott DF, Sznol M. Survival, safety, and response duration results of nivolumab (Anti-PD-1; BMS-936558; ONO-4538) in a phase I trial in patients with previously treatedmetastatic renal cell carcinoma (mRCC): long-term patient followup. J Clin Oncol 31 (suppl; abstr 4514), 2013.

Dunn GP, Bruce AT, Ikeda H, Old LJ, Schreiber RD. Cancer immunoediting: from immunosurveillance to tumor escape. Nat Immunol 3(11):991-998, 2002.

Elfiky AA, Sonpavde G. Novel molecular targets for the therapy of renal cell carcinoma. Discov Med 18(73):461-471, 2012.

FDA. Pembrolizumab. http://www.fda.gov/Drugs/InformationOnDrugs/ApprovedDrugs/ucm412861.htm. 2014. Accessed Oct. 16, 2014.

Fyfe G, Fisher RI, Rosenberg SA, Sznol M, Parkinson DR, Louie AC. Results of treatment of 255 patients with metastatic renal cell carcinoma who received high-dose recombinant interleukin-2 therapy. J Clin Oncol 13(3):688-696, 1995.

Gupta K, Miller JD, Li JZ, Russell MW, Charbonneau C. Epidemiologic and socioeconomic burden of metastatic renal cell carcinoma (mRCC): a literature review. Cancer Treat Rev 34(3):193-205, 2008.

Hamid O, Robert C, Daud A, Hodi FS, Hwu WJ, Kefford R, Wolchok JD, Hersey P, Joseph RW, Weber JS, Dronca R, Gangadhar TC, Patnaik A, Zarour H, Joshua AM, Gergich K, Elassaiss-Schaap J, Algazi A, Mateus C, Boasberg P, et al. Safety and tumor responses with lambrolizumab (anti-PD-1) in melanoma. N Engl J Med 369(2):134-144, 2013.

Hammers H, Plimack ER, Infante JR, Ernstoff MS, Rini BI, Mcdermott B, Razak AR, Pal SK, Voss MH, Sharma P, Kollmannsberger C, Heng DY, Spratlin J, Shen Y, Kurland J, Gagnier P, Amin A. Phase I study of nivolumab in combination with ipilimumab in metastatic renal cell carcinoma. ASCO Annual Meeting. J Clin Oncol 32:5s (suppl; abstr 4504), 2014.

Hodi FS, O’Day SJ, McDermott DF, Weber RW, Sosman JA, Haanen JB, Gonzalez R, Robert C, Schadendorf D, Hassel JC. Improved survival with ipilimumab in patients with metastatic melanoma. N Engl J Med 363(8):711-723, 2010.

Infante JR, Powderly JD, Burris HA, Kittaneh M, Grice JH, Smothers JF, Brett S, Fleming ME, May R, Marshall S, Devenport M, Pillemer S, Pardoll DM, Chen L, Langermann S, Lorusso P. Clinical and pharmacodynamic (PD) results of a phase I trial with AMP-224 (B7-DC Fc) that binds to the PD-1 receptor. J Clin Oncol 31 (suppl; abstr 3044), 2013.

Intlekofer AM, Thompson CB. At the bench: preclinical rationale for CTLA-4 and PD-1 blockade as cancer immunotherapy. J Leukoc Biol 94(1):25-39, 2013.

Keir ME, Butte MJ, Freeman GJ, Sharpe AH. PD-1 and its ligands in tolerance and immunity. Annu Rev Immunol 26:677-704, 2008.

Kirkwood JM, Lorigan P, Hersey P, Hauschild A, Robert C, Mcdermott D, Marshall MA, Gomez-Navarro J, Liang JQ, Bulanhagui CA. Phase II trial of tremelimumab (CP-675,206) in patients with advanced refractory or relapsed melanoma. Clin Cancer Res 16(3):1042-1048, 2010.

Lieu C, Bendell J, Powderly JD, Pishvaian MJ, Hochster H, Eckhardt SG, Funke RP, Rossi C, Waterkamp D, Hurwitz H. Safety and efficacy of MPDL3280A (anti-PDL1) in combination with bevacizumab (bev) and/or chemotherapy (chemo) in patients (pts) with locally advanced or metastatic solid tumors. Ann Oncol 25 (suppl 4; abstr 1049o), 2014.

McDermott DF, Regan MM, Clark JI, Flaherty LE, Weiss GR, Logan TF, Kirkwood JM, Gordon MS, Sosman JA, Ernstoff MS, Tretter CP, Urba WJ, Smith JW, Margolin KA, Mier JW, Gollob JA, Dutcher JP, Atkins MB. Randomized phase III trial of high-dose interleukin-2 versus subcutaneous interleukin-2 and interferon in patients with metastatic renal cell carcinoma. J Clin Oncol 23(1):133-141, 2005.

McDermott DF, Sznol M, Sosman JA, Soria JC, Gordon MS, Hamid O, Delord JP, Fasso M, Wang Y, Bruey J, Fine GD, Powles T. Immune correlates and long term follow up of a phase Ia study of MPDL3280A, an engineered PD-L1 antibody, in patients with metastatic renal cell carcinoma (mRCC). Ann Oncol 25 (Suppl 4; abstr 809o), 2014.

Melero I, Hervas-Stubbs S, Glennie M, Pardoll DM, Chen L. Immunostimulatory monoclonal antibodies for cancer therapy. Nat Rev Cancer 7(2):95-106, 2007.

Millward M, Underhill C, Lobb S, Mcburnie J, Meech SJ, Gomez-Navarro J, Marshall MA, Huang B, Mather CB. Phase I study of tremelimumab (CP-675 206) plus PF-3512676 (CPG 7909) in patients with melanoma or advanced solid tumours. Br J Cancer 108(10):1998-2004, 2013.

Motzer RJ, Hutson TE, Cella D, Reeves J, Hawkins R, Guo J, Nathan P, Staehler M, De Souza P, Merchan JR, Boleti E, Fife K, Jin J, Jones R, Uemura H, De Giorgi U, Harmenberg U, Wang J, Sternberg CN, Deen K, et al. Pazopanib versus sunitinib in metastatic renal-cell carcinoma. N Engl J Med 369(8):722-731, 2013.

Motzer RJ, Jonasch E, Agarwal N. NCCN Clinical Practice Guidelines in Oncology: Kidney Cancer. Version 1. 2015. National Comprehensive Cancer Network. http://www.nccn.org/professionals/physician_gls/f_guidelines.asp. Accessed Oct. 1, 2014.

Motzer RJ, Rini BI, McDermott DF, Redman BG, Kuzel TM, Harrison MR, Vaishampayan UN, Drabkin HA, George S, Logan TF, Margolin KA, Plimack ER, Lambert AM, Waxman IM, Hammers HJ. Nivolumab for Metastatic Renal Cell Carcinoma: Results of a Randomized Phase II Trial. J Clin Oncol, epub ahead of print, Dec. 1, 2014.

National Cancer Institute. SEER Stat Fact Sheets: Kidney and renal pelvis cancer. Surveillance, Epidemiology, and End Results Program. 2014. http://seer.cancer.gov/statfacts/html/kidrp.html. Accessed Oct. 1, 2014.

Negrier S, Escudier B, Lasset C, Douillard JY, Savary J, Chevreau C, Ravaud A, Mercatello A, Peny J, Mousseau M, Philip T, Tursz T. Recombinant human interleukin-2, recombinant human interferon alfa-2a, or both in metastatic renal-cell carcinoma. Groupe Francais d’Immunotherapie. N Engl J Med 338(18):1272-1278, 1998.

O’Day SJ, Hamid O, Urba WJ. Targeting cytotoxic T-lymphocyte antigen-4 (CTLA-4). Cancer 110(12):2614-2627, 2007.

Page DB, Postow MA, Callahan MK, Allison JP, Wolchok JD. Immune modulation in cancer with antibodies. Annu Rev Med 65:185-202, 2014.

Pages C, Gornet JM, Monsel G, Allez M, Bertheau P, Bagot M, Lebbe C, Viguier M. Ipilimumab-induced acute severe colitis treated by infliximab. Melanoma Res 23(3):227-230, 2013.

Pardoll DM. The blockade of immune checkpoints in cancer immunotherapy. Nat Rev Cancer 12(4):252-264, 2012.

Park J-J, Omiya R, Matsumura Y, Sakoda Y, Kuramasu A, Augustine MM, Yao S, Tsushima F, Narazaki H, Anand S. B7-H1/CD80 interaction is required for the induction and maintenance of peripheral T-cell tolerance. Blood 116(8):1291-1298, 2010.

Patnaik A, Kang SP, Tolcher AW, Rasco DW, Papadopoulos KP, Beeram M, Drengler R, Chen C, Smith L, Perez C, Gergich K, Lehnert M. Phase I study of MK-3475 (anti-PD-1 monoclonal antibody) in patients with advanced solid tumors. J Clin Oncol 30 (suppl; abstr 2512), 2012.

Ribas A, Hodi FS, Kefford R, Hamid O, Daud A, Wolchok JD, Hwu WJ, Gangadhar TC, Patnaik A, Joshua AM, Hersey P, Weber JS, Dronca R, Zarour H, Gergich K, Li XN, Iannone R, Kang SP, Ebbinghaus SW, Robert C. Efficacy and safety of the anti-PD-1 monoclonal antibody MK-3475 in 411 patients (pts) with melanoma (MEL). J Clin Oncol 32:5s (suppl; abstr LBA9000), 2014.

Ribas A, Kefford R, Marshall MA, Punt CJ, Haanen JB, Marmol M, Garbe C, Gogas H, Schachter J, Linette G, Lorigan P, Kendra KL, Maio M, Trefzer U, Smylie M, Mcarthur GA, Dreno B, Nathan PD, Mackiewicz J, Kirkwood JM, et al. Phase III randomized clinical trial comparing tremelimumab with standard-of-care chemotherapy in patients with advanced melanoma. J Clin Oncol 31(5):616-622, 2013.

Rini BI, Stein M, Shannon P, Eddy S, Tyler A, Stephenson JJ Jr, Catlett L, Huang B, Healey D, Gordon M. Phase 1 dose-escalation trial of tremelimumab plus sunitinib in patients with metastatic renal cell carcinoma. Cancer 117(4):758-767, 2011.

Robert C, Ribas A, Wolchok JD, Hodi FS, Hamid O, Kefford R, Weber JS, Joshua AM, Hwu WJ, Gangadhar TC, Patnaik A, Dronca R, Zarour H, Joseph RW, Boasberg P, Chmielowski B, Mateus C, Postow MA, Gergich K, Elassaiss-Schaap J, et al. Anti-programmed-death-receptor-1 treatment with pembrolizumab in ipilimumab-refractory advanced melanoma: a randomised dose-comparison cohort of a phase 1 trial. Lancet 384(9948):1109-1117, 2014.

Robert C, Thomas L, Bondarenko I, O’day S, M DJ, Garbe C, Lebbe C, Baurain JF, Testori A, Grob JJ, Davidson N, Richards J, Maio M, Hauschild A, Miller WH Jr, Gascon P, Lotem M, Harmankaya K, Ibrahim R, Francis S, et al. Ipilimumab plus dacarbazine for previously untreated metastatic melanoma. N Engl J Med 364(26):2517-2526, 2011.

Sheridan C. Cautious optimism surrounds early clinical data for PD-1 blocker. Nat Biotechnol 30(8):729-730, 2012.

Tarhini AA, Cherian J, Moschos SJ, Tawbi HA, Shuai Y, Gooding WE, Sander C, Kirkwood JM. Safety and efficacy of combination immunotherapy with interferon alfa-2b and tremelimumab in patients with stage IV melanoma. J Clin Oncol 30(3):322-328, 2012.

Topalian SL, Drake CG, Pardoll DM. Targeting the PD-1/B7-H1(PD-L1) pathway to activate anti-tumor immunity. Curr Opin Immunol 24(2):207-212, 2012a.

Topalian SL, Hodi FS, Brahmer JR, Gettinger SN, Smith DC, Mcdermott DF, Powderly JD, Carvajal RD, Sosman JA, Atkins MB, Leming PD, Spigel DR, Antonia SJ, Horn L, Drake CG, Pardoll DM, Chen L, Sharfman WH, Anders RA, Taube JM, et al. Safety, activity, and immune correlates of anti-PD-1 antibody in cancer. N Engl J Med 366(26):2443-2454, 2012b.

Waterhouse P, Penninger JM, Timms E, Wakeham A, Shahinian A, Lee KP, Thompson CB, Griesser H, Mak TW. Lymphoproliferative disorders with early lethality in mice deficient in Ctla-4. Science270(5238):985-988, 1995.

Weber JS, Kähler KC, Hauschild A. Management of immune-related adverse events and kinetics of response with ipilimumab. J Clin Oncol 30(21):2691-2697, 2012.

Weber JS, Minor DR, D’Angelo S, Hodi FS, Gutzmer R, Neyns B, Hoeller C, Khushalani NI, Miller WH, Grob J, Lao C, Linette G, Grossmann K, Hassel J, Lorigan P, Maio M, Sznol M, Lambert A, Yang A, Larkin J. A phase 3 randomized, open-label study of nivolumab (anti-PD-1; BMS-936558; ONO-4538) versus investigator’s choice chemotherapy (ICC) in patients with advanced melanoma after prior anti-CTLA-4 therapy. ESMO Annual Meetings. Abstract #LBA3_PR. 2014.

Westin JR, Chu F, Zhang M, Fayad LE, Kwak LW, Fowler N, Romaguera J, Hagemeister F, Fanale M, Samaniego F, Feng L, Baladandayuthapani V, Wang Z, Ma W, Gao Y, Wallace M, Vence LM, Radvanyi L, Muzzafar T, Rotem-Yehudar R, et al. Safety and activity of PD1 blockade by pidilizumab in combination with rituximab in patients with relapsed follicular lymphoma: a single group, open-label, phase 2 trial. Lancet Oncol 15(1):69-77, 2014.

Wolchok JD, Kluger H, Callahan MK, Postow MA, Rizvi NA, Lesokhin AM, Segal NH, Ariyan CE, Gordon RA, Reed K, Burke MM, Caldwell A, Kronenberg SA, Agunwamba BU, Zhang X, Lowy I, Inzunza HD, Feely W, Horak CE, Hong Q, et al. Nivolumab plus ipilimumab in advanced melanoma.N Engl J Med 369(2):122-133, 2013.

Yang JC, Hughes M, Kammula U, Royal R, Sherry RM, Topalian SL, Suri KB, Levy C, Allen T, Mavroukakis S, Lowy I, White DE, Rosenberg SA. Ipilimumab (anti-CTLA4 antibody) causes regression of metastatic renal cell cancer associated with enteritis and hypophysitis. J Immunother 30(8):825-830, 2007.

Zou W, Chen L. Inhibitory B7-family molecules in the tumour microenvironment. Nat Rev Immunol 8(6):467-477, 2008.

[Discovery Medicine; ISSN: 1539-6509; Discov Med 18(101):341-350, December 2014. Copyright © Discovery Medicine. All rights reserved.]

 


Hematological Malignancy Diagnostics

Author and Curator: Larry H. Bernstein, MD, FCAP

 

2.4.3 Diagnostics

2.4.3.1 Computer-aided diagnostics

Back-to-Front Design

Robert Didner
Bell Laboratories

Decision-making in the clinical setting
Didner, R., Mar 1999, Amer Clin Lab

Mr. Didner is an Independent Consultant in Systems Analysis, Information Architecture (Informatics), Operations Research, and Human Factors Engineering (Cognitive Psychology), Decision Information Designs, 29 Skyline Dr., Morristown, NJ 07960, U.S.A.; tel.: 973-455-0489; fax/e-mail: bdidner@hotmail.com

A common problem in the medical profession is the level of effort dedicated to administration and paperwork necessitated by various agencies, which contributes to the high cost of medical care. Costs would be reduced and accuracy improved if the clinical data could be captured directly at the point they are generated in a form suitable for transmission to insurers or machine transformable into other formats. Such a capability could also be used to improve the form and the structure of information presented to physicians and support a more comprehensive database linking clinical protocols to outcomes, with the prospect of improving clinical outcomes. Although the problem centers on the physician’s process of determining the diagnosis and treatment of patients and the timely and accurate recording of that process in the medical system, it substantially involves the pathologist and laboratorian, who interact significantly throughout the information-gathering process.

Each of the currently predominant ways of collecting information from diagnostic protocols has drawbacks. Using blank paper to collect free-form notes from the physician is not amenable to computerization; such free-form data are also poorly formulated, formatted, and organized for the clinical decision-making they support. The alternative of preprinted forms listing the possible tests, results, and other information gathered during the diagnostic process facilitates the desired computerization, but the fixed sequence of tests and questions they present impedes the physician from using an optimal decision-making sequence. This follows because:

  • People tend to make decisions and consider information in a step-by-step manner in which intermediate decisions are intermixed with data acquisition steps.
  • The sequence in which components of decisions are made may alter the decision outcome.
  • People tend to consider information in the sequence it is requested or displayed.
  • Since there is a separate optimum sequence of tests and questions for each cluster of history and presenting symptoms, there is no one sequence of tests and questions that can be optimal for all presenting clusters.
  • As additional data and test results are acquired, the optimal sequence of further testing and data acquisition changes, depending on the already acquired information.

Therefore, promoting an arbitrary sequence of information requests with preprinted forms may detract from outcomes by contributing to a non-optimal decision-making sequence. Unlike the decisions resulting from theoretical or normative processes, decisions made by humans are path dependent; that is, the outcome of a decision process may be different if the same components are considered in a different sequence.

Proposed solution

This paper proposes a general approach to gathering data at their source in computer-based form so as to improve the expected outcomes. Such a means must be interactive and dynamic, so that at any point in the clinical process the patient’s presenting symptoms, history, and the data already collected are used to determine the next data or tests requested. That determination must derive from a decision-making strategy designed to produce outcomes with the greatest value and supported by appropriate data collection and display techniques. The strategy must be based on the knowledge of the possible outcomes at any given stage of testing and information gathering, coupled with a metric, or hierarchy of values, for assessing the relative desirability of the possible outcomes.

A value hierarchy

The numbered list below illustrates a value hierarchy. In any particular instance, the higher-numbered values should only be considered once the lower-numbered values have been satisfied. Thus, a diagnostic sequence that is very time or cost efficient should only be considered if it does not increase the likelihood (relative to some other diagnostic sequence) that a life-threatening disorder may be missed, or that one of the diagnostic procedures may cause discomfort.

  1. Minimize the likelihood that a treatable, life-threatening disorder is not treated.
  2. Minimize the likelihood that a treatable, discomfort-causing disorder is not treated.
  3. Minimize the likelihood that a risky procedure (treatment or diagnostic procedure) is inappropriately administered.
  4. Minimize the likelihood that a discomfort-causing procedure is inappropriately administered.
  5. Minimize the likelihood that a costly procedure is inappropriately administered.
  6. Minimize the time of diagnosing and treating the patient.
  7. Minimize the cost of diagnosing and treating the patient.

The above hierarchy is relative, not absolute; for many patients, a little bit of testing discomfort may be worth a lot of time. There are also some factors and graduations intentionally left out for expository simplicity (e.g., acute versus chronic disorders). This value hierarchy is based on a hypothetical patient. Clearly, the hierarchy of a health insurance carrier might be different, as might that of another patient (e.g., a geriatric patient). If the approach outlined herein were to be followed, a value hierarchy agreed to by a majority of stakeholders should be adopted.

Efficiency

Once the higher values are satisfied, the time and cost of diagnosis and treatment should be minimized. One way to do so would be to optimize the sequence in which tests are performed, so as to minimize the number, cost, and time of the tests needed to reach a definitive decision regarding treatment. Such an optimum sequence could be constructed using Claude Shannon’s information theory.

According to this theory, the best next question to ask in any given situation (assuming the question has two possible outcomes) is the one that divides the possible outcomes into two equally likely sets. In the real world, not all tests or questions are equally valuable, costly, or time consuming; therefore, value (risk factors), cost, and time should be used as weighting factors when optimizing the test sequence, but this is a complicating detail at this point.
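
To make this concrete, below is a minimal sketch of the selection rule, assuming a toy model in which each candidate test is a deterministic yes/no question over a set of weighted candidate diagnoses. The diagnoses, tests, and probabilities are invented for illustration, and the value, cost, and time weighting factors mentioned above are deliberately omitted.

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def best_next_test(diagnoses, tests):
    """Pick the binary test with the greatest expected information gain.

    diagnoses: dict mapping diagnosis -> prior probability (sums to 1).
    tests: dict mapping test name -> set of diagnoses for which the
           test is positive (a deterministic toy model).
    """
    best, best_gain = None, -1.0
    h_prior = entropy(diagnoses.values())
    for name, positive_set in tests.items():
        p_pos = sum(p for d, p in diagnoses.items() if d in positive_set)
        p_neg = 1.0 - p_pos
        # Expected entropy of the diagnosis set after seeing the result.
        h_post = 0.0
        for branch_p, branch in ((p_pos, True), (p_neg, False)):
            if branch_p == 0:
                continue
            cond = [p / branch_p for d, p in diagnoses.items()
                    if (d in positive_set) == branch]
            h_post += branch_p * entropy(cond)
        gain = h_prior - h_post
        if gain > best_gain:
            best, best_gain = name, gain
    return best, best_gain

# Four equally likely diagnoses; test A splits them 2/2, test B 1/3.
# The even split wins, exactly as Shannon's criterion predicts.
diagnoses = {"D1": 0.25, "D2": 0.25, "D3": 0.25, "D4": 0.25}
tests = {"A": {"D1", "D2"}, "B": {"D1"}}
print(best_next_test(diagnoses, tests))  # ('A', 1.0)
```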

A value scale

For dynamic computation of outcome values, the hierarchy could be converted into a weighted value scale so that differing outcomes at more than one level of the hierarchy could be readily compared. An example of such a weighted value scale is Quality Adjusted Life Years (QALY).

Although QALY does not incorporate all of the factors in this example, it is a good conceptual starting place.
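
As a toy illustration of such a scale, the sketch below scores two hypothetical diagnostic strategies by expected QALYs. Every probability, life expectancy, and utility weight here is invented for the example; a real scale would also need the additional factors noted above.

```python
def expected_qalys(outcomes):
    """Expected quality-adjusted life years for one diagnostic strategy.

    outcomes: list of (probability, years_of_life, utility) triples,
    where utility runs from 0.0 (death) to 1.0 (full health).
    All values below are hypothetical, for illustration only.
    """
    return sum(p * years * utility for p, years, utility in outcomes)

# Strategy 1: a cheap sequence that misses a treatable disorder 10% of the time.
strategy_1 = [(0.90, 20, 0.95), (0.10, 5, 0.60)]
# Strategy 2: a costlier sequence that almost never misses it.
strategy_2 = [(0.99, 20, 0.95), (0.01, 5, 0.60)]
print(expected_qalys(strategy_1), expected_qalys(strategy_2))  # 17.4 vs 18.84
```

The weighted scale makes outcomes at different levels of the hierarchy directly comparable on a single number.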

The display, request, decision-making relationship

For each clinical determination, the pertinent information should be gathered, organized, formatted, and formulated in a way that facilitates the accuracy, reliability, and efficiency with which that determination is made. A physician treating a patient with high cholesterol and blood pressure (BP), for example, may need to know whether or not the patient’s cholesterol and BP respond to weight changes in order to choose an appropriate treatment (e.g., weight control versus medication). This requires searching records from several sources for BP, certain blood chemistries (e.g., HDL, LDL, and triglycerides), and weight, then attempting to track them against each other over time. Manually reorganizing this clinical information each time it is used is extremely inefficient. More importantly, the current organization and formatting defies principles of human factors for optimally displaying information to enhance human information processing, particularly for decision support.

While a discussion of human factors and cognitive psychology principles is beyond the scope of this paper, following are a few of the system design principles of concern:

  • Minimize the load on short-term memory.
  • Provide information pertinent to a given decision or component of a decision in a compact, contiguous space.
  • Take advantage of basic human perceptual and pattern recognition facilities.
  • Design the form of an information display to complement the decision-making task it supports.

Figure 1 shows fictitious, quasi-random data from a hypothetical patient with moderately elevated cholesterol. This one-page display pulls together all the pertinent data from six years of blood tests and related clinical measurements. At a glance, the physician’s innate pattern recognition, color, and shape perception facilities recognize the patient’s steadily increasing weight, cholesterol, BP, and triglycerides, as well as the declining high-density lipoproteins. It would have taken considerably more time and effort to grasp this information from the raw data collection and blood test reports as they are currently presented in independent, tabular time slices.

Design the formulation of an information display to complement the decision-making task.

The physician may wish to know only the relationship between weight and cardiac risk factors, rather than whether these measures are increasing or decreasing or are within acceptable or marginal ranges. If so, Table 1 shows the correlations between weight and the other factors in a much more direct and simple way, using the same data as in Figure 1. One can readily draw the same conclusions about relationships that were drawn from Figure 1. This type of abstract, symbolic display of derived information also makes it easier to spot relationships when the individual variables are bouncing up and down, unlike the more or less steady rise of most values in Figure 1. This increase in the precision of relationship information is gained at the expense of other types of information (e.g., trends). To display information in an optimum form, then, the system designer must know what the information demands of the task are at the point in the task when the display is to be used.
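
Deriving a table like Table 1 is mechanical once the underlying measurements are captured as structured data. The sketch below computes Pearson correlations of weight against two risk factors from hypothetical longitudinal values; the numbers are invented and are not the data behind Table 1.

```python
from statistics import correlation  # Python 3.10+

# Hypothetical annual measurements for one patient over six visits.
weight        = [82, 85, 88, 90, 93, 96]         # kg
cholesterol   = [195, 205, 210, 222, 230, 241]   # mg/dL
triglycerides = [130, 128, 150, 160, 155, 178]   # mg/dL

# Pearson r of weight against each cardiac risk factor, as in Table 1.
for name, series in [("Cholesterol", cholesterol),
                     ("Triglycerides", triglycerides)]:
    print(f"{name:14s} {correlation(weight, series):+.3f}")
```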

Present the sequence of information display clusters to complement an optimum decision-making strategy.

Just as a fixed sequence of gathering clinical, diagnostic information may lead to a far from optimum outcome, there exists an optimum sequence of testing, considering information, and gathering data that will lead to an optimum outcome (as defined by the value hierarchy) with a minimum of time and expense. The task of the information system designer, then, is to provide or request the right information, in the best form, at each stage of the procedure. For example, Figure 1 is suitable for the diagnostic phase, since it shows the current state of the risk factors and their trends. Table 1, on the other hand, might be more appropriate in determining treatment, where there may be a choice between first trying a strict dietary treatment and going straight to a combination of diet plus medication. The fact that Figure 1 and Table 1 contain somewhat redundant information is not a problem, since they are intended to optimally provide information for different decision-making tasks. The critical need, at this point, is for a model of how to determine what tests to order, what information to request and display, and in what form, at each step of the decision-making process. Commitment to a collaborative relationship between physicians, laboratorians, and other information providers would be an essential requirement for such an undertaking. The ideal diagnostic data-collection instrument is a flexible, computer-based device, such as a notebook computer or a Personal Digital Assistant (PDA)–sized device.

Barriers to interactive, computer-driven data collection at the source

As with any major change, it may be difficult to induce many physicians to change their behavior by interacting directly with a computer instead of with paper and pen. Unlike office workers, who have had to make this transition over the past three decades, most physicians’ livelihoods will not depend on converting to computer interaction. Therefore, the transition must be made attractive and the changes less onerous. Some suggestions follow:

  1. Make the data collection a natural part of the clinical process.
  2. Ensure that the user interface is extremely friendly, easy to learn, and easy to use.
  3. Use a small, portable device.
  4. Use the same device for collection and display of existing information (e.g., test results and history).
  5. Minimize the need for free-form written data entry (use check boxes, forms, etc.).
  6. Allow the entry of notes in pen-based free-form (with the option of automated conversion of numeric data to machine-manipulable form).
  7. Give the physicians a more direct benefit for collecting data, not just a means of helping a clerk at an HMO second-guess the physician’s judgment.
  8. Improve administrative efficiency in the office.
  9. Make the data collection complement the clinical decision-making process.
  10. Improve information displays, leading to better outcomes.
  11. Make better use of the physician’s time and mental effort.

Conclusion

The medical profession is facing a crisis of information. Gathering information is costing a typical practice more and more while fees are being restricted by third parties, and the process of gathering this information may itself be detrimental to outcomes. Gathered properly, in machine-manipulable form, these data could be reformatted so as to greatly improve their value: immediately, in the clinical setting, by leading to decisions with better outcomes; and, in the long run, by contributing to a clinical data warehouse that could greatly improve medical knowledge. The challenge is to create a mechanism for data collection that facilitates, hastens, and improves the outcomes of clinical activity while minimizing the inconvenience to, and resistance to change on the part of, clinical practitioners. This paper is intended to provide a high-level overview of how this may be accomplished and to start a dialogue along these lines.

References

  1. Tversky A. Elimination by aspects: a theory of choice. Psych Rev 1972; 79:281–99.
  2. Didner RS. Back-to-front design: a guns and butter approach. Ergonomics 1982; 25(6):2564–5.
  3. Shannon CE. A mathematical theory of communication. Bell System Technical J 1948; 27:379–423 (July), 623–56 (Oct).
  4. Feeny DH, Torrance GW. Incorporating utility-based quality-of-life assessment measures in clinical trials: two examples. Med Care 1989; 27:S190–204.
  5. Smith S, Mosier J. Guidelines for designing user interface software. ESD-TR-86-278, Aug 1986.
  6. Miller GA. The magical number seven, plus or minus two. Psych Rev 1956; 63(2):81–97.
  7. Sternberg S. High-speed scanning in human memory. Science 1966; 153: 652–4.

Table 1

Correlation of weight with other cardiac risk factors

Cholesterol 0.759384
HDL 0.53908
LDL 0.177297
BP-syst. 0.424728
BP-dia. 0.516167
Triglycerides 0.637817

Figure 1  Hypothetical patient data.

(not shown)

Realtime Clinical Expert Support

http://pharmaceuticalintelligence.com/2015/05/10/realtime-clinical-expert-support/

Regression: A richly textured method for comparison and classification of predictor variables

http://pharmaceuticalintelligence.com/2012/08/14/regression-a-richly-textured-method-for-comparison-and-classification-of-predictor-variables/

Converting Hematology Based Data into an Inferential Interpretation

Larry H. Bernstein, Gil David, James Rucinski and Ronald R. Coifman
In Hematology – Science and Practice
Lawrie CH, Ch 22. Pp541-552.
InTech Feb 2012, ISBN 978-953-51-0174-1
https://www.researchgate.net/profile/Larry_Bernstein/publication/221927033_Converting_Hematology_Based_Data_into_an_Inferential_Interpretation/links/0fcfd507f28c14c8a2000000.pdf

A model for Thalassemia Screening using Hematology Measurements

https://www.researchgate.net/profile/Larry_Bernstein/publication/258848064_A_model_for_Thalassemia_Screening_using_Hematology_Measurements/links/0c9605293c3048060b000000.pdf

2.4.3.2 A model for automated screening of thalassemia in hematology (math study).

Kneifati-Hayek J, Fleischman W, Bernstein LH, Riccioli A, Bellevue R.
Lab Hematol. 2007; 13(4):119-23. http://dx.doi.org/10.1532/LH96.07003

The results of 398 patient screens were collected. Data from the set were divided into training and validation subsets. The Mentzer ratio cutoff was determined through a receiver operating characteristic (ROC) curve on the first subset and then used to screen for thalassemia in the second subset. HgbA2 levels were used to confirm beta-thalassemia.

RESULTS: We determined the correct decision point of the Mentzer index to be a ratio of 20. Physicians can screen patients using this index before further evaluation for beta-thalassemia (P < .05).

CONCLUSION: The proposed method can be implemented by hospitals and laboratories to flag positive matches for further definitive evaluation, and will enable beta-thalassemia screening of a much larger population at little to no additional cost.
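
A rule of this kind is straightforward to automate wherever complete blood count results are available as structured data. The sketch below applies the Mentzer index (mean corpuscular volume divided by the red blood cell count) with the decision point of 20 reported above, assuming, as in the classic Mentzer rule, that values below the cutoff flag a possible thalassemia. The example values are invented, and any implementation would need local revalidation before clinical use.

```python
def mentzer_index(mcv_fl, rbc_millions_per_ul):
    """Mentzer index: MCV (fL) divided by the RBC count (10^6/uL)."""
    return mcv_fl / rbc_millions_per_ul

def flag_for_thalassemia_workup(mcv_fl, rbc, cutoff=20.0):
    """Flag a screen for HbA2 confirmation when the index falls below
    the cutoff; 20 is the decision point reported in the study above."""
    return mentzer_index(mcv_fl, rbc) < cutoff

# Microcytosis with a high RBC count yields a low index.
print(flag_for_thalassemia_workup(62.0, 5.8))  # True  -> confirm with HbA2
print(flag_for_thalassemia_workup(88.0, 4.2))  # False (index ~21)
```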

2.4.3.3 Measurement of granulocyte maturation may improve the early diagnosis of the septic state.

Bernstein LH, Rucinski J. Clin Chem Lab Med. 2011 Sep 21; 49(12):2089-95.
http://dx.doi.org/10.1515/CCLM.2011.688

2.4.3.4 The automated malnutrition assessment.

David G, Bernstein LH, Coifman RR. Nutrition. 2013 Jan; 29(1):113-21.
http://dx.doi.org/10.1016/j.nut.2012.04.017

2.4.3.5 Molecular Diagnostics

Genomic Analysis of Hematological Malignancies

Acute lymphoblastic leukemia (ALL) is the most common hematologic malignancy that occurs in children. Although more than 90% of children with ALL now survive to adulthood, those with the rarest and high-risk forms of the disease continue to have poor prognoses. Through the Pediatric Cancer Genome Project (PCGP), investigators in the Hematological Malignancies Program are identifying the genetic aberrations that cause these aggressive forms of leukemias. Here we present two studies on the genetic bases of early T-cell precursor ALL and acute megakaryoblastic leukemia.

  • Early T-Cell Precursor ALL Is Characterized by Activating Mutations
  • The CBFA2T3-GLIS2 Fusion Gene Defines an Aggressive Subtype of Acute Megakaryoblastic Leukemia in Children

Early T-cell precursor ALL (ETP-ALL), which comprises 15% of all pediatric T-cell leukemias, is an aggressive disease that is typically resistant to contemporary therapies. Children with ETP-ALL have a high rate of relapse and an extremely poor prognosis (i.e., 5-year survival is approximately 20%). The genetic basis of ETP-ALL has remained elusive. Although ETP-ALL is associated with a high burden of DNA copy number aberrations, none are consistently found or suggest a unifying genetic alteration that drives this disease.

Through the efforts of the PCGP, Jinghui Zhang, PhD (Computational Biology), James R. Downing, MD (Pathology), Charles G. Mullighan, MBBS(Hons), MSc, MD (Pathology), and colleagues analyzed the whole-genome sequences of leukemic cells and matched normal DNA from 12 pediatric patients with ETP-ALL. The identified genetic mutations were confirmed in a validation cohort of 52 ETP-ALL specimens and 42 non-ETP T-lineage ALLs (T-ALL).

In the journal Nature, the investigators reported that each ETP-ALL sample carried an average of 1140 sequence mutations and 12 structural variations. Of the structural variations, 51% were breakpoints in genes with well-established roles in hematopoiesis or leukemogenesis (e.g., MLH2, SUZ12, and RUNX1). Eighty-four percent of the structural variations either caused loss of function of the gene in question or resulted in the formation of a fusion gene such as ETV6-INO80D. The ETV6 gene, which encodes a protein that is essential for hematopoiesis, is frequently mutated in leukemia. Among the DNA samples sequenced in this study, ETV6 was altered in 33% of ETP-ALL cases but only 10% of T-ALL cases.

Next-generation sequencing in hematologic malignancies: what will be the dividends?

Jason D. Merker, Anton Valouev, and Jason Gotlib
Ther Adv Hematol. 2012 Dec; 3(6): 333–339.
http://dx.doi.org/10.1177/2040620712458948

The application of high-throughput, massively parallel sequencing technologies to hematologic malignancies over the past several years has provided novel insights into disease initiation, progression, and response to therapy. Here, we describe how these new DNA sequencing technologies have been applied to hematolymphoid malignancies. With further improvements in the sequencing and analysis methods as well as integration of the resulting data with clinical information, we expect these technologies will facilitate more precise and tailored treatment for patients with hematologic neoplasms.

Leveraging cancer genome information in hematologic malignancies.

Rampal R, Levine RL.
J Clin Oncol. 2013 May 20; 31(15):1885-92.
http://dx.doi.org/10.1200/JCO.2013.48.7447

The use of candidate gene and genome-wide discovery studies in the last several years has led to an expansion of our knowledge of the spectrum of recurrent, somatic disease alleles, which contribute to the pathogenesis of hematologic malignancies. Notably, these studies have also begun to fundamentally change our ability to develop informative prognostic schema that inform outcome and therapeutic response, yielding substantive insights into mechanisms of hematopoietic transformation in different tissue compartments. Although these studies have already had important biologic and translational impact, significant challenges remain in systematically applying these findings to clinical decision making and in implementing new technologies for genetic analysis into clinical practice to inform real-time decision making. Here, we review recent major genetic advances in myeloid and lymphoid malignancies, the impact of these findings on prognostic models, our understanding of disease initiation and evolution, and the implication of genomic discoveries on clinical decision making. Finally, we discuss general concepts in genetic modeling and the current state-of-the-art technology used in genetic investigation.

p53 mutations are associated with resistance to chemotherapy and short survival in hematologic malignancies

Wattel E, Preudhomme C, Hecquet B, Vanrumbeke M, et al.
Blood. 1994 Nov 1; 84(9):3148-3157
http://www.bloodjournal.org/content/bloodjournal/84/9/3148.full.pdf

We analyzed the prognostic value of p53 mutations for response to chemotherapy and survival in acute myeloid leukemia (AML), myelodysplastic syndrome (MDS), and chronic lymphocytic leukemia (CLL). Mutations were detected by single-stranded conformation polymorphism (SSCP) analysis of exons 4 to 10 of the P53 gene, and confirmed by direct sequencing. A p53 mutation was found in 16 of 107 (15%) AML, 20 of 182 (11%) MDS, and 9 of 81 (11%) CLL tested. In AML, three of nine (33%) mutated cases and 66 of 81 (81%) nonmutated cases treated with intensive chemotherapy achieved complete remission (CR) (P = .005), and none of five mutated cases and three of six nonmutated cases treated by low-dose Ara C achieved CR or partial remission (PR) (P = .06). Median actuarial survival was 2.5 months in mutated cases and 15 months in nonmutated cases (P < 10⁻⁵). In the MDS patients who received chemotherapy (intensive chemotherapy or low-dose Ara C), 1 of 13 (8%) mutated cases and 23 of 38 (60%) nonmutated cases achieved CR or PR (P = .004), and median actuarial survival was 2.5 and 13.5 months, respectively (P < 10⁻⁵). In all MDS cases (treated and untreated), the survival difference between mutated and nonmutated cases was also highly significant. In CLL, 1 of 8 (12.5%) mutated cases treated by chemotherapy (chlorambucil and/or CHOP and/or fludarabine) responded, as compared with 29 of 36 (80%) nonmutated cases (P = .02). In all CLL cases, survival from p53 analysis was significantly shorter in mutated cases (median 7 months) than in nonmutated cases (median not reached) (P < 10⁻⁵). In 35 of the 45 mutated cases of AML, MDS, and CLL, cytogenetic analysis or SSCP and sequence findings showed loss of the nonmutated P53 allele. Our findings show that p53 mutations are a strong prognostic indicator of response to chemotherapy and survival in AML, MDS, and CLL. The usual association of p53 mutations with loss of the nonmutated P53 allele in those disorders, ie, with absence of normal p53 in tumor cells, suggests that p53 mutations could induce drug resistance, at least in part, by interfering with normal apoptotic pathways in tumor cells.

Genomic approaches to hematologic malignancies

Benjamin L. Ebert and Todd R. Golub
Blood. 2004; 104:923-932
https://www.broadinstitute.org/mpr/publications/projects/genomics/Review%20Genomics%20of%20Heme%20Malig,%20Blood%202004.pdf

In the past several years, experiments using DNA microarrays have contributed to an increasingly refined molecular taxonomy of hematologic malignancies. In addition to the characterization of molecular profiles for known diagnostic classifications, studies have defined patterns of gene expression corresponding to specific molecular abnormalities, oncologic phenotypes, and clinical outcomes. Furthermore, novel subclasses with distinct molecular profiles and clinical behaviors have been identified. In some cases, specific cellular pathways have been highlighted that can be therapeutically targeted. The findings of microarray studies are beginning to enter clinical practice as novel diagnostic tests, and clinical trials are ongoing in which therapeutic agents are being used to target pathways that were identified by gene expression profiling. While the technology of DNA microarrays is becoming well established, genome-wide surveys of gene expression generate large data sets that can easily lead to spurious conclusions. Many challenges remain in the statistical interpretation of gene expression data and the biologic validation of findings. As data accumulate and analyses become more sophisticated, genomic technologies offer the potential to generate increasingly sophisticated insights into the complex molecular circuitry of hematologic malignancies. This review summarizes the current state of discovery and addresses key areas for future research.

2.4.3.6 Flow cytometry

Introduction to Flow Cytometry: Blood Cell Identification

Dana L. Van Laeys
https://www.labce.com/flow_cytometry.aspx

No other laboratory method provides as rapid and detailed an analysis of cellular populations as flow cytometry, making it a valuable tool for the diagnosis and management of several hematologic and immunologic diseases. Understanding this methodology is important for any medical laboratory scientist.

Whether you have no previous experience with flow cytometry or just need a refresher, this course will help you to understand the basic principles, with the help of video tutorials and interactive case studies.

Basic principles include:

  1. Immunophenotypic features of various types of hematologic cells
  2. Labeling cellular elements with fluorochromes
  3. Blood cell identification, specifically B and T lymphocyte identification and analysis
  4. Cell sorting to isolate select cell populations for further analysis
  5. Analyzing and interpreting result reports and printouts


Cancer Biomarkers [11.3.2.3]

Writer and Curator: Larry H. Bernstein, MD, FCAP

Cancer Biomarkers

This discussion is extracted from the Special Section – Cancer Biomarker Update – in the May issue of Archives of Pathology & Laboratory Medicine. It is not intended to be complete, but the content is quite timely. There are three articles I shall cover.

11.3.2.3 Cancer Biomarkers

11.3.2.3.1 Cancer Biomarkers in Structured Data Reporting

11.3.2.3.2 Cancer Biomarkers in Myeloid Malignancies

11.3.2.3.3 National Comprehensive Cancer Network Consensus on Use of Cancer Biomarkers

Cancer Biomarkers – The Role of Structured Data Reporting
Simpson RW, Berman MA, Foulis PR, et al.
Arch Pathol Lab Med. 2015; 139:587–593
http://dx.doi.org/10.5858/arpa.2014-0082-RA

The College of American Pathologists has been producing cancer protocols since 1986 to aid pathologists in the diagnosis and reporting of cancer cases. Many pathologists use the included cancer case summaries as templates for dictation/data entry into the final pathology report. These summaries are now available in a computer-readable format with structured data elements for interoperability, packaged as ‘‘electronic cancer checklists.’’ Most major vendors of anatomic pathology reporting software support this model.

Objectives.—To outline the development and advantages of structured electronic cancer reporting using the electronic cancer checklist model, and to describe its extension to cancer biomarkers and other aspects of cancer reporting.

Data Sources.—Peer-reviewed literature and internal records of the College of American Pathologists.

Conclusions.—Accurate and usable cancer biomarker data reporting will increasingly depend on initial capture of this information as structured data. This process will support the standardization of data elements and biomarker terminology, enabling the meaningful use of these datasets by pathologists, clinicians, tumor registries, and patients.

Narrative Versus Structured Data Reporting

Clinical laboratory reports typically consist of discrete data elements with structured qualitative or quantitative information, often using standardized laboratory methods, data elements, and units. When discrete data elements are electronically transmitted to external clinical information systems, the transmitted information may be annotated with one or more terminologies such as Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT)4 and Logical Observation Identifiers Names and Codes (LOINC),5 although the consistent application of such codes to structured laboratory data is not yet an interoperable standard. Because the structure of clinical laboratory data tends to be fixed and standardized before the point of data entry, reporting these data elements in a tabular synoptic format is a relatively simple process. The report output may not include all data collected (eg, methodologic details), but clinically relevant data can be easily extracted by computer algorithm and automatically reported in easily readable format (including custom text, result explanations, and test value trends).

Anatomic pathology reports, by contrast, have traditionally been narrative and recorded as unstructured or partially structured fields of text. Unfortunately, narrative reporting often lacks consistency in organization, content, units, terminology, and completeness.6–8 These structural inconsistencies create difficulties in finding and understanding clinically important data and increase the chance of omitting key data elements and misinterpreting information present in the narrative. This is particularly problematic when clinicians encounter reports from multiple pathology laboratories or when patients receive care at multiple institutions. Narrative reporting has equally negative effects on computer readability; the ability of computers to correctly parse and classify information contained in a narrative report is imperfect even when using advanced natural language processing software designed specifically for anatomic pathology.9,10 Natural language processing–parsed text must always undergo human review, editing, and signoff before release for patient care or research.

Ensuring consistency and readability of cancer and biomarker reports requires a reporting solution integrated into the pathologist workflow that supports entry of standardized data directly into a laboratory information system and/or electronic health record system. These systems can produce highly readable synoptic reports and can include computer-based report validation of standardized data elements to reduce or eliminate the chance of omitting required data elements. With the transition from narrative to synoptic reporting for cancer cases, many laboratories have been using modified CCPs or locally developed templates or macros, which may or may not contain all required data elements. This common mode of data entry fits well into the pathologist workflow and can result in organized, highly readable synoptic reports, but generally results in information stored as text in a single large data field. Even when results are entered as discrete data elements, subsequent storage mechanisms usually result in nonstructured text or nonstandard custom data fields in a local computer system. Unfortunately, narrative and nonstandardized data sets are very difficult to reliably aggregate and analyze for laboratory quality assurance, research, or cancer registry surveillance. Such aggregated data also remain relatively unreliable because of changes in information systems. Many of these issues can be eliminated by entering and reporting structured data with standardized electronic templates.

CAP eCC History, Development, and Adoption

Efforts to bring structure to cancer pathology reporting began in the late 1980s and early 1990s1,11,12 with publication of templates that were the precursors of the current CCPs (Table). The primary goal was to improve the care of cancer patients by improving the reporting rate of clinically important data elements. The checklist approach was adopted to help standardize terminology and ensure all relevant data elements are reported. The 66 current CCPs, 3 new cancer biomarker templates,13–16 and 85 eCC templates represent the evolution of the original 1986 CCP model. The CCPs and templates are created and maintained through the ongoing work of the CAP Cancer and Cancer Biomarker Reporting Committees. The CCPs are widely adopted by laboratories and used for accreditation purposes by the American College of Surgeons–Commission on Cancer17 and CAP Laboratory Accreditation Program.18

During the past several years, the CAP has worked to create standardized pathology reporting templates that enable individual pathologists and software vendors to capture, store, retrieve, transmit, and analyze diagnostic cancer pathology information. Electronic versions of these templates, the eCCs, are based on consistent structured data representation, which enables simple yet robust computerization of cancer pathology data elements suitable for patient care, cancer registry transmission, and research. Synoptic reports are not the same as ‘‘structured data.’’ Although synoptic reports are formatted for optimum human readability and understanding, they consist of text-based questions and answers (ideally one pair per line) that present problems for computer readability and interoperability. Structured data, by contrast, refers to representation of data elements in a computer-readable data exchange format such as XML. The structured data model used by eCC XML templates assigns a unique identifier (a composite key) to every question and answer choice, template, section, and note listed in the template. Composite keys are used throughout the entire eCC life cycle to transmit the precise identity of each data element and its origin in a specific version of an eCC template.
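
To illustrate what composite keys provide, here is a minimal sketch that parses an eCC-style template fragment; the element names, attributes, and key values are hypothetical stand-ins, not the actual CAP eCC schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical eCC-style fragment: every question and answer carries a
# unique composite key tied to a specific template version, so each
# recorded answer can be traced back to its exact origin.
TEMPLATE = """\
<template id="breast-biomarkers" version="1.0">
  <question ckey="Q.1234.v1" text="Estrogen receptor status">
    <answer ckey="A.5678.v1" text="Positive"/>
    <answer ckey="A.5679.v1" text="Negative"/>
  </question>
</template>"""

root = ET.fromstring(TEMPLATE)
for q in root.iter("question"):
    print(q.get("ckey"), q.get("text"))
    for a in q.iter("answer"):
        print(" ", a.get("ckey"), a.get("text"))
```

Because each answer carries a key bound to a template version, a downstream registry or research database can regenerate the exact question/answer pair as recorded, which is the interoperability property described above.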

The CAP eCC model has been implemented province-wide by Cancer Care Ontario through their multiyear synoptic pathology reporting and change-management project.19–21 The Canadian Partnership Against Cancer is also currently working with several other Canadian provinces to implement population-level electronic synoptic reporting based on the CAP eCC. The Cancer Care Ontario project has shown that there is high acceptability among pathologists and clinicians22 and that data are usable for the secondary needs of tumor registries,20 but there remains room for improvement. For instance, both the Reporting Pathology Protocols project reports23–25 and the Cancer Care Ontario implementation reports22 suggest that pathologists require more time to complete the reporting task.

While this may meet quality and data reporting needs, it remains a potential barrier to acceptance. Automated human-readable report generation also could be improved, especially in terms of creating best practice guidelines for report output. Both data entry and report generation have traditionally been supported by laboratory information systems vendors, but the success of implementation has varied, and often significant effort is required to modify the resultant human readable report to satisfy local clinical needs. Because the final report remains most important for patient care, the CAP Diagnostic Intelligence in Health Information Technology and Pathology Electronic Reporting committees have initiated work on creating and promoting a standardized data structure within cancer pathology reports.

Abbreviated CAP eCC History and Milestones

2010–2012 CCO successfully implements population level electronic synoptic reporting in nearly all disease sites based on 2010 CAP eCC standards, which include AJCC 7th edition TNM staging; 97% of labs report using structured data from eCC.20

2010 CAP Laboratory Accreditation Program begins to survey institutions for inclusion of required CCP data elements in AP reports.38

2010 NAACCR Pathology Data Workgroup develops implementation guide to assist with CAP eCC-based transmissions of cancer data to central cancer registries.39

2011 CCO user acceptability data demonstrate high level of acceptance for eCC-derived synoptic reports among clinicians and pathologists.22

2012 CAP forms the multi-organizational Cancer Biomarker Reporting Workgroup, tasked to produce standardized reporting templates for breast, colorectal, and lung cancer biomarkers.40

2013 eCC-based reporting in Ontario is used to improve quality and practice performance.20

2013 The first cancer biomarker templates are produced for breast, colorectal, and lung cancer.13–16 They are available on the http://www.cap.org/cancerprotocols Web site in Word and PDF format (accessed April 28, 2014). The eCC versions are available through CAP.

2013 Launch of CAP eFRM, a software product to aid vendor integration of eCCs into AP-LIS systems or for use as a standalone product.

2014 By December 2013, CAP is maintaining current versions of 66 CCPs, 3 cancer biomarker templates, and 85 corresponding eCC templates.

References

13. Cagle PT, Sholl LM, Lindeman NI, et al. Template for reporting results of biomarker testing of specimens from patients with non–small cell carcinoma of the lung. Arch Pathol Lab Med. 2014; 138(2):171–174. http://dx.doi.org/10.5858/arpa.2013-0232-CP

14. Cagle PT, Allen TC, Olsen RJ. Lung cancer biomarkers: present status and future developments. Arch Pathol Lab Med. 2013; 137(9):1191–1198.
http://dx.doi.org/10.5858/arpa.2013-0319-CR

15. Bartley AN, Hamilton SR, Alsabeh R, et al. Template for reporting results of biomarker testing of specimens from patients with carcinoma of the colon and rectum. Arch Pathol Lab Med. 2014; 138(2):166–170.
http://dx.doi.org/10.5858/arpa.2013-0231-CP

16. Fitzgibbons PL, Dillon DA, Alsabeh R, et al. Template for reporting results of biomarker testing of specimens from patients with carcinoma of the breast. Arch Pathol Lab Med. 2014; 138(5):595–601.
http://dx.doi.org/10.5858/arpa.2013-0566-CP

20. Srigley J, Lankshear S, Brierley J, et al. Closing the quality loop: facilitating improvement in oncology practice through timely access to clinical performance indicators. J Oncol Pract. 2013; 9(5):e255–e261. http://dx.doi.org/10.1200/JOP.2012.000818

22. Lankshear S, Srigley J, McGowan T, Yurcan M, Sawka C. Standardized synoptic cancer pathology reports—so what and who cares?: a population-based satisfaction survey of 970 pathologists, surgeons, and oncologists. Arch Pathol Lab Med. 2013; 137(11):1599–1602.
http://dx.doi.org/10.5858/arpa.2012-0656-OA

38. College of American Pathologists. CAP cancer protocols frequently asked questions. http://www.cap.org/apps//cap.portal
39. Klein WT, Havener LA, eds. Standards for Cancer Registries Volume V: Pathology Laboratory Electronic Reporting, Version 4.0. Springfield, IL: North American Association of Central Cancer Registries; 2011:1–310.
40. Fitzgibbons PL, Lazar AJ, Spencer S. Introducing new College of American Pathologists reporting templates for cancer biomarkers. Arch Pathol Lab Med. 2014; 138(2):157–158. http://dx.doi.org/10.5858/arpa.2013-0233-ED

41. Amin MB. The 2009 version of the cancer protocols of the College of American Pathologists. Arch Pathol Lab Med. 2010; 134(3):326–330.
http://dx.doi.org/10.1043/1543-2165-134.3.326

Figure 1. (not shown) Narrative versus synoptic versus structured reporting of breast biomarker testing (excerpts). The narrative row shows a portion of a dictated biomarker report. The synoptic row satisfies the College of American Pathologists (CAP) synoptic reporting requirements, but is not computer readable.

Figure 2. (not shown) The College of American Pathologists (CAP) Cancer Protocols (CCPs) are developed by the CAP Cancer Committee. Each CCP is reformulated as question/answer structures, entered into the CAP electronic Cancer Checklist (eCC) Template Editor (not shown), and stored in the eCC template database. The eCC files in XML format are produced from this database and delivered to vendors of anatomic pathology/laboratory information system (LIS) software systems. Vendors convert the eCC files into data entry form implementations using their local technologies. In addition, most vendors create eCC-based templates for creating synoptic reports. When pathologists enter data into the eCC-based data-entry forms, the vendor software is able to run validation checks such as assessing whether all CCP-required data elements are recorded. Synoptic reports are developed from the eCC-derived data and delivered to health care providers for patient care. The eCC-structured data is stored in the vendor database, where it can be transmitted in interoperable format to other computer systems. Secondary uses of eCC-based data include cancer registry reporting, quality assurance, biospecimen annotation, research, decision support, and financial reporting. The horizontal arrows involve the exchange of eCC composite keys, preserving the fidelity of the data as part of an eCC template, providing the foundation for interoperable data transmission formats, and enabling the regeneration of eCC datasets in the exact format in which they were recorded. Activity columns that directly impact health care activities are shaded in light blue. Abbreviation: EHR, electronic health record.

Future

The use of standardized, structured data elements is foundational for the development of improved reporting and clinical decision support for biomarker results. Clinicians are currently faced with synthesizing data from multiple narrative reports to decide on treatment options. Often these narrative reports are from different laboratories with very different report formats and include variable methodologic details, all of which hinders understanding of important results. For biomarkers that determine a patient’s eligibility for specific drugs, a computer-generated report that presents test results in a tabular form, similar to antibiotic susceptibility testing, may be desirable. This reporting method would allow for display of biomarker test results over time and could also link to other databases.

Structured data also allows for clinical decision support: the report can display only the eligible drugs, or a note stating that a test result suggests the patient is not eligible for a specific drug. Using standardized terminology allows these rules to be the same across institutions, even if electronic health record system vendors use different means of implementation.
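
A minimal sketch of such a rule table follows; the biomarker findings, drug names, and eligibility rules are hypothetical and purely illustrative, not clinical guidance.

```python
# Hypothetical decision-support rules keyed to standardized biomarker
# results; the drugs and rules below are invented for illustration.
RULES = {
    ("EGFR", "exon 19 deletion"): {"eligible": ["drug-X"], "note": ""},
    ("ALK", "rearrangement present"): {"eligible": ["drug-Y"], "note": ""},
    ("KRAS", "mutation present"): {
        "eligible": [],
        "note": "Result suggests the patient is not eligible for drug-Z.",
    },
}

def decision_support(results):
    """Map structured biomarker results to display-ready report lines."""
    lines = []
    for marker, finding in results:
        rule = RULES.get((marker, finding))
        if rule is None:
            continue  # No rule defined for this result; display nothing.
        for drug in rule["eligible"]:
            lines.append(f"{marker} {finding}: eligible for {drug}")
        if rule["note"]:
            lines.append(f"{marker} {finding}: {rule['note']}")
    return lines

print("\n".join(decision_support([("EGFR", "exon 19 deletion"),
                                  ("KRAS", "mutation present")])))
```

Because the rules key on standardized terminology rather than free text, the same rule table behaves identically across institutions, regardless of how each vendor implements it.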

Figure 3. (not shown) College of American Pathologists electronic Cancer Checklist lung cancer biomarker template—anaplastic lymphoma kinase (ALK). Abbreviations: EML4, echinoderm microtubule-associated protein-like 4; KIF5B, kinesin family member 5B; KLC1, kinesin light chain 1; TFG, tropomyosin receptor kinase–fused gene.

Figure 4. (not shown) Examples of tumor biomarker dashboards. Abbreviations: ALK, anaplastic lymphoma kinase; ROS1, ROS proto-oncogene 1, receptor tyrosine kinase.

This system would allow for more efficient, more accurate, and safer methods of providing data for optimizing patient care, with all of the discrete data transmitted electronically and linked to the original tumor report. In Ontario, Canada, this vision is rapidly advancing, as demonstrated by the Cancer Care Ontario successes with eCC implementation and current plans to implement the eCC biomarker templates across the province. Future challenges include the identity and tracking of related tumor samples over time and integration of testing from different laboratories. Because testing on a given specimen can be performed at different times and in different laboratories, a future standard must address the annotation of results with tumor source, procurement dates, and other biospecimen-specific data.30 The relationship of test results from multiple specimens from the same patient needs to be recorded in a standard format so that this parent-child hierarchical relationship can be analyzed over time.

Pathologists are increasingly asked to provide biomarker information for patient care, tumor registries, epidemiologic studies, translational research, and quality improvement activities.20 The eCC model provides a pathway to meet these demands, with efficient and error-free data entry, reporting, and transmission of data elements, and with the ability to produce output that is human readable, efficient to use, and easy to interpret. As the CCPs and eCCs have matured, Ontario pathologists and cancer registries have demonstrated success with large-scale implementations. However, continued improvements are needed. As the field of pathology grows, particularly in the area of biomarkers, structured electronic reporting will become critical to helping physicians provide optimal patient care and will facilitate secondary uses of pathology data.

  1. Robb JA, Gulley ML, Fitzgibbons PL, et al. A call to standardize preanalytic data elements for biospecimens. Arch Pathol Lab Med. 2014; 138(4):526–537.

Molecular Genetic Biomarkers in Myeloid Malignancies
Matynia AP, Szankasi P, Shen W, Kelley TW.
Arch Pathol Lab Med. 2015;139:594–601
http://dx.doi.org/10.5858/arpa.2014-0096-RA

Recent studies using massively parallel sequencing technologies, so-called next-generation sequencing, have uncovered numerous recurrent, single-gene variants or mutations across the spectrum of myeloid malignancies.

Objectives.—To review the recent advances in the understanding of the molecular basis of myeloid neoplasms, including their significance for diagnostic and prognostic purposes and the possible implications for the development of novel therapeutic strategies.

Data Sources.—Literature review.

Conclusions.—The recurrent mutations found in myeloid malignancies fall into distinct functional categories.

These include (1) cell signaling factors, (2) transcription factors, (3) regulators of the cell cycle, (4) regulators of DNA methylation, (5) regulators of histone modification, (6) RNA-splicing factors, and (7) components of the cohesin complex. As the clinical significance of these mutations and mutation combinations is established, testing for their presence is likely to become a routine part of the diagnostic workup. This review will attempt to establish a framework for understanding these mutations in the context of myeloproliferative neoplasms, myelodysplastic syndromes, and acute myeloid leukemia.

Pathways Affected by Recurrent Mutations in Myeloid Malignancies

Cell Signaling

The ability of a cell to respond to diverse physiologic stimuli, including cytokines, chemokines, growth factors, and hormones, or to the presence of bacteria and other microorganisms is mediated via the interaction of specific ligands and their corresponding cell surface receptors. Ligand binding usually results in receptor dimerization and activation of a tyrosine kinase, either intrinsically present in the cytoplasmic domain or as an associated polypeptide. Further propagation of the signal from the cell surface to the nucleus involves the formation of macromolecular complexes and the activation or inactivation of various enzymes. The final outcome of the signal transduction is modulation of the expression of certain genes and their products, which ultimately produces a cellular response. In normal cells, this process is tightly regulated owing to the involvement of negative or inhibitory signals. In tumor cells these processes may be perturbed owing to mutations that impart inappropriate activation or deactivation of enzymatic function. Genes for receptor protein tyrosine kinases, such as FLT3 and KIT, or receptor-associated kinases, such as JAK2, are the most commonly mutated cell-signaling factors in myeloid malignancies. Activating mutations in these proteins occur in narrowly defined hotspots, resulting in ligand-independent dimerization or constitutive kinase activation. An example is the protein tyrosine kinase JAK2, which transduces signals from ligand-bound cell surface receptors for thrombopoietin (TPOR/MPL) and erythropoietin (EPOR).

Activating mutations in JAK2 are commonly found in the myeloproliferative neoplasms (MPNs): polycythemia vera (PV), essential thrombocythemia (ET), and primary myelofibrosis (PMF). These disorders appear to be driven, in large part, by the inappropriate activation of growth factor–signaling pathways, and JAK2 is central to signaling from EPOR and TPOR/MPL via STAT5, STAT3, the RAS-MAP kinase pathway, and the PI-3 kinase–AKT pathway. Many negative regulators of cytokine signaling, such as SH2B adaptor protein 3 (SH2B3, also known as LNK)1,2 and cytokine-inducible SH2-containing protein (CISH, also known as SOCS),3 keep the pathway in balance. Downstream RAS signaling is counteracted by NF1, a protein that stimulates the intrinsic RAS guanosine triphosphatase activity. An effect similar to JAK2 mutations may be achieved through mutations in other proteins involved in these pathways, including activating mutations of a surface receptor (MPL) or loss-of-function mutations in negative regulators (SH2B3, CISH), all of which promote survival, proliferation, and differentiation of committed myeloid progenitors.4 Example genes included in this group: JAK2, MPL, KIT, FLT3, CSF3R, PTPN11, KRAS, NRAS, and NF1.

Transcription

Transcription is a tightly regulated process that depends on the formation and assembly of protein and protein-DNA complexes. These complexes, called transcription factors, bind to specific DNA sequences adjacent to the genes they regulate and promote (in the case of an activator) or block (in the case of a repressor) the recruitment of RNA polymerases to those genes. This regulatory activity controls the formation of messenger RNA (mRNA) transcripts. Mutations that block the activity of transcriptional activators may, in certain circumstances, lead to a block in cellular differentiation due to the lack of the necessary gene products. Many of the transcription factors that are recurrently mutated in myeloid malignancies, such as RUNX1, GATA1, GATA2, and CEBPA, are involved in fundamental aspects of myelopoiesis, and it is believed that these mutations lead to a block in myeloid differentiation. Example genes included in this group: RUNX1, CEBPA, GATA1, GATA2, ETV6, and PHF6.

Epigenetic Modifiers: Regulation of DNA Methylation

DNA methylation involves the addition of a methyl group to cytosine bases in the context of cytosine-guanine sequences (so-called CpG sites), leading to the creation of 5-methylcytosine. CpG islands are usually located in or near promoter regions. Their methylation is an important epigenetic mechanism for regulating gene expression and, in the context of heritable methylation patterns, underlies the process of genomic imprinting. Additionally, it has been hypothesized that aberrant DNA methylation may contribute to the pathogenesis of cancers,5–8 including myeloid neoplasms. Although cancer genomes tend to be globally hypomethylated in comparison with normal tissues, hypermethylation of specific CpG islands at tumor suppressor genes, resulting in their inactivation, is common in many tumors.9

DNA methylation promotes the formation of compact, inactive heterochromatin.10 Several factors regulate the process of DNA methylation. Mutations in some of these factors have been found recurrently in myeloid neoplasms.11–13 DNA methyltransferases catalyze the methylation at the 5 position of cytosine. DNMT3A and DNMT3B are involved in de novo methylation, whereas DNMT1 maintains hemimethylated DNA during replication. Once created, 5-methylcytosine can be further modified by a group of methylcytosine dioxygenases (Ten Eleven Translocation dioxygenases: TET1, TET2, and TET3) to 5-hydroxymethylcytosine, a presumed short-lived intermediary that may lead to demethylation of cytosine.

Mutations in both DNMT3A and TET2 likely lead to loss of function of the respective enzyme activities. Recently, mutations in IDH1 and IDH2 have been identified in myeloid neoplasms and other cancers. Interestingly, recurring mutations of an arginine residue in the active site (R132 in IDH1 and R140 in IDH2) prevent the normal catalytic function of the enzyme (conversion of isocitrate to α-ketoglutarate) and appear to induce a neomorphic enzyme activity resulting in the formation of the rare oncometabolite 2-hydroxyglutarate.14 TET2 belongs to a family of dioxygenases that requires α-ketoglutarate as a cofactor.15,16 It has been shown that 2-hydroxyglutarate acts as a competitive inhibitor of α-ketoglutarate–dependent dioxygenases, which include TET2 and members of the KDM family of histone demethylases, thereby inducing epigenetic changes at both the level of DNA methylation and histone modification.14,17

Therapeutic inhibitors of mutant forms of IDH proteins and the resulting 2-hydroxyglutarate are also under investigation.18 Example genes included in this group: DNMT3A, TET2, IDH1, and IDH2.

Epigenetic Modifiers: Mutations Affecting Histone Function

Histone proteins are involved in the dynamic organization of DNA into zones of active euchromatin and inactive heterochromatin in a process that is regulated, in part, by a complex series of posttranslational modifications to histone tails, including acetylation and methylation. These modifications affect the recruitment of regulatory proteins such as transcription factors, corepressors, and coactivators, as well as histone-modifying enzymes themselves. Trimethylation of the lysine at position 27 in histone H3 (H3K27), one of the more common modifications, generally leads to reduced gene expression, and it would be expected that mutations that reduce methylation at H3K27 would activate transcription. Recently, recurrent mutations in several genes encoding histone regulators, including ASXL1, EZH2, SUZ12, and KDM6A (also known as UTX), have been identified.

Perturbations in epigenetic pathways result in global, genome-wide effects, and it is often difficult to identify which altered cellular function eventually leads to neoplasia. The same also holds true for perturbations in the RNA-splicing machinery and the cohesin complex (see below). Example genes included in this group: ASXL1, EZH2, SUZ12, and KDM6A.

Cohesin Complex Genes

The cohesin complex is a conserved multimeric protein complex that regulates cohesion of sister chromatids during cell division,21 postreplicative DNA repair,22,23 and global gene expression.24–28 Example genes included in this group: STAG2, RAD21, SMC1A, and SMC3.

RNA-Splicing Factors

RNA splicing results in the formation of mature mRNA transcripts derived from exons, the protein-coding portion of the genome. Splicing occurs in a macromolecular complex of small nuclear RNAs and proteins assembled de novo on each pre-mRNA strand in a multistep process. This complex is known as the spliceosome. Transcripts may undergo alternative splicing in a tissue- or context-specific manner, and the protein products of alternatively spliced transcripts may have altered function. Example genes included in this group: SF3B1, SRSF2, ZRSR2, and U2AF1.

Cell Cycle Regulators

Example genes included in this group: TP53 and NPM1.

Genetic Biomarkers in Myeloid Malignancies

Myeloproliferative Neoplasms

Myeloproliferative neoplasms encompass a group of clonal stem cell disorders characterized by expansion of 1 or more of the myeloid lineages, resulting in bone marrow hypercellularity and increased peripheral blood myeloid cell counts. The MPN category includes chronic myelogenous leukemia, PV, ET, PMF, chronic neutrophilic leukemia, mastocytosis, and others. The underlying genetic landscape of some of these disorders is very well understood, as in the case of chronic myelogenous leukemia with t(9;22), but is much less well understood in many other entities.

The discovery of a recurrent codon 617 activating mutation (V617F) in exon 14 of the tyrosine kinase JAK2,37–40 and of additional mutations in JAK2 exon 12,41 provided the first genetic evidence of the importance of dysregulated growth factor signaling in these disorders. The prevalence of JAK2 mutations in classical MPNs varies from 95% to 99% in PV, 50% to 70% in ET, and 40% to 50% in PMF,37–41,43 and molecular tests for their detection are available and widely used in clinical practice. Similarly, activating mutations in the MPL gene, encoding the thrombopoietin receptor, are present in approximately 4% of ET cases and approximately 11% of PMF cases.44–47 Recently, calreticulin (CALR), encoding an endoplasmic reticulum chaperone, has also been shown to be important. Somatic CALR mutations are found in 70% to 84% of patients with ET or PMF with wild-type JAK2 and MPL, in 8% of MDS cases, and occasionally in other myeloid neoplasms.48 Clonal analyses suggest CALR mutations act as an initiating mutation in some patients.48 In ET and PMF, CALR mutations and JAK2 and MPL mutations are mutually exclusive,49 and CALR mutations appear to be absent in PV.49 CALR mutations appear to be primarily insertion or deletion mutations that result in a frameshift and the subsequent generation of a novel C-terminal peptide.48,49

Many other genes involved in intracellular signaling, such as negative regulators of the JAK2 signaling pathway, are mutated in MPNs. Among these is SH2B3, which negatively regulates JAK2 activation through its SH2 domain. Mutations in SH2B3 during the chronic phase are uncommon, occurring in fewer than 5% of ET and PMF cases50; however, their frequency increases during leukemic transformation, suggesting a role in disease progression.51 Another negative regulator found mutated in MPNs is the Cbl proto-oncogene, E3 ubiquitin protein ligase gene (CBL). CBL acts as a multifunctional adapter with ubiquitin ligase activity and dampens signaling by competitive blockade.

A shift from a simple, chronic myelogenous leukemia–like model for MPN pathophysiology to a more complex model occurred with the emergence of evidence of a ‘‘pre-JAK2’’ genetic event. This concept is based on the observation that mutations in signaling molecules are not sufficient for disease development and that several cooperating genetic hits appear to be required.4 Mutations in genes involved in epigenetic regulation, including EZH2, ASXL1, and TET2 (also found in many other myeloid neoplasms), are postulated to act as those initiating events, preceding JAK2 V617F mutations.55 EZH2 mutations do not occur in ET, but are present in 3% of PV and 13% of MF cases.56 ASXL1 mutations are found in only approximately 7% of patients with ET and PV but more frequently in PMF cases (19%–40%).57,58 TET2 mutations occur in approximately 14% of MPNs, ranging from 11% in ET to 19% in PMF.59,60 Finally, there are mutations that are rarely found during the chronic phase but may be present at transformation, and are therefore thought to play a role in disease progression. IDH1 and IDH2 mutations, for example, have a low frequency in the chronic phase (0.8% in ET, 1.9% in PV, and 4.2% in MF) but a much higher frequency in blast phase.61

Myelodysplastic Syndromes and MDS/MPN Overlap Disorders 

The myelodysplastic syndromes are a group of clonal hematopoietic stem cell disorders characterized by ineffective hematopoiesis, morphologic evidence of dysplasia in at least 1 of the myeloid lineages, peripheral cytopenias, bone marrow hypercellularity, and an increased risk of development of AML.74 Clonal cytogenetic abnormalities, including large deletions and chromosome gains, as well as balanced translocations, are observed in approximately 50% of MDS cases by routine methods,74 and their identification aids in establishing the diagnosis and may provide prognostic information.

Acute Myelogenous Leukemia

Acute myeloid leukemia is a genetically heterogeneous disease resulting in the accumulation of myeloblasts in bone marrow with a concomitant reduction in normal hematopoiesis. The diagnosis and subclassification of AML depend on detecting the presence of recurrent cytogenetic abnormalities.74 In many cases, particularly those that are cytogenetically normal (CN-AML), several single-gene mutations further aid in the stratification of disease outcomes. The significance of mutations in genes such as FLT3, NPM1, and CEBPA is well established, but next-generation sequencing has led to the discovery of numerous additional recurrent mutations, including in TET2,12,59 ASXL1,104 IDH1/IDH2,13,105,106 DNMT3A,11,107 and PHF6.108

Among the most common mutations found in de novo AML are NPM1, FLT3, and DNMT3A mutations, present in 22% to 29%, 22% to 37%, and 15% to 26% of samples, respectively.31,110,111 Other genes less commonly targeted by mutations are IDH1/IDH2 (15%–20%), KRAS/NRAS (12% combined), RUNX1 (5%–10%), TET2 (8%–14%), TP53 (2%–8%), CEBPA (6%–14%), WT1 (6%–8%), PTPN11 (4%), and KIT (4%–6%).31,110,111

The prognostic significance of a subset of recurrent mutations is well established. In CN-AML, biallelic CEBPA mutations127,128 and NPM1 mutations without FLT3-ITD mutations129–133 are associated with a favorable prognosis. In contrast, FLT3-ITD without NPM1 mutations112,113,129–132 and MLL–partial tandem duplication mutations134–136 portend poor outcome. KIT mutations in AML with t(8;21) or inv(16)131,137 are also associated with unfavorable outcome. The European LeukemiaNet panel138 first proposed a standardized classification according to both cytogenetic and molecular genetic data to allow a better comparison of prognosis among patients with AML. However, only mutations of NPM1, CEBPA, and FLT3 were included in their recommendations. The relevance of more recently discovered mutations, including IDH1, IDH2, WT1, TET2, and ASXL1, among others, remains unclear.139–142 The presence of certain mutations may also allow for more targeted therapeutic regimens; for example, FLT3 kinase inhibitors may be useful in cases with FLT3 mutations, and IDH1 inhibitors are under investigation in patients with IDH1 mutations.143,144

An enormous amount of new information illuminating the genetic complexity of myeloid neoplasms has been generated during the last few years. Much work remains to be done, but it is clear that the pathologist's role in collecting and interpreting this information will be an essential component of the management of these patients.

 

The Cancer Genomics Resource List 2014
Zutter MM, Bloom KJ, Cheng L, Hagemann IS, et al.
Arch Pathol Lab Med. http://dx.doi.org/10.5858/arpa.2014-0330-C

Optimizing the Clinical Utility of Biomarkers in Oncology: The NCCN Biomarkers Compendium
Marian L. Birkeland, Joan S. McClure
Arch Pathol Lab Med. 2015;139:608–611
http://dx.doi.org/10.5858/arpa.2014-0146-RA

The rapid development of commercial biomarker tests for oncology indications has led to confusion about which tests are clinically indicated for oncology care. By consolidating biomarker testing information recommended within National Comprehensive Cancer Network Clinical Practice Guidelines in Oncology (NCCN Guidelines), the NCCN Biomarkers Compendium aims to ensure that patients have access to appropriate biomarker testing based on the evaluations and recommendations of the expert NCCN panel members.
Objectives.—To present the recently launched NCCN Biomarkers Compendium.
Data Sources.—Biomarker testing information recommended within NCCN Clinical Treatment Guidelines as well as published resources for genetic and biological information.
Conclusions.—The NCCN Biomarkers Compendium is a continuously updated resource for clinicians who need access to relevant and succinct information about biomarker testing in oncology and is linked directly to the recommendations provided within the NCCN Clinical Practice Guidelines.

Most recommendations contained within the NCCN Guidelines are based upon lower-level evidence and uniform NCCN consensus (category 2A).1 This is not a deficiency of the guidelines; rather, it reflects the fact that high-level evidence is not available for most decisions across the continuum of care. A deeper look at what constitutes a ‘‘recommendation’’ helps clarify the issue. A recommendation can include all of the recommended workup, surgical options, options for chemotherapy, and tests recommended for ongoing surveillance. Although many of these options are routinely used as standard of care in clinical practice, the body of high-level evidence that would support category 1 recommendations is often lacking, so most carry category 2A levels of evidence and consensus. In other instances, recommendations for chemotherapy regimens for which there is high-level, randomized clinical trial evidence are listed as category 1.

Several derivative products arise from the NCCN Guidelines. The NCCN Drugs & Biologics Compendium (NCCN Compendium) is a resource outlining appropriate use of drugs and biologics as recommended in the NCCN Guidelines. To be included in the compendium, an agent must first be recommended in at least 1 of the NCCN Guidelines. The compendium is typically used by clinicians and payers to determine appropriate use and as a standard to determine coverage. The rapid development and commercialization of biomarker and companion diagnostic testing in cancer gave rise to the NCCN Biomarkers Compendium, to be used by both payers and clinicians to identify biomarker tests recommended by NCCN guideline panels. The NCCN uses a broad definition of ‘‘biomarker’’ for the purposes of this compendium: all tests measuring genes or gene products that are used for diagnosis, screening, monitoring, surveillance, or for providing predictive or prognostic information are included. This compendium focuses on the biology of the biomarker itself and its clinical utility in supporting clinical decision making. Information is organized by the biomarker being measured, not by listing commercially available tests or test kits. Close to 1000 biomarker testing recommendations are included in the NCCN Biomarkers Compendium.

The NCCN Biomarkers Compendium is presented on the NCCN Web site as a series of drop-down menus, allowing users to pick from menus listing Guideline, Disease, Molecular Abnormality, or Gene Symbol.3 Users can retrieve all recommendations for a particular disease, or can select a gene-based search in order to show which diseases have a validated use for a particular gene test. Additional fields can be displayed by selecting from a series of boxes to the right of the drop-down menus (see Figure 1, which shows default fields for RAS testing in colon cancer). Once records are displayed, the resulting table can be sorted by the information in any of the displayed columns. If a searcher chooses, all records can be displayed and then searched with any text term or sorted by any of the columns for a more comprehensive picture of the contents of the database.

Figure 1. (not shown)  KRAS mutation testing recommendation from the National Comprehensive Cancer Network (NCCN) Biomarkers Compendium.3 Reproduced with permission from the NCCN Biomarkers Compendium © 2014 National Comprehensive Cancer Network, Inc. (NCCN.org; accessed February 21, 2014). To view the most recent and complete version of the NCCN Biomarkers Compendium, go online to NCCN.org. National Comprehensive Cancer Network, NCCN, NCCN Guidelines, and all other NCCN content are trademarks owned by the National Comprehensive Cancer Network, Inc.

Disease Description: Colon cancer
Specific Indication: Metastatic disease
Molecular Abnormality: KRAS/NRAS mutation
Test: KRAS/NRAS
Chromosome: 1p13.2, 12p12.1
Gene Symbol: KRAS/NRAS
Test Detects: Mutation
Methodology:
Category of Evidence: 2A
Specimen Types: FFPE tumor tissue
Recommendation: …Determination of RAS mutations.
Test Purpose: Predictive
Guideline Page: COL 4 of 5, COL 4, COL 9
Note: All patients with metastatic colorectal cancer should be genotyped for RAS mutations. At the very least …

Figure 2. (Table)  Example of PDF file generated from ‘‘print’’ command of National Comprehensive Cancer Network (NCCN) Biomarkers Compendium record. Reproduced with permission from the NCCN Biomarkers Compendium © 2014 National Comprehensive Cancer Network, Inc (NCCN.org; accessed February 21, 2014). To view the most recent and complete version of the NCCN Biomarkers Compendium, go online to NCCN.org. National Comprehensive Cancer Network, NCCN, NCCN Guidelines, and all other NCCN content are trademarks owned by the National Comprehensive Cancer Network, Inc.
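For readers who want to handle records like the one above programmatically, the printed fields map naturally onto a simple record type. A minimal Python sketch (the field names follow the printed record; the NCCN publishes no such schema):

    from dataclasses import dataclass

    # Illustrative record type mirroring the printed compendium fields;
    # this is not an official NCCN data schema.
    @dataclass
    class BiomarkerRecommendation:
        disease: str
        indication: str
        molecular_abnormality: str
        gene_symbol: str
        category_of_evidence: str
        specimen_types: str
        test_purpose: str

    rec = BiomarkerRecommendation(
        disease="Colon cancer",
        indication="Metastatic disease",
        molecular_abnormality="KRAS/NRAS mutation",
        gene_symbol="KRAS/NRAS",
        category_of_evidence="2A",
        specimen_types="FFPE tumor tissue",
        test_purpose="Predictive",
    )
    print(rec.gene_symbol)  # KRAS/NRAS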

Table 2. (List) Summary of Testing Types Included in the National Comprehensive Cancer Network Biomarkers Compendium

Protein expression
Translocation
Mutation
Chromosomal defect
Gene rearrangement
Virus detection
Antigen expression
Serum proteins
Amplification
Short repeated sequences
Promoter methylation
Gene expression pattern
Helicobacter pylori

Table 3. Predictive Tests Used for Treatment Decision Making, Extracted From National Comprehensive Cancer Network Guidelines and Biomarkers Compendium

Test   Disease
21-gene RT-PCR Breast cancer
BCR-ABL1 translocation Ph+ acute lymphoblastic leukemia, chronic myelogenous leukemia
ABL1 mutation Ph+ acute lymphoblastic leukemia, chronic myelogenous leukemia
ALK rearrangement Non–small cell lung cancer
BRAF mutation Non–small cell lung cancer, melanoma, colon cancer, rectal cancer
EGFR mutation Non–small cell lung cancer
ERBB2 amplification/overexpression Breast cancer, esophageal and esophagogastric junction cancers, gastric cancer
ESR1 expression Breast cancer
KIT mutation Soft tissue sarcoma: GIST
KRAS mutation Colon cancer, rectal cancer, non–small cell lung cancer
MGMT promoter methylation Central nervous system cancers: anaplastic glioma/glioblastoma
MLH1, MSH2, MSH6, PMS2 expression and/or mutation, MSI testing Colon cancer, rectal cancer
PDGFRA mutation Soft tissue sarcoma: GIST
PGR expression Breast cancer
ROS1 rearrangement Non–small cell lung cancer

A large number of tests were grouped, for the purposes of this simplified table, into the category of gross chromosomal abnormalities. Interestingly, the guidelines so far contain only a single recommendation for a gene expression profiling test: the 21-gene reverse transcription–polymerase chain reaction test recommended within the breast cancer treatment guideline, where the score can be used as part of the decision-making process for chemotherapy in node-negative, hormone receptor–positive, HER2-negative disease.

Table 3 summarizes the biomarker tests included in the NCCN Biomarkers Compendium that are predictive for either responsiveness (eg, BRAF mutation and vemurafenib sensitivity) or nonresponsiveness (eg, KRAS mutation testing and cetuximab or panitumumab insensitivity) to a particular type of therapy. As the number of companion diagnostics and targeted therapies grows, we expect this category of test to become one of the largest categories of testing contained within the Biomarkers Compendium, and it may be surprising to note that only 15 of these types of test are currently recommended within the NCCN Guidelines.

The NCCN Biomarkers Compendium generally avoids recommendations for particular methodologies or test kits to use to assess mutations and translocations. The choice of methodology and supplier for carrying out the recommended biomarker tests remains that of the treating oncologists and pathologists.

The NCCN Biomarkers Compendium may be used by payers as a reference for coverage decisions and by clinicians as a guide to which biomarkers are appropriate to test. The Biomarkers Compendium focuses on the clinical usefulness of biomarker testing rather than specific tests or test kits that identify the presence or absence of the marker. Other groups are continuing to assess clinical and analytic validity for specific biomarker test methodologies. Even the US Food and Drug Administration approval process is limited to clinical and analytic validity, and does not specifically address clinical utility. The NCCN Biomarkers Compendium is complementary to these other efforts. By providing biomarker testing information, the NCCN Biomarkers Compendium aims to ensure that patients have coverage and access to appropriate biomarker testing, based on the evaluations and recommendations of the expert NCCN panel members.

Read Full Post »

The Evolution of Clinical Chemistry in the 20th Century

Curator: Larry H. Bernstein, MD, FCAP

This is a subchapter in the series on developments in diagnostics in the period from 1880 to 1980.

Otto Folin: America’s First Clinical Biochemist

(Extracted from Samuel Meites, AACC History Division; Apr 1996)

Forward by Wendell T. Caraway, PhD.

The first introduction to Folin comes with the Folin-Wu protein-free filtrate, a technique for removing proteins from whole blood or plasma that resulted in water-clear solutions suitable for the determination of glucose, creatinine, uric acid, non-protein nitrogen, and chloride. The major active ingredient used in the precipitation of protein was sodium tungstate prepared “according to Folin”. Folin-Wu sugar tubes were used for the determination of glucose. From these and subsequent encounters, we learned that Folin was a pioneer in methods for the chemical analysis of blood. The determination of uric acid in serum was the Benedict method, in which protein-free filtrate was mixed with solutions of sodium cyanide and arsenophosphotungstic acid and then heated in a water bath to develop a blue color. A thorough review of the literature revealed that Folin and Denis had published, in 1912, a method for uric acid in which they used sodium carbonate rather than sodium cyanide; this was modified and largely superseded the “cyanide” method.

Notes from the author.

Modern clinical chemistry began with the application of 20th century quantitative analysis and instrumentation to measure constituents of blood and urine, and relating the values obtained to human health and disease. In the United States, the first impetus propelling this new area of biochemistry was provided by the 1912 papers of Otto Folin. The only precedent for these stimulating findings was his own earlier and certainly classic papers on the quantitative composition of urine, the laws governing its composition, and studies on the catabolic end products of protein, which led to his ingenious concept of endogenous and exogenous metabolism. He had already determined blood ammonia in 1902. This work preceded the entry of Stanley Benedict and Donald Van Slyke into biochemistry. Once all three of them were active contributors, the future of clinical biochemistry was ensured. Those who would consult the early volumes of the Journal of Biological Chemistry will discover the direction that the work of Otto Folin gave to biochemistry. This modest, unobtrusive man of Harvard was a powerful stimulus and inspiration to others.

Quantitatively, in the years of his scientific productivity, 1897-1934, Otto Folin published 151 (+ 1) journal articles including a chapter in Aberhalden’s handbook and one in Hammarsten’s Festschrift, but excluding his doctoral dissertation, his published abstracts, and several articles in the proceedings of the Association of Life Insurance Directors of America. He also wrote one monograph on food preservatives and produced five editions of his laboratory manual. He published four articles while studying in Europe (1896-98), 28 while at the McLean Hospital (1900-7), and 119 at Harvard (1908-34). In his banner year of 1912 he published 20 papers. His peak period from 1912-15 included 15 papers, the monograph, and most of the work on the first edition of his laboratory manual.

The quality of Otto Folin’s life’s work relates to its impact on biochemistry, particularly clinical biochemistry. Otto’s two brilliant collaborators, Willey Denis and Hsien Wu, must be acknowledged. Without Denis, Otto could not have achieved so rapidly the introduction and popularization of modern blood analysis in the U.S. It would be pointless to conjecture how far Otto would have progressed without this pair.

His work provided the basis of the modern approach to the quantitative analysis of blood and urine through improved methods that reduced the body fluid volume required for analysis. He also applied these methods to metabolic studies on tissues as well as body fluids. Because his interests lay in protein metabolism, his major contributions were directed toward measuring nitrogenous waste or end products. His most dramatic achievement is illustrated by the study of blood nitrogen retention in nephritis and gout.

Folin introduced colorimetry, turbidimetry, and the use of color filters into quantitative clinical biochemistry. He initiated and applied ingeniously conceived reagents and chemical reactions that paved the way for a host of studies by his contemporaries. He introduced the use of phosphomolybdate for detecting phenolic compounds and phosphotungstate for uric acid. These, in turn, led to the quantitation of epinephrine and of tyrosine, tryptophan, and cystine in protein. The molybdate suggested to Fiske and SubbaRow the determination of phosphate as phosphomolybdate, and the tungsten led to the use of tungstic acid as a protein precipitant. Phosphomolybdate became the key reagent in the blood sugar method. Folin resurrected the abandoned Jaffe reaction and established creatine and creatinine analysis. He also laid the groundwork for the discovery of creatine phosphate. Clinical chemistry owes to him the introduction of Nessler’s reagent, permutit, Lloyd’s reagent, gum ghatti, and preservatives for standards, such as benzoic acid and formaldehyde. Among his distinguished graduate investigators were Bloor, Doisy, Fiske, Shaffer, SubbaRow, Sumner, and Wu.

A Golden Age of Clinical Chemistry: 1948–1960

Louis Rosenfeld
Clinical Chemistry 2000; 46(10): 1705–1714

The 12 years from 1948 to 1960 were notable for introduction of the Vacutainer tube, electrophoresis, radioimmunoassay, and the Auto-Analyzer. Also appearing during this interval were new organizations, publications, programs, and services that established a firm foundation for the professional status of clinical chemists. It was a golden age.
Except for photoelectric colorimeters, the clinical chemistry laboratories in 1948—and in many places even later—were not very different from those of 1925. The basic technology and equipment were essentially unchanged. There was lots of glassware of different kinds—pipettes, burettes, wooden racks of test tubes, funnels, filter paper, cylinders, flasks, and beakers—as well as visual colorimeters, centrifuges, water baths, an exhaust hood for evaporating organic solvents after extractions, a microscope for examining urine sediments, a double-pan analytical beam balance for weighing reagents and standard chemicals, and perhaps a pH meter. The most complicated apparatus was the Van Slyke volumetric gas device—manually operated. The emphasis was on classical chemical and biological techniques that did not require instrumentation.

The unparalleled growth and wide-ranging research that began after World War II and have continued into the new century, often aided by government funding for biomedical research and development as civilian health has become a major national goal, have impacted the operations of the clinical chemistry laboratory. The years from 1948 to 1960 were especially notable for the innovative technology that produced better methods for the investigation of many diseases, in many cases leading to better treatment.

Automation in Clinical Chemistry: Current Successes and Trends for the Future
Pierangelo Bonini
Pure & Appl. Chem. 1982;54(11):2017–2030

The history of automation in clinical chemistry is the history of how and when technological progress, in analytical methodology as well as in instrumentation, has helped clinical chemists to mechanize their procedures and to control them.

1 – Preliminary treatment (deproteinization)
2 – Sample + reagent(s)
3 – Incubation
4 – Reading
5 – Calculation
Fig. 1 General steps of a clinical chemistry procedure
Especially in the classic clinical chemistry methods, a preliminary treatment of the sample (in most cases a deproteinization) was an essential step. This was a major constraint on the first tentative steps in automation, and we will see how this problem was faced and which new problems arose from avoiding deproteinization. Mixing samples and reagents is the next step; then there is a more or less long incubation at different temperatures, and finally the reading, which means detection of modifications of some physical property of the mixture. In most cases the development of a colour reveals the reaction but, as is well known, many other possibilities exist; finally the result is calculated.
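To make the final calculation step concrete: in a typical end-point colorimetric assay, absorbance is proportional to concentration (Beer-Lambert law), so the unknown is read against a standard of known concentration. A minimal Python sketch, assuming a one-point standard and a reagent-blank correction (illustrative only, not tied to any particular instrument):

    # End-point colorimetric calculation: absorbance is proportional to
    # concentration, so an unknown is read against a known standard.
    def concentration(a_sample, a_blank, a_standard, c_standard):
        """Return the analyte concentration in the standard's units."""
        return c_standard * (a_sample - a_blank) / (a_standard - a_blank)

    # Example: a 100 mg/dL glucose standard
    print(concentration(a_sample=0.42, a_blank=0.02,
                        a_standard=0.52, c_standard=100.0))  # -> 80.0 mg/dL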

Some 25 years ago, Skeggs (1) presented his paper on continuous flow automation, which was the basis of very successful instruments still used all over the world. In continuous flow automation, the reactions take place in a hydraulic route common to all samples.

Standards and samples enter the analytical stream segmented by air bubbles and, as they circulate, specific chemical reactions and physical manipulations continuously take place in the stream. Finally, after the air bubbles are vented, the colour intensity, proportional to the solute molecules, is monitored in a detector flow cell.

It is evident that the most important aim of automation is to process correctly as many samples in as short a time as possible. This result can be obtained thanks to many technological advances, both in analytical methodology and in instrument technology.

ANALYTICAL METHODOLOGY
– Very active enzymatic reagents
– Shorter reaction times
– Kinetic and fixed-time reactions
– No need for deproteinization
– Surfactants
– Automatic sample blank calculation
– Polychromatic analysis

The introduction of very active enzymatic reagents for the determination of substrates resulted in shorter reaction times and, in many cases, made it possible to avoid deproteinization. Reaction times are also reduced by using kinetic and fixed-time reactions instead of end points; in this case, the measurement of the sample blank does not need a separate tube with a separate reaction mixture. Deproteinization can also be avoided by using surfactants in the reagent mixture. An automatic calculation of sample blanks is also possible by using polychromatic analysis. As this list shows, reduction of reaction times and elimination of tedious operations like deproteinization are the main results of this analytical progress.
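For the kinetic and fixed-time methods mentioned above, what is measured is the absorbance change over a defined interval rather than a final end point; calibration keeps the same ratio form, under the usual assumption that the initial rate is proportional to analyte concentration:

    c_sample = c_standard × (ΔA/Δt)_sample / (ΔA/Δt)_standard

Because a constant blank absorbance cancels out of ΔA, no separate blank tube is needed, which is the advantage the text describes.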

Many relevant improvements in mechanics and optics over the last twenty years, and the tremendous advance in electronics, have largely contributed to the instrumental improvement of clinical chemistry automation.

A recent interesting innovation in the field of centrifugal analyzers consists in the possibility of adding another reagent to an already mixed sample-reagent solution. This innovation allows a preincubation to be made and sample blanks to be read before adding the starter reagent. The ability to measure absorbances in cuvettes positioned longitudinally to the light path, realized in a recent model of centrifugal analyzer, is claimed to be advantageous for reading absorbances in non-homogeneous solutions, for avoiding any influence of reagent volume errors on the absorbance, and for obtaining more suitable calculation factors. Interest in fluorimetric assays is growing, especially in connection with immunofluorimetric drug assays, and this technology has recently been applied to centrifugal analyzers as well. A xenon lamp generates high-energy light, reflected by a mirror onto a holographic grating operated by a stepping motor. The selected wavelength of the exciting light passes through a slit and reaches the rotating cuvettes. Fluorescence is then filtered, read by means of a photomultiplier, and compared to the continuously monitored fluorescence of an appropriate reference compound. In this way, any instability due either to the electro-optical devices or to changes in the physicochemical properties of the solution is corrected.

…more…

Dr. Yellapragada Subbarow – ATP – Energy for Life

One of the observations Dr SubbaRow made while testing the phosphorus method seemed to provide a clue to the mystery of what happens to blood sugar when insulin is administered. Biochemists began investigating the problem when Frederick Banting showed that injections of insulin, the pancreatic hormone, keep blood sugar under control and keep diabetics alive.

SubbaRow worked for 18 months on the problem, often dieting and starving along with animals used in experiments. But the initial observations were finally shown to be neither significant nor unique and the project had to be scrapped in September 1926.

Out of the ashes of this project however arose another project that provided the key to the ancient mystery of muscular contraction. Living organisms resist degeneration and destruction with the help of muscles, and biochemists had long believed that a hypothetical inogen provided the energy required for the flexing of muscles at work.

Two researchers at Cambridge University in the United Kingdom confirmed that lactic acid is formed when muscles contract, and Otto Meyerhof of Germany showed that this lactic acid is a breakdown product of glycogen, the animal starch stored all over the body, particularly in liver, kidneys and muscles. When Professor Archibald Hill of University College London demonstrated that conversion of glycogen to lactic acid partly accounts for the heat produced during muscle contraction, everybody assumed that glycogen was the inogen. And the 1922 Nobel Prize for medicine and physiology was divided between Hill and Meyerhof.

But how is glycogen converted to lactic acid? Embden, another German biochemist, advanced the hypothesis that blood sugar and phosphorus combine to form a hexose phosphoric ester which breaks down glycogen in the muscle to lactic acid.

In the midst of the insulin experiments, it occurred to Fiske and SubbaRow that Embden’s hypothesis would be supported if normal persons were found to have more hexose phosphate in their muscle and liver than diabetics. For diabetes is the failure of the body to use sugar. There would be little reaction between sugar and phosphorus in a diabetic body. If Embden was right, hexose (sugar) phosphate level in the muscle and liver of diabetic animals should rise when insulin is injected.

Fiske and SubbaRow rendered some animals diabetic by removing their pancreas in the spring of 1926, but they could not record any rise in the organic phosphorus content of muscles or livers after insulin was administered to the animals. Sugar phosphates were indeed produced in their animals but they were converted so quickly by enzymes to lactic acid that Fiske and SubbaRow could not detect them with methods then available. This was fortunate for science because, in their mistaken belief that Embden was wrong, they began that summer an extensive study of organic phosphorus compounds in the muscle “to repudiate Meyerhof completely”.

The departmental budget was so poor that SubbaRow often waited on the back streets of Harvard Medical School at night to capture cats he needed for the experiments. When he prepared the cat muscles for estimating their phosphorus content, SubbaRow found he could not get a constant reading in the colorimeter. The intensity of the blue colour went on rising for thirty minutes. Was there something in muscle which delayed the colour reaction? If yes, the time for full colour development should increase with the increase in the quantity of the sample. But the delay was not greater when the sample was 10 c.c. instead of 5 c.c. The only other possibility was that muscle had an organic compound which liberated phosphorus as the reaction in the colorimeter proceeded. This indeed was the case, it turned out. It took a whole year.

The mysterious colour delaying substance was a compound of phosphoric acid and creatine and was named Phosphocreatine. It accounted for two-thirds of the phosphorus in the resting muscle. When they put muscle to work by electric stimulation, the Phosphocreatine level fell and the inorganic phosphorus level rose correspondingly. It completely disappeared when they cut off the blood supply and drove the muscle to the point of “fatigue” by continued electric stimulation. And, presto! It reappeared when the fatigued muscle was allowed a period of rest.

Phosphocreatine created a stir among the scientists present when Fiske unveiled it before the American Society of Biological Chemists at Rochester in April 1927. The Journal of the American Medical Association hailed the discovery in an editorial. The Rockefeller Foundation awarded a fellowship that helped SubbaRow to live comfortably for the first time since his arrival in the United States. All of Harvard Medical School was caught up with an enthusiasm that would be a life-time memory for contemporary students. The students were in awe of the medium-sized, slightly stoop-shouldered, “coloured” man regarded as one of the School’s top research workers.

SubbaRow’s carefully conducted series of experiments disproved Meyerhof’s assumptions about the glycogen-lactic acid cycle. His calculations fully accounted for the heat output during muscle contraction, which Hill had not been able to explain fully in terms of Meyerhof’s theory. Clearly the Nobel Committee had been hasty in awarding the 1922 physiology prize, but the biochemistry orthodoxy, led by Meyerhof and Hill themselves, was not eager to give up its belief in glycogen as the prime source of muscular energy.

Fiske and SubbaRow were fully upheld, and the Meyerhof-Hill theory finally rejected, in 1930, when a Danish physiologist showed that muscles can work to exhaustion without the aid of glycogen or the stimulation of lactic acid.

Fiske and SubbaRow had meanwhile followed a substance that was formed by the combination of phosphorus, liberated from Phosphocreatine, with an unidentified compound in muscle. SubbaRow isolated it and identified it as a chemical in which adenylic acid was linked to two extra molecules of phosphoric acid. By the time he completed the work to the satisfaction of Fiske, it was August 1929 when Harvard Medical School played host to the 13th International Physiological Congress.

ATP was presented to the gathered scientists before the Congress ended. To the dismay of Fiske and SubbaRow, a German science journal arrived in Boston a few days later, published 16 days before the Congress opened. It carried a letter from Karl Lohmann of Meyerhof’s laboratory, saying he had isolated from muscle a compound of adenylic acid linked to two molecules of phosphoric acid!

While Archibald Hill never adjusted himself to the idea that the basis of his Nobel Prize work had been demolished, Otto Meyerhof and his associates had seen the importance of the Phosphocreatine discovery and plunged into follow-up studies in competition with Fiske and SubbaRow. Two associates of Hill had in fact stumbled upon Phosphocreatine at about the same time as Fiske and SubbaRow, but their loyalty to the Meyerhof-Hill theory acted as blinkers, and their hasty and premature publications reveal their confusion about both the nature and the significance of Phosphocreatine.

The discovery of ATP and its significance helped reveal the full story of muscular contraction: glycogen arriving in muscle gets converted into lactic acid, which is siphoned off to the liver for re-synthesis of glycogen. This cycle yields three molecules of ATP and is important in delivering usable food energy to the muscle. Glycolysis, the breakup of glycogen, is relatively slow in getting started, and in any case muscle can retain ATP only in small quantities. In the interval between the beginning of muscle activity and the arrival of fresh ATP from glycolysis, Phosphocreatine maintains the ATP supply by re-synthesizing it as fast as its energy terminals are used up by the muscle for its activity.

Muscular contraction made possible by ATP helps us not only to move our limbs and lift weights but keeps us alive. The heart is after all a muscle pouch and millions of muscle cells embedded in the walls of arteries keep the life-sustaining blood pumped by the heart coursing through body organs. ATP even helps get new life started by powering the sperm’s motion toward the egg as well as the spectacular transformation of the fertilized egg in the womb.

Archibald Hill for long denied any role for ATP in muscle contraction, saying ATP has not been shown to break down in the intact muscle. This objection was also met in 1962 when University of Pennsylvania scientists showed that muscles can contract and relax normally even when glycogen and Phosphocreatine are kept under check with an inhibitor.

Michael Somogyi

Michael Somogyi was born in Reinsdorf, Austria-Hungary, in 1883. He received a degree in chemical engineering from the University of Budapest, and after spending some time there as a graduate assistant in biochemistry, he immigrated to the United States. From 1906 to 1908 he was an assistant in biochemistry at Cornell University.

Returning to his native land in 1908, he became head of the Municipal Laboratory in Budapest, and in 1914 he was granted his Ph.D. After World War I, the politically unstable situation in his homeland led him to return to the United States where he took a job as an instructor in biochemistry at Washington University in St. Louis, Missouri. While there he assisted Philip A. Shaffer and Edward Adelbert Doisy, Sr., a future Nobel Prize recipient, in developing a new method for the preparation of insulin in sufficiently large amounts and of sufficient purity to make it a viable treatment for diabetes. This early work with insulin helped foster Somogyi’s lifelong interest in the treatment and cure of diabetes. He was the first biochemist appointed to the staff of the newly opened Jewish Hospital, and he remained there as the director of their clinical laboratory until his retirement in 1957.

Arterial Blood Gases.  Van Slyke.

The test is used to determine the pH of the blood, the partial pressures of carbon dioxide and oxygen, and the bicarbonate level. Many blood gas analyzers will also report concentrations of lactate, hemoglobin, several electrolytes, oxyhemoglobin, carboxyhemoglobin, and methemoglobin. ABG testing is mainly used in pulmonology and critical care medicine to assess gas exchange across the alveolar-capillary membrane.
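The three quantities at the heart of the report (pH, pCO2, and bicarbonate) are linked by the Henderson-Hasselbalch equation for the bicarbonate buffer system. In its standard clinical form (a textbook relation, not specific to any particular analyzer), with pCO2 in mm Hg:

    pH = 6.1 + log10([HCO3−] / (0.03 × pCO2))

A blood gas analyzer measures pH and pCO2 directly and typically calculates bicarbonate from this relation.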

DONALD DEXTER VAN SLYKE died on May 4, 1971, after a long and productive career that spanned three generations of biochemists and physicians. He left behind not only a bibliography of 317 journal publications and 5 books, but also more than 100 persons who had worked with him and distinguished themselves in biochemistry and academic medicine. His doctoral thesis, with Gomberg at the University of Michigan, was published in the Journal of the American Chemical Society in 1907. Van Slyke received an invitation from Dr. Simon Flexner, Director of the Rockefeller Institute, to come to New York for an interview. In 1911 he spent a year in Berlin with Emil Fischer, who was then the leading chemist of the scientific world. He was particularly impressed by Fischer’s performing all laboratory operations quantitatively—a procedure Van followed throughout his life. Prior to going to Berlin, he published the classic nitrous acid method for the quantitative determination of primary aliphatic amino groups, the first of the many gasometric procedures devised by Van Slyke, which made possible the determination of amino acids. It was the primary method used to study the amino acid composition of proteins for years before chromatography. Thus, his first seven postdoctoral years were centered on the development of better methodology for protein composition and amino acid metabolism.

With his colleague G. M. Meyer, he first demonstrated that amino acids, liberated during digestion in the intestine, are absorbed into the bloodstream, that they are removed by the tissues, and that the liver alone possesses the ability to convert the amino acid nitrogen into urea. From the study of the kinetics of urease action, Van Slyke and Cullen developed equations that depended upon two reactions: (1) the combination of enzyme and substrate in stoichiometric proportions and (2) the breakdown of that combination into the end products. Published in 1914, this formulation, involving two velocity constants, was similar to that arrived at contemporaneously by Michaelis and Menten in Germany in 1913.
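For orientation, the Michaelis-Menten rate law referred to here is, in its now-standard form (Van Slyke and Cullen's own constants were defined somewhat differently):

    v = Vmax [S] / (Km + [S])

where v is the reaction velocity, [S] the substrate concentration, Vmax the limiting velocity at saturating substrate, and Km the substrate concentration at half-maximal velocity.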

He transferred to the Rockefeller Institute’s Hospital in 1914, under Dr. Rufus Cole, where “Men who were studying disease clinically had the right to go as deeply into its fundamental nature as their training allowed, and in the Rockefeller Institute’s Hospital every man who was caring for patients should also be engaged in more fundamental study”. The study of diabetes was already under way by Dr. F. M. Allen, but patients inevitably died of acidosis. Van Slyke reasoned that if incomplete oxidation of fatty acids in the body led to the accumulation of acetoacetic and beta-hydroxybutyric acids in the blood, then a reaction would result between these acids and the bicarbonate ions that would lead to a lower-than-normal bicarbonate concentration in blood plasma. The problem thus became one of devising an analytical method that would permit the quantitative determination of bicarbonate concentration in small amounts of blood plasma. He ingeniously devised a volumetric glass apparatus that was easy to use and required less than ten minutes for the determination of the total carbon dioxide in one cubic centimeter of plasma. It also was soon found to be an excellent apparatus by which to determine blood oxygen concentrations, thus leading to measurements of the percentage saturation of blood hemoglobin with oxygen. This found extensive application in the study of respiratory diseases, such as pneumonia and tuberculosis. It also led to the quantitative study of cyanosis and a monograph on the subject by C. Lundsgaard and Van Slyke.

In all, Van Slyke and his colleagues published twenty-one papers under the general title “Studies of Acidosis,” beginning in 1917 and ending in 1934. They included not only chemical manifestations of acidosis, but Van Slyke, in No. 17 of the series (1921), elaborated and expanded the subject to describe in chemical terms the normal and abnormal variations in the acid-base balance of the blood. This was a landmark in understanding acid-base balance pathology.  Within seven years after Van moved to the Hospital, he had published a total of fifty-three papers, thirty-three of them coauthored with clinical colleagues.

In 1920, Van Slyke and his colleagues undertook a comprehensive investigation of gas and electrolyte equilibria in blood. McLean and Henderson at Harvard had made preliminary studies of blood as a physico-chemical system, but realized that Van Slyke and his colleagues at the Rockefeller Hospital had superior techniques and the facilities necessary for such an undertaking. A collaboration thereupon began between the two laboratories, which resulted in rapid progress toward an exact physico-chemical description of the role of hemoglobin in the transport of oxygen and carbon dioxide, of the distribution of diffusible ions and water between erythrocytes and plasma, and of factors, such as the degree of oxygenation of hemoglobin and the hydrogen ion concentration, that modified these distributions. For this work Van Slyke converted his volumetric gas analysis apparatus into a manometric method; the manometric apparatus proved to give results that were from five to ten times more accurate.

A series of papers resulted on the CO2 titration curves of oxy- and deoxyhemoglobin, of oxygenated and reduced whole blood, and of blood subjected to different degrees of oxygenation, and on the distribution of diffusible ions in blood. These developed equations that predicted the change in distribution of water and diffusible ions between blood plasma and blood cells when there was a change in pH of the oxygenated blood. A significant contribution of Van Slyke and his colleagues was the application of the Gibbs-Donnan Law to the blood—regarded as a two-phase system, in which one phase (the erythrocytes) contained a high concentration of nondiffusible negative ions, i.e., those associated with hemoglobin, and cations, which were not freely exchangeable between cells and plasma. By changing the pH through varying the CO2 tension, the concentration of negative hemoglobin charges changed by a predictable amount. This, in turn, changed the distribution of diffusible anions such as Cl− and HCO3− in order to restore the Gibbs-Donnan equilibrium. Redistribution of water occurred to restore osmotic equilibrium. The experimental results confirmed the predictions of the equations.
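The Gibbs-Donnan condition invoked here states that, at equilibrium, every diffusible anion distributes between erythrocytes (c) and plasma (p) in the same ratio r; written for the two anions just named:

    [Cl−]c / [Cl−]p = [HCO3−]c / [HCO3−]p = r

Because the charge on hemoglobin varies with pH, r shifts as the CO2 tension changes, which is what drives the redistribution of chloride, bicarbonate, and water described above.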

As a spin-off from the physico-chemical study of the blood, Van undertook, in 1922, to put the concept of buffer value of weak electrolytes on a mathematically exact basis.
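That mathematically exact basis was the buffer value β, defined as the amount of strong base dB required to produce a pH change dpH. For a single weak acid of total concentration C and dissociation constant Ka, it takes the form familiar from standard acid-base chemistry (quoted here for context, not in Van Slyke's original notation):

    β = dB/d(pH) = 2.303 C Ka [H+] / (Ka + [H+])²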

This proved to be useful in determining buffer values of mixed, polyvalent, and amphoteric electrolytes, and put the understanding of buffering on a quantitative basis. A monograph in Medicine entitled “Observation on the Courses of Different Types of Bright’s Disease, and on the Resultant Changes in Renal Anatomy,” was a landmark that related the changes occurring at different stages of renal deterioration to the quantitative changes taking place in kidney function. During this period, Van Slyke and R. M. Archibald identified glutamine as the source of urinary ammonia. During World War II, Van and his colleagues documented the effect of shock on renal function and, with R. A. Phillips, developed a simple method, based on specific gravity, suitable for use in the field.

Over 100 of Van’s publications were devoted to methodology, and the importance of Van Slyke’s contribution to clinical chemical methodology cannot be overestimated. These methods covered the blood organic constituents (carbohydrates, fats, proteins, amino acids, urea, nonprotein nitrogen, and phospholipids) and the inorganic constituents (total cations, calcium, chlorides, phosphate, and the gases carbon dioxide, carbon monoxide, and nitrogen). It was said that a Van Slyke manometric apparatus was almost all the special equipment needed to perform most of the clinical chemical analyses customarily performed prior to the introduction of photocolorimeters and spectrophotometers for such determinations.

The progress made in the medical sciences in genetics, immunology, endocrinology, and antibiotics during the second half of the twentieth century obscures at times the progress that was made in basic and necessary biochemical knowledge during the first half. Methods capable of giving accurate quantitative chemical information on biological material had to be painstakingly devised; basic questions on chemical behavior and metabolism had to be answered; and, finally, those factors that adversely modified the normal chemical reactions in the body so that abnormal conditions arise that we characterize as disease states had to be identified.

Viewed in retrospect, he combined in one scientific lifetime (1) basic contributions to the chemistry of body constituents and their chemical behavior in the body, (2) a chemical understanding of physiological functions of certain organ systems (notably the respiratory and renal), and (3) how such information could be exploited in the understanding and treatment of disease. That outstanding additions to knowledge in all three categories were possible was in large measure due to his sound and broadly based chemical preparation, his ingenuity in devising means of accurate measurements of chemical constituents, and the opportunity given him at the Hospital of the Rockefeller Institute to study disease in company with physicians.

In addition, he found time to work collaboratively with Dr. John P. Peters of Yale on the classic, two-volume Quantitative Clinical Chemistry. In 1922, John P. Peters, who had just gone to Yale from Van Slyke’s laboratory as an Associate Professor of Medicine, was asked by a publisher to write a modest handbook for clinicians describing useful chemical methods and discussing their application to clinical problems. It was originally to be called “Quantitative Chemistry in Clinical Medicine.” He soon found that it was going to be a bigger job than he could handle alone and asked Van Slyke to join him in writing it. Van agreed, and the two men proceeded to draw up an outline and divide the writing of the first drafts of the chapters between them. They also agreed to exchange each chapter until it met the satisfaction of both. At the time it was published in 1931, it contained practically all that could be stated with confidence about those aspects of disease that could be and had been studied by chemical means. It was widely accepted throughout the medical world as the “Bible” of quantitative clinical chemistry, and to this day some of the chapters have not become outdated.

Paul Flory

Paul J. Flory was born in Sterling, Illinois, in 1910. He attended Manchester College, an institution for which he retained an abiding affection. He did his graduate work at Ohio State University, earning his Ph.D. in 1934. He was awarded the Nobel Prize in Chemistry in 1974, largely for his work in the area of the physical chemistry of macromolecules.

Flory worked as a newly minted Ph.D. for the DuPont Company in the Central Research Department with Wallace H. Carothers. This early experience with practical research instilled in Flory a lifelong appreciation for the value of industrial application. His work with the Air Force Office of Scientific Research and his later support for the Industrial Affiliates program at Stanford University demonstrated his belief in the need for theory and practice to work hand-in-hand.

Following the death of Carothers in 1937, Flory joined the University of Cincinnati’s Basic Science Research Laboratory. After the war Flory taught at Cornell University from 1948 until 1957, when he became executive director of the Mellon Institute. In 1961 he joined the chemistry faculty at Stanford, where he would remain until his retirement.

Among the high points of Flory’s years at Stanford were his receipt of the National Medal of Science (1974), the Priestley Award (1974), the J. Willard Gibbs Medal (1973), the Peter Debye Award in Physical Chemistry (1969), and the Charles Goodyear Medal (1968). He also traveled extensively, including working tours to the U.S.S.R. and the People’s Republic of China.

Abraham Savitzky

Abraham Savitzky was born on May 29, 1919, in New York City. He received his bachelor’s degree from the New York State College for Teachers in 1941. After serving in the U.S. Air Force during World War II, he obtained a master’s degree in 1947 and a Ph.D. in 1949 in physical chemistry from Columbia University.

In 1950, after working at Columbia for a year, he began a long career with the Perkin-Elmer Corporation. Savitzky started with Perkin-Elmer as a staff scientist who was chiefly concerned with the design and development of infrared instruments. By 1956 he was named Perkin-Elmer’s new product coordinator for the Instrument Division, and as the years passed, he continued to gain more and more recognition for his work in the company. Most of his work with Perkin-Elmer focused on computer-aided analytical chemistry, data reduction, infrared spectroscopy, time-sharing systems, and computer plotting. He retired from Perkin-Elmer in 1985.

Abraham Savitzky holds seven U.S. patents pertaining to computerization and chemical apparatus. During his long career he presented numerous papers and wrote several manuscripts, including “Smoothing and Differentiation of Data by Simplified Least Squares Procedures.” This paper, which is the collaborative effort of Savitzky and Marcel J. E. Golay, was published in volume 36 of Analytical Chemistry, July 1964. It is one of the most famous, respected, and heavily cited articles in its field. In recognition of his many significant accomplishments in the field of analytical chemistry and computer science, Savitzky received the Society of Applied Spectroscopy Award in 1983 and the Williams-Wright Award from the Coblenz Society in 1986.
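The smoothing procedure from that 1964 paper fits a low-order polynomial to a sliding window of points by least squares and replaces each point with the fitted value, suppressing noise while preserving peak shape; it survives in standard scientific libraries today. A minimal sketch using SciPy's implementation (the synthetic data are illustrative only):

    import numpy as np
    from scipy.signal import savgol_filter

    # Noisy synthetic "spectrum": a Gaussian peak plus random noise.
    x = np.linspace(-5, 5, 201)
    noisy = np.exp(-x**2) + np.random.normal(0, 0.05, x.size)

    # Savitzky-Golay smoothing: 11-point window, 3rd-order polynomial.
    smoothed = savgol_filter(noisy, window_length=11, polyorder=3)

    # The same convolution machinery yields smoothed derivatives directly.
    first_deriv = savgol_filter(noisy, 11, 3, deriv=1, delta=x[1] - x[0])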

Samuel Natelson

Samuel Natelson attended City College of New York and received his B.S. in chemistry in 1928. As a graduate student he attended New York University, receiving an Sc.M. in 1930 and his Ph.D. in 1931. After receiving his Ph.D., he began his career teaching at Girls Commercial High School. While maintaining his teaching position, Natelson joined the Jewish Hospital of Brooklyn in 1933. Working as a clinical chemist for Jewish Hospital, Natelson first conceived of the idea of a society by and for clinical chemists, and he worked to organize the nine charter members of the American Association of Clinical Chemists, which formally began in 1948. A pioneer in the field of clinical chemistry, Natelson became a role model for the clinical chemist; he developed the use of microtechniques in clinical chemistry. He served as a consultant to the National Aeronautics and Space Administration in the 1960s, helping analyze the effect of weightless environments on astronauts’ blood. Natelson spent his later career as chair of the biochemistry department at Michael Reese Hospital and as a lecturer at the Illinois Institute of Technology.

Arnold Beckman

Arnold Orville Beckman (April 10, 1900 – May 18, 2004) was an American chemist, inventor, investor, and philanthropist. While a professor at Caltech, he founded Beckman Instruments based on his 1934 invention of the pH meter, a device for measuring acidity, later considered to have “revolutionized the study of chemistry and biology”.[1] He also developed the DU spectrophotometer, “probably the most important instrument ever developed towards the advancement of bioscience”.[2] Beckman funded the first transistor company, thus giving rise to Silicon Valley.[3]

He earned his bachelor’s degree in chemical engineering in 1922 and his master’s degree in physical chemistry in 1923. For his master’s degree he studied the thermodynamics of aqueous ammonia solutions, a subject introduced to him by T. A. White. Beckman decided to go to Caltech for his doctorate. He stayed there for a year before returning to New York to be near his fiancée, Mabel. He found a job with Western Electric’s engineering department, the precursor to the Bell Telephone Laboratories. Working with Walter A. Shewhart, Beckman developed quality control programs for the manufacture of vacuum tubes and learned about circuit design. It was here that Beckman discovered his interest in electronics.

In 1926 the couple moved back to California and Beckman resumed his studies at Caltech. He became interested in ultraviolet photolysis and worked with his doctoral advisor, Roscoe G. Dickinson, on an instrument to find the energy of ultraviolet light. It worked by shining the ultraviolet light onto a thermocouple, converting the incident heat into electricity, which drove a galvanometer. After receiving a Ph.D. in photochemistry in 1928 for this application of quantum theory to chemical reactions, Beckman was asked to stay on at Caltech as an instructor and then as a professor. Linus Pauling, another of Roscoe G. Dickinson’s graduate students, was also asked to stay on at Caltech.

During his time at Caltech, Beckman was active in teaching at both the introductory and advanced graduate levels. Beckman shared his expertise in glass-blowing by teaching classes in the machine shop. He also taught classes in the design and use of research instruments. Beckman dealt first-hand with the chemists’ need for good instrumentation as manager of the chemistry department’s instrument shop. Beckman’s interest in electronics made him very popular within the chemistry department at Caltech, as he was very skilled in building measuring instruments.

Over the time that he was at Caltech, the focus of the department moved increasingly toward pure science and away from chemical engineering and applied chemistry. Arthur Amos Noyes, head of the chemistry division, encouraged both Beckman and chemical engineer William Lacey to be in contact with real-world engineers and chemists, and Robert Andrews Millikan, Caltech’s president, referred technical questions from government and business to Beckman.

Sunkist Growers was having problems with its manufacturing process. Lemons that were not saleable as produce were made into pectin or citric acid, with sulfur dioxide used as a preservative. Sunkist needed to know the acidity of the product at any given time. Chemist Glen Joseph at Sunkist was attempting to measure the hydrogen-ion concentration in lemon juice electrochemically, but sulfur dioxide damaged hydrogen electrodes, and non-reactive glass electrodes produced weak signals and were fragile.

Joseph approached Beckman, who proposed that instead of trying to increase the sensitivity of his measurements, he amplify his results. Beckman, familiar with glassblowing, electricity, and chemistry, suggested a design for a vacuum-tube amplifier and ended up building a working apparatus for Joseph. The glass electrode used to measure pH was placed in a grid circuit in the vacuum tube, producing an amplified signal which could then be read by an electronic meter. The prototype was so useful that Joseph requested a second unit.
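The underlying measurement problem is set by standard electrode physics: the glass electrode's potential follows the Nernst relation, changing by only about 59 mV per pH unit at room temperature, from a source with very high internal resistance. The relation (general electrochemistry, not Beckman's own notation) is:

    E = E0 − (2.303 RT / F) × pH ≈ E0 − 59.2 mV × pH (at 25 °C)

Hence the need for a high-input-impedance vacuum-tube amplifier rather than a more sensitive galvanometer.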

Beckman saw an opportunity and, rethinking the project, decided to create a complete chemical instrument which could be easily transported and used by nonspecialists. By October 1934, he had registered patent application U.S. Patent No. 2,058,761 for his “acidimeter”, later renamed the pH meter. Although it was expensive at $195, roughly the starting monthly wage for a chemistry professor at that time, it was significantly cheaper than the estimated cost of building a comparable instrument from individual components, about $500. The original pH meter weighed in at nearly 7 kg but was a substantial improvement over a benchful of delicate equipment. The earliest meter had a design glitch, in that the pH readings changed with the depth of immersion of the electrodes, but Beckman fixed the problem by sealing the glass bulb of the electrode. By 11 May 1939, sales were successful enough that Beckman left Caltech to become the full-time president of National Technical Laboratories. By 1940, Beckman was able to take out a loan to build his own 12,000 square foot factory in South Pasadena.

In 1940, the equipment needed to analyze emission spectra in the visible spectrum could cost a laboratory as much as $3,000, a huge amount at that time. There was also growing interest in examining ultraviolet spectra beyond that range. In the same way that he had created a single easy-to-use instrument for measuring pH, Beckman made it a goal to create an easy-to-use instrument for spectrophotometry. Beckman’s research team, led by Howard Cary, developed several models.

The new spectrophotometers used a prism to spread light into its spectrum and a phototube to “read” the spectrum and generate electrical signals, creating a standardized “fingerprint” for the material tested. With Beckman’s model D, later known as the DU spectrophotometer, National Technical Laboratories successfully created the first easy-to-use single instrument containing both the optical and electronic components needed for ultraviolet-absorption spectrophotometry. The user could insert a sample, dial up the desired wavelength, and read the amount of absorption at that wavelength from a simple meter. It produced accurate absorption spectra in both the ultraviolet and the visible regions of the spectrum with relative ease and repeatable accuracy. The National Bureau of Standards ran tests to certify that the DU’s results were accurate and repeatable and recommended its use.
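What the meter reading meant chemically is given by the Beer-Lambert law, the basis of absorption spectrophotometry (a standard relation, quoted for context):

    A = log10(I0 / I) = ε l c

where A is absorbance, I0 and I are the incident and transmitted intensities, ε is the molar absorptivity at the selected wavelength, l is the path length, and c is the concentration. Reading A at a fixed wavelength therefore gives the concentration directly.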

Beckman’s DU spectrophotometer has been referred to as the “Model T” of scientific instruments: “This device forever simplified and streamlined chemical analysis, by allowing researchers to perform a 99.9% accurate biological assessment of a substance within minutes, as opposed to the weeks required previously for results of only 25% accuracy.” Nobel laureate Bruce Merrifield is quoted as calling the DU spectrophotometer “probably the most important instrument ever developed towards the advancement of bioscience.”

Development of the spectrophotometer also had direct relevance to the war effort. The role of vitamins in health was being studied, and scientists wanted to identify Vitamin A-rich foods to keep soldiers healthy. Previous methods involved feeding rats for several weeks, then performing a biopsy to estimate Vitamin A levels. The DU spectrophotometer yielded better results in a matter of minutes. The DU spectrophotometer was also an important tool for scientists studying and producing the new wonder drug penicillin. By the end of the war, American pharmaceutical companies were producing 650 billion units of penicillin each month. Much of the work done in this area during World War II was kept secret until after the war.

Beckman also developed the infrared spectrophotometer, first the IR-1 and then, in 1953, a redesigned instrument, the IR-4, which could be operated using either a single or a double beam of infrared light. This allowed a user to take both the reference measurement and the sample measurement at the same time.

Beckman Coulter Inc., is an American company that makes biomedical laboratory instruments. Founded by Caltech professor Arnold O. Beckman in 1935 as National Technical Laboratories to commercialize a pH meter that he had invented, the company eventually grew to employ over 10,000 people, with $2.4 billion in annual sales by 2004. Its current headquarters are in Brea, California.

In the 1940s, Beckman changed the name to Arnold O. Beckman, Inc. to sell oxygen analyzers, the Helipot precision potentiometer, and spectrophotometers. In the 1950s, the company name changed to Beckman Instruments, Inc.

Beckman was contacted by Paul Rosenberg of MIT’s Radiation Laboratory. The lab was part of a secret network of research institutions in the United States and Britain working to develop radar, “radio detection and ranging”. The project was interested in Beckman because of the high quality of the tuning knobs, or potentiometers, used on his pH meters; Beckman had trademarked their design under the name “Helipot”, for “helical potentiometer”. Rosenberg had found that the Helipot was ten times more precise than other knobs, and the knob was redesigned with a continuous groove so that the contact could not be jarred loose.

Beckman instruments were also used by the Manhattan Project to measure radiation in gas-filled, electrically charged ionization chambers in nuclear reactors. The pH meter was adapted to the job with a relatively minor adjustment, substituting an input-load resistor for the glass electrode, and as a result Beckman Instruments gained a new product, the micro-ammeter.

After the war, Beckman developed oxygen analyzers that were used to monitor conditions in incubators for premature babies. Doctors at Johns Hopkins University used them to determine recommendations for healthy oxygen levels for incubators.

Beckman himself was approached by California governor Goodwin Knight to head a Special Committee on Air Pollution and propose ways to combat smog. At the end of 1953, the committee made its findings public in a report, nicknamed the “Beckman Bible”, that advised key steps to be taken immediately.

In 1955, Beckman established the seminal Shockley Semiconductor Laboratory as a division of Beckman Instruments to begin commercializing the semiconductor transistor technology invented by Caltech alumnus William Shockley. The Shockley Laboratory was established in nearby Mountain View, California, and thus, “Silicon Valley” was born.

Beckman also saw that computers and automation offered a myriad of opportunities for integration into instruments, and the development of new instruments.

The Arnold and Mabel Beckman Foundation was incorporated in September 1977.  At the time of Beckman’s death, the Foundation had given more than 400 million dollars to a variety of charities and organizations. In 1990, it was considered one of the top ten foundations in California, based on annual gifts. Donations chiefly went to scientists and scientific causes as well as Beckman’s alma maters. He is quoted as saying, “I accumulated my wealth by selling instruments to scientists,… so I thought it would be appropriate to make contributions to science, and that’s been my number one guideline for charity.”

Wallace H. Coulter

Engineer, Inventor, Entrepreneur, Visionary

Wallace Henry Coulter was an engineer, inventor, entrepreneur and visionary. He was co-founder and Chairman of Coulter® Corporation, a worldwide medical diagnostics company headquartered in Miami, Florida. The two great passions of his life were applying engineering principles to scientific research, and embracing the diversity of world cultures. The first passion led him to invent the Coulter Principle™, the reference method for counting and sizing microscopic particles suspended in a fluid.

This invention served as the cornerstone for automating the labor-intensive process of counting and testing blood. With his vision and tenacity, Wallace Coulter was a founding father of the field of laboratory hematology, the science and study of blood. His global viewpoint and passion for world cultures inspired him to establish over twenty international subsidiaries, and he recognized that it was imperative to employ locally based staff to serve his customers before this became standard business strategy.
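
In outline, the Coulter Principle pulls cells through a tiny aperture between two electrodes; each passing cell displaces conducting fluid, producing a voltage pulse whose height tracks the cell’s volume. The following is a minimal sketch of that counting-and-sizing logic, with the pulse trace, noise threshold, and calibration factor all hypothetical:

```python
# Toy Coulter counter: each pulse above the noise threshold is one cell,
# and pulse height is taken as proportional to cell volume.
pulses_mv = [0.2, 8.5, 0.1, 9.1, 12.4, 0.3, 8.8]  # simulated detector trace (mV)
NOISE_THRESHOLD_MV = 1.0
MV_PER_FEMTOLITER = 0.1                           # assumed calibration factor

cell_pulses = [p for p in pulses_mv if p > NOISE_THRESHOLD_MV]
volumes_fl = [p / MV_PER_FEMTOLITER for p in cell_pulses]

print(f"cell count: {len(cell_pulses)}")                                # 4
print(f"mean cell volume: {sum(volumes_fl) / len(volumes_fl):.1f} fL")  # 97.0 fL
```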

Wallace’s first attempts to patent his invention were turned away by more than one attorney who believed “you cannot patent a hole”. Persistent as always, Wallace finally applied for his first patent in 1949, and it was issued on October 20, 1953. That same year, two prototypes were sent to the National Institutes of Health for evaluation; shortly after, the NIH published its findings in two key papers, citing the improved accuracy and convenience of the Coulter method of counting blood cells. Also in 1953, Wallace publicly disclosed his invention in his one and only technical paper, presented at the National Electronics Conference: “High Speed Automatic Blood Cell Counter and Cell Size Analyzer”.

Leonard Skeggs invented the first continuous flow analyzer in 1957. This groundbreaking development completely changed the way chemistry was carried out: many of the laborious tests that dominated lab work could be automated, increasing productivity and freeing personnel for other, more challenging tasks.

Continuous flow analysis and its offshoots and descendants are an integral part of modern chemistry. It might therefore be some consolation to Leonard Skeggs to know that not only was he the beneficiary of an appellation with a long and fascinating history, he also created a revolution in wet chemistry that is still with us today.

Technicon

The AutoAnalyzer is an automated analyzer using a flow technique called continuous flow analysis (CFA), first made by the Technicon Corporation. The instrument was invented in 1957 by Leonard Skeggs, PhD, and commercialized by Jack Whitehead’s Technicon Corporation. The first applications were for clinical analysis, but methods for industrial analysis soon followed. The design is based on segmenting a continuously flowing stream with air bubbles.

In continuous flow analysis (CFA), a continuous stream of material is divided by air bubbles into discrete segments in which chemical reactions occur. The continuous stream of liquid samples and reagents is combined and transported in tubing and mixing coils. The tubing passes the samples from one apparatus to the next, with each apparatus performing a different function, such as distillation, dialysis, extraction, ion exchange, heating, incubation, and subsequent recording of a signal. An essential principle of the system is the introduction of air bubbles. The air bubbles segment each sample into discrete packets and act as barriers between packets to prevent cross-contamination as they travel down the length of the tubing. The air bubbles also assist mixing by creating turbulent (bolus) flow, and give operators a quick and easy check of the flow characteristics of the liquid.

Samples and standards are treated in an exactly identical manner as they travel the length of the tubing, which eliminates the need to wait for a steady-state signal. However, because the bubbles create an almost square-wave profile, bringing the system to steady state does not significantly decrease throughput (third-generation CFA analyzers average 90 or more samples per hour) and is desirable, since steady-state signals (chemical equilibrium) are more accurate and reproducible.

A continuous flow analyzer (CFA) consists of different modules, including a sampler, pump, mixing coils, optional sample treatments (dialysis, distillation, heating, etc.), a detector, and a data generator. Most continuous flow analyzers depend on color reactions read with a flow-through photometer; however, methods have also been developed that use ion-selective electrodes (ISE), flame photometry, ICAP, fluorometry, and so forth.
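
That modular layout maps naturally onto a software pipeline in which every sample packet passes through the same sequence of treatments before detection. The sketch below is purely conceptual; the module behaviors and numbers are invented and involve no real chemistry:

```python
# Conceptual continuous-flow pipeline: every air-segmented sample packet
# passes through the same module sequence, so samples and standards are
# treated identically.
def dialysis(packet):
    packet["analyte"] *= 0.6                   # assumed fraction crossing the membrane
    return packet

def heat_and_react(packet):
    packet["color"] = 0.9 * packet["analyte"]  # assumed color yield of the reaction
    return packet

def photometer(packet):
    return packet["color"]                     # detector reads the color intensity

for packet in [{"analyte": 5.0}, {"analyte": 7.5}]:
    for module in (dialysis, heat_and_react):
        packet = module(packet)
    print(f"photometer signal: {photometer(packet):.2f}")
```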

Flow injection analysis (FIA) was introduced in 1975 by Ruzicka and Hansen.
Jaromir (Jarda) Ruzicka is a Professor of Chemistry (Emeritus at the University of Washington and Affiliate at the University of Hawaii) and a member of the Danish Academy of Technical Sciences. Born in Prague in 1934, he graduated from the Department of Analytical Chemistry, Faculty of Sciences, Charles University. In 1968, when the Soviets occupied Czechoslovakia, he emigrated to Denmark. There he joined the Technical University of Denmark, where, ten years later, he received a newly created Chair in Analytical Chemistry. When Jarda met Elo Hansen, they invented Flow Injection.

The first generation of FIA technology, termed flow injection (FI), was inspired by the AutoAnalyzer technique that Skeggs developed in the 1950s. While Skeggs’ AutoAnalyzer uses air segmentation to separate a flowing stream into numerous discrete segments, establishing a long train of individual samples moving through a flow channel, FIA systems separate each sample from the next with a carrier reagent. And while the AutoAnalyzer mixes sample homogeneously with reagents, in all FIA techniques sample and reagents merge to form a concentration gradient, and it is this gradient that yields the analysis results.
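
Ruzicka and Hansen characterized that gradient with the dispersion coefficient, D = C0/Cmax, the ratio of the injected concentration to the peak concentration reaching the detector. A minimal illustration with hypothetical numbers:

```python
# FIA dispersion coefficient, D = C0 / Cmax: D near 1 means little dilution
# of the injected sample; larger D means more mixing with the carrier.
c0 = 10.0      # injected sample concentration (hypothetical units)
c_max = 4.0    # peak concentration observed at the detector

d = c0 / c_max
print(f"dispersion coefficient D = {d:.1f}")   # D = 2.5
```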

Arthur Karmen.

Dr. Karmen was born in New York City in 1930. He graduated from the Bronx High School of Science in 1946 and earned an A.B. and an M.D. from New York University in 1950 and 1954, respectively. In 1952, while a medical student working on a summer project at Memorial Sloan Kettering, he used paper chromatography of amino acids to demonstrate the presence of glutamic-oxaloacetic and glutamic-pyruvic transaminases (aspartate and alanine aminotransferases) in serum and blood. In 1954, he devised the spectrophotometric method for measuring aspartate aminotransferase in serum, which, with minor modifications, is still used for diagnostic testing today. When developing this assay, he studied the reaction of NADH with serum and demonstrated the presence of lactate and malate dehydrogenases, both of which were also later used in diagnosis. Using the spectrophotometric method, he found that aspartate aminotransferase increased in the period immediately after an acute myocardial infarction and did the pilot studies that showed its diagnostic utility in heart and liver diseases. This became as important as the EKG. In cardiology use it was later replaced by the MB isoenzyme of creatine kinase, driven by Burton Sobel’s work on infarct size, and later by the troponins.

Nathan Gochman, Developer of Automated Chemistries

Nathan Gochman, PhD, has over 40 years of experience in the clinical diagnostics industry, including academic teaching and research and 30 years in the pharmaceutical and in vitro diagnostics industry. He has managed R&D, technical marketing, and technical support departments. A leader in the industry, he was President of the American Association for Clinical Chemistry (AACC) and of the National Committee for Clinical Laboratory Standards (NCCLS, now CLSI). He is currently a consultant to investment firms and IVD companies.

William Sunderman

A doctor and scientist, William Sunderman lived a remarkable century and beyond, making medical advances, playing his Stradivarius violin at Carnegie Hall at 99, and being honored as the nation’s oldest worker at 100.

He developed a method for measuring glucose in the blood, the Sunderman Sugar Tube, and was one of the first doctors to use insulin to bring a patient out of a diabetic coma. He established quality-control techniques for medical laboratories that ended the wide variation in the results of laboratories doing the same tests.

He taught at several medical schools and founded and edited the journal Annals of Clinical and Laboratory Science. In World War II, he was a medical director for the Manhattan Project, which developed the atomic bomb.

Dr. Sunderman was president of the American Society of Clinical Pathologists and a founding governor of the College of American Pathologists. He also helped organize the Association of Clinical Scientists and was its first president.

Yale Department of Laboratory Medicine

The roots of the Department of Laboratory Medicine at Yale can be traced back to John Peters, the head of what he called the “Chemical Division” of the Department of Internal Medicine, subsequently known as the Section of Metabolism, who co-authored with Donald Van Slyke the landmark 1931 textbook Quantitative Clinical Chemistry; and to Pauline Hald, research collaborator of Dr. Peters who subsequently served as Director of Clinical Chemistry at Yale-New Haven Hospital for many years. In 1947, Miss Hald reported the very first flame photometric measurements of sodium and potassium in serum. This study helped to lay the foundation for modern studies of metabolism and their application to clinical care.

The Laboratory Medicine program at Yale had its inception in 1958 as a section of Internal Medicine under the leadership of David Seligson. In 1965, Laboratory Medicine achieved autonomous section status and in 1971, became a full-fledged academic department. Dr. Seligson, who served as the first Chair, pioneered modern automation and computerized data processing in the clinical laboratory. In particular, he demonstrated the feasibility of discrete sample handling for automation that is now the basis of virtually all automated chemistry analyzers. In addition, Seligson and Zetner demonstrated the first clinical use of atomic absorption spectrophotometry. He was one of the founding members of the major Laboratory Medicine academic society, the Academy of Clinical Laboratory Physicians and Scientists.

The discipline of clinical chemistry and the broader field of laboratory medicine, as they are practiced today, are attributed in no small part to Seligson’s vision and creativity.

Born in Philadelphia in 1916, Seligson graduated from the University of Maryland and received a D.Sc. from Johns Hopkins University and an M.D. from the University of Utah. In 1953, he served as a captain in the U.S. Army and as chief of the Hepatic and Metabolic Disease Laboratory at Walter Reed Army Medical Center.

Recruited to Yale and Grace-New Haven Hospital in 1958 from the University of Pennsylvania as professor of internal medicine at the medical school and the first director of clinical laboratories at the hospital, Seligson subsequently established the infrastructure of the Department of Laboratory Medicine, creating divisions of clinical chemistry, microbiology, transfusion medicine (blood banking) and hematology – each with its own strong clinical, teaching and research programs.

Challenging the continuous flow approach, Seligson designed, built and validated “discrete sample handling” instruments wherein each sample was treated independently, which allowed better choice of methods and greater efficiency. Today continuous flow has essentially disappeared and virtually all modern automated clinical laboratory instruments are based upon discrete sample handling technology.

Seligson was one of the early visionaries who recognized the potential for computers in the clinical laboratory. One of the first applications of a digital computer in the clinical laboratory occurred in Seligson’s department at Yale, and shortly thereafter data were being transmitted directly from the laboratory computer to data stations on the patient wards. Now, such laboratory information systems represent the standard of care.

He was also among the first to highlight the clinical importance of test specificity and accuracy, as compared to simple reproducibility. One of his favorite slides showed almost perfectly reproducible results for 10 successive measurements of blood sugar obtained with what was then the most widely used analytical instrument. The results agreed with one another, he would note, but the answer was wrong; the assay was not accurate.
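
Seligson’s point is easy to reproduce numerically: ten results can agree with one another almost perfectly and still all be wrong. A small illustration, with the replicate values and the true concentration both invented:

```python
import statistics

true_glucose = 90.0   # mg/dL, hypothetical true value
replicates = [112.1, 112.3, 111.9, 112.0, 112.2,
              112.1, 112.0, 112.4, 111.8, 112.2]  # invented, highly reproducible results

mean = statistics.mean(replicates)
print(f"mean: {mean:.1f} mg/dL")
print(f"SD (reproducibility): {statistics.stdev(replicates):.2f} mg/dL")  # tiny: precise
print(f"bias (accuracy): {mean - true_glucose:+.1f} mg/dL")               # large: inaccurate
```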

Seligson established one of the nation’s first residency programs focused on laboratory medicine or clinical pathology, and also developed a teaching curriculum in laboratory medicine for medical students. In so doing, he created a model for the modern practice of laboratory medicine in an academic environment, and his trainees spread throughout the country as leaders in the field.

Ernest Cotlove

Ernest Cotlove’s scientific and medical career started at NYU where, after completing his medical degree in 1943, he pursued studies in renal physiology and chemistry. His outstanding ability to acquire knowledge and conduct innovative investigations earned him an invitation from James Shannon, then Director of the National Heart Institute at NIH. He continued studies of renal physiology and chemistry until 1953, when he became Head of the Clinical Chemistry Laboratories in the new Department of Clinical Pathology being developed by George Z. Williams during the Clinical Center’s construction. Dr. Cotlove seized the opportunity to design and equip the most advanced and functional clinical chemistry facility in the country.

Dr. Cotlove’s career exemplified the progress seen in medical research and technology. He designed the electronic chloridometer that bears his name, in spite of published reports that such an approach was theoretically impossible. He used this innovative skill to develop new instruments and methods at the Clinical Center. Many recognized him as an expert in clinical chemistry, computer programming, systems design for laboratory operations, and automation of analytical instruments.
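
The chloridometer is a coulometric titrator: silver ions generated at constant current precipitate the chloride in the sample, so the amount of chloride follows from Faraday’s law, n = I * t / F. A minimal sketch, with the current, endpoint time, and sample volume all hypothetical:

```python
FARADAY_C_PER_MOL = 96485.0

def chloride_moles(current_a, time_s):
    """Faraday's law: one electron per Ag+ ion generated, so n(Cl-) = I * t / F."""
    return current_a * time_s / FARADAY_C_PER_MOL

# Hypothetical run: 4.0 mA generating current reaches the endpoint at 24.1 s
# for a 10 microliter serum aliquot (all values invented for illustration).
n = chloride_moles(0.0040, 24.1)
sample_volume_l = 10e-6
print(f"chloride: {n / sample_volume_l * 1000:.0f} mmol/L")   # ~100 mmol/L
```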

Effects of Automation on Laboratory Diagnosis

George Z. Williams

There are four primary effects of laboratory automation on the practice of medicine. The range of laboratory support is being greatly extended, both in diagnosis and in guidance of therapeutic management; the new feasibility of multiphasic periodic health evaluation promises effective health and manpower conservation in the future; and substantially lowered unit cost for laboratory analysis will permit more extensive use of comprehensive laboratory medicine in everyday practice. The fourth effect is a warning: there is a real and growing danger of naive acceptance of, and overconfidence in, the reliability and accuracy of automated analysis and computer processing without critical evaluation. Erroneous results can jeopardize the patient’s welfare. Every physician has the responsibility to obtain proof of accuracy and reliability from the laboratories which serve his patients.

Mario Werner

Dr. Werner received his medical degree from the University of Zurich, Switzerland, in 1956. After specializing in internal medicine at the University Clinic in Basel, he came to the United States, as a fellow of the Swiss Academy of Medical Sciences, to work at NIH and at the Rockefeller University. From 1964 to 1966, he served as chief of the Central Laboratory at the Klinikum Essen, Ruhr-University, Germany. In 1967, he returned to the US, joining the Division of Clinical Pathology and Laboratory Medicine at the University of California, San Francisco, as an assistant professor. Three years later, he became Associate Professor of Pathology and Laboratory Medicine at Washington University in St. Louis, where he was instrumental in establishing the training program in laboratory medicine. In 1972, he was appointed Professor of Pathology at The George Washington University in Washington, DC.

Norbert Tietz

Professor Norbert W. Tietz received the degree of Doctor of Natural Sciences from the Technical University Stuttgart, Germany, in 1950. In 1954 he immigrated to the United States where he subsequently held positions or appointments at several Chicago area institutions including the Mount Sinai Hospital Medical Center, Chicago Medical School/University of Health Sciences and Rush Medical College.

Professor Tietz is best known as the editor of the Fundamentals of Clinical Chemistry. This book, now in its sixth edition, remains a primary information source for both students and educators in laboratory medicine. It was the first modern textbook to integrate clinical chemistry with the basic sciences and pathophysiology.

Throughout his career, Dr. Tietz taught a range of students from the undergraduate through the post-graduate level, including (1) medical technology students, (2) medical students, (3) clinical chemistry graduate students, (4) pathology residents, and (5) practicing chemists. In the late 1960s, for example, he began the first Master of Science degree program in clinical chemistry in the United States at the Chicago Medical School. This program subsequently evolved into one of the first Ph.D. programs in clinical chemistry.

Automation and other recent developments in clinical chemistry.

Griffiths J.

http://www.ncbi.nlm.nih.gov/pubmed/1344702

The decade 1980 to 1990 was the most progressive period in the short, but turbulent, history of clinical chemistry. New techniques and the instrumentation needed to perform assays have opened a chemical Pandora’s box. Multichannel analyzers, the base spectrophotometric key to automated laboratories, have become almost perfect. The extended use of the antigen-monoclonal antibody reaction with increasingly sensitive labels has extended analyte detection routinely into the picomole/liter range. Devices that aid the automation of serum processing and distribution of specimens are emerging. Laboratory computerization has significantly matured, permitting better integration of laboratory instruments, improving communication between laboratory personnel and the patient’s physician, and facilitating the use of expert systems and robotics in the chemistry laboratory.

Automation and Expert Systems in a Core Clinical Chemistry Laboratory
Streitberg, GT, et al.  JALA 2009;14:94–105

Clinical pathology or laboratory medicine has a great influence on clinical decisions, and 60–70% of the most important decisions on admission, discharge, and medication are based on laboratory results. As we learn more about clinical laboratory results and incorporate them in outcome optimization schemes, the laboratory will play a more pivotal role in the management of patients and their eventual outcomes. It has been stated that the development of information technology and automation in laboratory medicine has allowed laboratory professionals to keep pace with the growth in workload.

The reasons to automate and the impacts of automation have much in common: both include reduction in errors, increases in productivity, and improvements in safety. Advances in technology in clinical chemistry, including total laboratory automation, call for changes in job responsibilities to include skills in information technology, data management, instrumentation, patient preparation for diagnostic analysis, interpretation of pathology results, and dissemination of knowledge and information to patients and other health staff, as well as skills in research.

The clinical laboratory has become so productive, particularly in chemistry and immunology, and its labor, instrument, and reagent costs so well determined, that today a physician’s medical decisions are said to be 80% determined by the clinical laboratory. Medical information systems have lagged far behind. Why? Because the decision to acquire an MIS has historically been based on billing capture. Moreover, while chemical profiles were quite good at validating healthy status in an outpatient population, the profiles became restricted under Diagnostic Related Groups. Thus diagnostics came to be considered a “commodity”: to be competitive, a laboratory had to provide “high complexity” tests supported by a large volume of “moderate complexity” tests.
