
Posts Tagged ‘informatics’

Role of Informatics in Precision Medicine: Notes from Boston Healthcare Webinar: Can It Drive the Next Cost Efficiencies in Oncology Care? Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

Role of Informatics in Precision Medicine: Notes from Boston Healthcare Webinar: Can It Drive the Next Cost Efficiencies in Oncology Care?

Reporter: Stephen J. Williams, Ph.D.

Boston Healthcare recently sponsored a webinar entitled “Role of Informatics in Precision Medicine: Implications for Innovators.” The webinar focused on the different informatics needs along the oncology care value chain, from drug discovery through clinicians, C-suite executives, and payers. The presentation, by Joseph Ferrara and Mark Girardi, discussed the specific informatics needs and deficiencies experienced by all players in oncology care and how innovators in this space could create value. The final part of the webinar discussed artificial intelligence and its role in cancer informatics.

Notes on each of the slides, with a few representative slides, are given below; the full webinar was distributed as an mp4 video with audio.


  • Worldwide oncology-related care was projected to increase by 40% by 2020.
  • There is a big movement toward participatory care, moving decision making to the patient, which creates a need for information.
  • Cost components are focused on clinical action.
  • Using informatics before the clinical stage might add value to the cost chain.

Key unmet needs, from the perspectives of different players in oncology care, where informatics may help in decision making:

  1. Needs of clinicians

– informatics needs for clinical enrollment

– informatics needs for obtaining drug access/newer therapies

  2. Needs of C-suite/health system executives

– informatics needs to help focus on quality of care

– informatics needs to determine health outcomes/metrics

  3. Needs of payers

– informatics needs to determine quality metrics and manage costs

– informatics needs to form guidelines

– informatics needs to determine whether biomarkers are used consistently and properly

– population-level data analytics

What kinds of value innovation do tech entrepreneurs need to create in this space? Two areas/problems need to be solved:

  • innovations in data depth and breadth
  • need to aggregate information to inform intervention

Different players in the value chain have different data needs.

Data depth: cumulative understanding of disease

Data breadth: cumulative number of oncology transactions

  • Technology innovators rely on LEGACY businesses (those that already have technology), and these LEGACY businesses have either data breadth or data depth BUT NOT BOTH. (Is this where the greatest value can be innovated?)
  • There is a NEED to provide ACTIONABLE as well as PHENOTYPIC/GENOTYPIC data.
  • Data depth is more important in the clinical setting, as it drives solutions and cost-effective interventions. For example, Foundation Medicine, which supplies genotypic/phenotypic data on patient samples, provides high data depth.
  • Technologies are moving to data support.
  • Evidence will need to be tied to umbrella value propositions.
  • Informatics solutions will have to prove outcome benefit.

How will Machine Learning be involved in the healthcare value chain?

  • Increased emphasis on real-time datasets: CONSTANT UPDATES NEED TO OCCUR. This is not happening yet, but it is valued by many players in this space.
  • Interoperability of DATABASES is important! Many players in this space do not understand the complexities of integrating these datasets (a toy illustration follows this list).
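
Even a toy illustration makes the interoperability point: aggregating two oncology data sources (say, a laboratory extract and a claims extract) already requires reconciling patient identifiers, units, and timestamps before any analytics can run. The sketch below is a minimal, hypothetical example using pandas; the files, column names, and values are invented for illustration and do not come from the webinar.

```python
import pandas as pd

# Hypothetical extracts from two source systems; identifiers and columns are invented.
labs = pd.DataFrame({
    "mrn": ["A001", "A002", "A003"],
    "analyte": ["HGB", "HGB", "WBC"],
    "value": [10.2, 13.1, 12.4],
    "units": ["g/dL", "g/dL", "10^3/uL"],
    "drawn_at": pd.to_datetime(["2020-01-05", "2020-01-06", "2020-01-05"]),
})
claims = pd.DataFrame({
    "member_id": ["A001", "A002", "A004"],
    "icd10": ["C34.90", "C50.911", "C34.90"],
    "service_date": pd.to_datetime(["2020-01-04", "2020-01-02", "2020-01-03"]),
})

# Keep only the most recent lab value per patient so the merged view is not stale,
# then reconcile the two identifier schemes with an outer join.
latest_labs = (labs.sort_values("drawn_at")
                   .groupby("mrn", as_index=False)
                   .last())
merged = latest_labs.merge(claims, left_on="mrn", right_on="member_id",
                           how="outer", indicator=True)
print(merged[["mrn", "member_id", "analyte", "value", "icd10", "_merge"]])
```

The `_merge` indicator column immediately exposes patients present in one source but not the other, which is exactly the kind of gap that makes naive "database integration" harder than many players expect.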

Other Articles on this topic of healthcare informatics, value based oncology, and healthcare IT on this OPEN ACCESS JOURNAL include:

Centers for Medicare & Medicaid Services announced that the federal healthcare program will cover the costs of cancer gene tests that have been approved by the Food and Drug Administration

Broad Institute launches Merkin Institute for Transformative Technologies in Healthcare

HealthCare focused AI Startups from the 100 Companies Leading the Way in A.I. Globally

Paradoxical Findings in HealthCare Delivery and Outcomes: Economics in MEDICINE – Original Research by Anupam “Bapu” Jena, the Ruth L. Newhouse Associate Professor of Health Care Policy at HMS

Google & Digital Healthcare Technology

Can Blockchain Technology and Artificial Intelligence Cure What Ails Biomedical Research and Healthcare

The Future of Precision Cancer Medicine, Inaugural Symposium, MIT Center for Precision Cancer Medicine, December 13, 2018, 8AM-6PM, 50 Memorial Drive, Cambridge, MA

Live Conference Coverage @Medcity Converge 2018 Philadelphia: Oncology Value Based Care and Patient Management

2016 BioIT World: Track 5 – April 5 – 7, 2016 Bioinformatics Computational Resources and Tools to Turn Big Data into Smart Data

The Need for an Informatics Solution in Translational Medicine


Medical Informatics View

Chapter 1: Statement of Inferential Second Opinion

Realtime Clinical Expert Support

Gil David and Larry Bernstein, in consultation with Prof. Ronald Coifman of the Yale University Applied Mathematics Program, have developed a software system that is the equivalent of an intelligent Electronic Health Records dashboard, providing empirical medical reference and suggesting quantitative diagnostic options.

Keywords: Entropy, Maximum Likelihood Function, separatory clustering, peripheral smear, automated hemogram, Anomaly, classification by anomaly, multivariable and multisyndromic, automated second opinion

Abbreviations: Akaike Information Criterion (AIC); Bayes Information Criterion (BIC); Systemic Inflammatory Response Syndrome (SIRS).

Background: The current design of the Electronic Medical Record (EMR) is a linear presentation of portions of the record: by service, by diagnostic method, and by date, to cite examples. This allows perusal through a graphical user interface (GUI) that partitions the information or necessary reports in a workstation, entered by keying to icons. This requires that the medical practitioner find the history, medications, laboratory reports, cardiac imaging and EKGs, and radiology in different workspaces. The introduction of a DASHBOARD has allowed a presentation of drug reactions, allergies, primary and secondary diagnoses, and critical information about any patient to the caregiver needing access to the record. The advantage of this innovation is obvious. The startup problem is what information is presented and how it is displayed, which is a source of variability and a key to its success.

Intent: We are proposing an innovation that supersedes the main design elements of a DASHBOARD and utilizes the conjoined syndromic features of the disparate data elements. The important determinant of the success of this endeavor is that it facilitates both the workflow and the decision-making process with a reduction of medical error. Work is in progress to extend the capabilities with model datasets and sufficient data, because the extraction of data from disparate sources will, in the long run, further improve this process. An example is the finding of ST depression on EKG coincident with an elevated cardiac biomarker (troponin), particularly in the absence of substantially reduced renal function. The conversion of hematology-based data into useful clinical information requires the establishment of problem-solving constructs based on the measured data.
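
As a deliberately simplified illustration of the kind of conjoined rule described above, the snippet below flags the co-occurrence of ST depression and an elevated troponin while requiring reasonably preserved renal function. The numeric cutoffs are placeholders for illustration, not validated decision thresholds.

```python
def flag_possible_ischemia(st_depression: bool, troponin_ng_l: float, egfr: float,
                           troponin_cutoff: float = 14.0, egfr_floor: float = 30.0) -> bool:
    """Conjoined rule sketch: cutoffs are illustrative placeholders only."""
    return st_depression and troponin_ng_l > troponin_cutoff and egfr > egfr_floor

print(flag_possible_ischemia(st_depression=True, troponin_ng_l=52.0, egfr=74.0))  # True
```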

The most commonly ordered test used for managing patients worldwide is the hemogram, which often incorporates the review of a peripheral smear. While the hemogram has undergone progressive modification of the measured features over time, the subsequent expansion of the panel of tests has provided a window into the cellular changes in the production, release, or suppression of the formed elements from the blood-forming organ to the circulation. In the hemogram one can view data reflecting the characteristics of a broad spectrum of medical conditions.

Progressive modification of the measured features of the hemogram has delineated characteristics expressed as measurements of size, density, and concentration, resulting in many characteristic features of classification. In the diagnosis of hematological disorders, proliferation of marrow precursors, the domination of a cell line, and features of suppression of hematopoiesis provide a two-dimensional model. Other dimensions are created by considering the maturity of the circulating cells. The application of rules-based, automated problem solving should provide a valid approach to the classification and interpretation of the data used to determine a knowledge-based clinical opinion. The exponential growth of knowledge since the mapping of the human genome has been enabled by parallel advances in applied mathematics that have not been a part of traditional clinical problem solving. As the complexity of statistical models has increased, the dependencies have become less clear to the individual. Contemporary statistical modeling has a primary goal of finding an underlying structure in studied data sets. The development of an evidence-based inference engine that can substantially interpret the data at hand and convert it in real time to a “knowledge-based opinion” could improve clinical decision-making by incorporating multiple complex clinical features, as well as duration of onset, into the model.

An example of a difficult area for clinical problem solving is found in the diagnosis of SIRS and associated sepsis. SIRS (and associated sepsis) is a costly diagnosis in hospitalized patients. Failure to diagnose sepsis in a timely manner creates a potential financial and safety hazard. The early diagnosis of SIRS/sepsis is made by the application of defined criteria (temperature, heart rate, respiratory rate, and WBC count) by the clinician. The application of those clinical criteria, however, defines the condition after it has developed and has not provided a reliable method for the early diagnosis of SIRS. The early diagnosis of SIRS may possibly be enhanced by the measurement of proteomic biomarkers, including transthyretin, C-reactive protein, and procalcitonin. Immature granulocyte (IG) measurement has been proposed as a more readily available indicator of the presence of granulocyte precursors (left shift). The use of such markers, obtained by automated systems in conjunction with innovative statistical modeling, provides a promising approach to enhance workflow and decision making. Such a system utilizes the conjoined syndromic features of disparate data elements with an anticipated reduction of medical error. This study is an extension of our approach to repairing a longstanding problem in the construction of the many-sided electronic medical record (EMR). In a classic study carried out at Bell Laboratories, Didner found that information technologies reflect the view of the creators, not the users, and that front-to-back design (R. Didner) is needed.
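
The consensus SIRS criteria mentioned above (temperature, heart rate, respiratory rate, and WBC count) are simple enough to express as a rules-based check, which is exactly the after-the-fact definition the paragraph argues is insufficient for early diagnosis. A minimal sketch follows (the respiratory criterion can alternatively be met by a PaCO2 below 32 mmHg, omitted here for brevity):

```python
def sirs_score(temp_c: float, heart_rate: int, resp_rate: int,
               wbc_k_per_ul: float, bands_pct: float = 0.0) -> int:
    """Count how many of the four classic SIRS criteria are met."""
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,                               # temperature
        heart_rate > 90,                                              # heart rate
        resp_rate > 20,                                               # respiratory rate
        wbc_k_per_ul > 12.0 or wbc_k_per_ul < 4.0 or bands_pct > 10,  # WBC / left shift
    ]
    return sum(criteria)

# SIRS is conventionally called when two or more criteria are present.
print(sirs_score(temp_c=38.6, heart_rate=104, resp_rate=24, wbc_k_per_ul=13.5) >= 2)
```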

Costs would be reduced, and accuracy improved, if the clinical data could be captured directly at the point it is generated, in a form suitable for transmission to insurers or machine-transformable into other formats. Such data capture could also be used to improve the form and structure of how this information is viewed by physicians, and could form the basis of a more comprehensive database linking clinical protocols to outcomes, improving knowledge of this relationship and hence clinical outcomes.

How we frame our expectations is so important that it determines the data we collect to examine the process.   In the absence of data to support an assumed benefit, there is no proof of validity at whatever cost.   This has meaning for hospital operations, for nonhospital laboratory operations, for companies in the diagnostic business, and for planning of health systems.

In 1983, a vision for creating the EMR was introduced by Lawrence Weed, as expressed by McGowan and Winstead-Fry (J.J. McGowan and P. Winstead-Fry. Problem Knowledge Couplers: reengineering evidence-based medicine through interdisciplinary development, decision support, and research. Bull Med Libr Assoc 1999; 87(4): 462–470. PMCID: PMC226622).

They introduce Problem Knowledge Couplers as a clinical decision support software tool built on the recognition that functionality must be predicated upon combining unique patient information, obtained through relevant structured question sets, with the appropriate knowledge found in the world’s peer-reviewed medical literature. The premise is stated by L.L. Weed in “Idols of the Mind” (Dec 13, 2006): “a root cause of a major defect in the health care system is that, while we falsely admire and extol the intellectual powers of highly educated physicians, we do not search for the external aids their minds require.” HIT use has been focused on information retrieval, leaving the unaided mind burdened with information processing.

The data presented have to be comprehended in context with vital signs, key symptoms, and an accurate medical history. Consequently, the limits of memory and cognition are tested in medical practice on a daily basis. We deal with problems in the interpretation of data presented to the physician, and with how the situation could be improved through better design of the software that presents these data. The computer architecture that the physician uses to view the results is more often than not presented as the designer would prefer, and not as the end-user would like. In order to optimize the interface for the physician, the system would have a “front-to-back” design, with the call-up for any patient ideally consisting of a dashboard design that presents, in an easily accessible manner, the crucial information that the physician would likely act on. The key point is that each item used has to be closely related to a corresponding criterion needed for a decision. Currently, improved design is heading in that direction. To remove this limitation, the output requirements have to be defined before the database is designed to produce the required output. The ability to see any other information, or to see a sequential visualization of the patient’s course, would be steps to home in on other views. In addition, the amount of relevant information, even when presented well, is a cognitive challenge unless it is presented in a disease- or organ-system structure. So the interaction between the user and the electronic medical record has a significant effect on practitioner time, the ability to minimize errors of interpretation, facilitation of treatment, and management of costs.

The reality is that clinicians are challenged by the need to view a large amount of data, with only a few resources available to know which of these values are relevant, whether a result requires action, or how urgent it is. The challenge then becomes how fundamental measurement theory can lead to the creation, at the point of care, of more meaningful, actionable presentations of results. W.P. Fisher refers to the creation of a context in which computational resources for meeting these challenges will be incorporated into the electronic medical record. The one he chooses is a probabilistic conjoint (Rasch) measurement model, which uses scale-free standard measures and meets data quality standards. He illustrates this by fitting a set of data provided by Bernstein (19), 27 items for the diagnosis of acute myocardial infarction (AMI), to a Rasch multiple rating scale model, testing the hypothesis that the items work together to delineate a unidimensional measurement continuum. The results indicated that highly improbable observations could be discarded, data volume could be reduced, and the ability of the care provider to interpret the data could be increased.
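
Fisher's probabilistic conjoint (Rasch) approach can be sketched compactly: for dichotomous items, the probability that person n scores positively on item i depends only on the difference between a person parameter theta_n and an item difficulty b_i. The toy implementation below fits both parameter sets by joint maximum likelihood with NumPy/SciPy on simulated data; it is illustrative only and does not reproduce the 27-item AMI analysis cited above.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic function

rng = np.random.default_rng(0)
n_persons, n_items = 50, 8

# Simulate dichotomous responses from a Rasch model: P(x = 1) = logistic(theta_n - b_i).
true_theta = rng.normal(0.0, 1.0, n_persons)
true_b = np.linspace(-1.5, 1.5, n_items)
X = (rng.random((n_persons, n_items))
     < expit(true_theta[:, None] - true_b[None, :])).astype(float)

def neg_log_lik(params):
    theta, b = params[:n_persons], params[n_persons:]
    b = b - b.mean()                       # center item difficulties for identifiability
    p = expit(theta[:, None] - b[None, :])
    eps = 1e-9                             # guard against log(0)
    return -np.sum(X * np.log(p + eps) + (1 - X) * np.log(1 - p + eps))

res = minimize(neg_log_lik, np.zeros(n_persons + n_items), method="L-BFGS-B")
est_b = res.x[n_persons:] - res.x[n_persons:].mean()
print(np.round(true_b, 2))
print(np.round(est_b, 2))   # item difficulties recovered up to sampling noise
```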

Classified data: a separate issue from automation

Feature extraction. This further breakdown in the modern era is determined by genetically characteristic gene sequences that are transcribed into what we measure. Eugene Rypka contributed greatly to clarifying the extraction of features in a series of articles that set the groundwork for the methods used today in clinical microbiology. The method he describes is termed S-clustering, and it has a significant bearing on how we can view hematology data. He describes S-clustering as extracting features from endogenous data that amplify or maximize structural information to create distinctive classes. The method classifies by taking the number of features with sufficient variety to map into a theoretic standard. The mapping is done by a truth table, and each variable is scaled to assign values for each message choice. The number of messages and the number of choices forms an N-by-N table. He points out that the message choice in an antibody titer would be converted from 0, +, ++, +++ to 0, 1, 2, 3.
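
Rypka's scheme of rescaling ordinal messages (0, +, ++, +++ mapped to 0–3) and scoring each variable by its contribution to information gain, elaborated in the next paragraph, corresponds to mutual information, which is itself a Kullback-Leibler distance between the joint distribution and the product of the marginals. The sketch below uses made-up data and is meant only to convey the calculation.

```python
import numpy as np
from scipy.stats import entropy

def mutual_information(x, y):
    """I(X;Y) = KL( P(x,y) || P(x)P(y) ), in nats, from two discrete integer vectors."""
    x, y = np.asarray(x), np.asarray(y)
    joint = np.zeros((x.max() + 1, y.max() + 1))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    independent = np.outer(joint.sum(axis=1), joint.sum(axis=0))  # product of marginals
    return entropy(joint.ravel(), independent.ravel())            # KL divergence

# Hypothetical example: titer rescaled from 0/+/++/+++ to 0..3; class = disease (1) or not (0).
titer = np.array([0, 1, 3, 2, 3, 0, 1, 2, 3, 0])
cls   = np.array([0, 0, 1, 1, 1, 0, 0, 1, 1, 0])
noise = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0])  # an uninformative variable, for contrast
print(mutual_information(titer, cls), mutual_information(noise, cls))
```

The informative titer variable yields a markedly larger information gain with respect to the class than the noise variable, which is the sense in which a variable "contributes information" to the classification.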

Even though there may be a large number of measured values, the variety is reduced by this compression, although there is a risk of loss of information. Yet the real issue is how a combination of variables falls into a table with meaningful information. We are concerned with accurate assignment into uniquely variable groups by information in test relationships. One determines the effectiveness of each variable by its contribution to information gain in the system. The reference or null set is the class having no information, and uncertainty in assigning to a classification is only relieved by providing sufficient information. The possibility of realizing a good model for approximating the effects of factors supported by data used for inference owes much to the discovery of the Kullback-Leibler distance, or “information,” and Akaike found a simple relationship between K-L information and Fisher’s maximized log-likelihood function. A solid foundation for this work was elaborated by Eugene Rypka. Of course, this was made far less complicated by the genetic complement that defines its function, which made the study of biochemical pathways more accessible. In addition, the genetic relationships in plant genetics were accessible to Ronald Fisher for the application of the linear discriminant function. In the last 60 years the application of entropy, comparable to the entropy of physics, to information, noise, and signal processing has been fully developed by Shannon, Kullback, and others, and has been integrated with modern statistics as a result of the seminal work of Akaike, Leo Goodman, Magidson and Vermunt, and unrelated work by Coifman. Dr. Magidson writes about latent class model evolution:

“The recent increase in interest in latent class models is due to the development of extended algorithms which allow today’s computers to perform LC analyses on data containing more than just a few variables, and the recent realization that the use of such models can yield powerful improvements over traditional approaches to segmentation, as well as to cluster, factor, regression and other kinds of analysis.”

Perhaps the application to medical diagnostics had been slowed by limitations of data capture and computer architecture, as well as by a lack of clarity in defining the most distinguishing features needed for diagnostic clarification. Bernstein and colleagues carried out a series of studies using the Kullback-Leibler distance (effective information) for clustering to examine the latent structure of the elements commonly used for the diagnosis of myocardial infarction (CK-MB, LD, and isoenzyme-1 of LD); for protein-energy malnutrition (serum albumin and serum transthyretin, in a condition associated with protein malnutrition (see Jeejeebhoy and subjective global assessment) and a prolonged period with no oral intake); for prediction of respiratory distress syndrome of the newborn (RDS); and for prediction of lymph nodal involvement in prostate cancer, among other studies. The exploration of syndromic classification has made a substantial contribution to the diagnostic literature, but it has only been made useful through publication on the web of calculators and nomograms (such as Epocrates and Medcalc) accessible to physicians through an iPhone. These are not an integral part of the EMR, and the applications require an anticipation of the need for such processing.

Gil David et al. introduced an AUTOMATED processing of the data available to the ordering physician, which can be anticipated to have an enormous impact on the diagnosis and treatment of perhaps half of the top 20 most common causes of hospital admission that carry a high cost and morbidity. For example: anemias (iron deficiency, vitamin B12 and folate deficiency, and hemolytic anemia or myelodysplastic syndrome); pneumonia; systemic inflammatory response syndrome (SIRS) with or without bacteremia; multiple organ failure and hemodynamic shock; electrolyte/acid-base balance disorders; acute and chronic liver disease; acute and chronic renal disease; diabetes mellitus; protein-energy malnutrition; acute respiratory distress of the newborn; acute coronary syndrome; congestive heart failure; disordered bone mineral metabolism; hemostatic disorders; leukemia and lymphoma; malabsorption syndromes; and cancers (breast, prostate, colorectal, pancreas, stomach, liver, esophagus, thyroid, and parathyroid).

Extension of conditions and presentation to the electronic medical record (EMR)

We have published on the application of an automated inference engine to the systemic inflammatory response syndrome (SIRS), serious infection, and emerging sepsis. We can report on this without going over previous ground. Of considerable interest are the morbidity and mortality of sepsis, and the hospital costs from a late diagnosis. If missed early, sepsis can be problematic, and it can be recorded as a hospital complication when it is not. Improving on previous work, we have the opportunity to look at the contribution of a fluorescence-labeled flow cytometric measurement of immature granulocytes (IG), which is now widely used but has not been adequately evaluated from the perspective of diagnostic usage. We have done considerable work on protein-energy malnutrition (PEM), for which the automated interpretation is currently in review. Of course, cholesterol, lymphocyte count, and serum albumin provide the weight of evidence together with the primary diagnosis (emphysema, chronic renal disease, eating disorder), and serum transthyretin would be low and remain low for a week in critical care. This could be a modifier, with age, in providing discriminatory power.

Chapter 3: References

The Cost Burden of Disease: U.S. and Michigan. CHRT Brief, January 2010. Available at www.chrt.org.

The National Hospital Bill: The Most Expensive Conditions by Payer, 2006. HCUP Brief #59.

Rudolph RA, Bernstein LH, Babb J. Information-induction for the diagnosis of myocardial infarction. Clin Chem 1988; 34: 2031–2038.

Bernstein LH (Chairman), Prealbumin in Nutritional Care Consensus Group. Measurement of visceral protein status in assessing protein and energy malnutrition: standard of care. Nutrition 1995; 11: 169–171.

Bernstein LH, Qamar A, McPherson C, Zarich S, Rudolph R. Diagnosis of myocardial infarction: integration of serum markers and clinical descriptors using information theory. Yale J Biol Med 1999; 72: 5–13.

Kaplan L.A.; Chapman J.F.; Bock J.L.; Santa Maria E.; Clejan S.; Huddleston D.J.; Reed R.G.; Bernstein L.H.; Gillen-Goldstein J. Prediction of Respiratory Distress Syndrome using the Abbott FLM-II amniotic fluid assay. The National Academy of Clinical Biochemistry (NACB) Fetal Lung Maturity Assessment Project.  Clin Chim Acta 2002; 326(8): 61-68.

Bernstein LH, Qamar A, McPherson C, Zarich S. Evaluating a new graphical ordinal logit method (GOLDminer) in the diagnosis of myocardial infarction utilizing clinical features and laboratory data. Yale J Biol Med 1999; 72:259-268.

Bernstein L, Bradley K, Zarich SA. GOLDmineR: Improving models for classifying patients with chest pain. Yale J Biol Med 2002; 75, pp. 183-198.

Ronald Raphael Coifman and Mladen Victor Wickerhauser. Adapted Waveform Analysis as a Tool for Modeling, Feature Extraction, and Denoising. Optical Engineering, 33(7):2170–2174, July 1994.

R. Coifman and N. Saito. Constructions of local orthonormal bases for classification and regression. C. R. Acad. Sci. Paris, 319 Série I:191-196, 1994.

Chapter 4: Clinical Expert System

Realtime Clinical Expert Support and Validation System

We have developed a software system that is the equivalent of an intelligent Electronic Health Records dashboard that provides empirical medical reference and suggests quantitative diagnostic options. The primary purpose is to gather medical information, generate metrics, analyze them in real time, and provide a differential diagnosis, meeting the highest standard of accuracy. The system builds a unique characterization of each patient and provides a list of other patients that share this unique profile, thereby utilizing the vast aggregated knowledge (diagnosis, analysis, treatment, etc.) of the medical community. The main mathematical breakthroughs are provided by accurate patient profiling and inference methodologies in which anomalous subprofiles are extracted and compared to potentially relevant cases. As the model grows and its knowledge database is extended, the diagnostics and prognostics become more accurate and precise. We anticipate that implementing this diagnostic amplifier would result in higher physician productivity at a time of great human resource limitations, safer prescribing practices, rapid identification of unusual patients, better assignment of patients to observation, inpatient beds, intensive care, or referral to clinic, and shortened lengths of ICU and hospital stays.

The main benefit is a real-time assessment as well as diagnostic options based on comparable cases, with flags for risk and potential problems, as illustrated in the following case acquired on 04/21/10. The patient was diagnosed by our system with severe SIRS at a grade of 0.61.

The patient was treated for SIRS and the blood tests were repeated during the following week. The full combined record of our system’s assessment of the patient, as derived from the further Hematology tests, is illustrated below. The yellow line shows the diagnosis that corresponds to the first blood test (as also shown in the image above). The red line shows the next diagnosis that was performed a week later.

As we can see, following the treatment the SIRS risk was eliminated as a major concern, and the system provided positive feedback on the physician’s treatment.

Method for data organization and classification via characterization metrics.

Our database is organized to enable linking a given profile to known profiles. This is achieved by associating a patient with a peer group of patients having an overall similar profile, where the similar profile is obtained through a randomized search for an appropriate weighting of variables. Given the selection of a patient’s peer group, we build a metric that measures the dissimilarity of the patient from the group. This is achieved through a local, iterated statistical analysis in the peer group.

We then use this characteristic metric to locate other patients with similar unique profiles, for each of whom we repeat the procedure described above. This leads to a network of patients with a similar risk condition. The classification of the patient is then inferred from the known medical condition of some of the patients in the linked network. Given a set of points (the database) and a newly arrived sample (point), we characterize the behavior of the newly arrived sample according to the database. We then detect other points in the database that match this unique characterization. This collection of detected points defines the characteristic neighborhood of the newly arrived sample, which we use to classify it, and this process of differential diagnosis is repeated for every newly arrived point.

The medical colossus we have today has become a system out of control, beset by the elephant in the room: an uncharted complexity. We offer a method that addresses the complexity and enables rather than disables the practitioner. The method identifies outliers and combines data according to commonality of features.
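
A stripped-down version of the neighborhood idea can be expressed with an off-the-shelf nearest-neighbor search: standardize the laboratory variables, find a new sample's closest peers in the database, and infer a provisional label from the known conditions in that peer group. The sketch below uses scikit-learn on synthetic data and omits the randomized variable weighting and iterated local statistics of the actual system.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
# Synthetic "database": rows = patients, columns = hemogram-like variables (WBC, Hgb, platelets).
healthy = rng.normal([7.5, 14.0, 250.0], [1.5, 1.2, 50.0], size=(200, 3))
septic  = rng.normal([16.0, 11.0, 180.0], [3.0, 1.5, 60.0], size=(40, 3))
X = np.vstack([healthy, septic])
labels = np.array(["no SIRS"] * 200 + ["SIRS/sepsis"] * 40)

scaler = StandardScaler().fit(X)
nn = NearestNeighbors(n_neighbors=15).fit(scaler.transform(X))

# Newly arrived sample: characterize it by its peer group in the database.
new_patient = scaler.transform([[18.2, 10.4, 150.0]])
_, idx = nn.kneighbors(new_patient)
peer_labels, counts = np.unique(labels[idx[0]], return_counts=True)
print(dict(zip(peer_labels, counts)))   # provisional classification from the peer group
```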


Multiple Lung Cancer Genomic Projects Suggest New Targets, Research Directions for Non-Small Cell Lung Cancer

Curator, Writer: Stephen J. Williams, Ph.D.

UPDATED 10/10/2021

lung cancer

(photo credit: cancer.gov)

A report, “Lung Cancer Genome Surveys Find Many Potential Drug Targets,” in the NCI Bulletin (http://www.cancer.gov/ncicancerbulletin/091812/page2) summarizes the clinical importance of five new lung cancer genome sequencing projects. These studies have identified genetic and epigenetic alterations in hundreds of lung tumors, some of which could be targeted using currently approved medications.

The reports, all published this month, included genomic information on more than 400 lung tumors. In addition to confirming genetic alterations previously tied to lung cancer, the studies identified other changes that may play a role in the disease.

Collectively, the studies covered the main forms of the disease—lung adenocarcinomas, squamous cell cancers of the lung, and small cell lung cancers.

“All of these studies say that lung cancers are genomically complex and genomically diverse,” said Dr. Matthew Meyerson of Harvard Medical School and the Dana-Farber Cancer Institute, who co-led several of the studies, including a large-scale analysis of squamous cell lung cancer by The Cancer Genome Atlas (TCGA) Research Network.

Some genes, Dr. Meyerson noted, were inactivated through different mechanisms in different tumors. He cautioned that little is known about alterations in DNA sequences that do not encode genes, which is most of the human genome.

Five of the papers are summarized below, with the first described in detail, as the Nature paper used a multi-omics strategy to evaluate expression, mutation, and signaling-pathway activation in a large cohort of lung tumors. A literature-informatics analysis is given for one of the papers. Please note that links on gene names usually refer to the GeneCards entry.

Paper 1. Comprehensive genomic characterization of squamous cell lung cancers[1]

The Cancer Genome Atlas Research Network Project just reported, in the journal Nature, the results of their comprehensive profiling of 230 resected lung adenocarcinomas. The multi-center teams employed analyses of

  • microRNA
  • Whole Exome Sequencing including
    • Exome mutation analysis
    • Gene copy number
    • Splicing alteration
  • Methylation
  • Proteomic analysis

Summary:

Some very interesting overall findings came out of this analysis including:

  • High rates of somatic mutations including activating mutations in common oncogenes
  • Newly described loss of function MGA mutations
  • Sex differences in EGFR and RBM10 mutations
  • driver roles for NF1, MET, ERBB2, and RIT1 identified in certain tumors
  • differential mutational pattern based on smoking history
  • splicing alterations driven by somatic genomic changes
  • MAPK and PI3K pathway activation identified by proteomics not explained by mutational analysis = UNEXPLAINED MECHANISM of PATHWAY ACTIVATION

However, given the plethora of data, and in light of similar study results recently released, there appears to be a great need for additional mining of this dataset. Therefore I attempted to curate some of the findings, along with other recent news relevant to the surprising findings, in relation to biomarker analysis.

Makeup of tumor samples

230 lung adenocarcinoma specimens were categorized by:

Subtype:

  • 33% acinar
  • 25% solid
  • 14% micro-papillary
  • 9% papillary
  • 8% unclassified
  • 5% lepidic
  • 4% invasive mucinous

Gender

Smoking status:

  • 81% of patients reported past or present smoking

The authors note that TCGA samples were combined with previous data for analysis purposes.

A detailed description of Methodology and the location of deposited data are given at the following addresses:

Publication TCGA Web Page: https://tcga-data.nci.nih.gov/docs/publications/luad_2014/

Sequence files: https://cghub.ucsc.edu

Results:

Gender and Smoking Habits Show different mutational patterns

WES mutational analysis

  1. a) Smoking status

– There was a strong correlation of cytosine-to-adenine nucleotide transversions with past or present smoking. In fact, smoking history separated the tumors into transversion-high (past and present smokers) and transversion-low (never smokers) groups, corroborating previous results.

Significantly mutated genes in each group:

– Transversion high: TP53, KRAS, STK11, KEAP1, SMARCA4, RBM10

– Transversion low: EGFR, RB1, PIK3CA
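
In principle, the transversion-high versus transversion-low split can be reproduced from a tumor's list of somatic substitutions by counting the fraction of C>A (G>T) changes. The snippet below is a hypothetical illustration; the 20% cutoff is an arbitrary placeholder rather than the threshold used in the TCGA analysis.

```python
from collections import Counter

COMPLEMENT = {"A": "T", "C": "G", "G": "C", "T": "A"}

def transversion_class(substitutions, cutoff=0.20):
    """substitutions: list of (ref, alt) base pairs, e.g. [("C", "A"), ("G", "T"), ...]."""
    normalized = []
    for ref, alt in substitutions:
        # Represent every substitution on the pyrimidine (C/T) reference base.
        if ref in ("G", "A"):
            ref, alt = COMPLEMENT[ref], COMPLEMENT[alt]
        normalized.append((ref, alt))
    counts = Counter(normalized)
    frac_c_to_a = counts[("C", "A")] / max(len(normalized), 1)
    label = "transversion high" if frac_c_to_a > cutoff else "transversion low"
    return label, frac_c_to_a

print(transversion_class([("C", "A"), ("G", "T"), ("C", "T"), ("T", "C"), ("C", "A")]))
```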

  1. b) Gender

Although gender differences in mutational profiles have been reported, the study found a minimal number of significantly mutated genes correlated with gender. Notably:

  • EGFR mutations enriched in female cohort
  • RBM10 loss of function mutations enriched in male cohort

Although the study did not analyze gender differences together with smoking patterns, it was noted that RBM10 mutations among males were more prevalent in the transversion-high group.

Whole exome Sequencing and copy number analysis reveal Unique, Candidate Driver Genes

Whole exome sequencing revealed that 62% of tumors contained mutations (either point or indel) in known cancer driver genes such as:

KRAS, EGFR, BRAF, and ERBB2

However, the authors looked at the WES data from the oncogene-negative tumors and found unique mutations not seen in the tumors containing canonical oncogenic mutations.

Unique potential driver mutations were found in

TP53, KEAP1, NF1, and RIT1

The genomics and expression data were backed up by a proteomics analysis of three pathways:

  1. MAPK pathway
  2. mTOR
  3. PI3K pathway

… showing significant activation of all three pathways. HOWEVER, the analysis suggested that activation of signaling pathways COULD NOT be deduced from DNA sequencing alone; phospho-proteomic analysis was required to determine the full extent of pathway modification.

For example, many tumors lacked an obvious mutation which could explain mTOR or MAPK activation.

Altered cell signaling pathways included:

  • Increased MAPK signaling due to activating KRAS mutations
  • Higher mTOR signaling due to inactivating STK11 mutations, leading to increased proliferation and translation

Pathway analysis of mutations revealed alterations in multiple cellular pathways including:

  • Reduced oxidative stress response
  • Nucleosome remodeling
  • RNA splicing
  • Cell cycle progression
  • Histone methylation

Summary:

Authors noted some interesting conclusions including:

  1. MET and ERBB2 amplification and mutations in NF1 and RIT1 may be unique driver events in lung adenocarcinoma
  2. Possible new drug development could be targeted to the RTK/RAS/RAF pathway
  3. MYC pathway as another important target
  4. Cluster analysis using a multimodal omics approach identifies some tumors driven by single-gene driver events, while other tumors have multiple driver mutational events (TUMOR HETEROGENEITY)

Paper 2. A Genomics-Based Classification of Human Lung Tumors[2]

The paper can be found at

http://stm.sciencemag.org/content/5/209/209ra153

by the Clinical Lung Cancer Genome Project (CLCGP) and Network Genomic Medicine (NGM).

Paper Summary

This sequencing project revealed discrepancies between histologic and genomic classification of lung tumors.

Methodology

– mutational analysis by whole exome sequencing of 1,255 lung tumors of histologically defined subtypes

– immunohistochemistry performed to verify reclassification of subtypes based on sequencing data

Results

  • 55% of all cases had at least one oncogenic alteration amenable to current personalized treatment approaches
  • Marked differences existed between cluster analysis within and between preclassified histo-subtypes
  • Reassignment based on genomic data eliminated large cell carcinomas
  • Prospective classification of 5145 lung cancers allowed for genomic classification in 75% of patients
  • Identification of EGFR and ALK mutations led to improved outcomes

Conclusions:

It is feasible to successfully classify and diagnose lung tumors based on whole exome sequencing data.

Paper 3. Genomic Landscape of Non-Small Cell Lung Cancer in Smokers and Never-Smokers[3]

A link to the paper can be found here with Graphic Summary: http://www.cell.com/cell/abstract/S0092-8674%2812%2901022-7?cc=y?cc=y

Methodology

  • Whole genome sequencing and transcriptome sequencing of cancerous and adjacent normal tissues from 17 patients with NSCLC
  • Integrated RNASeq with WES for analysis of
    • Variant analysis
    • Clonality by variant allele frequency analysis
    • Fusion genes
  • Bioinformatic analysis

Results

  • 3,726 point mutations and more than 90 indels in the coding sequence
  • Smokers with lung cancer show 10× as many point mutations as never-smokers
  • Novel lung cancer genes, including DACH1, CFTR, RELN, ABCB5, and HGF were identified
  • Tumor samples from males showed a high frequency of MYCBP2 mutations; MYCBP2 is involved in transcriptional regulation of MYC.
  • Variant allele frequency analysis revealed that 10/17 tumors were at least biclonal while 7/17 tumors were monoclonal, showing that the majority of tumors displayed tumor heterogeneity (a minimal sketch of this kind of VAF-based clonality estimate follows this list)
  • Novel pathway alterations in lung cancer include cell-cycle and JAK-STAT pathways
  • 14 fusion proteins were found, including ROS1 and ALK fusions. ROS1 and ALK fusions have been frequently found in lung cancer and are indicative of poor prognosis [4].
  • Novel metabolic enzyme fusions
  • Alterations were identified in 54 genes for which targeted drugs are available. Druggable mutant targets include AURKC, BRAF, HGF, EGFR, ERBB4, FGFR1, MET, JAK2, JAK3, HDAC2, HDAC6, HDAC9, BIRC6, ITGB1, ITGB3, MMP2, PRKCB, PIK3CG, TERT, KRAS, and MMP14.
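
A rough version of the VAF-based clonality assessment referenced in the list above can be obtained by fitting mixtures with different numbers of components to the variant-allele-frequency distribution and choosing among them by BIC. The sketch below runs on simulated read counts and is only meant to convey the idea, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Simulated somatic variants: a clonal cluster near VAF ~0.45 and a subclone near ~0.15.
alt_reads = np.concatenate([rng.binomial(100, 0.45, 60), rng.binomial(100, 0.15, 40)])
vaf = (alt_reads / 100.0).reshape(-1, 1)

# Choose the number of VAF clusters (1 = monoclonal, >=2 = polyclonal) by BIC.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(vaf).bic(vaf)
        for k in (1, 2, 3)}
best_k = min(bics, key=bics.get)
print(bics, "->", "monoclonal" if best_k == 1 else f"{best_k} clusters (polyclonal)")
```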

Table. Validated Gene-Fusions Obtained from Ref-Seq Data

Note: Gene columns contain links to GeneCards, while gene-function entries link to the gene’s GO (Gene Ontology) function.

| GeneA (5′) | GeneB (3′) | GeneA function (link to Gene Ontology) | GeneB function (link to Gene Ontology) |
|---|---|---|---|
| GRIP1 | TNIP1 | glutamate receptor IP | transcriptional repressor |
| SGMS1 | STK10 | sphingolipid synthesis | ser/thr kinase |
| RASSF3 | TTYH2 | GTP-binding protein | chloride anion channel |
| KDELR2 | ROS1, GOPC | ER retention seq. binding | proto-oncogenic tyr kinase |
| ACSL4 | DCAF6 | fatty acid synthesis | ? |
| MARCH8 | PRKG1 | ubiquitin ligase | cGMP-dependent protein kinase |
| APAF1 | UNC13B, TLN1 | caspase activation | cytoskeletal |
| EML4 | ALK | microtubule protein | tyrosine kinase |
| EDR3, PHC3 | LOC441601 | polycomb pr/DNA binding | ? |
| DKFZp761L1918, RHPN2 | ANKRD27 | Rhophilin (GTP-binding pr) | ankyrin-like |
| VANGL1 | HAO2 | tetraspanin family | oxidase |
| CACNA2D3 | FLNB | VOC Ca++ channel | filamin (actin binding) |

Author’s Note:

There has been recent literature on the importance of the EML4-ALK fusion protein in lung cancer. EML4-ALK-positive lung tumors were found to be less sensitive to cytotoxic chemotherapy [5], and these tumor cells may exhibit an epitope rendering them amenable to immunotherapy [6]. In addition, inhibition of the PI3K pathway has sensitized EML4-ALK fusion-positive tumors to ALK-targeted therapy [7]. EML4-ALK fusion-positive tumors show dependence on the HSP90 chaperone, suggesting that this cohort of patients might benefit from the HSP90 inhibitors currently being developed [8].

Table. Significantly mutated genes (point mutations, insertions/deletions) with associated function.

| Gene | Function |
|---|---|
| TP53 | tumor suppressor |
| KRAS | oncogene |
| ZFHX4 | zinc finger DNA binding |
| DACH1 | transcription factor |
| EGFR | epidermal growth factor receptor |
| EPHA3 | receptor tyrosine kinase |
| ENSG00000205044 | |
| RELN | cell matrix protein |
| ABCB5 | ABC drug transporter |

Table. Literature Analysis of pathways containing significantly altered genes in NSCLC reveal putative targets and risk factors, linkage between other tumor types, and research areas for further investigation.

Note: Significantly mutated genes, obtained from WES, were subjected to pathway analysis (KEGG pathway analysis) in order to see which pathways contained significantly altered gene networks. Each pathway term was then used for a PubMed literature search together with the terms “lung cancer”, “gene”, and “NOT review” to determine the frequency of literature coverage for each pathway in lung cancer. Links are to the PubMed search results.

| KEGG pathway name | # of PubMed entries containing pathway name, gene, AND lung cancer |
|---|---|
| Cell cycle | 1237 |
| Cell adhesion molecules (CAMs) | 372 |
| Glioma | 294 |
| Melanoma | 219 |
| Colorectal cancer | 207 |
| Calcium signaling pathway | 175 |
| Prostate cancer | 166 |
| MAPK signaling pathway | 162 |
| Pancreatic cancer | 88 |
| Bladder cancer | 74 |
| Renal cell carcinoma | 68 |
| Focal adhesion | 63 |
| Regulation of actin cytoskeleton | 34 |
| Thyroid cancer | 32 |
| Salivary secretion | 19 |
| Jak-STAT signaling pathway | 16 |
| Natural killer cell mediated cytotoxicity | 11 |
| Gap junction | 11 |
| Endometrial cancer | 11 |
| Long-term depression | 9 |
| Axon guidance | 8 |
| Cytokine-cytokine receptor interaction | 8 |
| Chronic myeloid leukemia | 7 |
| ErbB signaling pathway | 7 |
| Arginine and proline metabolism | 6 |
| Maturity onset diabetes of the young | 6 |
| Neuroactive ligand-receptor interaction | 4 |
| Aldosterone-regulated sodium reabsorption | 2 |
| Systemic lupus erythematosus | 2 |
| Olfactory transduction | 1 |
| Huntington’s disease | 1 |
| Chemokine signaling pathway | 1 |
| Cardiac muscle contraction | 1 |
| Amyotrophic lateral sclerosis (ALS) | 1 |

A few interesting genetic risk factors and possible additional targets for NSCLC were deduced from analysis of the above table of literature, including HIF1-α, miR-31, UBQLN1, ACE, miR-193a, and SRSF1. In addition, glioma, melanoma, colorectal, prostate, and lung cancer share many validated mutations, and possibly similar tumor-driver mutations.
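
The literature counts in the table above were tallied by hand from PubMed searches; the same tally can be scripted against NCBI's E-utilities, for example with Biopython's Entrez module as sketched below. The query string mirrors the Note above, the e-mail address is a placeholder, and live counts will differ from the 2012-era numbers in the table.

```python
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; NCBI requires a contact address

def pubmed_count(pathway_name):
    """Count PubMed entries mentioning the pathway plus lung cancer and gene, excluding reviews."""
    term = f'"{pathway_name}" AND "lung cancer" AND gene NOT review[Publication Type]'
    handle = Entrez.esearch(db="pubmed", term=term, retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

for pathway in ["Cell cycle", "Jak-STAT signaling pathway", "Focal adhesion"]:
    print(pathway, pubmed_count(pathway))
```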


Paper 4. Mapping the Hallmarks of Lung Adenocarcinoma with Massively Parallel Sequencing[9]

For full paper and graphical summary please follow the link: http://www.cell.com/cell/abstract/S0092-8674%2812%2901061-6

Highlights

  • Exome and genome characterization of somatic alterations in 183 lung adenocarcinomas
  • 12 somatic mutations/megabase
  • U2AF1, RBM10, and ARID1A are among newly identified recurrently mutated genes
  • Structural variants include activating in-frame fusion of EGFR
  • Epigenetic and RNA deregulation proposed as a potential lung adenocarcinoma hallmark

Summary

Lung adenocarcinoma, the most common subtype of non-small cell lung cancer, is responsible for more than 500,000 deaths per year worldwide. Here, we report exome and genome sequences of 183 lung adenocarcinoma tumor/normal DNA pairs. These analyses revealed a mean exonic somatic mutation rate of 12.0 events/megabase and identified the majority of genes previously reported as significantly mutated in lung adenocarcinoma. In addition, we identified statistically recurrent somatic mutations in the splicing factor gene U2AF1 and truncating mutations affecting RBM10 and ARID1A. Analysis of nucleotide context-specific mutation signatures grouped the sample set into distinct clusters that correlated with smoking history and alterations of reported lung adenocarcinoma genes. Whole-genome sequence analysis revealed frequent structural rearrangements, including in-frame exonic alterations within EGFR and SIK2 kinases. The candidate genes identified in this study are attractive targets for biological characterization and therapeutic targeting of lung adenocarcinoma.

Paper 5. Integrative genome analyses identify key somatic driver mutations of small-cell lung cancer[10]

Highlights

  • Whole exome and transcriptome (RNASeq) sequencing of 29 small-cell lung carcinomas
  • High mutation rate: 7.4 protein-changing mutations per million base pairs
  • Inactivating mutations in TP53 and RB1
  • Functional mutations in CREBBP, EP300, MLL, PTEN, SLIT2, EPHA7, FGFR1 (determined by literature and database mining)
  • The mutational spectrum seen in the human data is also present in a Tp53−/− Rb1−/− mouse lung tumor model

Curator Graphical Summary of Interesting Findings From the Above Studies


The figure represents themes and findings resulting from the aforementioned studies, including questions which will be addressed in future posts on this site.

UPDATED 10/10/2021

The following article uses RNASeq to screen lung adenocarcinomas for fusion proteins in patients with either low or high tumor mutational burden. Findings included the presence of MET fusions, in addition to other fusion proteins, in tumors that appeared driver negative by DNAseq screening.

High Yield of RNA Sequencing for Targetable Kinase Fusions in Lung Adenocarcinomas with No Mitogenic Driver Alteration Detected by DNA Sequencing and Low Tumor Mutation Burden

Source:

High Yield of RNA Sequencing for Targetable Kinase Fusions in Lung Adenocarcinomas with No Mitogenic Driver Alteration Detected by DNA Sequencing and Low Tumor Mutation Burden
Ryma Benayed, Michael Offin, Kerry Mullaney, Purvil Sukhadia, Kelly Rios, Patrice Desmeules, Ryan Ptashkin, Helen Won, Jason Chang, Darragh Halpenny, Alison M. Schram, Charles M. Rudin, David M. Hyman, Maria E. Arcila, Michael F. Berger, Ahmet Zehir, Mark G. Kris, Alexander Drilon and Marc Ladanyi

Abstract

Purpose: Targeted next-generation sequencing of DNA has become more widely used in the management of patients with lung adenocarcinoma; however, no clear mitogenic driver alteration is found in some cases. We evaluated the incremental benefit of targeted RNA sequencing (RNAseq) in the identification of gene fusions and MET exon 14 (METex14) alterations in DNA sequencing (DNAseq) driver–negative lung cancers.

Experimental Design: Lung cancers driver negative by MSK-IMPACT underwent further analysis using a custom RNAseq panel (MSK-Fusion). Tumor mutation burden (TMB) was assessed as a potential prioritization criterion for targeted RNAseq.

Results: As part of prospective clinical genomic testing, we profiled 2,522 lung adenocarcinomas using MSK-IMPACT, which identified 195 (7.7%) fusions and 119 (4.7%) METex14 alterations. Among 275 driver-negative cases with available tissue, 254 (92%) had sufficient material for RNAseq. A previously undetected alteration was identified in 14% (36/254) of cases, 33 of which were actionable (27 in-frame fusions, 6 METex14). Of these 33 patients, 10 then received matched targeted therapy, which achieved clinical benefit in 8 (80%). In the 32% (81/254) of DNAseq driver–negative cases with low TMB [0–5 mutations/Megabase (mut/Mb)], 25 (31%) were positive for previously undetected gene fusions on RNAseq, whereas, in 151 cases with TMB >5 mut/Mb, only 7% were positive for fusions (P < 0.0001).

Conclusions: Targeted RNAseq assays should be used in all cases that appear driver negative by DNAseq assays to ensure comprehensive detection of actionable gene rearrangements. Furthermore, we observed a significant enrichment for fusions in DNAseq driver–negative samples with low TMB, supporting the prioritization of such cases for additional RNAseq.

Translational Relevance

Inhibitors targeting kinase fusions have shown dramatic and durable responses in lung cancer patients, making their comprehensive detection critical. Here, we evaluated the incremental benefit of targeted RNA sequencing (RNAseq) in the identification of gene fusions in patients where no clear mitogenic driver alteration is found by DNA sequencing (DNAseq)–based panel testing. We found actionable alterations (kinase fusions or MET exon 14 skipping) in 13% of cases apparently driver negative by previous DNAseq testing. Among the driver-negative samples tested by RNAseq, those with low tumor mutation burden (TMB) were significantly enriched for gene fusions when compared with the ones with higher TMB. In a clinical setting, such patients should be prioritized for RNAseq. Thus, a rational, algorithmic approach to the use of targeted RNA-based next-generation sequencing (NGS) to complement large panel DNA-based NGS testing can be highly effective in comprehensively uncovering targetable gene fusions or oncogenic isoforms not just in lung cancer but also more generally across different tumor types.
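
As an aside, the enrichment reported in the Results above (25 of 81 fusion-positive cases in the low-TMB group versus about 7% of the 151 higher-TMB cases) can be checked with a two-by-two Fisher's exact test. The count of 11 in the second row below is back-calculated from the reported 7% of 151 cases, so it is an approximation rather than a figure taken from the paper.

```python
from scipy.stats import fisher_exact

#            fusion+  fusion-
low_tmb  = [25, 81 - 25]     # 0-5 mut/Mb, counts reported directly
high_tmb = [11, 151 - 11]    # >5 mut/Mb, 11 approximated from the reported 7% of 151

odds_ratio, p_value = fisher_exact([low_tmb, high_tmb])
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.1e}")
```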

A Commentary is in the same issue at https://clincancerres.aacrjournals.org/content/25/15/4586?iss=15

Wake Up and Smell the Fusions: Single-Modality Molecular Testing Misses Drivers

by Kurtis D. Davies and Dara L. Aisner

Abstract

Multitarget assays have become common in clinical molecular diagnostic laboratories. However, all assays, no matter how well designed, have inherent gaps due to technical and biological limitations. In some clinical cases, testing by multiple methodologies is needed to address these gaps and ensure the most accurate molecular diagnoses.

See related article by Benayed et al., p. 4712

In this issue of Clinical Cancer Research, Benayed and colleagues illustrate the growing need to consider multiple molecular testing methodologies for certain clinical specimens (1). The rapidly expanding list of actionable molecular alterations across cancer types has resulted in the wide adoption of multitarget testing approaches, particularly those based on next-generation sequencing (NGS). NGS-based assays are commonly viewed as “one-stop shops” to detect a vast array of molecular variants. However, as Benayed and colleagues discuss, even well-designed and highly vetted NGS assays have inherent gaps that, under certain circumstances, are ideally addressed by analyzing the sample using an alternative approach.

In the article, the authors examined a cohort of lung adenocarcinoma patient samples that had been deemed “driver- negative” via MSK-IMPACT, an FDA-cleared test that is widely considered by experts in the field to be one of the best examples of a DNA-based large gene panel NGS assay (2). Of 589 driver-negative cases, 254 had additional material amenable for a different approach: RNA-based NGS designed specifically for gene fusion and oncogenic gene isoform detection. After accounting for quality control failures, 232 samples were successfully sequenced, and, among these, 36 samples (representing an astonishing 15.5% of tested cases) were found to be positive for a driver gene fusion or oncogenic isoform that had not been detected by DNA-based NGS. The real-world value derived from this orthogonal testing schema was more than theoretical, with 8 of 10 (80%) patients demonstrating clinical benefit when treated according to the alteration identified via the RNA-based approach.

To detect gene rearrangements that lead to oncogenic gene fusions (and to detect mutations and insertions/deletions that lead to MET exon 14 skipping), MSK-IMPACT employs hybrid capture-based enrichment of selected intronic regions from genomic DNA. While this approach has proven to be successful in a variety of settings, there are associated limitations that were determined in this study to underlie the discrepancies between MSK-IMPACT and the RNA-based assay. First, some introns that are involved in clinically actionable rearrangement events are very large, thus requiring substantial sequencing capital that can represent a disproportionate fraction of the assay. Despite the ability via NGS to perform sequencing at a large scale, this sequencing capacity is still finite, and thus decisions must be made to sacrifice coverage of certain large genomic regions to ensure sufficient sequencing depth for other desired genomic targets. In the case of MSK-IMPACT (and most other DNA-based NGS assays), certain important introns in NTRK3 and NRG1 are not included in covered content, simply because they are too large (>90 Kb each). The second primary problem with DNA-based analysis of introns is that they often contain highly repetitive elements that are extremely difficult to assess via NGS due to their recurring presence across the genome. Attempts to sequence these regions are largely unfruitful because any sequencing data obtained cannot be specifically aligned/mapped to the desired targeted region of the genome (3). This is particularly true for intron 31 of ROS1, because it contains two repetitive long interspersed nuclear elements, and many DNA-based assays, including MSK-IMPACT, poorly cover this intron (4). In this study by Benayed and colleagues, the most common discrepant alteration was fusion involving ROS1, which accounted for 10 of 36 (28%) cases. At least six of these, those that demonstrated fusion to ROS1 exon 32, were likely directly explained by incomplete intron 31 sequencing. RNA-based analysis is able to overcome the above described limitations owing to the simple fact that sequencing is focused on exons post-splicing and the need to sequence introns is entirely avoided (Fig. 1).

Figure 1.

Schematic representation of underlying genomic complexities that can lead to false-negative gene fusion results in DNA-based NGS analysis. In some cases, RNA-based approaches may overcome the limitations of DNA-based testing.

Lack of sufficient intronic coverage could not account for all of the discrepancies between DNA-based and RNA-based analysis however. Six samples in the cohort were found to be positive for MET exon 14 skipping based on RNA. In five of these, genomic alterations in MET introns 13 or 14 were observed, however they did not conform to canonical splice site alterations and thus were not initially called (although this was addressed by bioinformatics updates). In RNA-based testing, however, determination of exon skipping is simplified such that, regardless of the specific genomic alteration that interferes with splicing, absence of the exon in the transcript is directly observed (5). In another two of the discrepant cases, tumor purity was observed to be low in the sample, meaning that the expected variant allele frequency (VAF) for a genomic event would also likely be low, potentially below detectable levels. However, overexpression of the fusions at the transcript level was theorized to compensate for low VAF (Fig. 1). Additional explanations for discordant findings between the assays included sample-specific poor sequencing in selected introns and complex rearrangements that hindered proper capture (Fig. 1).

The take home message from Benayed and colleagues is simply this: there is no perfect assay that will detect 100% of the potential actionable alterations in patient samples. Even an extremely well designed, thoroughly vetted, and FDA-cleared assay such as MSK-IMPACT will have inherent and unavoidable “holes” due to intrinsic limitations. The solution to this dilemma, as adeptly described by Benayed and colleagues, is additional testing using a different approach. While in an ideal world every clinical tumor sample would be tested by multiple modalities to ensure the most comprehensive clinical assessment, the reality is that these samples are often scant and testing is fiscally burdensome (and often not reimbursed). Therefore, algorithms to determine which samples should be reflexed to secondary assays after testing with a primary assay are critical for maximizing benefit. In this study, the first algorithmic step was lack of an identified driver (because activated oncogenic drivers tend to exist exclusively of each other), which amounted to 23% of samples tested with the primary assay. In addition, the authors found a significantly higher rate of actionable gene fusions in samples with a low (<5 mut/Mb) tumor mutational burden, meaning that this metric, which was derived from the primary assay, could also be used to help inform decision making regarding additional testing. While this scenario is somewhat specific to lung cancer, similar approaches could be prescribed on a cancer type–specific basis.

These findings should be considered a “wake-up call” for oncologists in regard to the ordering and interpretation of molecular testing. It is clear from these and other published findings that advanced molecular analysis has limitations that require nuanced technical understanding. As this arena evolves, it is critical for oncologists (and trainees) to gain an increased comprehension of how to identify when the “gaps” in a test might be most clinically relevant. This requires a level of technical cognizance that has been previously unexpected of clinical practitioners, yet is underscored by the reality that opportunities for effective targeted therapy can and will be missed if the treating oncologist is unaware of how to best identify patients for whom additional testing is warranted. This study also highlights the mantra of “no test is perfect” regardless of prestige of the testing institution, number of past tests performed, or regulatory status. NGS, despite its benefits, does not mean all-encompassing. It is only through the adaptability of laboratories to utilize knowledge such as is provided by Benayed and colleagues that advances in laboratory medicine can be quickly deployed to maximize benefits for oncology patients.

References:

  1. Comprehensive genomic characterization of squamous cell lung cancers. Nature 2012, 489(7417):519-525.
  2. A genomics-based classification of human lung tumors. Science translational medicine 2013, 5(209):209ra153.
  3. Govindan R, Ding L, Griffith M, Subramanian J, Dees ND, Kanchi KL, Maher CA, Fulton R, Fulton L, Wallis J et al: Genomic landscape of non-small cell lung cancer in smokers and never-smokers. Cell 2012, 150(6):1121-1134.
  4. Takeuchi K, Soda M, Togashi Y, Suzuki R, Sakata S, Hatano S, Asaka R, Hamanaka W, Ninomiya H, Uehara H et al: RET, ROS1 and ALK fusions in lung cancer. Nature medicine 2012, 18(3):378-381.
  5. Morodomi Y, Takenoyama M, Inamasu E, Toyozawa R, Kojo M, Toyokawa G, Shiraishi Y, Takenaka T, Hirai F, Yamaguchi M et al: Non-small cell lung cancer patients with EML4-ALK fusion gene are insensitive to cytotoxic chemotherapy. Anticancer research 2014, 34(7):3825-3830.
  6. Yoshimura M, Tada Y, Ofuzi K, Yamamoto M, Nakatsura T: Identification of a novel HLA-A 02:01-restricted cytotoxic T lymphocyte epitope derived from the EML4-ALK fusion gene. Oncology reports 2014, 32(1):33-39.
  7. Yang L, Li G, Zhao L, Pan F, Qiang J, Han S: Blocking the PI3K pathway enhances the efficacy of ALK-targeted therapy in EML4-ALK-positive nonsmall-cell lung cancer. Tumour biology : the journal of the International Society for Oncodevelopmental Biology and Medicine 2014.
  8. Workman P, van Montfort R: EML4-ALK fusions: propelling cancer but creating exploitable chaperone dependence. Cancer discovery 2014, 4(6):642-645.
  9. Imielinski M, Berger AH, Hammerman PS, Hernandez B, Pugh TJ, Hodis E, Cho J, Suh J, Capelletti M, Sivachenko A et al: Mapping the hallmarks of lung adenocarcinoma with massively parallel sequencing. Cell 2012, 150(6):1107-1120.
  10. Peifer M, Fernandez-Cuesta L, Sos ML, George J, Seidel D, Kasper LH, Plenker D, Leenders F, Sun R, Zander T et al: Integrative genome analyses identify key somatic driver mutations of small-cell lung cancer. Nature genetics 2012, 44(10):1104-1110.

Other posts on this site which refer to Lung Cancer and Cancer Genome Sequencing include:

Multi-drug, Multi-arm, Biomarker-driven Clinical Trial for patients with Squamous Cell Carcinoma called the Lung Cancer Master Protocol, or Lung-MAP launched by NCI, Foundation Medicine, and Five Pharma Firms

US Personalized Cancer Genome Sequencing Market Outlook 2018 –

Comprehensive Genomic Characterization of Squamous Cell Lung Cancers

International Cancer Genome Consortium Website has 71 Committed Cancer Genome Projects Ongoing

Non-small Cell Lung Cancer drugs – where does the Future lie?

Lung cancer breathalyzer trialed in the UK

Diagnosing Lung Cancer in Exhaled Breath using Gold Nanoparticles

Read Full Post »

The Role of Informatics in The Laboratory

Larry H. Bernstein, M.D.

Introduction

The clinical laboratory industry, as part of the larger healthcare enterprise, is in the midst of large changes that can be traced to the mid-1980s and that have accelerated in the last decade.   These changes are associated with a host of dramatic events that require accelerated readjustments in the workforce, scientific endeavors, education, and the healthcare enterprise.   They are highlighted by the following (not unrelated) events:  globalization; a postindustrial information explosion driven by advances in computers and telecommunications networks; genomics and proteomics in drug discovery; and consolidation in retail, communication, transportation, and the healthcare and pharmaceutical industries.   Let us consider some of these events.   Globalization is driven by the principle that a manufacturer may seek to purchase labor, parts, or supplies from sources that are less expensive than those available at home.   The changes in the airline industry have been characterized by growth in travel, reductions in force, and the ability of customers to find the best fares.   The discoveries in genetics that evolved from asking questions about replication, transcription, and translation of the genetic code have moved to functional genomics and to elucidation of cell signaling pathways.   All of these changes would have been impossible without the information explosion.

The Laboratory as a Production Environment

The clinical laboratory produces about 60 percent of the information used by nurses and physicians to make decisions about patient care.   In addition, the actual cost of the laboratory is only about 3–4 percent of the cost of the enterprise.   The result is that the requirements for supporting the laboratory don’t receive attention without a proactive argument for how the laboratory contributes to realizing the goals of the organization.   The key issues affecting laboratory performance are:  staffing requirements, instrument configuration, workflow, what to send out, what to move to point-of-care, how to reconfigure workstations, and how to manage the information generated by the laboratory.

Staffing requirements, instrument configuration, and workflow are being addressed by industry automation.   The first attempt was based on connecting instruments by tracks.   This system proved unable to handle STAT specimens without noticeable degradation of turnaround time.   The consequence of that failure has been to drive creation of a parallel system of point-of-care testing, connecting the devices in a network with a RAWLS.  Another adjustment was to build an infrastructure for pneumatic tube delivery of specimens and to redesign the laboratory. This had some success, but required capitalization.   The pneumatic tube system could be justified on the basis of its value to the organization in supporting services besides the laboratory. The industry is moving in the direction of connected modules that share an automated pipettor and reduce the amount of specimen splitting.   These are primarily PREANALYTICAL refinements.

There are other improvements that affect quality and cost that are not standard, but should be.   These are:  autoverification, embedded quality control rules and algorithms, and incorporation of the X-bar into standard quality monitoring.   This can be accomplished using middleware between the enterprise computer and the instruments, designed to do more than just connect instruments with the medical information system.   The most common problem encountered when installing a medical repository is the repeated slowdown of the system as more users are connected; the laboratory has to be protected from this phenomenon, which can be relieved considerably by an open architecture.   Another function of middleware is to keep track of productivity by instrument and to establish the cost per reportable result.
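As a rough illustration of what such middleware rules look like in practice, the sketch below applies a Westgard-style QC check to a run and a simple review-limit rule to individual patient results; the rule names and limits are invented for this example, not any vendor's configuration.

```python
# Minimal sketch of middleware-style autoverification (illustrative limits only).

REVIEW_LIMITS = {"K": (2.8, 6.2), "GLU": (40, 450)}   # assumed clinical review limits

def qc_run_acceptable(qc_values, target, sd):
    """Reject the run if any control exceeds 3 SD, or two consecutive controls exceed 2 SD."""
    if any(abs(v - target) > 3 * sd for v in qc_values):
        return False
    beyond_2sd = [abs(v - target) > 2 * sd for v in qc_values]
    return not any(a and b for a, b in zip(beyond_2sd, beyond_2sd[1:]))

def disposition(test, value):
    """Autoverify a result unless it falls outside the review limits."""
    lo, hi = REVIEW_LIMITS[test]
    return "AUTOVERIFY" if lo <= value <= hi else "HOLD FOR REVIEW"

print(qc_run_acceptable([4.1, 4.3, 4.0], target=4.2, sd=0.1))  # True: run may be released
print(disposition("K", 6.8))                                   # HOLD FOR REVIEW
```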

The Laboratory and Informatics

A few informatics requirements for the processing of tests are:

  1. Reject release of runs that fail Quality Control rules
  2. Flag results that fail clinical rules for automatic review
  3. Ability to construct a report that has correlated information for physician review, regardless of where the test is produced (RBC, MCV, reticulocytes and ferritin)
  4. Ability to present critical information in a production environment without technologist intervention (platelet count or hemoglobin in preparation of transfusion request)
  5. Ability to download 20,000 patients from an instrument for review of reference ranges
  6. Ability to look at quality control of results on more than one test on more than one instrument at a time
  7. Ability to present risks in a report for physicians for medical decisions as an alternative to a traditional cutoff value

I list the essential steps of the workload processing sequence, with points of informatics enhancement of the process identified:

Prelaboratory (ER) 1:

Nurse draws specimens from patient (without specimen ID) and places tubes in bag labeled with name

Nurse prints labels after patient is entered.

Labels put on tubes

Orders entered into computer and labels put on tubes

Tubes sent to laboratory

Lab test  is shown as PENDING

Prelaboratory 2:

Tubes in bags sent to lab (by pneumatic tube)

Time of arrival is not same as time of order entry (10 minutes later)

If order entry is not done prior to sending specimen – entry is done in front processing area –

Sent to lab area 10 minutes later after test is entered into computer

Preanalytical:

Centrifugation

Delivery to workareas (bins)

Aliquoting for serological testing

Workstation assignment

Dating and amount of reagents

Blood gas or co-oximetry – no centrifugation

Hematology – CBC – no centrifugation
Send specimen for Hgb A1c

Send specimen for Hgb electrophoresis and Hgb F/Hgb A2

Specimen to Aeroset and then to Centaur

Analytical:

Use of bar code to encode information

Check alignment of bar code

Quality control and calibration at required interval – check before run

Run tests

Manual:

2 hrs per run

enter accession #

enter results 1 accession at a time

Post analytical:

Return to racks or send to another workarea

Verify results

Enter special comments

Special  problems:

Calling results

Add-on tests

Misaligned bar code label

Inability to find specimen

Coagulation

Manual differentials

Informatics and Information Technology

The traditional view of the laboratory environment has been that it is a manufacturing center, but the main product of the laboratory is information, and the environment is a knowledge business.   This will require changes in the education of clinical laboratory professionals.   Biomedical Informatics has been defined as the scientific field that deals with the storage, retrieval, sharing, and optimal use of biomedical information, data, and knowledge for problem solving and decision making. It touches on all basic and applied fields in biomedical science and is closely tied to modern information technologies, notably in the areas of computing and communication.   The services supported by an informatics architecture include operations and quality management, clinical monitoring, data acquisition and management, and statistics supported by information technology.

The importance of a network architecture is clear.   We are moving from computer-centric processing to a data-centric environment and will soon manage a wide array of complex and inter-related decision-making resources. These resources, commonly referred to as objects and content, now include voice, video, text, data, images, 3D models, photos, drawings, graphics, audio, and compound documents.  The architectural features required to achieve this are listed in Fig 1.

According to Coiera and Dowton (Coiera E, Dowton SB. Reinventing ourselves: how innovations such as on-line ‘just-in-time’ CME may help bring about a genuinely evidence-based clinical practice. Medical Journal of Australia 2000;173:343-344), echoing Lawrence Weed, “Clinicians in the past were trained to master clinical knowledge and become experts in knowing why and how. Today’s clinicians have no hope of mastering any substantial portion of the medical knowledge base.  Every time we make a clinical decision, we should stop to consider whether we need to access the clinical evidence-base. Sometimes that will be in the form of on-line guidelines, systematic reviews or the primary clinical literature.”

Fig 1.  Architectural features required:

– Interoperability across environments

– A representation for storage that is independent of implementation

– A representation of collections that is independent of the database (schema, table structures)

Informatics and the Education of Laboratory Professionals

The increasing dependence on laboratory information and the incorporation of laboratory information into evidence-based guidelines necessitate a significant component of education in informatics.   The public health service has mandated informatics as a component of the competencies for health services professionals (the “Core Competencies for Public Health Professionals” compendium developed by the Council on Linkages Between Academia and Public Health Practice), and nursing informatics competencies have already been written.   Coiera (Coiera E. Medical informatics meets medical education: there’s more to understanding information than technology. Medical Journal of Australia 1998;168:319-320) has suggested 10 essential informatics skills for physicians.

I have put together the list below, with items taken from Coiera and the Public Health Service competencies, as an elaboration of competencies for Clinical Laboratory Sciences.

A.   Personal Maintenance

1. Understands the dynamic and uncertain nature of medical knowledge and knows how to keep personal knowledge and skills up-to-date
2. Searches for and assesses knowledge according to the statistical basis of scientific evidence
3. Understands some of the logical and statistical models of the diagnostic process
4. Interprets uncertain clinical data and deals with artefact and error
5. Evaluates clinical outcomes in terms of risks and benefits

B.   Effective Use of Information

Analytic Assessment Skills

6. Identifies and retrieves current relevant scientific evidence
7. Identifies the limitations of research
8. Determines appropriate uses and limitations of both quantitative and qualitative data
9. Evaluates the integrity and comparability of data and identifies gaps in data sources
10. Applies ethical principles to the collection, maintenance, use, and dissemination of data and information
11. Makes relevant inferences from quantitative and qualitative data
12. Applies data collection processes, information technology applications, and computer systems storage/retrieval strategies
13. Manages information systems for collection, retrieval, and use of data for decision-making
14. Conducts cost-effectiveness, cost-benefit, and cost-utility analyses

C.   Effective Use of Information Technology

15. Selects and utilizes the most appropriate communication method for a given task (e.g., face-to-face conversation, telephone, e-mail, video, voice-mail, letter)
16. Structures and communicates messages in a manner most suited to the recipient, task, and chosen communication medium
17. Utilizes personal computers and other office information technologies for working with documents and other computerized files
18. Utilizes modern information technology tools for the full range of electronic communication appropriate to one’s duties and programmatic area
19. Utilizes information technology so as to ensure the integrity and protection of electronic files and computer systems
20. Applies all relevant procedures (policies) and technical means (security) to ensure that confidential information is appropriately protected

I expand on these recommended standards.   The first item is personal maintenance.   This requires continuing education to meet the changing needs of the profession, with expanding knowledge and access to knowledge that requires critical evaluation.   The profession has been paid in recognition of the technical, task-oriented contributions made by laboratory staff, but not for their contribution as knowledge workers.   This can be changed, but it can’t be realized through the usual baccalaureate educational requirement alone.   Most technologists want to get out into the workforce, but once they are in the workforce – what next?   In many institutions, it falls to the laboratory to provide the expertise that drives the organization’s computer and information restructuring, drawing on staff from the transfusion service, microbiology, and elsewhere.   The laboratory is recognized for its information expertise, but there is still reason to do more.   The fact is that the mindset of laboratory staff has been oriented toward a manufacturing productivity related to test production, but the data that the production represents is information.   We have quality control of the test process, but we are required to manage the total process, including the quality of the information we generate.   Another consideration is that the information we generate is used for clinical trials, and the huge variation in the way that information is used is problematic.

The first category for discussion is personal maintenance.  These items concern keeping up with advances in medical knowledge, being critical about the quality of the evidence for current knowledge, and being aware of the statistical underpinnings of that thinking (items 1-5).   It is not enough to keep up with changes in medical thinking using only the professional laboratory literature; a systematic review of problem topics using PubMed as a guide is also essential.  This requires that the clinical laboratory scientist know how to access the internet and search for key studies concerning the questions being asked.   The reading of abstracts and papers also requires an education in methods of statistical analysis, contingency tables, study design, and critical thinking.   The most common methods used in clinical laboratory evaluation are linear regression, linear regression, and yes, linear regression.  A discussion of distance learning among members of the American Statistical Association reveals that much of statistical education for biologists, chemists, and engineers now comes from *software*.  Knowledge workers in drug development and in molecular diagnostics are increasingly challenged with larger, more complicated data sets, and there is a need to interpret and report results quickly. This need is not confined to basic research or the clinical setting, and it may have to be met without consulting statisticians.  Category A slides into category B, effective use of information.

Effective use of information requires skills that support the design of evaluations of laboratory tests, methods of statistical analysis, and the critical assessment of published work (items 6-9), and the processes for collecting data, using information technology applications, and interpreting the data (items 10-12).   Items 13 and 14 address management issues.

There is a vocabulary that has to be mastered and certain questions that have to be answered whenever a topic is being investigated.   I identify a number of these at this point in the discussion.

Contingency Table:  A table of frequencies, usually two-way, with event type in columns and test results as positive or negative in rows.   A multi-way table can be used for multivalued categorical analysis.   The conventional 2X2 contingency table is shown below –

                              No disease        Disease           Row total

Test negative                 A  (TN)           B  (FN)           A + B

Test positive                 C  (FP)           D  (TP)           C + D

Column total                  A + C             B + D             A + B + C + D

PVN = TN/(TN + FN)                    PVP = TP/(TP + FP)

Specificity = TN/(FP + TN) = A/(A + C)          Sensitivity = TP/(TP + FN) = D/(B + D)

Type I error:  There is a finding when none actually exists (false positive error; a true null hypothesis is rejected).

Type II error:  There is no finding when one actually exists (missed diagnosis; false negative error).

Sensitivity:  Proportion of diseased patients with a positive test (true positive rate).  D/(B + D)

Specificity:  Proportion of disease-free patients with a negative test (true negative rate).  A/(A + C)

False positive error rate:  The percentage of results that are positive in the absence of disease (1 – specificity).  C/(A + C)

ROC curve:  Receiver operating characteristic curve, a plot of sensitivity vs 1 – specificity.  Two methods can be compared in ROC analysis by the area under the curve.   The optimum decision point can be identified within a narrow range of coordinates on the curve.

Predictive value (+) (PVP):  Probability that disease is present when a test is positive, D/(C + D), or the percentage of patients with disease among those with a positive test.   The observed and expected probability may be the same or different.

Predictive value (–) (PVN):  Probability that disease is absent given a negative test result, A/(A + B), or the percentage of patients without disease among those with a negative test.  The observed and expected probability may be the same or different.
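A short helper makes these definitions concrete; it assumes the 2×2 layout above (A = TN, B = FN, C = FP, D = TP) and is only a worked illustration.

```python
# Compute the test-performance quantities defined above from a 2x2 table.

def two_by_two_metrics(TN, FN, FP, TP):
    return {
        "sensitivity": TP / (TP + FN),           # D / (B + D)
        "specificity": TN / (TN + FP),           # A / (A + C)
        "false_positive_rate": FP / (TN + FP),   # 1 - specificity
        "PVP": TP / (TP + FP),                   # predictive value of a positive test
        "PVN": TN / (TN + FN),                   # predictive value of a negative test
    }

# Using the fetal fibronectin table discussed later (TN=99, FN=1, FP=35, TP=65):
print(two_by_two_metrics(TN=99, FN=1, FP=35, TP=65))
```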

Power:   When a statement is made that there is no effect, or a test fails to predict the finding of disease, were there enough patients included in the study to see the effect if it exists?   This applies to randomized controlled drug studies as well as studies of tests. Power protects against the error of finding no effect when one exists.

Selection Bias:  It is common to find that a high performance claimed for a test is not later substantiated when the test is introduced and widely used.   Why does this occur?   A common practice in experimental design is to define inclusion and exclusion criteria so that the effect is very specific for the condition and to eliminate interference by “confounders,” unanticipated effects that are not intended.   A common example is the removal of patients with acute renal failure or chronic renal insufficiency because of delayed clearance of analytes from the circulation.   The result is that the test is introduced into a population different from the trial population, with claims based on its performance in a limited population.   The error introduced could be prediction of disease in an individual in whom the effect does not hold.   This error is reduced by eliminating selection bias, which may require multiple studies using patients who have the confounding conditions (renal insufficiency, myxedema).   Unanticipated effects often aren’t designed into a study.   In many studies of cardiac markers, the study design included only patients who had Acute Coronary Syndrome (ACS); this is an example of selection bias.   Patients who have ACS have chest pain of an anginal nature that lasts at least 30 minutes, and usually have more than a single episode in 24 hours.   That is not how the majority of patients suspected of having a myocardial infarct present to the emergency department.   How then is one to evaluate the effectiveness of a cardiac marker?

Randomization:   Randomization is the assignment of study participants to either placebo (no treatment) or treatment.   The investigator and the participant enrolled in the study are blinded; the analyst might also be blinded.   A potential problem is selection bias from dropouts who skew the characteristics of the population.

Critical questions:

What is the design of the study that you are reading?   Is there sufficient power or is there selection bias?  What are the conclusions of the authors?   Are the conclusions in line with the study design, or overstated?

Statistical tests and terms:   

Normal distribution:  Symmetrical bell-shaped curve (Gaussian distribution).   The ±2 standard deviation limits approximate the 95% confidence interval.

Chi square test:  Has a chi square distribution.   Used for measuring probability from a contingency table.   Non-parametric test.

Student’s t-test:  Parametric measure of difference between two population means.

F-test:  An F-test (Snedecor and Cochran, 1983) is used to test whether the standard deviations of two populations are equal.  In comparing two independent samples of size N1 and N2, the F-test provides a measure of the probability that they have the same variance. The estimators of the variance are s1² and s2². We define as the test statistic their ratio, F = s1²/s2², which follows an F distribution with f1 = N1 – 1 and f2 = N2 – 1 degrees of freedom.

F Distribution: The F distribution is the distribution of the ratio of two chi-square variables, each first divided by its degrees of freedom.

Z scores:  Z scores are sometimes called “standard scores”. The z score transformation is especially useful when seeking to compare the relative standings of items from distributions with different means and/or different standard deviations.

Analysis of variance:  Parametric comparison of two or more population means, made by comparing the variance between the populations to the variance within them.   Probability is measured by the F-test.

Linear Regression:  A classic statistical problem is to determine the relationship between two random variables X and Y; for example, we might consider the height and weight of a sample of adults.  Linear regression attempts to explain this relationship with a straight line fitted to the data.  The simplest case – one dependent and one independent variable, which can be visualized in a scatterplot – is simple linear regression.   The linear regression model is the most commonly used model in clinical chemistry.

Multiple Regression:  The general purpose of multiple regression (the term was first used by Pearson, 1908) is to learn more about the relationship between several independent or predictor variables and a dependent or criterion variable.  The general computational problem in multiple regression analysis is to fit a straight line (hyperplane) to a number of points.  A multiple regression fits the dependent variable using two or more predictors with a model of the form Y = a1X1 + a2X2 + … + b + e.
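A minimal worked example of both models, using ordinary least squares on made-up data (numpy only), may help fix the notation:

```python
# Sketch of simple and multiple linear regression by ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(170, 10, 50)                 # e.g., height
x2 = rng.normal(40, 5, 50)                   # a second predictor
y = 0.8 * x1 + 1.5 * x2 + rng.normal(0, 5, 50)

# Simple linear regression: y = a*x1 + b
a, b = np.polyfit(x1, y, 1)

# Multiple regression: y = a1*x1 + a2*x2 + intercept, via least squares
X = np.column_stack([x1, x2, np.ones_like(x1)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(a, b, coef)
```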

Discriminant function:  Discriminant analysis is a technique for classifying a set of observations into predefined classes. The purpose is to determine the class of an observation based on a set of variables known as predictors or input variables. The model is built from a set of observations for which the classes are known; this set is sometimes referred to as the training set. Based on the training set, the technique constructs a set of linear functions of the predictors, known as discriminant functions, such that

L = b1x1 + b2x2 + … + bnxn + c , where the b’s are discriminant coefficients, the x’s are the input variables or predictors and c is a constant.

These discriminant functions are used to predict the class of a new observation with unknown class. For a k class problem k discriminant functions are constructed. Given a new observation, all the k discriminant functions are evaluated and the observation is assigned to class i if the ith discriminant function has the highest value.
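The sketch below fits such a linear discriminant function with scikit-learn on synthetic two-class data; the fitted coefficients correspond to the b's and c in the formula above.

```python
# Sketch of a linear discriminant function L = b1*x1 + b2*x2 + c on synthetic data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.coef_, lda.intercept_)       # the b's and c of the discriminant function
print(lda.predict([[1.8, 2.1]]))       # class assigned to a new observation
```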

Nonparametric Methods:

Logistic Regression: Researchers often want to analyze whether some event occurred or not.  The outcome is binary.  Logistic regression is a type of regression analysis where the dependent variable is a dummy variable (coded 0, 1).   The linear probability model, expressed as Y = a + bX + e, is problematic because

  1. The variance of the dependent variable is dependent on the values of the independent variables.
  2. e, the error term, is not normally distributed.
  3. The predicted probabilities can be greater than 1 or less than 0.

The “logit” model has the form:

ln[p/(1-p)] = a + BX + e or

[p/(1-p)] = exp(a) · exp(BX) · exp(e)

where:

  • ln is the natural logarithm (log to the base e, where e = 2.71828…)
  • p is the probability that the event Y occurs, p(Y=1)
  • p/(1-p) is the odds
  • ln[p/(1-p)] is the log odds, or “logit”

The logistic regression model is simply a non-linear transformation of the linear regression. The logit distribution constrains the estimated probabilities to lie between 0 and 1.
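For illustration, the logit model can be fit in a few lines; the data below are simulated and the coefficients are arbitrary, but the structure mirrors ln[p/(1-p)] = a + bX.

```python
# Sketch of fitting a logistic regression to a simulated binary outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
x = rng.normal(0.1, 0.2, 200).reshape(-1, 1)     # e.g., a scaled marker value
p = 1 / (1 + np.exp(-(-3 + 25 * x.ravel())))     # true model: ln[p/(1-p)] = -3 + 25*x
y = rng.binomial(1, p)

model = LogisticRegression().fit(x, y)
print(model.intercept_, model.coef_)             # estimates of a and b
print(model.predict_proba([[0.2]])[:, 1])        # estimated probability that Y = 1
```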

Graphical Ordinal Logit Regression:  Logistic regression fits a solution to a two-valued outcome, but the outcome in question might have three or more ordered values.

For example, scaled values of a test – low, normal, and high – might have different meanings.   This type of behavior occurs in certain classification problems: the model has to deal with anemia, normal, and polycythemia, or similarly, neutropenia, normal, and systemic inflammatory response (sepsis).   An ordinal logit model fits such data quite readily.

Clustering methods:  There are a number of methods to classify data when the dependent variable is not known but is presumed to exist.   A commonly used method classifies data using the geometric distance between average point coordinates.   A very powerful method is latent class cluster analysis.

Data Extraction:

Data can be extracted from databases, but then have to be worked with in a flat-file format.   The easiest and most commonly used approaches are to collect data in a relational database such as Access (if the format is predefined), or to convert data into an Excel format.   A common problem is the inability to extract certain data because it is not in an extractable or usable format.

Let us examine how these methods are actually used in a clinical laboratory setting.

The first example is a method introduced almost 30 years ago into quality control in hematology by Brian Bull at Loma Linda University, called the x-bar function (also the Bull algorithm).   The method follows the means of runs of patient data on the assumption that the mean MCV of a stable population does not vary from day to day.  This is a very useful method that can be applied to the evaluation of laboratory performance.   It is a standard quality control approach used in industrial processes since the 1930s.
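A simplified sketch of the idea follows (Bull's published algorithm uses a smoothed, trimmed moving average of batches of patient results; this version just compares raw run means with a target):

```python
# Minimal sketch of x-bar monitoring: flag runs whose mean MCV drifts from target.
import numpy as np

def xbar_flags(mcv_values, run_size=20, target=90.0, limit_pct=3.0):
    """Return (mean, out_of_control) for each consecutive run of `run_size` results."""
    runs = [mcv_values[i:i + run_size]
            for i in range(0, len(mcv_values) - run_size + 1, run_size)]
    flags = []
    for run in runs:
        m = float(np.mean(run))
        flags.append((round(m, 2), abs(m - target) / target * 100 > limit_pct))
    return flags

rng = np.random.default_rng(3)
print(xbar_flags(rng.normal(90, 5, 100)))   # five runs of 20, each compared with the target
```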

We next examine the Chi Square distribution.  Review the formula for calculating chi square and calculations of expected frequencies.  Take a two-by-two table of the type

Effect               No effect          Sum Column

Predictor positive          87                     12                     99

Predictor negative         18                     93                    111

Sum Rows                    105                   105                   210

Experiment with the recalculation of chi square by changing the frequencies in the columns for effect and no effect, keeping the total frequencies the same.  The result is a decrease in chi square as predictor negative–effect and predictor positive–no effect both increase.  The exercise can be carried out with an online chi square calculator (use Google to find one).   The chi square test can be applied to the contingency table used to indicate the effectiveness of fetal fibronectin for assessing low risk of preterm delivery.
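The same exercise can be done with scipy in place of the online calculator; the second table below keeps the same marginal totals but a weaker association, so the chi square falls.

```python
# Chi-square for the 2x2 table above, and for a weaker table with the same margins.
from scipy.stats import chi2_contingency

table = [[87, 12], [18, 93]]                       # predictor vs. effect, as above
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(chi2, 1), p)                           # roughly 107.5, p << 0.001

weaker = [[60, 39], [45, 66]]                      # same totals, weaker association
print(round(chi2_contingency(weaker, correction=False)[0], 1))
```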

For example,

                        No Preterm Labor      Preterm Labor      Row total

FFN – neg                      99                    1               100

FFN – pos                      35                   65               100

Column total                  134                   66               200

PVN = 99/100 = 99%

99% observed probability that there will not be preterm delivery with a negative test.

Chi square goodness of fit:

Degrees of freedom: 1
Chi-square = 92.6277702397105
p is less than or equal to 0.001.
The distribution is significant.
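For completeness, the calculation can be reproduced with scipy; the uncorrected chi square agrees with the value printed above (about 92.63), and the predictive value of a negative test is 99/100.

```python
# Reproduce the fetal fibronectin chi-square and PVN from the table above.
from scipy.stats import chi2_contingency

ffn = [[99, 1],     # FFN negative: no preterm labor, preterm labor
       [35, 65]]    # FFN positive
chi2, p, dof, expected = chi2_contingency(ffn, correction=False)
print(round(chi2, 2), dof, p)      # ~92.63 with 1 degree of freedom, p < 0.001

pvn = 99 / (99 + 1)
print(pvn)                         # 0.99
```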

Examine the effects of scaling continuous data from a heart attack study to obtain ordered intervals.  Look at the chi square test for the heart attack data using an N×2 table with the table columns as heart attack or no heart attack.  This allows us to determine the significance of the test in predicting heart attack.   Look at the Student t-test for comparing the continuous values of the test between the heart attack and non-heart attack populations.  The t-test is like a one-way analysis of variance with only two values for the factor variable.  The t-test and one-way ANOVA compare the means between two populations.  If the result is significant, then the null hypothesis that the data are taken from the same population is rejected; the alternative hypothesis is that they are different.
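As a sketch, both forms of the two-sample t-test (pooled and separate variance) are available in scipy; the synthetic groups below are drawn to resemble the CKMB summary statistics in the output that follows.

```python
# Two-sample t-tests: pooled (Student) and separate variance (Welch).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)
group0 = rng.normal(1.4, 3.0, 660)    # e.g., CKMB in patients without infarction
group1 = rng.normal(4.4, 5.0, 90)     # e.g., CKMB in patients with infarction

print(ttest_ind(group0, group1, equal_var=True))    # pooled variance
print(ttest_ind(group0, group1, equal_var=False))   # separate variance (Welch)
```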

One can visualize the difference by plotting the means and confidence intervals for the two groups.

We can plot a frequency distribution before we calculate the means and check the distribution around the means.   The simplest way to do this is the histogram.   The histogram for a large sample of potassium values is used to illustrate this.   The mean is 4.2.

We can use a quality control method called the X-bar (Beckman Coulter provides it on its hematology analyzers) to test the deviation of run means from the overall mean.   I illustrate the validity of the X-bar by comparing the means of a series of runs.

Sample size                   =       958

Lowest value                  =        84.0000

Highest value                 =        90.7000

Arithmetic mean               =        87.8058

Median                        =        87.8000

Standard deviation            =         0.9362

————————————————————

Kolmogorov-Smirnov test

for Normal distribution       :   accept Normality (P=0.353)

If I compare the means by the t-test, I am testing whether the samples are taken from the same or from different populations.   When we introduce a third group, we ask whether the samples are taken from a single population, or we reject that hypothesis in favor of the alternative that the samples are different.   This is illustrated by sampling from a group of patients with stable cardiac disease and a group with normal cardiac status, neither of which has acute myocardial infarction.   This is illustrated below:

Two-sample t-test on CKMB grouped by OTHER against Alternative = ‘not equal’

      Group N Mean SD
   0          660       1.396       3.085
   1          90       4.366       4.976

Separate variance:

t                         =       -5.518

df                        =         98.5

p-value                   =        0.000

Bonferroni adj p-value    =        0.000

Pooled variance:

t                         =       -7.851

df                        =          748

p-value                   =        0.000

Bonferroni adj p-value    =        0.000

Two-sample t-test on TROP grouped by OTHER against Alternative = ‘not equal’

      Group N Mean SD
   0          661       0.065       0.444
   1          90       1.072       3.833

Separate variance:

t                         =       -2.489

df                        =         89.3

p-value                   =        0.015

Bonferroni adj p-value    =        0.029

Pooled variance:

t                         =       -6.465

df                        =          749

p-value                   =        0.000

Bonferroni adj p-value    =        0.000

Another example illustrates the application of this significance test.   Beta thalassemia is characterized by an increase in hemoglobin A2.   Thalassemia gets more complicated when we consider delta-beta deletion and alpha thalassemia.   Nevertheless, we measure hemoglobin A2 by liquid chromatography on the Bio-Rad Variant II.   The comparison of hemoglobin A2 in affected and unaffected patients is shown below (with random resampling):

Two-sample t-test on A2 grouped by THALASSEMIA DIAGNOSIS against Alternative = ‘not equal’

      Group N Mean SD
   0          257       3.250       1.131
   1          61       6.305       2.541

Separate variance:

t                         =       -9.177

df                        =         65.7

p-value                   =        0.000

Bonferroni adj p-value    =        0.000

Pooled variance:

t                         =      -14.263

df                        =          316

p-value                   =        0.000

Bonferroni adj p-value    =        0.000

When we do a paired comparison of the Variant hemoglobin A2 versus quantitation by Helena isoelectric focusing, the t-test shows no significant difference.

Paired samples t-test on A2 vs A2E with 130 cases

Alternative = ‘not equal’

Mean A2                   =        3.638

Mean A2E                  =        3.453

Mean difference           =        0.185

SD of difference          =        1.960

t                         =        1.074

df                        =          129

p-value                   =        0.285

Bonferroni adj p-value    =        0.285

Consider overlay box plots of the troponin I means for normal, stable cardiac patients and AMI patients:

The means of two subgroups may be close, and the confidence intervals around the means may be wide, so that it is not clear whether to accept or reject the null hypothesis.  I illustrate this by comparing the two groups with normal cardiac status and stable cardiac disease, neither having myocardial infarction.   I use the nonparametric Kruskal-Wallis analysis of ranks between two groups, and I increase the sample size to roughly 100,000 patients by a resampling algorithm.   The results for CKMB and for troponin I are:

Kruskal-Wallis One-Way Analysis of Variance for 93538 cases

Dependent variable is CKMB

Grouping variable is OTHER

Group       Count   Rank Sum

0                     83405            3.64937E+09

1                    10133             7.25351E+08

Mann-Whitney U test statistic =  1.71136E+08

Probability is        0.000

Chi-square approximation =     9619.624 with 1 df

Kruskal-Wallis One-Way Analysis of Variance for 93676 cases

Dependent variable is TROP

Grouping variable is OTHER

Group       Count   Rank Sum

0                    83543             3.59446E+09

1                    10133             7.93180E+08

Mann-Whitney U test statistic =  1.04705E+08

Probability is        0.000

Chi-square approximation =    21850.251 with 1 df
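A rough outline of this comparison in code: resample each group to a large size and apply the Mann-Whitney U test (the two-group case of Kruskal-Wallis). The data here are synthetic stand-ins, not the study values.

```python
# Rank-based comparison of two groups after bootstrap resampling to large sizes.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(5)
normal_or_stable = np.abs(rng.normal(0.05, 0.3, 750))   # e.g., troponin I, no AMI
ami = np.abs(rng.normal(1.0, 3.0, 90))                  # e.g., troponin I, AMI

# bootstrap resampling to sizes comparable to those used in the text
big0 = rng.choice(normal_or_stable, size=83_000, replace=True)
big1 = rng.choice(ami, size=10_000, replace=True)

print(mannwhitneyu(big0, big1, alternative="two-sided"))
```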

Examine a unique data set in which a test is done on amniotic fluid to determine whether there is adequate surfactant activity, so that fetal lung compliance will be good at delivery.  If there is inadequate surfactant activity, there is risk of respiratory distress of the newborn soon after delivery.  The data include the measure of surfactant activity, gestational age, and fetal status at delivery.  This study emphasized the calculation of the odds ratio and probability of RDS using the surfactant measurement, with and without gestational age, for infants delivered within 72 hours of the test.  The statistical method (GOLDminer) has a graphical display with the factor variable as the abscissa and the scaled predictor and odds ratio as the ordinate.  The data acquisition required a multicenter study of the National Academy of Clinical Biochemistry led by John Chapman (Chapel Hill, NC) and Lawrence Kaplan (Bellevue Hospital, NY, NY), published in Clinica Chimica Acta (2002).

The table generated is as follows:

Probability and Odds-Ratios for Regression of S/A on Respiratory Outcomes

S/A interval Probability of RDS Odds Ratio
0 – 10  0.87  713
11 – 20  0.69  239
21 – 34  0.43  80
35 – 44  0.20  27
45 – 54  0.08 9
55 – 70  0.03 3
> 70 0.01 1

There is a plot corresponding to the table above; it is generated by GOLDminer (graphical ordinal logit display), which is patented.  As the risk increases, the odds ratio (and the probability of an event) increases.  The calculation is an advantage when there are more than two values of the outcome variable, such as heart attack, no heart attack, and something else.  We look at the use of the GOLDminer algorithm again, this time using the acute myocardial infarction and troponin T example.   The ECG finding is scaled as normal (0), NSSTT (1), ST depression or T-wave inversion, and ST elevation.   The troponin T is scaled to: ≤0.03, 0.031-0.06, 0.061-0.085, 0.086-0.1, 0.11-0.2, >0.20 ug/L.   The GOLDminer output is shown below with troponin T as the second predictor.

X-profile          average score (joint Y)          P(DXSCALE = 0)          P(DXSCALE = 4)

4,5                   3.64                 0.00                 0.68

4,4                   3.51                 0.00                 0.59

4,3                   3.35                 0.00                 0.48

3,5                   3.07                 0.01                 0.34

4,1                   2.87                 0.02                 0.27

3,4                   2.79                 0.02                 0.24

4,0                   2.54                 0.04                 0.17

3,3                   2.43                 0.06                 0.15

3,2                   2.00                 0.12                 0.08

2,5                   1.88                 0.15                 0.07

3,1                   1.55                 0.23                 0.04

2,4                   1.42                 0.26                 0.03

3,0                   1.12                 0.36                 0.01

2,3                   1.02                 0.40                 0.01

2,2                   0.70                 0.53                 0.00

2,1                   0.47                 0.65                 0.00

2,0                   0.32                 0.74                 0.00

1,3                   0.29                 0.77                 0.00

1,2                   0.20                 0.83                 0.00

1,1                   0.13                 0.88                 0.00

1,0                   0.09                 0.91                 0.00

The table above shows the probabilities from the GOLDminer program.   Diagnosis scale 4 is MI; diagnosis 0 is baseline normal.
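The same kind of analysis can be approximated with freely available tools; the sketch below fits a proportional-odds (ordinal logit) model with statsmodels' OrderedModel on simulated scaled ECG and troponin values. It is only an analogue of the GOLDminer analysis, not a reproduction of it.

```python
# Ordinal (proportional-odds) logit on simulated scaled predictors.
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(6)
n = 500
ecg = rng.integers(0, 4, n)                  # scaled ECG finding
trop = rng.integers(0, 6, n)                 # scaled troponin T
latent = 0.8 * ecg + 0.7 * trop + rng.normal(0, 1, n)
dx = np.digitize(latent, bins=[1.0, 2.5, 4.0])   # ordered diagnosis scale, 0-3

model = OrderedModel(dx, np.column_stack([ecg, trop]), distr="logit").fit(method="bfgs", disp=False)
print(model.params)                          # coefficients and threshold cut-points
```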

We return to a comparison of CKMB and troponin I.   CKMB may be used as a surrogate test for examining the use of troponin I.   We scale the CKMB to 3 intervals and the troponin to 6 intervals, and construct the 6-by-3 table shown below (troponin intervals as rows, CKMB intervals as columns), with the chi square analysis.

Frequencies

TNISCALE (rows) by CKMBSCALE (columns)

0 1 2 Total
         0          709          12          9          730
         1          14          0          2          16
         2          3          0          0          3
         3          2          0          0          2
         4          4          0          0          4
         5          22          5          17          44
Total          754          17          28          799

Expected values

TNISCALE (rows) by CKMBSCALE (columns)

0 1 2
   0             688.886             15.532             25.582
   1             15.099             0.340             0.561
   2             2.831             0.064             0.105
   3             1.887             0.043             0.070
   4             3.775             0.085             0.140
   5             41.522             0.936             1.542
Test statistic Value df Prob
Pearson Chi-square             198.580             10.000             0.000

How do we select the best cutoff value for a test?  The standard accepted method is an ROC plot.  We have seen how to calculate sensitivity, specificity, and error rates; the false positive error rate is 1 – specificity, and the ROC curve plots sensitivity vs 1 – specificity.  The ROC plot requires determination of the “disease” variable by some means other than the test being evaluated.   What if the true diagnosis is not accurately known?   The question posed introduces the concept of latent class models.
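A brief illustration of the ROC calculation with scikit-learn, on a simulated marker with known disease labels; the Youden index is shown as one common way to pick the “optimum” cutoff.

```python
# ROC curve and area under the curve for a simulated marker.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(7)
disease = np.r_[np.zeros(300), np.ones(100)]
marker = np.r_[rng.normal(0.05, 0.1, 300), rng.normal(0.6, 0.5, 100)]

fpr, tpr, thresholds = roc_curve(disease, marker)   # fpr = 1 - specificity, tpr = sensitivity
print(roc_auc_score(disease, marker))               # area under the ROC curve
# one common "optimum" cutoff: where tpr - fpr (the Youden index) is largest
print(thresholds[np.argmax(tpr - fpr)])
```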

 

A special nutritional data set was used in which the definition of the effect is not as clear as that for heart attack.  The risk of malnutrition is assessed at the bedside by a dietitian using observed features (presence of a wound, a malnutrition-related condition, and poor oral intake) and by laboratory tests, using serum albumin (protein), red cell hemoglobin, and lymphocyte count.  The composite score was a value of 1 to 4.  Data were collected by Linda Brugler, RD, MBA, at St. Francis Hospital (Wilmington, DE) on 62 patients to determine whether a better model could be developed using new predictors.

The new predictors were laboratory tests not used in the definition of the risk level, which could otherwise be problematic.  The tests albumin, lymphocyte count, and hemoglobin were expected to be highly correlated with the risk level because they were used in its definition.  Prealbumin, but not retinol binding protein or C-reactive protein, was correlated with the risk score and improved the prediction model.

The crosstable for risk level versus albumin is significant at p < 0.0001.

A GOLDminer plot showed scaled prealbumin versus risk levels 3 and 4.    A value less than 5 indicates severe malnutrition and over 19 indicates no malnutrition; mild and moderate malnutrition fall between these values.

A method called latent class cluster analysis is used to classify the data.   A latent class is posited when the classification isn’t accurately known.   The result of the analysis is shown in Table 4.   The percentages of the variable subclasses are shown within each class and total 1.00 (100%).

Cluster1           Cluster2           Cluster3

Cluster Size

0.5545             0.3304             0.1151

PAB1COD

1          0.6841             0.0383             0.0454

2          0.3134             0.6346             0.6662

3          0.0024             0.1781             0.1656

4          0.0001             0.1490             0.1227

ALB0COD

1          0.9491             0.4865             0.1013

2          0.0389             0.1445             0.0869

3          0.0117             0.3167             0.5497

4          0.0003             0.0523             0.2621

LCCOD

1          0.1229             0.0097             0.7600

2          0.3680             0.0687             0.2381

4          0.2297             0.2383             0.0016

5          0.2793             0.6832             0.0002
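Table 4 was produced with dedicated latent class software (Latent Gold). As a loose, freely available analogue, a finite mixture model can be fit to the coded laboratory variables; the sketch below uses a Gaussian mixture on synthetic codes, which only approximates a true latent class model for categorical data.

```python
# Rough finite-mixture analogue of latent class clustering on coded variables.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
# synthetic coded variables (e.g., prealbumin, albumin, lymphocyte count codes 1-4)
X = np.column_stack([rng.integers(1, 5, 62) for _ in range(3)]).astype(float)

gmm = GaussianMixture(n_components=3, covariance_type="diag", random_state=0).fit(X)
print(gmm.weights_)          # analogous to the cluster sizes in Table 4
print(gmm.predict(X)[:10])   # class assignment for the first ten patients
```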

There are other aspects of informatics that are essential for the educational design of the laboratory professional of the future.  These include preparation of PowerPoint presentations, use of the internet to obtain current information, quality control designed into the process of handling laboratory testing, evaluation of data from different correlated workstations, and instrument integration.   An integrated open architecture will be essential for financial management of the laboratory as well. Continued improvement of the technology base of the laboratory will become routine over the next few years.   The education of the CLS for a professional career in medical technology will require an individual who is adaptive and well prepared for a changing technology environment.   The next section of this document describes the information structure needed just to carry out the day-to-day operations of the laboratory.

Cost linkages important to define value

Traditional accounting methods do not take into account the cost relationships that are essential for economic survival in a competitive environment, so the only items on the ledger are materials and supplies, labor and benefits, and indirect costs.   This is a description of the business as set forth by an NCCLS cost manual, but it is not sufficient to account for the dimensions of the business in relation to its activities.   The emergence of spreadsheets and, just as importantly, the development of relational database structures has transformed, and is transforming, how we can look at the costing of organizations in relation to how individuals and groups within the organization carry out the business plan and realize the mission set forth by the governing body.   In this sense, the traditional model was incomplete because it only accounted for the costs incurred by departments in a structure that allocates resources to each department based on the assessed use of resources in providing services.   The model has to account for the allocation of resources to product lines of services (as in the DRG model developed by Dr. Eleanor Travers).   A revised model has to take into account two new dimensions.   The first dimension is the allocation of resources to provide services that are distinct medical/clinical activities.   This means that in the laboratory service business there may be distinctive services as well as market sectors; that is, health care organizations view their markets as defined by service Zip codes, which delineate the lines drawn between their market and the competition (in the absence of clear overlap).

We have to keep in mind that there are service groups, defined by John Thompson and Robert Fetter in the development of the DRGs (Diagnosis Related Groups), that have a real relationship to resource requirements for pediatric, geriatric, obstetric, gynecologic, hematologic, oncologic, cardiologic, medical, and surgical care.   These groups are derived from bundles of ICD codes (International Classification of Diseases) that have comparable within-group use of laboratory, radiology, nutrition, pharmacy, and other resources.   There was an early concern that there was too much variability within DRGs, which was addressed by severity-of-illness adjustment (Susan Horn).   It is now clear that ICD codes don’t capture a significant part of the content of the medical record. A method to correct this problem is being devised by Kaiser and Mayo using the SNOMED codes as a starting point.   The point is that it is essential that the activities, the resources required, and the payment be aligned for the payment system to be valid.   Of some interest is the association of severity of illness with more than two comorbidities, and with critical values of a few laboratory tests, e.g., albumin, sodium, potassium, hemoglobin, and white cell count.   The actual linkage of these resources to the cost of the ten or twenty most common diagnostic categories is only a recent development.   As a rule, the top 25 categories account for a substantial share of the costs that it is of great interest to control.   The improvement of database technology makes it conceivable that 100 categories of disease classification could be controlled without difficulty in the next ten years.

Quality cost synergism

What is traditionally described is only one dimension of the business of the operation.   It is the business of the organization, but it is only one-third of the description of the organization and the costs that drive it.   The second dimension of the organization’s cost profile is obtained only by cost accounting for how the organization creates value.   Value is simply the ratio of outputs to inputs.   The traditional cost accounting model looks only at business value added.   The value generated by an organization is attributable to a service or good produced that a customer is willing to purchase.   We have to measure that value by measuring some variable that is highly correlated with the value created; that measure is partly accounted for by transaction times.  We can borrow from the same model used in other industries.   The transportation business is an example.   A colleague has designed a surgical pathology information system on the premise that a report sitting in the pathology office, or a phone inquiry by a surgeon, is a failure of the service.   This is analogous to the Southwest Airlines mission to have the lowest time on the ground in the industry.   The growing complexity of service needs, the capital requirements to support those needs, and the contractual requirements are driving redesign of services in a constantly changing environment.

 

Technology requirements

Over the last 15 years we have gone from predominantly batch and large-scale production to predominantly random access, with growing point-of-care applications and pneumatic tube delivery systems in the acute care setting.   The emphasis on population-based health and the increasing shift from acute care to ambulatory care have increased the pressure for point-of-care testing to reduce second visits for adjustment of medication.   The laboratory, radiology and imaging, and pharmacy information have to be directed to a medical record that may be accessed in the acute care or ambulatory setting.   We not only have the proposition that faster is better, but also that access is from anyplace and almost anytime – connectivity.

There has been a strategic discussion about the configuration of information services that is resolving itself through the needs of the marketplace.   Large, self-contained organizations are short-lived, and with the emergence of networked provider organizations there will be no compelling interest in systems that are not tailored to the variety of applications and environments that are served.   The migration from minicomputers to microcomputer client-server networks will move rapidly to N-tiered systems with distributed object-oriented features.   The need for laboratory information systems as a separate application can be seriously challenged by the new paradigm.

 

Utilization and Cost Linkages

Laboratory utilization has to be looked at from more than one perspective in relation to costs and revenues.   The redefinition of panels cuts the marginal added cost of producing an additional test, but it doesn’t cut the largest cost, that of obtaining and processing the specimen.   Unfortunately, there is a fixed cost of operations that has to be met, which also drives the formation of laboratory consolidations to achieve sufficient volume.    If one looks at the capital and labor required to support a minimum volume of testing, the marginal cost of added tests decreases with large volume.   The problem with the consolidation argument is that one has to remove testing from the local site in order to increase the volume, with an anticipated effect on cycle time for processing.   There is also a significant resource cost for courier service, specimen handling, and reporting.   Let’s look at the reverse.   What is the effect of decreasing utilization?   One increases the marginal added cost per unit of testing on specimens or accessions.   The same basic fixed cost remains, and if the volume of testing needed to break even has been met, the advantage of additional volume is lost.   Fixing the expected cost per patient or per accession becomes problematic if there is a requirement to reduce utilization.
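A toy calculation (all numbers invented) illustrates the point about fixed and marginal costs: with a largely fixed cost base, the cost per reportable result falls as volume rises, and cutting utilization raises it.

```python
# Toy cost-per-test calculation with an invented fixed cost and variable cost.
def cost_per_test(annual_tests, fixed_cost=1_500_000, variable_cost_per_test=1.25):
    return (fixed_cost + variable_cost_per_test * annual_tests) / annual_tests

for n in (250_000, 500_000, 1_000_000, 2_000_000):
    print(n, round(cost_per_test(n), 2))   # 7.25, 4.25, 2.75, 2.00
```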

The key volume for processing in the service sense is the number of specimens processed, which has an enormous impact on the processing requirements (the number of tests adds to reagent costs and turnaround time per accession).   One might therefore consider reducing testing that monitors critical patients’ status more frequently than is needed.   One can examine the frequency of the CBC, PT/APTT, panels, electrolytes, glucose, and blood gases in the ICUs.   The use of the laboratory is expected to be more intense in this setting, reflecting severity of illness.   On the other hand, excess redundancy may reflect testing that makes no meaningful contribution to patient care; this may be suggested by repeated testing with no significant variation in the results.

Intangible elements

Competitive advantage may have marginal costs with enormous value enhancement.   One example is the manner of reporting results.   My colleagues have proposed the importance of a scale-free representation of the laboratory data for presentation to the provider and the patient.   This can be extended further by scaling the normalized data into intervals associated with expected risks for outcomes, which would move the laboratory into the domain of assisting in the management of population-adjusted health outcomes.

Blume P. Design of a clinical laboratory computer system. Laboratory and  Hospital Information Systems. In Clinics Lab Med 1991;11:83-104.

Didner RS. Back-to-front systems design: a guns and butter approach. Proc Intl Ergonomics Assoc 1982;–

Didner RS, Butler KA. Information requirements for user decision support: designing systems from back to front. Proc Intl Conf on Cybernetics and Society. IEEE. 1982;–:415-419.

Bernstein LH. An LIS is not all pluses. MLO 1986;18:75-80.

Bernstein LH, Sachs B. Selecting an automated chemistry analyzer: cost analysis. Amer Clin Prod Rev 1988;–:16-19.

Bernstein L, Sachs E, Stapleton V, Gorton J. Replacement of a laboratory instrument system based on workflow design. Amer Clin Prod Rev 1988; –: 22-24.

Bernstein LH. Computer-assisted restructuring services. Amer Clin Prod Rev1986;9:–

Bernstein LH, Sachs B, Stapleton V, Gorton J, Lardas O. Implementing a laboratory information management system and verifying its performance. Informatics in Pathol 1986;1:224-233.

Bernstein LH. Selecting a laboratory computer system: the importance of auditing laboratory performance. Amer Clin Prod Rev 1985;–:30-33.

Castaneda-Mendez K, Bernstein LH. Linking costs and quality improvement to clinical outcomes through added value. J Healthcare Qual 1997;19:11-16.

Bernstein LH. The contribution of laboratory information systems to quality assurance. Amer Clin Prod Rev 1987;18:10-15.

Bernstein LH. Predicting the costs of laboratory testing. Pathologist 1985;39:–

Bernstein LH, Davis G, Pelton T. Managing and reducing lab costs. MLO 1984;16:53-56.

Bernstein LH, Brouillette R. The negative impact of untimely data in the diagnosis of acute myocardial infarction. Amer Clin Lab 1990;__:38-40.

Bernstein LH, Spiekerman AM, Qamar A, Babb J. Effective resource management using a clinical and laboratory algorithm for chest pain triage. Clin Lab Management Rev 1996;–:143-152.

Shaw-Stiffel TA, Zarny LA, Pleban WE, Rosman DD, Rudolph RA, Bernstein LH. Effect of nutrition status and other factors on length of hospital stay after major gastrointestinal surgery. Nutrition (Intl) 1993;9:140-145.

Bernstein LH. Relationship of nutritional markers to length of hospital stay. Nutrition (Intl)(suppl) 1995;11:205-209.

Bernstein LH, Coles M, Granata A. The Bridgeport Hospital experience with autologous transfusion in orthopedic surgery. Orthopedics 1997;20:677-680.

Bernstein LH. Realization of the projected impact of a chemistry workflow management system at Bridgeport Hospital. In: Quality and Statistics: Total Quality Management. Kowalewski MJ, Ed. ASTM STP 1209. Philadelphia, PA: ASTM; 1994:120-133.

Bernstein LH, Kleinman GM, Davis GL, Chiga M. Part A reimbursement: what is your role in medical quality assurance? Pathologist 1986;40:–.

Bernstein LH. What constitutes a laboratory quality monitoring program? Amer J Qual Util Rev 1990;5:95-99.

Mozes B, Easterling J, Sheiner LB, Melmon KL, Kline R, Goldman ES, Brown AN. Case-mix adjustment using objective measures of severity: the case for laboratory data. Health Serv Res 1994;28:689-711.

Bernstein LH, Shaw-Stiffel T, Zarny L, Pleban W. An informational approach to likelihood of malnutrition. Nutrition (Intl) 1996;12:772-226.


Demonstration of a diagnostic clinical laboratory neural network agent applied to three laboratory data conditioning problems

Izaak Mayzlin, Principal Scientist, MayNet, Boston, MA

Larry Bernstein, MD, Technical Director, Methodist Hospital Laboratory, Brooklyn, NY

Our clinical chemistry section services a hospital emergency room seeing 15,000 patients with chest pain annually.  We have used a neural network agent, MayNet, for data conditioning.  The three applications are: troponin, CK-MB, and EKG for chest pain; B-type natriuretic peptide (BNP) and EKG for congestive heart failure (CHF); and red cell count (RBC), mean corpuscular volume (MCV), and hemoglobin A2 (Hgb A2) for beta thalassemia.  The three data sets were extensively validated prior to neural network analysis using receiver operating characteristic (ROC) analysis, latent class analysis, and a multinomial regression approach.  Optimum decision points for classification were determined using ROC analysis (SYSTAT 11.0), latent class modeling (Latent Gold), and ordinal regression (GOLDminer).  The ACS and CHF studies each had over 700 patients, and each used a validation sample different from the initial exploratory population.  MayNet incorporates prior clustering and sample-extraction features in its application.  MayNet results are in agreement with the other methods.

Introduction: A clinical laboratory servicing a hospital whose emergency room sees 15,000 patients with chest pain annually produces over 2 million quality-controlled chemistry accessions each year.  We have used a neural network agent, MayNet, to tackle quality control of the information product.  The agent combines a statistical tool that first performs clustering of the input variables by Euclidean distance in multi-dimensional space; the clusters are then trained on the output variables by an artificial neural network that performs non-linear discrimination on the cluster averages.  In applying this agent to the diagnosis of acute myocardial infarction (AMI) we demonstrated that, at an optimum clustering distance, the number of classes is minimized with efficient training of the neural network.  The software agent also performs a random partitioning of the patients' data into training and testing sets, one-time neural network training, and an accuracy estimate on the testing data set.  Three examples illustrate this: troponin, CK-MB, and EKG for acute coronary syndrome (ACS); B-type natriuretic peptide (BNP) and EKG for the estimation of ejection fraction in congestive heart failure (CHF); and red cell count (RBC), mean corpuscular volume (MCV), and hemoglobin A2 (Hgb A2) for identifying beta thalassemia.  We use three data sets that were extensively validated prior to neural network analysis using receiver operating characteristic (ROC) analysis, latent class analysis, and a multinomial regression approach.

In previous studies1,2, CK-MB and LD1 sampled at 12 and 18 hours post-admission, near-optimum sampling times, were used to form a classification by analysis of the information in the data set. The population consisted of 101 patients with and 41 patients without AMI, based on review of the medical records, clinical presentation, electrocardiography, serial enzyme and isoenzyme assays, and other tests. The clinical and EKG data, and other enzymes or sampling times, were not used to form a classification but could be handled by the program developed. All diagnoses were established by cardiologist review. An important methodological problem is the assignment of a correct diagnosis by a "gold standard" that is independent of the method being tested, so that the method can be suitably validated. This solution is not satisfactory in the case of myocardial infarction because the diagnosis depends on a constellation of observations with different sensitivities and specificities. We have argued that the accuracy of diagnosis is associated with the classes formed by combined features and carries the greatest uncertainty when tied to a single measure.

Methods:  Neural network analysis was performed with MayNet, developed by one of the authors.  Optimum decision points for classification were determined using ROC analysis (SYSTAT 11.0), latent class modeling (Latent Gold)3, and ordinal regression (GOLDminer)4.  The ACS and CHF validation sets each had over 700 patients, and every study used a validation sample different from the initial exploratory population.  MayNet incorporates prior clustering and sample-extraction features in its application.  We now report on a new classification method and its application to the diagnosis of acute myocardial infarction (AMI).  The method combines clustering by Euclidean distance in multi-dimensional space with non-linear discrimination performed by an artificial neural network (ANN) trained on the cluster averages.  These studies indicate that, at an optimum clustering distance, the number of classes is minimized with efficient training of the ANN.  This novel approach reduces the number of patterns used for ANN learning and also works as an effective tool for smoothing data, removing singularities, and increasing the accuracy of classification by the ANN.  The studies conducted involve training and testing on separate clinical data sets, achieving a high accuracy of diagnosis (97%).

Unlike classification, which assumes a prior definition of the borders between classes5,6, the clustering procedure establishes these borders by processing statistical information against a given criterion for the difference (distance) between classes.  We perform clustering using the geometric (Euclidean) distance between two points in the n-dimensional space formed by the n variables, including both input and output variables. Since this distance assumes that the different variables are comparable, the values of all input variables are linearly transformed (scaled) to the range 0 to 1.
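
The sketch below illustrates this pre-clustering step with a simple leader-style algorithm (an assumption for illustration; MayNet's exact clustering procedure is not specified here): each variable is min-max scaled to [0, 1], and a point joins the nearest existing cluster if its Euclidean distance to that cluster's centroid is below the maximum distance, otherwise it starts a new cluster.

# Minimal sketch of the pre-clustering step, assuming a leader-style
# algorithm: scale each variable to [0, 1], then group points whose
# Euclidean distance to a cluster centroid is below max_dist.
import math

def min_max_scale(rows):
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(r, lo, hi)] for r in rows]

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cluster(points, max_dist):
    centres, members = [], []
    for p in points:
        dists = [euclid(p, c) for c in centres]
        if dists and min(dists) < max_dist:
            k = dists.index(min(dists))
            members[k].append(p)
            # recompute the centroid as the mean of its members
            centres[k] = [sum(col) / len(col) for col in zip(*members[k])]
        else:
            centres.append(list(p))
            members.append([list(p)])
    return centres, members

scaled = min_max_scale([[5.1, 210], [4.8, 200], [95.0, 820], [90.2, 790]])
centres, groups = cluster(scaled, max_dist=0.3)
print(len(centres), "clusters")   # -> 2 clusters for these toy values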

For readers accustomed to classical statistics, the ANN technique can be viewed as an extension of multivariate regression analysis with such new features as non-linearity and the ability to process categorical data. Categorical (not continuous) variables represent two or more levels, groups, or classes of the corresponding feature; in our case this concept is used to signify patient condition, for example the presence or absence of AMI.

The ANN is an acyclic directed graph with input and output nodes corresponding respectively to input and output variables. There are also "intermediate" nodes, comprising so-called "hidden" layers.  Each node n_j is assigned a value x_j, evaluated by the node's processing element as a non-linear function of the weighted sum of the values x_i of the nodes n_i connected to n_j by directed edges (n_i, n_j):

x_j = f( w_{i(1),j} x_{i(1)} + w_{i(2),j} x_{i(2)} + … + w_{i(l),j} x_{i(l)} ),

where x_k is the value at node n_k and w_{k,j} is the "weight" of the edge (n_k, n_j).  In our research we used the standard sigmoid function, f(x) = 1/(1 + exp(-x)).  This function is suitable for categorical output and allows the use of an efficient back-propagation algorithm7 for calculating the optimal values of the weights, providing the best fit to the learning data set and, eventually, the most accurate classification.
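
A minimal sketch of the node computation just described: a weighted sum of the incoming values passed through the sigmoid f(x) = 1/(1 + exp(-x)). The weights and inputs are illustrative values only.

# Minimal sketch: one node's output, matching the formula above
# (weighted sum of inputs passed through the sigmoid).
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def node_output(inputs, weights):
    """inputs x_i and weights w_{i,j} for the edges feeding node n_j."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)))

# e.g. a hidden node fed by three scaled inputs (CK-MB, LD-1, LD-1/LD)
print(round(node_output([0.8, 0.6, 0.7], [1.5, -0.4, 2.0]), 3))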

Process description:  We implemented the proposed algorithm for the diagnosis of AMI. All calculations were performed on a PC with a Pentium 3 processor using the authors' software agent, MayNet. First, using an automatic random-extraction procedure, the initial data set (139 patients) was partitioned into training and testing sets.  This randomization also determined the sizes of these sets (96 and 43, respectively), since the program was instructed to assign approximately 70% of the data to the training set.

The main process consists of three successive steps: (1) clustering performed on the training data set, (2) training of the neural network on the clusters from the previous step, and (3) evaluation of the classifier's accuracy on the testing data.

The classifier in this study is the ANN created in step 2, with output in the range [0, 1]; it provides a binary result (1 = AMI, 0 = not AMI) using a decision point of 0.5.
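
The sketch below walks through the three steps end to end, with a single sigmoid unit trained by gradient descent standing in for the full ANN and class averages standing in for cluster centroids; the data values are toy numbers. It is a simplified illustration of the workflow under those assumptions, not a reproduction of MayNet.

# Minimal end-to-end sketch of the three steps, with a single sigmoid
# unit standing in for the ANN (a simplifying assumption):
# (1) random 70/30 split, (2) train on cluster averages,
# (3) test with a 0.5 decision point.
import math, random

def sigmoid(x): return 1.0 / (1.0 + math.exp(-x))

def train_unit(X, y, lr=0.5, epochs=5000):
    w = [0.0] * len(X[0]); b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5 else 0

random.seed(0)
# toy scaled data: [CK-MB, LD-1, LD-1/LD] with label 1 = AMI (illustrative)
data = [([0.9, 0.8, 0.85], 1), ([0.7, 0.9, 0.8], 1),
        ([0.1, 0.2, 0.15], 0), ([0.2, 0.1, 0.2], 0)] * 20
random.shuffle(data)
split = int(0.7 * len(data))
train, test = data[:split], data[split:]

# step 2: here each class average stands in for a cluster centroid
def class_average(rows):
    return [sum(c) / len(c) for c in zip(*rows)]

centroids = [(class_average([x for x, lab in train if lab == k]), k) for k in (0, 1)]
w, b = train_unit([c for c, _ in centroids], [k for _, k in centroids])

# step 3: accuracy on the held-out testing set
accuracy = sum(predict(w, b, x) == lab for x, lab in test) / len(test)
print(f"test accuracy: {accuracy:.2f}")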

In this demonstration we used the data of two previous studies1,2, with three patients (potential outliers) removed (n = 139). The data contain three input variables, CK-MB, LD-1, and LD-1/total LD, and one output variable, the diagnosis, coded as 1 (AMI) or 0 (non-AMI).

Results: The application of this software intelligent agent is first demonstrated using the initial model. Figure 1 illustrates the history of the training process: the upper curve is the maximum error (among the training patterns) and the lower curve is the average error, which defines the duration of training. Training terminates when the average error reaches 5%.

Convergence of the back-propagation algorithm applied directly to the training set of 96 patients was slow: 6800 iterations were needed to reach the sufficiently small (5%) average error.

Figure 1 shows the training process in stage 2. It illustrates rapid convergence because we deal with only the 9 patterns representing the 9 classes formed in step 1.

Table 1 illustrates the effect of the selected maximum distance on the number of classes formed and on the production of errors. The number of classes increased with decreasing distance, but the accuracy of classification did not decrease.

The rate of learning is inversely related to the number of classes. Using back-propagation to train on the entire data set without prior processing is slower than training on the class patterns.

Figure 2 is a two-dimensional projection (CK-MB and LD1 axes) of the three-dimensional space of input variables, with small dots corresponding to the patterns and rectangles to the cluster centroids (black = AMI, white = not AMI).

We carried out a larger study using troponin I (TnI, instead of LD1) and CK-MB for the diagnosis of myocardial infarction (MI).  The probabilities and odds ratios for TnI scaled into intervals near the entropy decision point are shown in Table 2 (N = 782).  The cross-table shows the frequencies for scaled TnI results versus observed MI, the percent of values within MI, and the predicted probabilities and odds ratios for MI within TnI intervals.  Regressing the scaled values places the optimum decision point at or near 0.61 µg/L: the probability of MI at 0.46-0.60 µg/L is 3% with an odds ratio of 13, while the probability of MI at 0.61-0.75 µg/L is 26% with an odds ratio of 174.
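
For readers who want to see how such a cross-table translates into probabilities and odds ratios, the sketch below computes raw estimates from the Table 2 counts, using a Haldane-Anscombe correction (0.5 added to each cell) so that zero-count intervals give finite odds. The published values were obtained by ordinal regression of the scaled intervals, so they differ from these raw estimates.

# Minimal sketch: empirical MI probability and odds ratio per scaled
# TnI interval, computed from the Table 2 counts with a Haldane-Anscombe
# correction (0.5 added to each cell) so that zero cells stay finite.
# The paper's published values come from ordinal regression, so these
# raw estimates are only an approximation of them.
counts = {              # interval: (not MI, MI) counts, as in Table 2
    "<0.45":     (655, 2),
    "0.46-0.60": (7, 0),
    "0.61-0.75": (4, 0),
    "0.76-0.90": (13, 59),
    ">0.90":     (0, 42),
}

def corrected_odds(neg, pos):
    return (pos + 0.5) / (neg + 0.5)

base = corrected_odds(*counts["<0.45"])
for interval, (neg, pos) in counts.items():
    p = pos / (neg + pos)
    odds_ratio = corrected_odds(neg, pos) / base
    print(f"{interval:>10}  P(MI)={p:.3f}  OR vs <0.45 = {odds_ratio:.0f}")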

The RBC and MCV criteria were then applied to a series of 40 patients different from the series used to derive the cutoffs.  A latent class cluster analysis is shown in Table 3.  MayNet was carried out on all three data sets (MI, CHF, and beta thalassemia) for comparison, and the results will be shown.

Discussion:  CK-MB has long been heavily used to detect heart attacks. It is used in conjunction with a troponin test and the EKG to identify MI, but it is not as sensitive as is needed. A joint committee of the American College of Cardiology and the European Society of Cardiology (ACC/ESC) has established criteria for acute, recent, or evolving AMI predicated on a typical increase in troponin in the clinical setting of myocardial ischemia (1), which includes the 99th percentile of a healthy normal population. Improper selection of a troponin decision value is, however, likely to increase over-use of hospital resources.  A study by Zarich8 showed that using an MI cutoff concentration for TnT derived from a non-acute coronary syndrome (ACS) reference improves risk stratification, but fails to detect a positive TnT in 11.7% of subjects with an ACS syndrome8. The specificity of the test increased from 88.4% to 96.7%, with corresponding negative predictive values of 99.7% and 96.2%. Lin et al.9 recently reported that the use of the low reference cutoffs suggested by the new guidelines results in markedly increased TnI-positive cases overall. Associated with a positive TnI and a negative CK-MB, these cases are most likely false positives for MI. MayNet relieves this and the following problem effectively.

Monitoring BNP levels is a new and highly efficient way of diagnosing CHF and of excluding non-cardiac causes of shortness of breath. Listening to breath sounds is accurate only when the disease has advanced to the stage at which the pumping function of the heart is impaired. Pumping is impaired when the circulation pressure rises above the osmotic pressure of the blood proteins that keep fluid in the circulation, causing fluid to pass into the lung's airspaces.  Our studies combine BNP with the EKG measurement of QRS duration to predict whether a patient has a high or low ejection fraction, a measure used to stage the severity of CHF.

We also had to integrate the information from the hemogram (RBC, MCV) with the hemoglobin A2 quantitation (Bio-Rad Variant II) for the diagnosis of beta thalassemia.  We chose an approach to the data that requires no assumptions about the distribution of test values or their variances.  Our detailed analyses validate an approach to thalassemia screening that has been widely used, the Mentzer index10, and in addition establish critical decision values for the tests used in the Mentzer index. We also showed that Hgb S has an effect on both Hgb A2 and Hgb F.  This study is adequately powered to assess the usefulness of the Hgb A2 criteria but not adequately powered to assess thalassemias with elevated Hgb F.
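
The Mentzer index referred to above10 is simply the MCV divided by the RBC count, with values below about 13 favoring beta thalassemia trait. The sketch below combines it with an illustrative Hgb A2 cutoff; the 3.5% value and the example results are assumptions for the example, not the study's derived decision values.

# Minimal sketch of the Mentzer index screen (MCV / RBC count).
# The 13 cutoff is the conventional one; the Hgb A2 cutoff and the
# example values are illustrative assumptions only.
def mentzer_index(mcv_fl, rbc_millions_per_ul):
    return mcv_fl / rbc_millions_per_ul

def screen(mcv_fl, rbc, hgb_a2_pct):
    index = mentzer_index(mcv_fl, rbc)
    likely_trait = index < 13 and hgb_a2_pct > 3.5   # illustrative Hgb A2 cutoff
    return round(index, 1), likely_trait

print(screen(62.0, 5.6, 4.8))   # microcytosis, high RBC, raised Hgb A2 -> (11.1, True)
print(screen(72.0, 3.9, 2.4))   # pattern more consistent with iron deficiency -> (18.5, False)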

References:

1.  Adan J, Bernstein LH, Babb J. Lactate dehydrogenase isoenzyme-1/total ratio: accurate for determining the existence of myocardial infarction. Clin Chem 1986;32:624-8.

2. Rudolph RA, Bernstein LH, Babb J. Information induction for predicting acute myocardial infarction. Clin Chem 1988;34:2031-2038.

3. Magidson J. Maximum likelihood assessment of clinical trials based on an ordered categorical response. Drug Information Journal 1996;30(1):143-170.

4. Magidson J, Vermunt JK. Latent class cluster analysis. In: Hagenaars JA, McCutcheon AL, eds. Applied Latent Class Analysis. Cambridge: Cambridge University Press, 2002:89-106.

5. Mkhitarian VS, Mayzlin IE, Troshin LI, Borisenko LV. Classification of the base objects upon integral parameters of the attached network. Applied Mathematics and Computers.  Moscow, USSR: Statistika, 1976: 118-24.

6. Mayzlin IE, Mkhitarian VS. Determining the optimal bounds for objects of different classes. In: Dubrow AM, ed. Computational Mathematics and Applications. Moscow, USSR: Economics and Statistics Institute, 1976:102-105.

7. Rumelhart DE, Hinton GE, Williams RJ. Learning internal representations by error propagation. In: Rumelhart DE, McClelland JL, eds. Parallel Distributed Processing. Cambridge, MA: MIT Press, 1986;1:318-62.

8. Zarich SW, Bradley K, Mayall ID, Bernstein LH. Minor elevations in troponin T values enhance risk assessment in emergency department patients with suspected myocardial ischemia: analysis of novel troponin T cut-off values. Clin Chim Acta 2004 (in press).

9. Lin JC, Apple FS, Murakami MM, Luepker RV.  Rates of positive cardiac troponin I and creatine kinase MB mass among patients hospitalized for suspected acute coronary syndromes.  Clin Chem 2004;50:333-338.

10. Makris PE. Utilization of a new index to distinguish heterozygous thalassemic syndromes: comparison of its specificity to five other discriminants. Blood Cells 1989;15(3):497-506.

Acknowledgements:   Jerard Kneifati-Hayek and Madeleine Schlefer, Midwood High School, Brooklyn, and Salman Haq, Cardiology Fellow, Methodist Hospital.

Table 1. Effect of selection of maximum distance on the number of classes formed and on the accuracy of recognition by ANN

Columns: Clustering Distance Factor F (D = F * R) | Number of Classes | Number of Nodes in the Hidden Layers | Number of Misrecognized Patterns in the Testing Set of 43 | Percent Misrecognized. Distance factors of 1, 0.9, 0.8, and 0.7 were tested; each configuration misrecognized 1-2 of the 43 test patterns (2.3-4.6%).

Figure 1. Training history on the cluster patterns: maximum and average error versus training iterations (training stops at 5% average error).

Figure 2. Two-dimensional projection (CK-MB vs. LD1) of the input-variable space: small dots are patterns, rectangles are cluster centroids (black = AMI, white = not AMI).

Table 2.  Frequency cross-table, probabilities and odds-ratios for scaled TnI versus expected diagnosis

Range        Not MI    MI      N    Pct in MI   Prob by TnI   Odds Ratio
< 0.45          655     2    657       2           0.00             1
0.46-0.60         7     0      7       0           0.03            13
0.61-0.75         4     0      4       0           0.26           175
0.76-0.90        13    59     72      57.3         0.82          2307
> 0.90            0    42     42      40.8         0.98         30482
Total           679   103    782     100

 
