Diagnostics Industry and Drug Development in the Genomics Era: Mid 80s to Present
Author and Curator: Larry H Bernstein, MD, FCAP
This article is a chronological continuation of my article
Milestones in the Evolution of Diagnostics in the US HealthCare System: 1920s to Pre-Genomics
Part 1.1: Companion Diagnostics
http://pharmaceuticalintelligence.com/?s=Companion+Diagnostics
As the life sciences industry adjusts to the insurance-driven environment,
- specialized medicine is emerging as a critical asset in a development strategy.
Biomarkers and companion diagnostics (CDx) offer companies
- the ability to target a subset of patients for testing with some potential for greater efficacy
- based on biomarker selectivity, accelerated regulatory approval and better reimbursement.
A companion diagnostic, however, is not necessarily suitable for every product. The decision to embark on a CDx program is a complex one involving a pharmaceutical company’s resources and its clinical goals. Top-performing companies prepare for success by
- developing consistent criteria and protocols that lead to thoughtful partner selection and follow-through,
- ensuring a smooth co-development timeline that stays within budget.
Partner selection implies a cooperative development with
- a diagnostics company having the needed diagnostics selectivity.
Consistent criteria and protocols require that
- the test criteria and performance characteristics accurately allow for
- objective measurement of disease progression or effective treatment.
Criteria to Select the Right Test for a Companion Diagnostics Program
After the company reviews the clinical data for a particular compound and decides to pursue a companion diagnostic, it is time to focus on the test itself. The company should evaluate a potential companion diagnostic test considering the following criteria:
- Accuracy
- Reliability
- Speed
- Cost
Ideally, the companion diagnostic test for a drug would be
- both extremely accurate and reliable, while requiring little time and money to develop —
but in reality, these tests can be complicated. In a survey, respondents rated the importance of each of the four criteria on a scale of 1 to 5. All four criteria earned an average rating of 4.0 or greater, indicating that each is very important in selecting a companion diagnostic test. But
- the two clinical criteria (accuracy and reliability) are absolutely crucial:
- accuracy and reliability each earned an average rating of 4.6.
Survey respondents agree — accuracy and reliability are at the heart of a successful companion diagnostic program. These criteria are critically important to the test selection process. The success of an entire companion diagnostic program (and maybe the drug itself)
- hinges on the test’s ability to accurately and reliably select patients
- who will respond effectively to a drug.
Without these two characteristics, the test likely
- will not receive regulatory approval,
- nor will it be widely used.
The third most important criterion in selecting a companion diagnostic test is speed, which earned an average rating of 4.3.
- It is critical because investigational compounds are on a strict development timeline of their own.
If the companion diagnostic test cannot be developed quickly enough, then the drug might experience delays in
- regulatory approval or market access and reimbursement.
Because patent protection for a compound is finite, delays can be costly, so most companies would rather spend the extra money to ensure the test is developed quickly. Those additional costs can be recouped if the drug and the companion diagnostic test enter the market sooner than expected.
Cost, earning an average rating of 4.0, ranked last on the list of test selection criteria. Because all, or most, of the companion diagnostic program will be developed outside the pharmaceutical company, cost is a relevant consideration when selecting a test.
Excerpted from – Companion Diagnostics Report
http://cuttingedgeinfo.com/research/clinical-development/companion-diagnostics-biomarkers/
1.2 Refining Processes for the Co-Development of Genome-Based Therapeutics and Companion Diagnostic Tests: Workshop Summary.
Overview Of Co-Development and Companion Diagnostics Policy
http://www.ncbi.nlm.nih.gov/books/NBK195940/
Important Points Highlighted
- The draft guidance on companion diagnostics from FDA recommends developing both the test and the drug together, which creates a new opportunity for developers to work and learn together to more clearly define a process and to coordinate a timeline for development.
- The performance of a companion diagnostic is closely tied to the performance of the associated drug, and this relationship is essential for determining the safety and effectiveness of the products for patient use.
- The major challenge for co-development is more commercial than regulatory, because there are inherent differences in developing tests and drugs, including mismatched markets and resources.
- Combining the cost of a test and a drug may provide a solution for aligning market differences and accelerating regulatory approval and reimbursement decisions for two different products.
Elizabeth Mansfield, director of the personalized medicine staff in the Office of In Vitro Diagnostics and Radiological Health at FDA, described the historical development of FDA’s policies for companion diagnostics and the main features of the companion diagnostic draft guidance.
The concept of companion diagnostics is not new. Testing for estrogen and progesterone receptor expression has been done since the 1980s (Gustavo Reynoso, CS Wilson Hospital, Binghamton, NY)
- to determine if a patient would benefit from hormone therapy in treating breast cancer.
FDA approved the use of the drug Herceptin in patients with breast cancer
- who tested positive for human epidermal growth factor receptor 2 (HER2) in 1998.
This was one of the earliest examples of a co-development companion diagnostic model before there was a formal process in place. Herceptin was approved for use in patients with metastatic breast cancer, and
- FDA also approved a test to examine HER2 levels.
One reason for having a test was to decrease risk; Herceptin’s cardiotoxic side effects are now well known. In the case of the drugs Selzentry and Tykerb, both approved in 2007 and whose use depends on test results, a companion diagnostic policy was not yet in place at the time of approval, and FDA did not apply the policy retroactively.
Recognizing that tests can be drivers of therapy, FDA began to develop guidance to reflect drug development strategies
- that account for genetic information.
It held public discussions about
- pharmacogenomics,
requested voluntary genomic data submissions, and addressed other issues
- concerning the use of genomic data to guide drug development.
FDA realized that a policy was needed
- to protect patients while also allowing companies to plan for
- the development of tests that would support therapeutic approval.
FDA also realized that companies want predictability in the regulatory process.
Companion diagnostics are tests,
- but the test performance is closely tied to the performance of the associated drug.
Thus, knowledge about the test is essential
- to understand the safety and effectiveness of the drug.
Tests for the same analyte can differ, sometimes significantly. The technology, cut-off levels, and performance can all vary, and different tests are likely to identify different populations. “It is essential to know all of these parameters
- before deciding on which test is going to be used,
and if you want to use multiple tests [you will need to know] how these
- different performance parameters compare and are related to the outcome
- since test performance actually is a measure of the drug performance.”
All of this information is needed to determine
- which patients will benefit from the drug and
- how to adequately label a drug.
In July 2011 FDA released
- a draft guidance document for industry and FDA staff on IVD companion devices and
- held a 90-day comment period.
At the time of the workshop, FDA expected to release the final version of the guidance soon. Mansfield reviewed a few key pieces of the policy. First, the policy defines an IVD companion device as
- “an in vitro diagnostic device that provides information that is essential
- for the safe and effective use of a corresponding therapeutic product.”
Such a diagnostic could identify a population
- for efficacy, for safety, or for other purposes.
The document also differentiates companion diagnostics from
diagnostics used for other purposes.
Thousands of diagnostic tests have been cleared or approved, but
- only perhaps 15 companion diagnostics had been approved (see Table 2-1) at the time of the workshop.
Understanding Co-Development
In the context of co-development, effectiveness means that a drug or test
- is adequate to accomplish a specific purpose,
- produces the intended or desired result, and
- is actually in operation as opposed to just having the potential for use,
said Felix Frueh, entrepreneur-in-residence at Third Rock Ventures. For a product to be approved by FDA,
- effectiveness needs to be demonstrated, so it is primarily a regulatory concern.
- Efficacy, in contrast, refers to the power or capacity to achieve the desired effect under ideal conditions,
and is more of a concern for payers. In a clinical trial, inclusion and exclusion criteria create a more idealized situation than would be encountered outside of this environment. Co-development is
- the “development of a test with a drug to make the use of the drug more effective or safer,”
He said: “It really is a method to make medicine more precise.” Co-development is not
- an approach to make a clinical trial less costly,
- nor is it a way to accelerate the time it takes to bring a product to market,
- nor a biomarker discovery tool.
“By definition you have to know your marker and you have to know what you’re using it for;
- only that allows you to create a strategy to align the development of the marker with the drug.”
Next-Generation Sequencing
Some devices are complicated in that they have more than one indication, such as
- prostate-specific antigen (PSA) tests for monitoring cancer, which are classified as a Class II device, and
- PSA tests for diagnosis, which are classified as a Class III device.
In the case of molecular diagnostics, genomic information will
- point toward therapies for which no indication in a drug label exists,
- which is a much bigger regulatory challenge,
Frueh said. “The sheer magnitude of the information that we’ll find
- on the genetic and molecular level is going to far surpass our capacity to run clinical trials,”
In fact, clinical trials may not always be needed, especially because
- clinical trials cannot be run for every marker and every condition.
But the same mutation is not always the driving factor for different cancers, said Walter Koch, vice president of global research at Roche Molecular Systems, and a workshop speaker. In contrast to Frueh’s view, Koch said that
- only through clinical trials can a drug target be validated
- by showing that the drug produces better outcomes.
Next-generation sequencing will find many variants, but they will not always be targets for a particular disease, he said.
1.3 Gray Sheet – Companion Diagnostics for Breakthrough Drugs Also Getting Swift Attention, FDA Says
Device center officials cite a host of formal and informal actions available under existing legislative authority to ensure that
- development and review of companion diagnostics do not hinder
- the ability of FDA-designated breakthrough therapies to come to market.
Streamlining coDx Development
Created by FDASIA, the breakthrough therapy designation is aimed at
- expediting development of drugs and biologics intended to treat serious conditions and
- for which preliminary clinical evidence indicates the product may demonstrate
- substantial improvement on a clinically significant endpoint compared to available therapy.
FDA laid out the qualifying criteria, features and submission and response timelines for the breakthrough program in its June draft guidance on expedited pathways (“FDA Expedited Programs Guidance: “Available Therapies” Depends On U.S. Standard Of Care” — “The Pink Sheet,” Jul. 1, 2013).
Drugs that receive the breakthrough designation
- also qualify for fast-track status, and
- sponsors will receive intensive guidance on an efficient drug development program and
- an organizational commitment that involves FDA senior managers.
The program, which is just a year old, has proven immensely popular with sponsors thus far. As of Sept. 6, the Center for Drug Evaluation and Research had received 85 requests for breakthrough designation, granting 27 and denying 35. The Center for Biologics Evaluation and Research had received 10 requests as of Aug. 31, granting none and denying eight. Many of the drugs for which breakthrough designation has been granted include a companion diagnostic, according to agency officials.
1.4 Diagnostics-Drugs Pairings Advance Personalized Medicine
C&EN 2012; 90(30): 10-13. http://cen.acs.org/articles/90/i30/Diagnostics-Drugs-Pairings-Advance-Personalized.html
Within two weeks last August, the U.S. Food & Drug Administration approved two new cancer drugs: Roche’s Zelboraf, also called vemurafenib, for metastatic melanoma and Pfizer’s Xalkori, or crizotinib, for non-small-cell lung cancer. What sets those drugs apart from other cancer treatments is
- the requirement that physicians use a diagnostic test to determine whether each one is right for a patient.
In the year since their approval, the two drugs have become
- poster children for personalized medicine and
- for what the industry calls “companion diagnostics.”
FDA defines companion diagnostics as
- analytical tests that are required for the safe and effective use of a drug.
These assays measure a biomarker such as DNA or a protein
- to stratify the patient population and
- identify those individuals most likely to benefit or experience side effects from a drug.
On the basis of test results, patients can be
- included in or excluded from clinical trials and prescriptions.
Such tests become necessary when there’s no other way to identify appropriate patients.
Xalkori and Zelboraf are not the first drugs to use companion diagnostics. That honor goes to Herceptin, Genentech’s drug for HER2 (human epidermal growth factor receptor 2)-positive breast cancer, which was approved in 1998. What was once unusual is now poised to become mainstream. On July 6, FDA approved a companion diagnostic from Qiagen to identify appropriate patients for a new indication for the cancer drug cetuximab, which is comarketed by Eli Lilly & Co. and Bristol-Myers Squibb as Erbitux.
Last summer, FDA released a draft guidance document on companion diagnostics for industry and FDA staff to clarify
- when such tests will be required for regulatory approval and
- to outline the regulatory process.
But “we’re not pushing anybody in this direction,” says Elizabeth A. Mansfield, director of personalized medicine in the Office of In Vitro Diagnostic Device Evaluation & Safety in FDA’s Center for Devices & Radiological Health. However, some companies are designing clinical trials this way, she points out, because “they want to be able
- to select a population that’s going to respond in a robust enough way
- to convince everyone that their drug actually works.”
Pharmaceutical companies are embracing codevelopment of drugs and diagnostics
- because that’s where the science is leading them.
“We’ve been treating all tumors alike,” says Richard E. Buller, vice president for translational oncology at Pfizer,
- “but we have data that says they’re not all alike.”
For example, a receptor
- might be expressed in one type of breast cancer but not in another.
On the basis of the receptor’s expression, breast cancer can be divided into multiple subtypes, including HER2 positive or estrogen receptor positive, among others. By correctly identifying patients, pharma companies can target these different subtypes with different drugs.
“We’re following the scientific understanding of disease into the clinic,” says Garret M. Hampton, Genentech’s senior director of oncology biomarker development.
1.5 Diagnostics and Biomarkers: Novel Genomics Industry Trends vs Present Market Conditions and Historical Scientific Leaders Memoirs
Larry H Bernstein, MD, FCAP, Author and Curator
This article has two parts:
- 1.5.1: Novel Genomics Industry Trends in Diagnostics and Biomarkers vs
Present Market Transient Conditions
and
- 2: Historical Scientific Leaders Memoirs
1.5.1: Novel Genomics Industry Trends in Diagnostics and Biomarkers vs
Present Market Transient Conditions
Based on “Forging a path from companion diagnostics to holistic decision
support”, L.E.K.
Executive Insights, 2013;14(12). http://www.LEK.com
The problem with the current format is its “one test/one drug” match; decision support
may instead require a combination of validated biomarkers obtained on a small biopsy sample
(technically manageable), which can yield confusing results.
While HER2-negative patients are more likely to be pre-menopausal, with a more
aggressive tumor than postmenopausal patients,
- the HER2-negative designation does not preclude treatment with Herceptin.
So Herceptin would be given in combination, but
- with what other drug in a non-candidate?
The point that L.E.K. makes is that, beyond
- providing highly validated biomarkers linked to approved therapies, it is necessary
- to pursue more holistic decision support tests that interrogate multiple biomarkers
(panels of companion diagnostic markers) and
- to discover signatures for treatments, used together with a broad range of information,
such as
- traditional tests,
- imaging,
- clinical trials,
- outcomes data,
- EMR data, and
- reimbursement and coverage data.
Companion diagnostics and their companion therapies are defined here as a method
for identifying
- LIKELY responders to therapies that are specific for patients with a particular molecular profile.
The result is that diagnostics restricted to specific patient types give access to
- novel therapies that might otherwise not be approved or reimbursed in other,
perhaps “similar,” patients
- who lack the key identifier(s) needed to permit the therapy,
- and who would thus have a poor expected response.
The concept is new because:
(1) The diagnoses may be closely related by classical criteria, but at the same time
they are not alike with respect to efficacy of treatment with a standard therapy.
(2) The companion diagnostic is restricted to dealing with a targeted drug-specific
question, without regard to other clinical issues.
(3) The efficacy issue it clarifies relies on a deep molecular/metabolic insight
that is available only through
emergent genomic/proteomic analysis, which can now be obtained at a
rapidly declining cost.
- The limitation example given is HER2 testing for use of Herceptin therapy in non-candidates (HER2-negative patients).
A comprehensive solution of this nature appears to be some distance from realization.
However, is this the direction that will
- lead to tomorrow’s treatment decision support approaches?
Surveying the Decision Support Testing Landscape
As a starting point, L.E.K. characterized the landscape of available tests in the U.S.
that inform treatment decisions
- compiled from ~50 leading diagnostics companies operating in the U.S.
between 2004 and 2011.
L.E.K. identified more than 200 decision support tests that were classified
- by test purpose, and more specifically,
- whether tests inform treatment decisions for a single drug/class (e.g., companion diagnostics) vs.
- more holistic treatment decisions across multiple drugs/classes (i.e., multi-agent response tests).
Treatment Decision Support Tests

| Category | Test | Company | Description |
|---|---|---|---|
| Companion diagnostics (single drug/class): predict response/safety or guide dosing of a single drug or class | HercepTest | Dako | Determines HER2 protein overexpression for Herceptin treatment selection |
| | Vysis ALK Break Apart FISH | Abbott Labs | Predicts NSCLC patient response to Xalkori |
| Other decision support: provide prognostic and predictive information on the benefit of treatment | Oncotype DX | Genomic Health, Inc. | Predicts both recurrence of breast cancer and potential patient benefit from chemotherapy regimens |
| | PML-RARα | Clarient, Inc. | Predicts response to all-trans retinoic acid (ATRA) and other chemotherapy agents |
| | TRUGENE | Siemens | Measures resistance to multiple HIV-1 antiretroviral agents |
| Multi-agent response (multiple drugs/classes): inform targeted therapy class selection by interrogating a panel of biomarkers | Target Now | Caris Life Sciences | Examines the tumor’s molecular profile to tailor treatment options |
| | Response DX: Lung | Response Genetics, Inc. | Examines multiple biomarkers to guide therapeutic treatment decisions for NSCLC patients |
Source: L.E.K. analysis. Includes IVD and LDT tests from
- top-15 IVD test suppliers,
- top-four large reference labs,
- top-five AP labs, and
- top-20 specialty reference labs.
For descriptive purposes only; may not map to exact regulatory labeling.
Most tests are companion diagnostics and other decision support tests
- that provide guidance on single drug/class therapy decisions.
However, holistic decision support tests (e.g., multi-agent response) are
- growing the fastest at 56% CAGR.
The emergence of multi-agent response tests suggests diagnostics companies
are already seeing the need
- to aggregate individual tests (e.g., companion diagnostics)
- into panels of appropriate markers addressing a given clinical decision need.
L.E.K. believes this trend is likely to continue as
- increasing numbers of biomarkers become validated for diseases, and as
- multiplexing tools, enabling the aggregation of multiple biomarker interrogations into a single test,
become deployed in the clinic.
Personalized Medicine Partnerships
L.E.K. also completed an assessment of publicly available
- personalized medicine partnership activity from 2009-2011 for ~150 leading organizations operating in the U.S.,
- to look at broader decision support trends and
- the emergence of more holistic solutions beyond diagnostic tests.
A survey of partnership deals was conducted for
- top-10 academic medical centers/research institutions,
- top-25 biopharma,
- top-four healthcare IT companies,
- top-three healthcare imaging companies,
- top-20 IVD manufacturers,
- top-20 laboratories,
- top-10 payers/PBMs,
- top-15 personalized healthcare companies,
- top-10 regulatory/guideline entities, and
- top-20 tools vendors for the period of 01/01/2009 – 12/31/2011.
Source: Company websites, GenomeWeb, L.E.K. analysis
Across the sample, 189 publicly announced partnerships were identified, of which
- ~65% focused on more traditional areas (biomarker discovery, companion diagnostics and targeted therapies), and
- ~30% included elements geared towards creating more holistic decision support models.
Partnerships categorized as holistic decision support by L.E.K. were focused on
- mining large patient datasets (e.g., from payers or providers),
- molecular profiling (e.g., deploying next-generation sequencing),
- creating the information technology (IT) infrastructure needed to enable holistic decision support models, and
- integrating various datasets to create richer decision support solutions.
Interestingly, holistic decision support partnerships often included stakeholders
outside of biopharma and diagnostics:
- research tools,
- payers/PBMs,
- healthcare IT companies as well as
- emerging personalized healthcare (PHC) companies
(e.g., Knome, Foundation Medicine and 23andMe).
This finding suggests that these new stakeholders will be increasingly important
in influencing care decisions going forward.
Holistic Treatment Decision Support

| Holistic Decision Support Focus | Technology Provider | Stakeholder Deploying the Solution | Holistic Decision Support Activities |
|---|---|---|---|
| Molecular profiling | Life Technologies | TGEN/US Oncology | Sequencing of triple-negative breast cancer patients to identify potential treatment strategies |
| | Foundation Medicine | Novartis | Deployment of cancer genomics analysis platform to support Novartis clinical research efforts |
| Predictive genomics | Clarient, Inc. (GE Healthcare) | Acorn Research | Biomarker profiling of patients within Acorn’s network of providers to support clinical research efforts |
| | GenomeQuest | Beth Israel Deaconess MC | Whole genome analysis to guide patient management |
| Outcomes data mining | AstraZeneca | WellPoint | Evaluate comparative effectiveness of selected marketed therapies |
| | 23andMe | NIH | Leverage information linking drug response and CYP2C9/CYP2C19 variation |
| | Pfizer | Medco | Leverage patient genotype, phenotype and outcome for treatment decisions and targeted therapeutics |
| Healthcare IT infrastructure | IBM | WellPoint | Deploy IBM’s Watson-based solution for evidence-based healthcare decision-making support |
| | Oracle | Moffitt Cancer Center | Deploy Oracle’s informatics platform to store and manage patient medical information |
| Data integration | Siemens Diagnostics | Susquehanna Health | Integration of imaging and laboratory diagnostics |
| | Cernostics | Geisinger Health | Integration of advanced tissue diagnostics, digital pathology, annotated biorepository and EMR to create next-gen treatment solutions |
| | CardioDx | GE Healthcare | Integration of genomics with imaging data in CVD |
Implications
L.E.K. believes the likely debate won’t center on which models and companies
will prevail. It appears that
- the industry is now moving along the continuum to a truly holistic capability, and that
- the mainstay of personalized medicine today will become integrated with and
enhanced by other data.
The companies that succeed will be able to capture vast amounts of information
- and synthesize it for personalized care.
Holistic models will be powered by increasingly larger datasets and
sophisticated decision-making algorithms.
This will require the participation of an increasingly broad range of participants
to provide the
- science, technologies, infrastructure and tools necessary for deployment.
There are a number of questions posed by this study, but only some are of
interest to this discussion:
Group A. Pharmaceuticals and Devices
- How will holistic decision support impact the landscape?
(e.g., treatment/testing algorithms, decision making, clinical trials)
Group B. Diagnostics and Decision Support
- What components will be required to build out holistic solutions?
– Testing technologies
– Information (e.g., associations, outcomes, trial databases, records)
– IT infrastructure for data integration and management, simulation and reporting
- How can various components be brought together to build seamless holistic
decision support solutions?
Group C. Providers and Payers
- In which areas should models be deployed over time?
- Where are clinical and economic arguments most compelling?
2: Historical Scientific Leaders Memoirs – Realtime Clinical Expert Support
Gil David and Larry Bernstein, in consultation with Prof. Ronald Coifman of the
Yale University Applied Mathematics Program, have developed
a software system that is the equivalent of an intelligent Electronic Health
Records Dashboard that
- provides empirical medical reference and
- suggests quantitative diagnostics options.
The current design of the Electronic Medical Record (EMR) is a linear
presentation of portions of the record
- by services
- by diagnostic method, and
- by date, to cite examples.
This allows perusal through a graphical user interface (GUI) that partitions the
information or necessary reports
- in a workstation, accessed by keying on icons.
This requires that the medical practitioner find the
- history,
- medications,
- laboratory reports,
- cardiac imaging and
- EKGs, and
- radiology in different workspaces.
The introduction of a DASHBOARD has allowed a presentation of
- drug reactions
- allergies
- primary and secondary diagnoses, and
- critical information
about any patient whose record the caregiver needs to access.
The advantage of this innovation is obvious. The startup problem is
- what information is presented and
- how it is displayed,
- which is a source of variability and a key to its success.
We are proposing an innovation that supersedes
- the main design elements of a DASHBOARD and utilizes
- the conjoined syndromic features of the disparate data elements.
So the important determinant of the success of this endeavor is that
- it facilitates both the workflow and the decision-making process
with a reduction of medical error.
Continuing work is in progress to extend the capabilities with model
datasets and sufficient data, because
- the extraction of data from disparate sources will, in the long run,
further improve this process.
For instance, the finding of both ST depression on the EKG coincident
- with an elevated cardiac biomarker (troponin),
- particularly in the absence of substantially reduced renal function,
is far more informative than either finding alone (see the sketch below).
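As an illustration of how such conjoined features might be combined, here is a minimal sketch of a two-signal rule. It is not the system described in this article, and the thresholds (an assumed troponin cutoff and an assumed eGFR cutoff) are illustrative placeholders, not validated clinical values.

```python
# Minimal sketch of a conjoined-feature rule: an elevated troponin is far
# more informative when it coincides with ST depression and renal function
# is preserved (reduced renal function can elevate troponin chronically).
# All thresholds are illustrative assumptions, not validated cutoffs.

def conjoined_acs_flag(troponin_ng_l: float,
                       st_depression: bool,
                       egfr_ml_min: float) -> bool:
    TROPONIN_CUTOFF = 14.0   # assumed 99th-percentile cutoff; assay-specific
    EGFR_CUTOFF = 60.0       # assumed threshold for "substantially reduced"
    troponin_elevated = troponin_ng_l > TROPONIN_CUTOFF
    renal_preserved = egfr_ml_min >= EGFR_CUTOFF
    return troponin_elevated and st_depression and renal_preserved

print(conjoined_acs_flag(52.0, True, 85.0))   # True: both signals, renal OK
print(conjoined_acs_flag(52.0, True, 30.0))   # False: renal confounder
```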
The conversion of hematology based data into useful clinical information
- requires the establishment of problem-solving constructs based on
the measured data.
The most commonly ordered test used for managing patients worldwide is
- the hemogram that often incorporates
- the review of a peripheral smear.
While the hemogram has undergone progressive modification of the measured
features over time
- the subsequent expansion of the panel of tests has provided a window into
the cellular changes in the
- production
- release
- or suppression
of the formed elements from the blood-forming organ into the circulation.
In the hemogram one can view
- data reflecting the characteristics of a broad spectrum of medical conditions.
Progressive modification of the measured features of the hemogram has
- delineated characteristics expressed as measurements of
- size
- density, and
- concentration,
resulting in many characteristic features of classification. In the diagnosis
of hematological disorders
- proliferation of marrow precursors
- domination of a cell line, and
- features of suppression of hematopoiesis
provide a two-dimensional model. Other dimensions are created by considering
- the maturity of the circulating cells.
The application of rules-based, automated problem solving should provide a
valid approach to
- the classification and interpretation of the data used to determine a
knowledge-based clinical opinion.
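To make the idea of rules-based classification concrete, the sketch below classifies anemia by hemoglobin and red-cell size (MCV), two hemogram features of the kind discussed above. This is only an illustration of the approach, not the authors' system; the cutoffs are common textbook values and should be treated as assumptions.

```python
# Illustrative rules-based interpretation of two hemogram features.
# Cutoffs are common textbook values, used here as assumptions.

def classify_anemia(hemoglobin_g_dl: float, mcv_fl: float, sex: str) -> str:
    """Coarse morphologic classification from hemoglobin and MCV."""
    hb_cutoff = 13.0 if sex == "M" else 12.0   # WHO-style thresholds
    if hemoglobin_g_dl >= hb_cutoff:
        return "no anemia"
    if mcv_fl < 80:
        return "microcytic anemia (consider iron deficiency)"
    if mcv_fl > 100:
        return "macrocytic anemia (consider B12/folate deficiency, MDS)"
    return "normocytic anemia (consider hemolysis, renal disease)"

print(classify_anemia(10.5, 72, "F"))   # microcytic anemia (...)
print(classify_anemia(9.8, 108, "M"))   # macrocytic anemia (...)
```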
The exponential growth of knowledge since the mapping of the human
genome has been enabled
- by parallel advances in applied mathematics that have not been part of
traditional clinical problem solving.
As the complexity of statistical models has increased
- the dependencies have become less clear to the individual.
Contemporary statistical modeling has a primary goal of
- finding an underlying structure in studied data sets.
The development of an evidence-based inference engine that can
- substantially interpret the data at hand and
- convert it in real time to a “knowledge-based opinion”
- could improve clinical decision-making by incorporating
- multiple complex clinical features as well as duration of
onset into the model.
An example of a difficult area for clinical problem solving is found in the
- diagnosis of SIRS and associated sepsis. SIRS
(and associated sepsis) is a costly diagnosis in hospitalized patients.
Failure to diagnose sepsis in a timely manner creates
- a potential financial and safety hazard.
The early diagnosis of SIRS/sepsis is made by the clinician’s application of
defined criteria:
- temperature
- heart rate
- respiratory rate and
- WBC count
The application of those clinical criteria, however, defines the condition
after it has developed and
- has not provided a reliable method for the early diagnosis of SIRS
(a rules-based check of the classical criteria is sketched below).
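For reference, the classical criteria lend themselves to a simple rules-based check. The sketch below counts the standard published SIRS thresholds (two or more met defines SIRS); it is deliberately simplified (e.g., it omits the PaCO2 and band-count alternates) and is not the early-detection method this section argues for.

```python
# Rules-based check of the classical SIRS criteria (>= 2 of 4 met).
# Simplified: omits the PaCO2 < 32 mmHg and > 10% bands alternates.

def sirs_criteria_met(temp_c: float, heart_rate: int,
                      resp_rate: int, wbc_k_per_ul: float) -> int:
    """Count how many of the four classical SIRS criteria are met."""
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,             # temperature
        heart_rate > 90,                            # heart rate
        resp_rate > 20,                             # respiratory rate
        wbc_k_per_ul > 12.0 or wbc_k_per_ul < 4.0,  # WBC (x10^3/uL)
    ]
    return sum(criteria)

def has_sirs(temp_c, heart_rate, resp_rate, wbc_k_per_ul) -> bool:
    return sirs_criteria_met(temp_c, heart_rate, resp_rate, wbc_k_per_ul) >= 2

print(has_sirs(38.6, 104, 24, 13.5))  # True: all four criteria met
print(has_sirs(37.0, 80, 16, 8.0))    # False

# As the text notes, these criteria confirm SIRS only after it has
# developed, which is why biomarker-based early detection is attractive.
```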
The early diagnosis of SIRS may possibly be enhanced
- by the measurement of proteomic biomarkers, including
- transthyretin
- C-reactive protein
- procalcitonin
- mean arterial pressure
Immature granulocyte (IG) measurement has been proposed as a
- readily available indicator of the presence of granulocyte precursors (left shift).
The use of such markers, obtained by automated systems
- in conjunction with innovative statistical modeling, provides
- a promising approach to enhance workflow and decision making.
Such a system utilizes the conjoined syndromic features of
- disparate data elements with an anticipated reduction of medical error.
How we frame our expectations is so important that it determines
- the data we collect to examine the process.
In the absence of data to support an assumed benefit, there is no
proof of validity at whatever cost. This has meaning for
- hospital operations,
- for nonhospital laboratory operations,
- for companies in the diagnostic business, and
- for planning of health systems.
The problem was stated by LL Weed in “Idols of the Mind” (Dec 13, 2006):
“a root cause of a major defect in the health care system is that, while we
- falsely admire and extol the intellectual powers of highly educated physicians,
- we do not search for the external aids their minds require.”
HIT use has been
- focused on information retrieval, leaving
- the unaided mind burdened with information processing.
We deal with problems in the interpretation of data
- presented to the physician, and how
- through better design of the software that presents this data
- the situation could be improved.
The computer architecture that the physician uses to view the results
is more often than not presented
- as the designer would prefer, and not as the end-user would like.
In order to optimize the interface for the physician,
- the system would have a “front-to-back” design, with
- the call up for any patient ideally consisting of a dashboard design
- that presents the crucial information that the physician
- would likely act on in an easily accessible manner.
The key point is that each item used has to be closely related to a
- corresponding criterion needed for a decision.
Feature Extraction.
This further breakdown in the modern era is determined by
- genetically characteristic gene sequences
- that are transcribed into what we measure.
Eugene Rypka contributed greatly to clarifying the
- extraction of features in a series of articles, which
- set the groundwork for the methods used today in clinical microbiology.
The method he describes is termed S-clustering, and
- will have a significant bearing on how we can view laboratory data.
He describes S-clustering as extracting features from endogenous data that
- amplify or maximize structural information to create distinctive classes.
The method classifies by taking the number of features
- with sufficient variety to map into a theoretic standard.
The mapping is done by
- a truth table, and each variable is scaled
- to assign values for each message choice.
The number of messages and the number of choices forms an N-by-N table.
He points out that the message
- choice in an antibody titer would be converted from 0 + ++ +++ to 0 1 2 3.
Even though there may be a large number of measured values, the variety
- is reduced by this compression,
- though at some risk of loss of information.
Yet the real issue is how
- a combination of variables falls into a table with meaningful information.
We are concerned with accurate assignment into
- uniquely variable groups by information in test relationships.
One determines the effectiveness of each variable by
- its contribution to information gain in the system.
The reference or null set is the class having no information.
Uncertainty in assigning to a classification is
- only relieved by providing sufficient information.
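A minimal sketch of the compression and information-gain ideas just described follows; it is not Rypka's S-clustering implementation, only an illustration of mapping message choices to an ordinal scale and scoring a variable by its contribution to information gain.

```python
# Sketch: compress symbolic titer readings (0 / + / ++ / +++) to 0..3,
# then score the compressed variable by information gain over classes.
from collections import Counter
from math import log2

TITER_SCALE = {"0": 0, "+": 1, "++": 2, "+++": 3}  # the 0 1 2 3 mapping

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, feature_values):
    """Reduction in class entropy achieved by splitting on one feature."""
    n = len(labels)
    h_split = 0.0
    for v in set(feature_values):
        subset = [l for l, f in zip(labels, feature_values) if f == v]
        h_split += len(subset) / n * entropy(subset)
    return entropy(labels) - h_split

classes = ["A", "A", "B", "B", "B"]
titers = [TITER_SCALE[t] for t in ["0", "+", "++", "+++", "+++"]]
print(round(information_gain(classes, titers), 3))  # ~0.971 bits
```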
The possibility of realizing a good model for approximating
- the effects of factors supported by data used for inference
owes much to the discovery of the Kullback-Leibler distance, or “information”;
- Akaike found a simple relationship between K-L information and
Fisher’s maximized log-likelihood function (stated below).
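For reference, the standard definitions behind this relationship (general statements, not taken from the curated sources) are:

$$D_{\mathrm{KL}}(f \,\|\, g) = \int f(x)\,\log\frac{f(x)}{g(x)}\,dx$$

and Akaike's criterion, derived as an estimator of expected relative K-L information,

$$\mathrm{AIC} = -2\log \mathcal{L}(\hat{\theta}) + 2k,$$

where $\mathcal{L}(\hat{\theta})$ is the maximized likelihood and $k$ is the number of estimated parameters; the model minimizing AIC is the one estimated to lose the least information relative to the truth.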
In the last 60 years the application of entropy comparable to
- the entropy of physics, information, noise, and signal processing,
- has been fully developed by Shannon, Kullback, and others, and
- has been integrated with modern statistics,
- as a result of the seminal work of Akaike, Leo Goodman, Magidson
and Vermunt, and work by Coifman.
Gil David et al. introduced AUTOMATED processing of the data available
to the ordering physician, which
- can be anticipated to have an enormous impact on the diagnosis and treatment of
- perhaps half of the top 20 most common causes of hospital admission that carry a high cost and morbidity.
For example: anemias (iron deficiency, vitamin B12 and folate deficiency, and
hemolytic anemia or myelodysplastic syndrome); pneumonia; systemic
inflammatory response syndrome (SIRS) with or without bacteremia; multiple
organ failure and hemodynamic shock; electrolyte/acid base balance disorders;
acute and chronic liver disease; acute and chronic renal disease; diabetes
mellitus; protein-energy malnutrition; acute respiratory distress of the newborn;
acute coronary syndrome; congestive heart failure; disordered bone mineral
metabolism; hemostatic disorders; leukemia and lymphoma; malabsorption
syndromes; and cancer(s) [breast, prostate, colorectal, pancreas, stomach,
liver, esophagus, thyroid, and parathyroid].
Realtime Clinical Expert Support and Validation System
We have developed a software system that is the equivalent of an intelligent
Electronic Health Records Dashboard that provides empirical medical reference
and suggests quantitative diagnostics options.
The primary purpose is to
- gather medical information,
- generate metrics,
- analyze them in realtime and
- provide a differential diagnosis,
- meeting the highest standard of accuracy.
The system builds its unique characterization and provides a list of other patients
that share this unique profile, thereby utilizing the vast aggregated knowledge
(diagnosis, analysis, treatment, etc.) of the medical community. The
- main mathematical breakthroughs are provided by accurate patient profiling and inference methodologies
- in which anomalous subprofiles are extracted and compared to potentially
relevant cases (a toy illustration follows).
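A toy illustration of profile matching of this general kind is sketched below: each patient is a vector of standardized laboratory values, and the nearest prior cases are retrieved by distance. This is an assumption-laden simplification for illustration only; it is not the profiling and inference methodology developed by David et al.

```python
# Toy patient-profile retrieval: standardize lab features, then return
# the k cohort patients nearest to the query profile. Illustrative only.
import numpy as np

def most_similar_patients(query: np.ndarray, cohort: np.ndarray, k: int = 3):
    """Indices of the k cohort rows closest to the query
    (Euclidean distance on z-scored features)."""
    mu, sigma = cohort.mean(axis=0), cohort.std(axis=0) + 1e-9
    dists = np.linalg.norm((cohort - mu) / sigma - (query - mu) / sigma, axis=1)
    return np.argsort(dists)[:k]

# Toy cohort: rows = patients, columns = lab features (e.g., WBC, Hgb).
cohort = np.array([[12.5, 10.1], [6.0, 14.2], [13.1, 9.8], [7.2, 13.5]])
query = np.array([12.9, 10.0])
print(most_similar_patients(query, cohort, k=2))  # nearest two profiles
```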
As the model grows and its knowledge database is extended, the diagnostic and
the prognostic become more accurate and precise. We anticipate that the effect of
implementing this diagnostic amplifier would result in
- higher physician productivity at a time of great human resource limitations,
- safer prescribing practices,
- rapid identification of unusual patients,
- better assignment of patients to observation, inpatient beds, intensive care, or referral to clinic, and
- shortened ICU and inpatient lengths of stay.
The main benefit is a real time assessment as well as diagnostic options based on
- comparable cases,
- flags for risk and potential problems
as illustrated in the following case, acquired on 04/21/10. The patient was diagnosed by
our system with severe SIRS at a grade of 0.61.
The patient was treated for SIRS, and the blood tests were repeated during the
following week. The full combined record of our system’s assessment of the patient,
as derived from the further hematology tests, is illustrated below. The yellow line
shows the diagnosis that corresponds to the first blood test; the red line shows
the next diagnosis, performed a week later.
Related articles in Pharmaceutical Intelligence:
Rudolph RA, Bernstein LH, Babb J: Information-Induction for the diagnosis of
myocardial infarction. Clin Chem 1988;34:2031-2038.
Bernstein LH, Qamar A, McPherson C, Zarich S, Rudolph R. Diagnosis of
myocardial infarction: integration of serum markers and clinical descriptors using
information theory. Yale J Biol Med 1999; 72: 5-13.
Kaplan L.A.; Chapman J.F.; Bock J.L.; Santa Maria E.; Clejan S.; Huddleston D.J.;
Reed R.G.; Bernstein L.H.; Gillen-Goldstein J. Prediction of Respiratory Distress
Syndrome using the Abbott FLMII amniotic fluid assay. The National Academy of
Clinical Biochemistry (NACB) Fetal Lung Maturity Assessment Project.
Clin Chim Acta 2002; 326(8): 61-68.
Bernstein LH, Qamar A, McPherson C, Zarich S. Evaluating a new graphical
ordinal logit method (GOLDminer) in the diagnosis of myocardial infarction
utilizing clinical features and laboratory data. Yale J Biol Med 1999; 72:259-268.
Bernstein L, Bradley K, Zarich SA. GOLDmineR: Improving models for
classifying patients with chest pain. Yale J Biol Med 2002; 75, pp. 183-198.
Ronald Raphael Coifman and Mladen Victor Wickerhauser. Adapted Waveform
Analysis as a Tool for Modeling, Feature Extraction, and Denoising. Optical
Engineering 1994; 33(7):2170–2174.
RR Coifman and N Saito. Constructions of local orthonormal bases for classification
and regression. C. R. Acad. Sci. Paris 1994; 319 Série I: 191-196.
W Ruts, S De Deyne, E Ameel, W Vanpaemel, T Verbeemen, and G Storms.
Dutch norm data for 13 semantic categories and 338 exemplars. Behavior
Research Methods, Instruments, & Computers 2004; 36(3): 506-515.
S De Deyne, S Verheyen, E Ameel, W Vanpaemel, MJ Dry, W Voorspoels,
and G Storms. Exemplar by feature applicability matrices and other Dutch
normative data for semantic concepts. Behavior Research Methods 2008;
40(4): 1030-1048.
Landauer TK, Ross BH, Didner RS. Processing visually presented single words:
A reaction time analysis [Technical memorandum]. Murray Hill, NJ: Bell
Laboratories; 1979.
Weed L. Automation of the problem oriented medical record. NCHSR Research
Digest Series DHEW. 1977;(HRA)77-3177.
Naegele TA. Letter to the Editor. Amer J Crit Care 1993;2(5):433.
The Automated Second Opinion Generator. Larry H Bernstein
http://pharmaceuticalintelligence.com/2012/08/13/the-automated-second-opinion-generator/
The electronic health record: How far we have travelled and where is journey’s end
Larry H Bernstein
The potential contribution of informatics to healthcare is more than currently estimated.
Larry H Bernstein
Sheila Nirenberg/Cornell and Chethan Pandarinath/Stanford,
“Retinal prosthetic strategy with the capacity to restore normal vision,”
Proc Natl Acad Sci
Cardiovascular decision support systems for disease management decision making
Larry H Bernstein
Demonstration of a diagnostic clinical laboratory neural network applied to three
laboratory data conditioning problems
Larry H Bernstein
Vinod Khosla: 20 doctor included speculations, musings of a technology optimist
or technology will replace 80 percent of what doctors do
Aviva Lev-Ari
Bioengineering of vascular and tissue models
Larry H Bernstein and Aviva Lev-Ari
http://pharmaceuticalintelligence.com/2013/05/05/bioengineering-of-vascular-and-tissue-models/
The Heart: Vasculature Protection – A Concept-based Pharmacological
Therapy including THYMOSIN Aviva Lev-Ari, PhD, RN 2/28/2013
http://pharmaceuticalintelligence.com/2013/02/28/the-heart-vasculature-protection-a-concept-based-pharmacological-therapy-including-thymosin/
FDA Pending 510(k) for The Latest Cardiovascular Imaging Technology
Aviva Lev-Ari, PhD, RN 1/28/2013
PCI Outcomes, Increased Ischemic Risk associated with Elevated Plasma
Fibrinogen not Platelet Reactivity
Aviva Lev-Ari, PhD, RN 1/10/2013
The ACUITY-PCI score: Will it Replace Four Established Risk Scores — TIMI,
GRACE, SYNTAX, and Clinical SYNTAX
Aviva Lev-Ari, PhD, RN 1/3/2013
Coronary artery disease in symptomatic patients referred for coronary angiography:
Predicted by Serum Protein Profiles
Aviva Lev-Ari, PhD, RN 12/29/2012
http://pharmaceuticalintelligence.com/2012/12/29/coronary-artery-disease-in-symptomatic-patients-referred-for-coronary-angiography-predicted-by-serum-protein-profiles/
New Definition of MI Unveiled, Fractional Flow Reserve (FFR)CT for Tagging
Ischemia
Aviva Lev-Ari, PhD, RN 8/27/2012
Related articles
- Hospital EHRs Inadequate for Big Data; Need for Specialized -Omics Systems (labsoftnews.typepad.com)
- Apple Inc. (AAPL), QUALCOMM, Inc. (QCOM): Disruptions Needed (insidermonkey.com)
- Netsmart Names Dr. Ian Chuang Senior Vice President, Healthcare Informatics and Chief Medical Officer (prweb.com)
- Strategic partnership signals new age of stratified medicine (prweb.com)
- Personalized breast cancer therapeutic with companion diagnostic poised for clinical trials in H2 (medcitynews.com)
Part 3: Biomarkers in Service of Medical Diagnosis
3.1 Related articles in Leaders in Pharmaceutical Intelligence
Cracking the code of human life: the birth of bioinformatics and computational genomics
Larry H Bernstein
Genetics of conduction disease atrioventricular AV conduction disease block
gene mutations transcription excitability and energy homeostasis
Aviva Lev-Ari
Identification of biomarkers that are related to the actin cytoskeleton
Larry H Bernstein
Regression: A richly textured method for comparison of predictor variables
Larry H Bernstein
Diagnostic evaluation of SIRS by immature granulocytes
Larry H Bernstein
Big data in genomic medicine
Larry H Bernstein
http://pharmaceuticalintelligence.com/2012/12/17/big-data-in-genomic-medicine/
Automated inferential diagnosis of SIRS, sepsis, septic shock
Larry H Bernstein
http://pharmaceuticalintelligence.com/2012/08/12/1815/
http://pharmaceuticalintelligence.com/2012/08/15/1946/
3.2 Related literature
Journal of Molecular Biomarkers & Diagnosis
http://omicsonline.org/molecular-biomarkers-diagnosis.php
A biomarker is an indicator of a particular disease or physiological
state of an organism.
The Cytoskeleton as Biomarker: Angiosarcoma- Cytoskeleton
Shifalika Tangutoori
Samira shabani, Sara Samanian, Rezvan Mirzaei, Bahar Mahjoubi and
Frouzandeh Mahjoubi
Francesco Travaglino, Gerardo Salerno, Veronica Russo, Mariateresa Corsetti,
Rosaria D’Urso, Patrizia Cardelli, Maria Rosaria Torrisi, Vincenzo Visco and
Salvatore Di Somma
Osteochondrosis-Related Gene Expression in Equine Leukocytes Differs
among Affected Joints in Foals
Serteyn D, Piquemal D, Mendoza L, Caudron I, Noguier F, Bruno R,
Sandersen C and Lejeune JP
Junseong Park, JiEun Park, Jungsul Lee and Chulhee Choi
Hanan El-Bassat, Lobna Abo Ali, Sahar El Yamany, Hanan Al Shenawy,
Rasha A Al Din and Atef Taha
Lamiss Mohamed Abd Elaziz Sad, Samar Galal Younis and Hala Mohamed Nagi
Prognostic Value of Bone Marker Beta-Crosslaps in Patients with Breast Carcinoma
Nicole Zulauf, Ingo Marzi and Gerhard M Oremek
Emma Tabe Eko Niba, Van Khanh Tran, Le Anh Tuan-Pham, Dung Chi Vu,
Ngoc Khanh Nguyen, Thinh Huy Tran, Van Thanh Ta, Tomoko Lee, Yasuhiro
Takeshima and Masafumi Matsuo
Jing Yang, Wenlin Bai, Zhongxin Xiao, Yujiao Chen, Li Chen, Lefeng Zhang,
Junxiang Ma, Lin Tian and Ai Gao
The Free Alpha-Hemoglobin: A Promising Biomarker for β-Thalassemia
Uday YH Abdullah, Ahmed GF Al-Attraqchi, Hishamshah M Ibrahim, Zilfalil
Bin Alwi, Atif A Baig, Lekhsan Othman, Noraesah B Mahmud, Rosline B
Hassan, Nor Hedayah A Bakar and Alawiyah B A Abd Rahman
3.3 Honing in on Cancer Biomarkers
http://pharmaceuticalintelligence.com/2013-11-27/larryhbern/Cancer
Biomarkers for Companion Diagnostics
Caitlin Smith GEN 15 Nov 2013; 33(20)
Introduction and Goals
Some of the newest cancer treatments aim to
- individualize the therapy to the specific type of cancer and patient.
The large and growing number of different genetic alterations that researchers
observe in cancer cells
- has made it unfeasible to test for only a handful of targets;
- clinical testing is moving toward testing for many targets simultaneously.
Multiplexed tumor genotyping allows for
- the simultaneous evaluation of a broad range of common and rare tumor alterations.
This expands the application of targeted therapy
- across a greater number of patients who undergo testing, and
- directs those patients into the most relevant clinical trials.
Dr. Borger and colleagues are uncovering “molecular signatures of tumors,” or
- collections of targets present in specific tumor types.
A molecular signature of a tumor is, in essence,
- a map of the abnormalities within a particular tumor that are
- thought to be critical in driving the disease process;
- each tumor will have a unique combination of genetic alterations.
“The more comprehensive the tumor profiling,
- the more detailed the roadmap
- we can draw for directing that patient’s care,” Dr. Borger said.
Uncovering the molecular signatures of tumors has another important role:
- to better understand the differences among cells within the same tumor,
- because tumor heterogeneity is an important mechanism of emerging drug resistance.
“Broad-based tumor profiling and the use of sensitive testing platforms are essential
- in identifying these potential mechanisms of disease resistance,
- so that targeted approaches can be aimed at circumventing those mechanisms.”
Target Signaling
Also working to help physicians figure out which treatments among many
might work best for individual patients is Selventa. Focusing on gene expression
biomarkers, Selventa researchers
- correlate gene expression patterns from patient data with
- changes in target signaling mechanisms.
“We operate on the hypothesis that patients with high or low levels of
target (or downstream target)
- pathway signaling correspond to potential responders or nonresponders
to target therapy, respectively,” said Renée Deehan Kenney, Ph.D.,
VP of Research.
“If we know who responded and who did not respond to treatment,
- we can use that information to hone the biomarker.”
Selventa is using its Systems Diagnostics (SysDx) platform
- to identify biomarkers used in diagnosing immune disorders
such as rheumatoid arthritis (RA).
Their product Clarify-RA is based on the SysDx approach using a blood
biomarker. It is designed to aid clinicians in
- matching RA patients with those RA drugs that will be most beneficial to them.
Such matching is valuable because RA is a heterogeneous disease, and
- different patients respond differently to the more than 15 RA drugs that are available.
Moreover, RA is a debilitating disease that
- cannot wait for a trial-and-error treatment approach.
“To compound this clinical challenge, drugs approved for RA offer
- about 50% improvement for only 40% of the patients,” said Dr. Deehan Kenney.
One biomarker Selventa found can identify RA patients
- who are likely to respond to anti-TNF therapy.
Similarly, Selventa’s SysDx approach also found
- a biomarker from tumor biopsy tissue that
- identifies ER+ breast cancer patients whose cancer
- tends to progress with tamoxifen treatment.
IHC-Based Testing
President and CEO of Precision Biologics, Philip Arlen, M.D., discussed
his company’s research on a new
- monoclonal antibody (NPC-1C), which targets tumors in both
pancreatic and colorectal cancer.
The antibody’s target is specific to tumors, and the antibody has
negligible reactions with normal tissue.
Precision Biologics took an unconventional tack to making NPC-1C,
- using a cancer vaccine that had been developed from colorectal cancer tissue
removed from patients with
- varying stages of disease, and screening for
- antibodies that were specific for tumors but nonreactive with normal tissue.
In both cell cultures and in animal models, they found that
- NPC-1C destroyed pancreatic cancer cells.
Phase I/IIa data demonstrated a prolongation in overall survival in patients
- who had exhausted all standards of therapy, said Dr. Arlen.
Precision Biologics has developed an immunohistochemistry-based
diagnostic test for expression of NPC-1C’s target. “Patients’ tumors are tested, and
- if the target is present, the patients can receive treatment with NPC-1C.”
Dr. Arlen said: “We are also developing a diagnostic assay with NPC-1C for
- early detection and prognosis of colorectal and pancreatic cancer.”
NMR Technology
LipoScience researchers are using NMR technology to find cancer biomarkers that are
- panels of metabolites covering a range of biochemical processes.
[Figure: 1H NMR spectra of unprocessed serum, focusing on (A) macromolecular signals and (B) the small-molecule metabolome.]
LipoScience is also developing new ways to search for biomarkers. Specifically,
to find biomarkers of clinical value, they are using NMR technology. “We take
advantage of two of the key features of the NMR platform,” explained Thomas
O’Connell, Ph.D., senior director of research and development. “These are the
- lack of required sample preparation for routine biofluids and
- the inherently quantitative signals.”
This means that they can profile large sample sets very quickly.
LipoScience researchers are now using NMR to look for cancer biomarkers.
“Given the heterogeneity of most cancers,
- it is not likely that a single biomarker will provide the needed clinical performance,”
- “so we are examining panels of metabolites that cover a range of biochemical processes, including
- lipid and lipoprotein metabolism, energy perturbations, inflammatory
processes, and others,” said Dr. O’Connell.
They plan to use NMR and metabolomic profiling
- to develop clinical assays that help to choose patient-specific therapies.
“We are hopeful that one day in the near future, panels of biomarkers
could provide clinicians with
- much more objective, quantifiable, and personalized information
- regarding the diagnosis and management of their patients.”
Single Molecule Arrays
The Simoa (for single molecule array) instrument from Quanterix uses
- a digital ELISA technique,
- trapping fluorescent reaction product in individual wells,
to speed blood testing for HIV.
Researchers at Quanterix have developed a method of testing for a
different type of biomarker,
- one that indicates the early, acute (and most contagious) stage of HIV infection,
- and that is faster, cheaper, and more sensitive than previous tests.
Previously, the gold standard HIV test with the highest sensitivity was nucleic acid
testing. Simoa works by preventing the sensitivity loss that can occur in conventional ELISAs
- because of the dilution of reaction product into the reaction volume.
Simoa essentially miniaturizes the ELISA principle,
- trapping fluorescent reaction product in individual wells to prevent dilution.
“The technology basically supercharges a standard ELISA to give 1,000-times
greater sensitivity. Due to this extreme sensitivity of Simoa to enzyme label,
label molecules can be reduced, which
- lowers nonspecific interactions and improves the signal-to-background ratio.
This drives the sensitivity of Simoa digital immunoassays down to the
level of nucleic acid testing.” (A rough calculation of the confinement effect is sketched below.)
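The confinement argument can be made concrete with rough numbers. The volumes below are illustrative assumptions (a femtoliter-scale well versus a conventional microliter-scale reaction), not Quanterix specifications.

```python
# Why confining enzyme product boosts sensitivity: the same number of
# product molecules reaches a far higher local concentration in a tiny
# well than when diluted into a bulk reaction volume.
# Volumes are illustrative assumptions, not Quanterix specifications.

well_volume_l = 50e-15   # assumed ~50 fL microwell
bulk_volume_l = 100e-6   # assumed 100 uL conventional ELISA volume

concentration_gain = bulk_volume_l / well_volume_l
print(f"Local concentration gain: {concentration_gain:.1e}x")  # 2.0e+09x
```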
“A key need in many blood banking centers is high throughput,” Dr. Wilson said.
“Blood units are screened for a number of pathogens, so effective throughput
is measured in number of units processed in a given period of time.”
Simoa immunoassays can be multiplexed to test for up to 10 different target
proteins simultaneously, which may benefit blood banks. However, blood
banking is highly regulated, so introducing Simoa assays may take time.
“As with any new test used to ensure a blood unit is pathogen-free,” explained Dr. Wilson, “a substantial amount of data is needed
- to prove to regulatory bodies that the test exhibits the claimed
performance, and that - the manufacturing processes are fully validated and controlled.”
3.4 Personalized Medicine: Why We Are So Excited
http://www.genengnews.com/bioperspectives/personalized-medicine-
why-we-are-so-excited/4767
In this first chapter of a five-part series, learn about new scientific developments
and recent treatment successes.
Detlef Niese, M.D., Ph.D.
Same Symptoms: Same Disease
Patients diagnosed with the same disease may react very differently to the same treatment. This observation led Sir William Osler (1849–1919) to remark: “If it were not for the variability among individuals, medicine could well be a science and not an art.” We are brought up with the idea
that
- patients who have the same symptoms and whose course of disease follows similar patterns
- are likely to suffer from the same disease.
Scientific disciplines such as anatomy, developmental anatomy, physiology and
biochemistry
- helped us to get a better understanding of how a normal organism functions.
Pathophysiology, micro and macro pathology and clinical chemistry
- helped to describe, classify and understand diseases.
However, few of these scientific advances led to a
- full understanding of the causes of diseases on a molecular, cellular or genetic basis.
Today’s medical text books still describe and define diseases on the basis of
combinations of objective and subjective symptoms, macroscopic, microscopic,
pathophysiological or biochemical findings, and the presence or absence of
specific pathogenic organisms or substances. Most diseases are also classified
according to the affected organ or organ system.
In consequence, patients diagnosed with the same disease
- are considered to suffer from the same underlying pathology.
They are also expected to respond to the same treatments, although clinical
experience tells us that this is only true in a limited number of situations. What is
causing the symptoms on a molecular level often remains unclear.
Effective Therapies Were Educated Guesses
We have to admit that our current knowledge of disease mechanisms unfortunately
has not always resulted in the development of highly effective medicines.
The effective therapies we have were developed intuitively or empirically, or were
even discovered by chance. Examples include antibiotics, alkaloids, calcium channel blockers, immunosuppressants and cytotoxic agents.
There are many examples of medicines that were developed on the basis of wrong assumptions, and some examples of medicines that proved to be effective in an
unexpected condition. But there are also a few examples of medicines, like oral
contraceptives or modern targeted medicines in oncology and rheumatology, whose development is based on an exact understanding of the relevant mechanisms.
Different Symptoms: Same Underlying Disease
In reality patients diagnosed with the same disease based on the same diagnostic
criteria
- show very different courses of the disease, and
- respond very differently to the same treatment. In clinical practice,
- doctors have no other choice than to optimize the treatment by trial and error.
We now know that these patients are probably suffering from different diseases.
In other words:
- the diseases have a different underlying pathology.
While the scientific progress in basic scientific disciplines such as anatomy,
physiology and biochemistry
- has been substantial over the last centuries,
- we still have a very limited understanding of the molecular processes in diseases.
- we know even less about the factors, which may determine whether a person has an increased risk
- of developing a particular disease.
Is “type-2 diabetes” a single homogeneous disease, for instance?
New Understanding of Diseases
Over the last decades, molecular research on sick and healthy cells
- has given new insights into cellular signalling pathways: mechanisms involving proteins and nuclear receptors,
- which are essential for the normal functioning of cells
- but under specific circumstances may lead to diseases.
These discoveries have helped us to understand and classify diseases in a new way.
We have learned a lot from rare monogenetic disorders, which allow us to study the
impact of specific genetic mutations on the clinical signs and symptoms of the disease.
This research allows us to gain insight into the
- underlying genetic alterations and
- the consequences of the presence and functioning of critical proteins.
The answer to the question ‘what causes diseases’ is revealing itself little by little
- now that we are able to decipher the genetic code of a person.
3.3 Personalized Medicine: Translation of Concepts
http://www.genengnews.com/bioperspectives/personalized-medicine-translation-of-concepts/4783
This second chapter of a five-part series shows how the search for predictive biomarkers could change drug
development research.
Lasse Tengbjerg Hansen, Adam Heathfield, Ph.D.
Biomarker research activities should continue
- in parallel with the clinical development program,
- so that biomarkers can be improved by applying data
- derived from early clinical trials to later stages of the program.
The previous chapter discussed why scientific developments and recent treatment
successes have made many of us so excited about personalised medicine. This
chapter describes how
- the search for predictive biomarkers is likely to change drug development research.
It may lead to a reduction of costs and an increase in productivity of research &
development projects.
Predictive biomarkers will change the way clinical trials are performed.
We Need Predictive Biomarkers
Traditionally, biomarkers answer
- critical questions that arise during various stages of drug development— questions such as:
- Does the drug reach the target?
- Does it have the desired biological effect?
- Does it have an influence on other expected or unexpected targets?
- Does the drug affect characteristics that predict desired or undesired effects?
However, the concept of personalised medicine centres around predictive biomarkers:
- biomarkers that can help select a patient population
- with a higher chance of a favourable response
- to a specific kind of medicine.
In today’s drug development programmes, the identification and qualification of
biomarkers are integrated.
If we are able to determine the
- biologically relevant dose range and select the optimal target population by means of biomarkers,
- we are convinced that biomarkers will increase R&D productivity by reducing development timelines and
- will prevent costly late-stage attrition.
Unfortunately, predictive biomarkers are often not applied until rather
- late in the clinical development programme, when clinical data show
- that an optimal benefit-risk profile is only achieved in a subpopulation of patients.
In such situations,
- the attention is suddenly turned to other available research data
- that might explain the underlying biological nature of the research results.
Over the last years, we have gathered examples of biomarkers that
- have proven to predict clinical response better
- because they are linked to the modes of action of the compounds in question.
Examples include Her-2/neu (Trastuzumab/Lapatinib), KRAS (Cetuximab/Panitumumab), BRAF
(Vemurafenib), and CCR5 (Maraviroc).
We Lack Model Systems to Predict Drug Response
While it is crucial to identify and qualify biomarkers that predict clinical response
- we lack good model systems that predict drug response.
Efficacy is typically established in animal models, but despite convincing data from these models,
- the translation from animals to humans is not always successful.
- Alternative models or alternative technologies are needed, and
- this has been attempted with ex-vivo systems in areas such as rheumatoid arthritis and Crohn’s disease.
These assays have proven to be valuable for the evaluation of novel therapeutic
targets. While the application of such assays seems promising, it remains to be fully investigated.
A caveat of ex-vivo systems may be that
- the generated biomarker signal derives from a local tissue environment, which
- might be difficult to capture in peripheral blood samples.
The oncology field has the best access to tissue biopsies. That is one of the reasons why oncology is a front runner in personalized medicine with several tissue based
companion diagnostics. On the other hand, still very little is known
- about drug resistance as encountered in the oncology field.
This calls for biomarkers that elucidate more clearly what is going on in individual
tumors.
New Kind of Trials
Studying targeted medicines in traditionally defined patient populations is quite problematic. For example, if we want to test a medicine like gefitinib,
- we do not need patients who are clinically diagnosed with non-small cell lung cancer,
- but patients with a specific EGFR mutation.
If we were to test the medicine on a population which is
- not preselected for the presence of the mechanism in question,
- we might conclude that the medicine does not have an appropriate benefit-risk ratio,
- but if tested on patients with the underlying mechanism we might come to the opposite conclusion.
Gefitinib is therefore indicated for patients with tumors showing specific EGFR
mutations, while
- it is irrelevant whether the tumour is located in the lungs.
In order to develop targeted medicines, patients will have to be selected
- based on presence or absence of specific biomarkers.
In some situations, that may be a
- relatively small subset of the traditionally defined patient population.
For instance: traditionally we would test a drug on patients with breast cancer.
But now we would select patients with a
- similar disease pathway: e.g. Her-2 positive.
Eventually, the patient population may be extended: for instance,
- we may be able to treat patients with malignant tumours in other organs with the same medicine,
- assuming the tumours are caused by the same molecular mechanisms (e.g. mTOR mutations).
With this approach, patient populations participating in clinical trials would be
- more homogeneous, and response to treatment would be more consistent and predictable as well.
Integrating Biomarker Research with Target Evaluation
We think continuous biomarker research
- should ideally start at least two to four years prior to first-in-man clinical trials.
- By then, we should ideally have identified a broad panel of biomarker candidates that
- can subsequently be further qualified in appropriate in-vitro, ex-vivo, in-vivo, or in-silico model systems.
Bioanalytical assays for the selected candidates can then be established and
validated
- prior to implementation in clinical trials.
In order to be fully operational, this strategy requires competencies within three main areas:
- biomarker research,
- biomarker assay development and
- clinical biomarker implementation.
These areas should collaborate to guarantee the quality of the biomarker data derived from clinical trials. This will strengthen the basis on which a
- personalised medicine program can be evaluated.
- It is also advisable to continue the biomarker research activities
- in parallel with the clinical development program.
In this way it is possible to improve the biomarkers by
- applying data derived from early clinical trials to later stages of the program.
This approach clinically validates pre-selected biomarker candidates
- but may also identify other, potentially better biomarker candidates correlating with clinical outcomes.
Identifying Biomarkers is a Collaborative Effort
It is time consuming and costly to identify and qualify biomarkers for other uses,
- such as predicting clinical outcomes.
A biomarker needs sufficient evidence of broader clinical utility and validation
- in multiple independent studies,
- across different cohorts,
- ethnic groups and
- clinical subgroups.
This is in line with recent guidance for biomarker qualification from the Food
& Drug Administration (FDA) and the European Medicines Agency (EMA).
In recent years, large consortia have succeeded in
- the qualification and validation of various types of biomarkers.
Examples of collaborative biomarker consortia are the Biomarker Consortium (US) and the Innovative Medicines Initiative (EU). They are supported by representatives from
regulators, pharmaceutical industry, biotech industry and diagnostic industry,
academia and governmental organisations.
3.4 Personalized Medicine: From Biomarkers to Companion Diagnostics
This third chapter of a five-part series talks about coordinating diagnostic test kit development between
pharmaceutical companies and diagnostic manufacturers.
Peter Collins, Ph.D.
The previous chapter described how the search for predictive biomarkers
- is likely to change drug development research.
This chapter talks about turning biomarkers into companion diagnostics.
Once biomarker tests have been turned into diagnostic test kits, we would
- be able to routinely classify patients in clinics.
But the development of those diagnostic test kits has to be coordinated between two entities—pharmaceutical companies and diagnostic manufacturers—with rather different regulation.
As we have seen in the previous chapters, biomarkers can play a critical role in
- classifying patients into subpopulations.
A biomarker can be used as the basis for
- creating a routine diagnostic test for clinical use,
after it has been approved as an in-vitro diagnostic (IVD) test: i.e.
- a certified product, reagent, application or other tool that analyses human material and helps to diagnose patients.
There is a tendency for regulators to require that
- drugs and companion diagnostics are developed in tandem.
A companion diagnostic test is essentially a biomarker test
- that enables better decision making on the use of a therapy.
In other words: it is a diagnostic test that is
- specifically linked to a therapeutic drug.
The goal here is to increase the safety and the efficacy of the drug.
Pharmaceutical and biotechnology companies are trying to change tack now and are working hard
- on integrating the co-development concept, but
- true co-development has been a rare phenomenon.
There are two reasons for this:
- clinically useful biomarkers are usually established late in the drug validation process, and
- the worlds of drug development and diagnostics, although both part of health care, are parallel universes.
In the next paragraphs we will elaborate on these issues.
Clinically Useful Biomarkers Found Late in the Drug Validation Process
It is quite difficult to find clinically useful predictive biomarkers early on in a drug
development program, simply because
- they can only be determined on the basis of the patients’ responses to the drug.
A number of biomarkers, such as KRAS and EGFR mutations,
- could only be established after a sufficient number of patients—
- well beyond the usual number for a Phase III trial—
- had been studied, yielding a better understanding of differential drug response.
If the method of clinical trials changes consistent with our new understanding
of diseases and classifying patients (as we described in chapters one
and two), we might be able to find these biomarkers at an earlier stage. But
- as long as 30% of drugs still fail during Phase III,
- diagnostic manufacturers will not be able to afford huge investments
- in the development of companion diagnostics.
Parallel Universes
The worlds of drugs and diagnostics are parallel universes: in general
- they have different development timelines, product lifecycles, return on investment, customers, and regulations.
- Drugs are valued and reimbursed as products, typically of high value.
- Diagnostics are valued and paid for as services, typically at a much lower value.
- Relatively few if any models exist for valuing a drug diagnostic combination.
This issue will be discussed in the next chapter.
Drugs are protected by patents, but in the companion diagnostic arena,
- biomarkers are considered to be within the public domain and there is less emphasis on intellectual property.
It is even debated whether biomarkers should be patented at all, because
- biomarkers are not invented but already exist in cells.
Drugs and diagnostics are also regulated differently. It takes many years and
- a large amount of data from expensive clinical trials to get a drug approved.
Under the current IVD Directive, by contrast,
- there is little mention of clinical evidence.
However, the EU IVD Directive regulating in-vitro diagnostics is under revision.
The current IVD classification
- does not take scientific and technological evolution into account and
- is expected to be changed towards a new risk-based approach.
A revised IVD Directive (which may even become a Regulation)
- will improve safety and efficacy, but will also raise difficulties.
It is likely to state that an IVD assay must have the
- performance characteristics required for fulfilling a clinical purpose.
Moreover, requirements may be added on
- how to demonstrate clinical validity, possibly proportionately with
- the risk level of the test.
The intention to increase safety will inevitably
- lead to higher costs and greater efforts for manufacturers.
It will take IVD manufacturers years and unusually high
investments to take an IVD past the regulators. Undoubtedly,
- Phase III trials would benefit from well validated biomarker tests.
But it is almost impossible to comply
- with regulations that require a clinically validated diagnostic test,
- simply because clinical evidence will only be provided
- by a clinical trial itself, and this is unlikely to be required for many IVDs.
Another point is that more regulation can severely impact innovation. The larger
investments that are needed might limit the ability
- of (especially small) innovative companies to discover and develop biomarkers.
The Quality of Diagnostic Tests
In order to gain more insight into the consequences of stricter regulation, we must
- dive into the universe of diagnostic practice in the EU a little deeper, because there is another challenging reality.
In the diagnostic arena there are, basically, two types of tests in use:
- manufactured diagnostic products, which are regulated in the EU, and
- lab developed tests (LDTs), which are not regulated.
Thus, in the EU, there is a broad range of methodologies, often driven by
- the individual laboratory’s capabilities, access to diagnostic testing platforms,
reimbursement and preferences of the individual team.
How this situation works in practice was pointed out by a multicenter study. Various laboratories were asked to select patients
- with a positive EGFR or KRAS biomarker.
- Some of the tests were in close agreement, but others were not.
- Due to the poor quality of so-called ‘home brew tests’ or inadequately validated tests in hospital laboratories,
- patients with a positive EGFR biomarker might be excluded from an effective therapy on false grounds.
The result of this industry dynamic is a lack of standardization of testing and
- inconsistent patient selection for therapy.
- Improper patient selection will lead to poor therapy outcomes, or worse, to adverse reactions to drugs.
This situation is not likely to change under a new EU IVD Regulation. IVD
manufacturers may have to make greater efforts in the future, including financially, to get an IVD past the regulators. However,
- there is no legally enforced requirement
- to use the companion diagnostic test approved in the clinical trial.
So, for as long as regulatory bodies do not sanction laboratories that
- do not conduct tests in line with pharmaceutical clinical trials,
- the quality of the testing will not improve.
Diagnostic partners cannot justify large investments in bringing products onto a market, in which there are
- no rules for laboratories to perform substitute testing.
Regulation for testing laboratories will be a more effective first step
- towards increasing quality and safety than sharpening the rules for IVD manufacturers.
Discussion
The industry needs time to bring
- diagnostic and therapeutic research together.
The regulation should leave room
- to develop companion diagnostics in various business models,
- instead of only requiring co-development.
Let’s not forget: many drugs
- have proven to be effective without companion diagnostic tests.
We would not want to rule those out.
Regulation to increase safety is in itself a good thing, but
- it is also important to provide opportunities for
- small innovative enterprises to enter the market.
3.5 Personalized Medicine: Health Economic Aspects
BioPerspectives, Apr 3, 2013
This fourth chapter of a five-part series discusses personalized medicine’s
financial impact.
Alexander Roediger
http://www.genengnews.com/bioperspectives/personalized-medicine-health-economic-aspects/4824
There is a widespread skepticism about the financial impact of personalized medicine.
From the previous discussion of turning biomarkers into companion diagnostics,
- we now shift to a discussion of healthcare.
Personalized medicine has an impact on the healthcare budgets.
Are the tools that healthcare economists and pricing and reimbursement authorities
currently use
- suitable for analysing personalized medicine?
In short, the concept of personalized medicine revolves around the idea of the
selection of patients who are likely to
- respond to the medicine, so that the treatment will offer better outcomes.
According to a recent Quintiles report, “biopharma and managed care executives are
- optimistic that personalised medicine will improve efficacy, safety and public health”.
However, there is widespread skepticism about the financial impact of personalized medicine. According to the same report,
- 56% of managed care executives feel that personalized medicine will increase cost of prescription medicines.
Consequently, pricing and reimbursement of personalized medicines
- need careful consideration and a balanced view
- on cost-effectiveness and incentives for innovation.
Another Economic Model is Needed
Much of our experience with economic evaluations of medicines is based on
- medicines designed to treat the whole patient population.
Evaluations of the economic value of a particular drug
- usually involve comparisons with treatment alternatives or palliative care.
Such comparative effectiveness is typically assessed
- on the basis of data collected in
- randomized controlled trials across a broad population of patients.
There have been attempts to
- identify sub-sets of patients that benefit most noticeably from the medicine, e.g.
patients over or under a certain age or patients judged to have a ‘severe’ form of a
certain disease.
But in many cases these evaluations were
- not supported by robust evidence, and the sub-groups were not well characterized
- nor were they statistically well-sampled in the clinical data.
Testing May Be Economically Viable
In the personalized medicine concept,
- a sub-group of responders is selected or screened out, which
- raises the hope that economic evaluations will be more straightforward and positive.
This is particularly true for costly molecular targeted agents, where avoiding ineffective use matters most.
Biomarkers can improve the ability
- to identify responders and non-responders,
- and ensure that the information is used to make better treatment choices.
As a consequence, the respective drug gains a more favorable risk-benefit ratio,
- patients are expected to have better health outcomes and better quality of life and
- healthcare resources are likely to be used more efficiently.
Much of the efficiency gains of personalized medicines depends on the testing—
- “testing before treating may be economically viable
- if the savings gained by avoiding ineffective treatment and adverse events are greater than the cost of testing.”
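The quoted condition is a simple expected-value inequality. A minimal sketch with entirely hypothetical figures, to make the break-even logic concrete:

# Break-even check for "testing before treating"; all figures below are
# assumed for illustration, not taken from any study in this article.
p_nonresponder = 0.4          # fraction of patients screened out by the test
cost_ineffective_rx = 20_000  # cost of treating a non-responder (EUR)
cost_adverse_events = 3_000   # expected adverse-event cost per non-responder (EUR)
cost_test = 500               # cost of the companion diagnostic test (EUR)

# Expected savings per patient tested = probability of screening out a
# non-responder times the costs that are thereby avoided.
expected_savings = p_nonresponder * (cost_ineffective_rx + cost_adverse_events)
print(f"expected savings per patient tested: EUR {expected_savings:,.0f}")
print("testing is economically viable" if expected_savings > cost_test
      else "testing is not economically viable")

With these assumed numbers the expected savings (EUR 9,200) far exceed the test cost, but the same inequality can fail when the prevalence of non-response is low or the test is expensive relative to the therapy.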
Studies on the cost-effectiveness of personalized medicines are still under way,
but promising results are already available. The introduction of
- a companion diagnostic strategy in advanced non-small cell lung cancer
- reduced overall treatment costs by more than € 800 compared to current treatment.
A cost-effectiveness analysis in the field of chronic myeloid leukemia showed that
- the use of the FISH test reduced treatment costs by €12,500. In addition, it improved quality of life.
Reduction of R&D Costs
Personalized medicine may provide more value for money—
- not only because of improved drug effectiveness and reduced toxicity, but
- also because of decreased average research and development costs for new medicines.
Clinical trials are the most expensive part of R&D (nearly 50% of the investment).
The costs of clinical trials seem
- to have risen by one third between 2005 and 2007 due to
- increasing regulatory and other requirements.
Biomarkers may enhance the efficiency of clinical trials of new drugs
- by investing more heavily in early research to identify key biomarkers,
- and in targeting relevant sub-groups of patients.
Smaller (and maybe even shorter) clinical trials are likely to reduce
development costs.
This sounds promising, but since personalized medicine is in its early stages,
- efficiency gains may occur only in the long run. Furthermore,
- companion diagnostics are likely to increase the costs and the complexity of
- the risky process of drug discovery and development.
In oncology, where personalized medicine is currently progressing most rapidly,
- the late stage failure rate of new compounds has historically been higher than in any other area.
A well-known argument against personalized medicines is that
- they will lead to smaller groups of eligible patients and
- therefore lead to higher unit prices in order to deliver competitive return on investment.
Additional value, therefore, “needs to be captured in terms of
- premium pricing, faster adoption or longer effective patent life for a portfolio of targeted drugs,
- to offset the reduction in potential revenues from market stratification”.
However, personalized medicines not only diminish groups of eligible patients,
- they enlarge them as well (see chapter 2).
The following problems still have to be solved:
- The costs of diagnostics are not easy to describe
- The evaluation of cost-effectiveness is difficult
- The regulatory pathway is fragmented
Costs of Diagnostics Are Not Easy to Describe
Implementing personalized medicines in healthcare is potentially a costly investment:
- it requires testing a whole patient population to identify groups of responding patients
- or to screen out patients likely to suffer adverse events or who need different dosing.
Some evaluations have attempted to tie the diagnostic
- to the use of a specific new medicine (“co-dependent technologies”) and
- have in some cases passed the costs of diagnosis on to the medicines developer.
But more and more, multiple tests and multiple personalized medicines for particular
diseases are becoming available. For example, in colorectal cancer and non-small cell
lung cancer,
- a set of parallel tests is to be performed on a number of molecular biomarkers
- to decide between a range of personalized medicines.
But more complex diagnostic and treatment pathways make it
- less easy to ascribe costs of diagnosis to the use of a particular medicine.
Evaluation of Cost-Effectiveness is Difficult
To evaluate the cost-effectiveness of particular molecular diagnostic approaches
is also problematic. Diagnostics are normally supported by
- analytical performance data and rarely by clinical or outcome data,
- but in order to perform a health technology assessment you need the latter.
According to a systematic review, only eight studies evaluated clinical validity, and none of the studies was a prospective evaluation of a test’s clinical utility.
In addition, personalized medicines may also require
- a different view on the clinical evidence available when a medicine is launched.
In some examples, personalized medicines have been approved on the basis of
- a retrospective analysis of clinical data identifying the responsive sub-population.
In other cases, personalised medicines have been launched under
- conditional approval mechanisms on the basis of phase 2 data alone (smaller studies, often
without overall survival end points).
Regulatory Pathway is Fragmented
Discussion
Current health economic research shows that personalization of treatment, i.e.
identifying responders and non-responders, has
- the potential to improve effectiveness and reduce costs.
In addition, biomarker testing may lead to more successful R&D projects.
However, the value of so-called tailor-made therapies depends to a large
extent on the quality of the tailor. In other words,
- adaptations to both regulatory structures and market structures
- are necessary to encourage the development of personalized medicine.
Part 4: Biomarkers as Diagnostics
There has been consolidation going on for over a decade in both the pharmaceutical and the diagnostics industries, and at the same time the playbook for health care delivery is being rewritten. I shall try to work through a clear picture of these events, which are not coincidental.
Key notables:
- A growing segment of the US population is reaching Medicare age
- There is also a large underserved population in both metropolitan and nonurban areas and a fragmentation of the middle class after a growth slowdown in the economy since the 2008 deep recession.
- The deep recession affecting worldwide economies was only buffered by availability of oil or natural gas.
- In addition, there was a self-destructive strategy of cutting spending on a national scale, which withdrew the support that would have bolstered infrastructure renewal.
- The clinical diagnostics industry, despite a long history of being viewed as a loss leader, has had dramatic success; the pharmaceutical industry, by contrast, has recently struggled to introduce new products, leading to more competition in off-patent medications.
- The introduction of the Accountable Care Act has opened opportunities for improved care, despite political opposition, and has probably sustained opportunity in the healthcare market.
Let’s take a look at this three-headed serpent – Pharma, Diagnostics, New Entity.
Where do the patient, the insurer, and the physician fit in?
4 Biomarkers in Medical Practice
4.1 Case in Point 1.
A Solid Prognostic Biomarker
HDL-C: Target of Therapy or Fuggedaboutit?
Steven E. Nissen, MD, MACC, Peter Libby, MD
Steven E. Nissen, MD, MACC: I am Steve Nissen, chairman of the Department of Cardiovascular Medicine at the Cleveland Clinic. I am here with Dr Peter Libby, chief of cardiology at the Brigham and Women’s Hospital and professor of medicine at Harvard Medical School. We are going to discuss high-density lipoprotein cholesterol (HDL-C), a topic that has been very controversial recently. Peter, HDL-C has been a pretty good biomarker. The question is whether it is a good target.
Peter Libby, MD: Since the early days in Berkeley, when they were doing ultracentrifugation, and when it was reinforced and put on the map by the Framingham Study,[1] we have known that HDL-C is an extremely good biomarker of prospective cardiovascular risk with an inverse relationship with all kinds of cardiovascular events. That is as solid a finding as you can get in observational epidemiology. It is a very reliable prospective marker. It’s natural that the pharmaceutical industry and those of us who are interested in risk reduction would focus on HDL-C as a target. That is where the controversies come in.
Dr Nissen: It has been difficult. My view is that the trials that have attempted to modulate HDL-C or the drugs they used have been flawed. Although the results have not been promising, the jury is still out. Torcetrapib, the cholesteryl ester transfer protein (CETP) inhibitor developed by Pfizer, had an off-target toxicity.[2] Niacin is not very effective, and there are a lot of downsides to the drug. That has been an issue, but people are still working on this. We have done some studies. We did our ApoA-1 Milano infusion study[3] about a decade ago, which showed very promising results with respect to shrinking plaques in coronary arteries. I remain open to the possibility that the right drug in the right trial will work.
Dr Libby: What do you do with the genetic data that have come out in the past couple of years? Sekar Kathiresan masterminded and organized an enormous collaboration[4] in which they looked, with contemporary genetics, at whether HDL had the genetic markers of being a causal risk factor. They came up empty-handed.
Dr Nissen: I am cautious about interpreting those data, like I am cautious about interpreting animal studies of atherosclerosis. We have both lived through this problem in which something works extremely well in animals but doesn’t work in humans, or it doesn’t work in animals but it works in humans. The genetic studies don’t seal the fate of HDL. I have an open mind about this. Drugs are complex. They work by complex mechanisms. It is my belief that what we have to do is test these hypotheses in well-designed clinical trials, which are rigorously performed with drugs that are clean—unlike torcetrapib—and don’t have off-target toxicities.
An Unmet Need: High Lp(a) Levels
Dr Nissen: I’m going to push back on that and make a couple of points. The HPS2-THRIVE study was flawed. They studied the wrong people. It was not a good study, and AIM-HIGH[8] was underpowered. I am not putting people on niacin. What do you do with a patient whose Lp(a) is 200 mg/dL?
Dr Libby: I’m waiting for the results of the PCSK9 and anacetrapib studies. You can tell me about evacetrapib.[9]Reducing Lp(a) is an unmet medical need. We both care for kindreds with high Lp(a) levels and premature coronary artery disease. We have no idea what to do with them other than to treat them with statins and lower their LDL-C levels.
Dr Nissen: I have taken a more cautious approach with respect to taking people off of niacin. If I have patients who are doing well and tolerating it (depending on why it was started), I am discontinuing niacin in some people. I am starting very few people on the drug, but I worry about the quality of the trial.
Dr Libby: So you are of the “don’t start don’t stop” school?
Dr Nissen: Yes. It’s difficult when the trial is fatally flawed. There were 11,000 patients from China in this study. I have known for years that if you give niacin to people of Asiatic ethnic descent, they have terrible flushing and they won’t continue the drug. One question is, what was the adherence? The adverse events would have been tolerable had there been efficacy. The concern here is that this study was destined to fail because they studied a low LDL/high HDL population, a group of people for whom niacin just isn’t used.
Triglycerides and HDL: Do We Have It Backwards?
Dr Libby: What about the recent genetic[10] and epidemiologic data that support triglycerides, and apolipoprotein C3 in particular as a causal risk factor? Have we been misled through all of the generations in whom we have been adjusting triglycerides for HDL-C and saying that triglycerides are not a causal risk factor because once we adjust for HDL, the risk goes away? Do you think we got it backwards?
Dr Nissen: The tricky factor here is that because of this intimate inverse relationship between triglycerides and HDL, we may be talking about the same phenomenon. That is one of the reasons that I am not certain we are not going to be able to find a therapy. What if you had a therapy that lowered triglycerides and raised HDL-C? Could that work? Could that combination be favorable? I want answers from rigorous, well-designed clinical trials that ask the right questions in the right populations. I am disappointed, just as I have been disappointed by the fibrate trials.[11,12] There is a class of drugs that raises HDL-C a little and lowers triglycerides a lot.
Dr Nissen: But the gemfibrozil studies (VA-HIT[13] and Helsinki Heart[14]) showed benefit.
The Dyslipidemia Bar Has Been Raised
Dr Libby: Those studies were from the pre-statin era. We both were involved in trials in which patients were on high-dose statins at baseline. Do you think that this is too high a bar?
Dr Nissen: The bar has been raised, and for the pharmaceutical industry, the studies that we need to find out whether lowering triglycerides or raising HDL is beneficial are going to be large. We are doing a study with evacetrapib. It has 12,000 patients. It’s fully enrolled. Evacetrapib is a very clean-looking drug. It doesn’t have such a long biological half-life as anacetrapib, so I am very encouraged that it won’t have that baggage of being around for 2-4 years. We’ve got a couple of shots on goal here. Don’t forget that we have multiple ongoing studies of HDL-C infusion therapies that are still under development. Those have some promise too. The jury is still out.
Dr Libby: We agree on the need to do rigorous, large-scale endpoint trials. Do the biomarker studies, but don’t wait to start the endpoint trial because that’s the proof in the pudding.
Dr Nissen: Exactly. We have had a little controversy about HDL-C. We often agree, but not always, and we may have a different perspective. Thanks for joining me in this interesting discussion of what will continue to be a controversial topic for the next several years until we get the results of the current ongoing trials.
4.2 Case in Point 2.
NSTEMI? Honesty in Coding and Communication?
Melissa Walton-Shirley
The complaint at ER triage: Weakness, fatigue, near syncope of several days’ duration, vomiting, and decreased sensorium.
The findings: O2sat: 88% on room air. BP: 88 systolic. Telemetry: Sinus tachycardia 120 bpm. Blood sugar: 500 mg/dL. Chest X ray: atelectasis. Urinalysis: pyuria. ECG: T-wave-inversion anterior leads. Echocardiography: normal left ventricular ejection fraction (LVEF) and wall motion. Troponin I: 0.3 ng/mL. CT angiography: negative for pulmonary embolism (PE). White blood cell count: 20K with left shift. Blood cultures: positive for Gram-negative rods.
The treatment: Intravenous fluids and IV levofloxacin—changed to ciprofloxacin.
The communication at discharge: “You had a severe urinary-tract infection and grew bacteria in your bloodstream. Also, you’ve had a slight heart attack. See your cardiologist immediately upon discharge – no more than 5 days from now.”
The diagnoses coded at discharge: Urosepsis and non-ST segment elevation MI (NSTEMI) 410.1.
One year earlier: This moderately obese patient was referred to our practice for a preoperative risk assessment. The surgery planned was a technically simple procedure, but due to the need for precise instrumentation, general endotracheal anesthesia (GETA) was being considered. The patient was diabetic, overweight, and short of air. A stress exam was equivocal for CAD due to poor exercise tolerance and suboptimal imaging. Upon further discussion, symptoms were progressive; therefore, cardiac cath was recommended, revealing angiographically normal coronaries and a predictably elevated left ventricular end diastolic pressure (LVEDP) in the mid-20s range. The patient was given a diagnosis of diastolic dysfunction, a prescription for better hypertension control, and in-depth discussion on exercise and the Mediterranean and DASH diets for weight loss. Symptoms improved with a low dose of diuretic. The surgery was completed without difficulty. Upon follow-up visit, the patient felt well, had lost a few pounds, and blood pressure was well controlled.
Five days after ER workup: While out of town, the patient developed profound weakness and went to the ER as described above. Fast forward to our office visit in the designated time frame of “no longer than 5 days’ postdischarge,” where the patient and family asked me about the “slight heart attack” that literally came on the heels of a normal coronary angiogram.
But the patient really didn’t have a “heart attack,” did they? The cardiologist aptly stated that it was likely nonspecific troponin I leak in his progress notes. Yet the hospitalist framed the diagnosis of NSTEMI as item number 2 in the final diagnoses.
The motivations on behalf of personnel who code charts are largely innocent and likely a direct result of the lack of understanding of the coding system on behalf of us as healthcare providers. I have a feeling, though, that hospitals aren’t anxious to correct this misperception, due to an opportunity for increased reimbursement. I contacted a director of a coding department for a large hospital who prefers to remain anonymous. She explained that NSTEMI ICD9 code 410.1 falls in DRG 282 with a weight of .7562. The diagnosis of “demand ischemia,” code 411.89, a slightly less inappropriate code for a nonspecific troponin I leak, falls in DRG 311 with a weight of .5662. To determine reimbursement, one must multiply the weight by the average hospital Medicare base rate of $5370. Keep in mind that each hospital’s base rate and corresponding payment will vary. The difference in reimbursement for a large hospital bill between these two choices for coding is substantial, at over $1000 difference ($4060 vs $3040).
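The arithmetic quoted above is easy to verify. A quick check in Python, using the DRG weights and base rate given in the text:

# Reproducing the reimbursement arithmetic from the text:
# payment = DRG weight x hospital Medicare base rate.
base_rate = 5370.0         # average hospital Medicare base rate ($)
drg_282_weight = 0.7562    # NSTEMI (ICD-9 410.1 -> DRG 282)
drg_311_weight = 0.5662    # demand ischemia (ICD-9 411.89 -> DRG 311)

nstemi_payment = drg_282_weight * base_rate   # ~$4,061
demand_payment = drg_311_weight * base_rate   # ~$3,040
print(f"NSTEMI coding:          ${nstemi_payment:,.0f}")
print(f"Demand-ischemia coding: ${demand_payment:,.0f}")
print(f"Difference:             ${nstemi_payment - demand_payment:,.0f}")

As noted, each hospital’s own base rate will shift both figures, but the roughly $1,020 gap per admission between the two coding choices follows directly from the weights.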
Although hospitals that are already reeling from shrinking revenues will make more money on the front end by coding the troponin leak incorrectly as an NSTEMI, when multiple unnecessary tests are generated to follow up on a nondiagnostic troponin leak, the amount of available Centers for Medicare & Medicaid Services (CMS) reimbursement pie shrinks in the long run. Furthermore, this inappropriate categorization generates extreme concern on behalf of patients and family members that is often never laid to rest. The emotional toll of a “heart-attack” diagnosis has an impact on work fitness, quality of life, cost of medication, and the cost of future testing. If the patient lived for another 100 years, they will likely still list a “heart attack” in their medical history.
As a cardiologist, I resent the loose utilization of one of “my” heart-attack codes when it wasn’t that at all. At discharge, we need to develop a better way of communicating what exactly did happen. Equally important, we need to communicate what exactly didn’t happen as well.
Blood Markers Predict CKD Heart Failure
Published: Oct 3, 2014 | Updated: Oct 3, 2014
Elevated levels of high-sensitivity troponin T (hsTnT) and N-terminal pro-B-type natriuretic peptide (NT-proBNP) strongly predicted heart failure in patients with chronic kidney disease followed for a median of close to 6 years, researchers reported.
Compared with patients with the lowest blood levels of hsTnT, those with the highest had a nearly five-fold higher risk for developing heart failure and the risk was 10-fold higher in patients with the highest NT-proBNP levels compared with those with the lowest levels of the protein, researcher Nisha Bansal, MD, of the University of Washington in Seattle, and colleagues wrote online in the Journal of the American Society of Nephrology.
A separate study, published online in the Journal of the American Medical Association earlier in the week, also examined the comorbid conditions of heart and kidney disease, finding no benefit to the practice of treating cardiac surgery patients who developed acute kidney injury with infusions of the antihypertensive drug fenoldopam.
The study, reported by researcher Giovanni Landoni, MD, of the IRCCS San Raffaele Scientific Institute, Milan, Italy, and colleagues, was stopped early “for futility,” according to the authors, and the incidence of hypotension during drug infusion was significantly higher in patients infused with fenoldopam than placebo (26% vs. 15%; P=0.001).
The study in patients with mild to moderate chronic kidney disease (CKD) was conducted to determine if blood markers could help identify patients at high risk for developing heart failure.
Heart failure is the most common cardiovascular complication among people with renal disease, occurring in about a quarter of CKD patients.
The two markers, hsTnT and NT-proBNP, are associated with overworked cardiac myocytes and have been shown to predict heart failure in the general population.
However, Bansal and colleagues noted, the markers have not been widely used in diagnosing heart failure among patients with CKD due to concerns that reduced renal excretion may raise levels of these markers, so that elevated levels do not reflect an actual increase in heart muscle strain.
To better understand the importance of elevated concentrations of hsTnT and NT-proBNP in CKD patients, the researchers examined their association with incident heart failure events in 3,483 participants in the ongoing observational Chronic Renal Insufficiency Cohort (CRIC) study.
All participants were recruited from June 2003 to August 2008, and all were free of heart failure at baseline. The researchers used Cox regression to examine the association of baseline levels of hsTnT and NT-proBNP with incident heart failure after adjustment for demographic influences, traditional cardiovascular risk factors, markers of kidney disease, pertinent medication use, and mineral metabolism markers.
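For readers unfamiliar with this kind of analysis, here is a minimal sketch of an adjusted Cox model using the Python lifelines library on simulated data. The cohort, covariates, and effect sizes below are hypothetical; this is not the CRIC analysis itself.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
# Hypothetical cohort: a top-quartile hsTnT flag plus two adjustment covariates.
hstnt_q4 = rng.integers(0, 2, n)
age = rng.normal(60, 8, n)
diabetes = rng.integers(0, 2, n)
# Simulate survival times with a higher hazard in the top biomarker quartile.
baseline = rng.exponential(8.0, n)
time = baseline / np.exp(1.2 * hstnt_q4 + 0.02 * (age - 60))
event = (time < 6.0).astype(int)   # heart-failure events within ~6 years
time = np.minimum(time, 6.0)       # administrative censoring at 6 years

df = pd.DataFrame({"time": time, "event": event,
                   "hstnt_q4": hstnt_q4, "age": age, "diabetes": diabetes})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()  # exp(coef) for hstnt_q4 approximates the adjusted hazard ratio

The hazard ratios reported below are the exp(coef) quantities from exactly this kind of model, with many more covariates in the adjustment set.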
At baseline, hsTnT levels ranged from ≤5.0 to 378.7 pg/mL and NT-proBNP levels ranged from ≤5 to 35,000 pg/mL. Compared with patients who had undetectable hsTnT, those in the highest quartile (>26.5 pg/mL) had a significantly higher rate of heart failure (hazard ratio 4.77; 95% CI 2.49-9.14).
Compared with those in the lowest NT-proBNP quintile (<47.6 pg/mL), patients in the highest quintile (>433.0 pg/mL) experienced an almost 10-fold increase in heart failure risk (HR 9.57; 95% CI 4.40-20.83).
The researchers noted that these associations remained robust after adjustment for potential confounders and for the other biomarker, suggesting that while hsTnT and NT-proBNP are complementary, they may be indicative of distinct biological pathways for heart failure.
Even Modest Increases in NT-proBNP Linked to Heart Failure
The findings are consistent with an earlier analysis that included 8,000 patients with albuminuria in the Prevention of REnal and Vascular ENd-stage Disease (PREVEND) study, which showed that hsTnT was associated with incident cardiovascular events, even after adjustment for eGFR and severity of albuminuria.
“Among participants in the CRIC study, those with the highest quartile of detectable hsTnT had a twofold higher odds of left ventricular hypertrophy compared with those in the lowest quartile,” Bansal and colleagues wrote, adding that the findings were similar after excluding participants with any cardiovascular disease at baseline.
Even modest elevations in NT-proBNP were associated with significantly increased rates of heart failure, including in subgroups stratified by eGFR, proteinuria, and diabetic status.
“NT-proBNP regulates blood pressure and body fluid volume by its natriuretic and diuretic actions, arterial dilation, and inhibition of the renin-angiotensin-aldosterone system, and increased levels of this marker likely reflect myocardial stress induced by subclinical changes in volume or pressure, even in persons without clinical disease,” the researchers wrote.
The researchers concluded that further studies are needed to develop and validate risk prediction tools for clinical heart failure in patients with CKD, and to determine the potential role of these two biomarkers in a heart failure risk prediction and prevention strategy.
Fenoldopam ‘Widely Promoted’ in AKI Cardiac Surgery Setting
The JAMA study examined whether the selective dopamine D1 receptor agonist fenoldopam mesylate can reduce the need for dialysis in cardiac surgery patients who develop acute kidney injury (AKI).
Fenoldopam induces vasodilation of the renal, mesenteric, peripheral, and coronary arteries, and, unlike dopamine, it has no significant affinity for D2 receptors, meaning that it theoretically induces greater vasodilation in the renal medulla than in the cortex, the researchers wrote.
“Because of these hemodynamic effects, fenoldopam has been widely promoted for the prevention and therapy of AKI in the United States and many other countries with apparent favorable results in cardiac surgery and other settings,” Landoni and colleagues wrote.
The drug was approved in 1997 by the FDA for the indication of in-hospital, short-term management of severe hypertension. It has not been approved for renal indications, but is commonly used off-label in cardiac surgery patients who develop AKI.
Although a meta analysis of randomized trials, conducted by the researchers, indicated a reduction in the incidence and progression of AKI associated with the treatment, Landoni and colleagues wrote that the absence of a definitive trial “leaves clinicians uncertain as to whether fenoldopam should be prescribed after cardiac surgery to prevent deterioration in renal function.”
To address this uncertainty, the researchers conducted a prospective, randomized, parallel-group trial in 667 patients treated at 19 hospitals in Italy from March 2008 to April 2013.
All patients had been admitted to ICUs after cardiac surgery with early acute kidney injury (≥50% increase of serum creatinine level from baseline or low output of urine for ≥6 hours). A total of 338 received fenoldopam by continuous intravenous infusion for a total of 96 hours or until ICU discharge, while 329 patients received saline infusions.
The primary end point was the rate of renal replacement therapy, and secondary end points included mortality (intensive care unit and 30-day mortality) and the rate of hypotension during study drug infusion.
Study Showed No Benefit, Was Stopped Early
Part 5: Reality of the Diagnostics Industry in the US: Post Human Genome sequencing
5.1 Illumina and Roche
When Illumina Buys Roche: The Dawning Of The Era Of Diagnostics Dominance
Robert J. Easton, Alain J. Gilbert, Olivier Lesueur, Rachel Laing, and Mark Ratner
http://PharmaMedtechBI.com | IN VIVO: The Business & Medicine Report Jul/Aug 2014; 32(7).
- With current technology and resources, a well-funded IVD company can create and pursue a strategy of information gathering and informatics application to create medical knowledge, enabling it to assume the risk and manage certain segments of patients
- We see the first step in the process as the emergence of new specialty therapy companies coming from an IVD legacy, most likely focused in cancer, infection, or critical care
When Illumina Inc. acquired the regulatory consulting firm Myraqa, a specialist in in vitro diagnostics (IVD), in July, the press release announcement characterized the deal as one that would bolster Illumina’s in-house capabilities for clinical readiness and help prepare for its next growth phase in regulated markets. That’s not surprising given the US Food and Drug Administration’s (FDA) approval a year and a half ago of its MiSeq next-generation sequencer for clinical use. But the deal could also suggest Illumina is beginning to move along the path toward taking on clinical risk – that is, eventually
- advising physicians and patients, which would mean facing regulators directly
Such a move – by Illumina, another life sciences tools firm, or an information specialist from the high-tech universe – is inevitable given
- the emerging power of diagnostics and traditional health care players’ reluctance to themselves take on such risk.
Alternatively, we believe that a well-funded diagnostics company could establish this position. Either way, such a champion would establish dominion over, and earn a higher valuation than, less-aggressive players who
- only supply compartmentalized drug and device solutions.
Diagnostics companies have long been dogged by a fundamental issue:
- they are viewed and valued more along the lines of a commodity business than as firms that deliver a unique product or service
- diagnostics companies are in a position to do just that today because they are now advantaged by having access to more data points.
- if they were to cobble together the right capabilities, diagnostics companies would have the ability to turn information into true medical knowledge
Example: the PathGEN PathChip, a nucleic-acid-based platform that detects 296 viruses, bacteria, fungi & parasites
http://ow.ly/d/2GvQ
http://ow.ly/DSORV
This puts the diagnostics player in an unfamiliar realm, where it can ask what value it offers compared with a therapeutic. The key is that diagnostics can now offer unique information and potentially unique tools to capture that information. To do so, the company has to create knowledge from the data it generates, and then supply that knowledge to users who will value and act on it. Complex genomic tests, as much as physical examination, may be the first meaningful touch point for physicians’ classification of disease.
Even if lab tests are more expensive, they are a cheaper means of deciding what to do first for a patient than the trial and error of prescribing medication without adequate information. Information is gaining in value as the amount of treatment data available on genomically characterizable subpopulations increases. In such a circumstance,
it is the ability to perform that advisory function – the leverage of being able to apply a proprietary diagnostics platform and, importantly, the data it generates – that will add tremendous value above what any test provides.
Integrated Diagnostics Inc. and Biodesix Inc., with mass spectrometry, have tools for unraveling disease processes, and numerous players are quite visibly in, or are getting into, the business of providing medical knowledge and clinical decision support in pursuit of a huge payout for those who actually solve important disease mysteries. Of course, one has to ask whether MS/MS is sufficient for the assigned task, and also whether the technology is ready for the kind of workload experienced in a clinical service as opposed to a research setting. My impression (as a reviewer) is that it is not yet time to take this seriously.
Roche has not realized its intent with Ventana: the acquisition failed to deliver on the promise of boosting Roche’s pipeline, which was a significant factor in the high price Roche paid. The combined company was to be “uniquely positioned to further expand Ventana’s business globally and together develop more cost-efficient, differentiated, and targeted medicines.” On the other hand, Biodesix decided to use Veristrat to look back and analyze important trial data to try to ascertain which subset of patients would benefit from ficlatuzumab. The predictive effect for the otherwise unimpressive trial results was observed in both progression-free survival and overall survival endpoints, and encouraged the companies to conduct a proof-of-concept study of ficlatuzumab in combination with Tarceva in advanced non-small cell lung cancer (NSCLC) patients selected using the Veristrat test.
A second phase of IVD evolution will be far more challenging to pharma, when the most accomplished companies begin to assemble and integrate much broader data
sets, thereby gaining knowledge sufficient to actually manage patients and dictate therapy, including drug selection. No individual physician has or will have access to all of this information on thousands of patients, combined with the informatics to tease out from trillions of data points the optimal personalized medical approach. When the IVD-origin knowledge integrator amasses enough data and understanding to guide therapy decisions in large categories, particularly drug choices, it will become more valuable than any of the drug suppliers.
This is an apparent reversal of fortune. The pharmaceutical industry has been considered the valued provider, while the IVD manufacturer has been the low-valued cousin. Now, it is by making drug administration more accurate that the IVD company can control the drug bill, to the detriment of drug developers, by finding algorithms that generate equal-to-innovative-drug outcomes using generics for most patients, thereby limiting the margins of drug suppliers and the upsides for new drug discovery and development.
It is here that there appears to be a misunderstanding of the whole picture of the development of the healthcare industry. The pharmaceutical industry had a high value added only insofar as it could replace market leaders for treatment before or at the time of patent expiration, which largely depended on either introducing a new class of drug or relieving the current drug in its class of undesired toxicities or “side effects”. Otherwise, the drug armamentarium was time-limited to the expiration date. In other words, the value depended on a window of no competition. In addition, as the regulation of healthcare costs tightened under managed care, new products deemed only marginally better could be substituted by “off-patent” drug products.
The other misunderstanding is related to the IVD sector. Laboratory tests in the 1950s were manual, and they could be done by “technicians” who might not have completed specialized training in clinical laboratory sciences. The first sign of progress was the introduction of continuous-flow chemistry, with a sampling probe, tubing to bring the reacting reagents into a photocell, and the timing of the reaction controlled by coiled glass tubing before introducing the colored product into a UV-visible photometer. Within perhaps a decade, the Technicon SMA 12 and 6 instruments were introduced, which could do up to 18 tests from a single sample.
5.2 My Cancer Genome from Vanderbilt University: Matching Tumor Mutations to Therapies & Clinical Trials
Reporter: Aviva Lev-Ari, PhD, RN
http://pharmaceuticalintelligence.com/2014/11/05/my-cancer-genome-from-vanderbilt-university-matching-tumor-mutations-to-therapies-clinical-trials/
GenomOncology and Vanderbilt-Ingram Cancer Center (VICC) today announced a partnership for the exclusive commercial development of a decision support tool based on My Cancer Genome™, an online precision cancer medicine knowledge resource for physicians, patients, caregivers and researchers.
Through this collaboration, GenomOncology and VICC will enhance My Cancer Genome through the development of a new genomics content management tool. The MyCancerGenome.org website will remain free and open to the public. In addition, GenomOncology will develop a decision support tool based on My Cancer Genome™ data that will enable automated interpretation of mutations in the genome of a patient’s tumor, providing actionable results in hours versus days.
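To make concrete what such automated interpretation involves, here is a minimal sketch in Python of a mutation-to-therapy lookup; the knowledge-base entries and function names are illustrative assumptions, not GenomOncology’s actual tool or the My Cancer Genome™ schema.

```python
# Hypothetical sketch of the kind of lookup a decision-support tool built on
# a curated knowledge base might perform. Entries are illustrative only.

# Minimal (gene, alteration) -> candidate therapies knowledge base
KNOWLEDGE_BASE = {
    ("EGFR", "L858R"): ["erlotinib", "gefitinib"],
    ("BRAF", "V600E"): ["vemurafenib", "dabrafenib"],
    ("ALK", "fusion"): ["crizotinib"],
}

def interpret_variants(variants):
    """Map each (gene, alteration) pair from a tumor profile to candidate
    targeted therapies; unmatched variants are reported as such."""
    report = {}
    for gene, alteration in variants:
        therapies = KNOWLEDGE_BASE.get((gene, alteration))
        report[(gene, alteration)] = therapies or ["no targeted therapy on record"]
    return report

if __name__ == "__main__":
    tumor_profile = [("EGFR", "L858R"), ("TP53", "R175H")]
    for variant, therapies in interpret_variants(tumor_profile).items():
        print(variant, "->", ", ".join(therapies))
```

Automating this lookup against a continuously curated knowledge base is what turns a days-long manual literature review into an interpretation returned in hours.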
Vanderbilt-Ingram Cancer Center (VICC) launched My Cancer Genome™ in January 2011 as an integral part of its Personalized Cancer Medicine Initiative, which helps physicians and researchers track the latest developments in precision cancer medicine and connect with clinical research trials. This web-based information tool is designed to quickly educate clinicians on the rapidly expanding list of genetic mutations that impact cancers and to enable research into treatment options based on specific mutations. For more information on My Cancer Genome™, visit www.mycancergenome.org/about/what-is-my-cancer-genome.
Therapies based on the specific genetic alterations that underlie a patient’s cancer not only result in better outcomes but often have fewer adverse reactions.
Up-front fee
A nominal fee covers installation support, configuring the Workbench to your specifications, designing and developing custom report(s), and training your team.
Per sample fee
GenomOncology is paid on signed-out clinical reports. This philosophy aligns GenomOncology with your laboratory, as we are incentivized to offer world-class support and solutions to differentiate your clinical NGS program. There is no annual license fee.
5.3 Clinical Trial Services: Foundation Medicine & EmergingMed to Partner
Reporter: Aviva Lev-Ari, PhD, RN
http://pharmaceuticalintelligence.com/2014/11/03/clinical-trial-services-foundation-medicine-emergingmed-to-partner/
Foundation Medicine and EmergingMed said today that they will partner to offer clinical trial navigation services for health care providers and their patients who have received one of Foundation Medicine’s tumor genomic profiling tests.
The firms will provide concierge services to help physicians
- identify appropriate clinical trials for patients
- based on the results of FoundationOne or FoundationOne Heme.
“By providing clinical trial navigation services, we aim to facilitate
- timely and accurate clinical trial information and enrollment support services for physicians and patients,
- enabling greater access to treatment options based on the unique genomic profile of a patient’s cancer.”
Currently, there are over 800 candidate therapies that target genomic alterations in clinical trials,
- but “patients and physicians must identify and act on relevant options
- when the patient’s clinical profile is aligned with the often short enrollment window for each trial.”
These investigational therapies offer patients whose cancer has progressed or returned following standard treatment a favorable second option after relapse. The new service is unique in notifying physicians and patients when new clinical trials emerge that match a patient’s genomic and clinical profile.
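A minimal sketch of the matching logic such a navigation service implies, assuming a simplified trial record with one required alteration and an enrollment window; the trial IDs and fields are hypothetical, not EmergingMed’s actual system.

```python
# Illustrative trial-matching sketch: filter trials by the patient's genomic
# alterations and by whether the enrollment window is open on a given date.
from datetime import date

TRIALS = [
    {"id": "NCT-A", "alteration": ("EGFR", "L858R"),
     "opens": date(2014, 9, 1), "closes": date(2015, 3, 1)},
    {"id": "NCT-B", "alteration": ("KRAS", "G12C"),
     "opens": date(2014, 1, 1), "closes": date(2014, 6, 1)},
]

def matching_trials(patient_alterations, on_date):
    """Return IDs of trials whose required alteration appears in the
    patient's profile and whose enrollment window is open on `on_date`."""
    return [t["id"] for t in TRIALS
            if t["alteration"] in patient_alterations
            and t["opens"] <= on_date <= t["closes"]]

print(matching_trials({("EGFR", "L858R")}, date(2014, 11, 3)))  # ['NCT-A']
```

The date filter is the point: with over 800 candidate therapies in trials, the value of the service lies in surfacing matches while each trial’s short enrollment window is still open.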
Google signs on to Foundation Medicine cancer Dx by offering tests to employees
By Emily Wasserman
Diagnostics luminary Foundation Medicine ($FMI) is generating some upward momentum, fueled by growing revenues and the success of its clinical tests. Tech giant Google ($GOOG) has taken note and is signing onto the company’s cancer diagnostics by offering them to employees.
Foundation Medicine CEO Michael Pellini said during the company’s Q3 earnings call that Google will start covering its DNA tests for employees and their family members suffering from cancer as part of its health benefits portfolio, Reuters reports.
Both sides stand to benefit from the deal, as Google looks to keep a leg up on Silicon Valley competitors and Foundation Medicine expands its cancer diagnostics platform. Last month, Apple ($AAPL) and Facebook ($FB) announced that they would begin covering the cost of egg freezing for female employees. A diagnostics partnership and attractive health benefits could work wonders for Google’s employee retention rates and bottom line.
In the meantime, Cambridge, MA-based Foundation Medicine is charging full speed ahead with its cancer diagnostics platform after filing for an IPO in September 2013. The company chalked up 6,428 clinical tests during Q3 2014, an eye-popping 149% increase year over year, and brought in total revenue of $16.4 million for the quarter, a 100% leap from a year earlier. Foundation Medicine credits the promising numbers in part to new diagnostic partnerships and extended coverage for its tests.
In January, the company teamed up with Novartis ($NVS) to help the drugmaker evaluate potential candidates for its cancer therapies. In April, Foundation Medicine announced that it would develop a companion diagnostic test for a Clovis Oncology ($CLVS) drug under development to treat patients with ovarian cancer, building on an ongoing collaboration between the two companies.
Foundation Medicine also has its sights set on China’s growing diagnostics market, inking a deal in October with WuXi PharmaTech ($WX) that allows the company to perform lab testing for its FoundationOne assay at WuXi’s Shanghai-based Genome Center.
Foundation Medicine teams with MD Anderson for new trial of cancer Dx
Second study to see if targeted therapy can change patient outcomes
August 15, 2014 | By Joseph Keenan, FierceDiagnostics
Foundation Medicine ($FMI) is teaming up with the MD Anderson Cancer Center in Texas for a new trial of the Cambridge, MA-based company’s molecular diagnostic cancer test, which matches targeted therapies to individual patients.
The study is called IMPACT2 (Initiative for Molecular Profiling and Advanced Cancer Therapy) and is designed to build on results from the first IMPACT study, which found
- 40% of the 1,144 patients enrolled had an identifiable genomic alteration.
The company said that
- by matching specific gene alterations to therapies,
- 27% of patients in the first study responded versus
- 5% with an unmatched treatment, and
- “progression-free survival” was longer in the matched group.
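A quick back-of-envelope check of what those percentages imply in patient counts, using only the figures quoted above:

```python
# Back-of-envelope arithmetic using only the IMPACT1 figures quoted above.
enrolled = 1144
with_alteration = round(0.40 * enrolled)
print(with_alteration)  # ~458 patients with an identifiable genomic alteration

# A 27% response rate with matched therapy vs 5% unmatched means a matched
# patient was roughly 27/5 = 5.4 times as likely to respond.
print(27 / 5)
```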
The FoundationOne molecular diagnostic test
- combines genetic sequencing and data gathering
- to help oncologists choose the best treatment for individual patients.
Costing $5,800 per test, FoundationOne’s technology can uncover a large number of genetic alterations across 200 cancer-related genes,
- blending genomic sequencing, information and clinical practice.
“Based on the IMPACT1 data, a validated, comprehensive profiling approach has already been adopted by many academic and community-based oncology practices,” Vincent Miller, chief medical officer of Foundation Medicine, said in a release. “This study has the potential to yield sufficient evidence necessary to support broader adoption across most newly diagnosed metastatic tumors.”
The company got a boost last month when the New York State Department of Health approved Foundation Medicine’s two initial cancer tests: the FoundationOne test and FoundationOne Heme, which creates a genetic profile for blood cancers. Typically,
- diagnostics companies struggle to win insurance approval for their tests
- even after they gain a regulatory approval, leaving revenue growth relatively flat.
However, Foundation Medicine reported earlier this week its Q2 revenue reached $14.5 million compared to $5.9 million for the same period a year ago. Still,
- net losses continue to soar as the company ramps up
- its commercial and business development operation,
- hitting $13.7 million versus a $10.1 million deficit in the second quarter of 2013.
Oncology
There has been a remarkable transformation in our understanding of
- the molecular genetic basis of cancer and its treatment during the past decade or so.
In depth genetic and genomic analysis of cancers has revealed that
- each cancer type can be sub-classified into many groups based on the genetic profiles and
- this information can be used to develop new targeted therapies and treatment options for cancer patients.
This panel will explore the technologies that are facilitating our understanding of cancer, and
- how this information is being used in novel approaches for clinical development and treatment.
Opening Speaker & Moderator:
Lynda Chin, M.D.
Department Chair, Department of Genomic Medicine
MD Anderson Cancer Center
Panelists:
Roy Herbst, M.D., Ph.D.
Ensign Professor of Medicine and Professor of Pharmacology;
Chief of Medical Oncology, Yale Cancer Center and Smilow Cancer Hospital
Developing new drugs to match patient, disease, and drug: finding the right patient for the right clinical trial
- match patients to drugs
- partnerships: out of 100 screened patients, 10 had the gene and 5 were able to attend the trial; without the biomarker, all 100 patients would have participated and received the WRONG drug for them (except those 5); see the sketch after this list
- patients want to participate in trials close to home, NOT to have to travel; this is now in the protocol
- annotated databases; clinical trial informed consent; adaptive clinical trial design vs. fixed protocol
- even academic MDs can have trouble reading genomics reports
- patients are treated in the community; more training for MDs is needed
- five companies collaborating on a comparison of 6 drugs in the same class
- if the drug exists and you have the patient, you must apply PM (personalized medicine)
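The screening arithmetic in the second note above can be restated in a few lines; the numbers are the panel’s own, and the code is only an illustration.

```python
# Illustrative arithmetic for the screening example in the panel notes:
# 100 patients screened, 10 biomarker-positive, 5 able to enroll.
screened, positive, enrolled = 100, 10, 5

# Without the biomarker, all 100 would be candidates for a drug expected to
# help only the biomarker-positive subset.
mistargeted_without_test = screened - positive
print(mistargeted_without_test)  # 90 patients on the wrong drug

# With the test, only positive patients are considered, and the trial enrolls
# those who can participate locally.
print(f"{enrolled}/{positive} biomarker-positive patients enrolled")
```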
Summary and Perspective:
The current changes in biotechnology have been reviewed, with an open question about the shifting relationship between in vitro diagnostics and biopharmaceuticals and the potential, particularly in cancer and infectious diseases, to add value in targeted therapy by matching patients to the treatment most likely to produce a favorable outcome.
This reviewer does not foresee the major diagnostics leaders moving into the domain of direct patient care, even though there are signals in that direction. The Roche example is perhaps the most interesting: Roche became the elephant in the room after the introduction of Valium, subsequently bought Boehringer Mannheim Diagnostics to gain entry into the IVD market, and established a huge presence in molecular diagnostics early. If it did anything to gain a foothold in the treatment realm, it would more likely forge a relationship with Foundation Medicine. Abbott Laboratories, more than a decade ago, was overextended; it had become the leader in IVD on the strength of its specialty tests, but it ran into difficulties with quality control of its products in the high-volume testing market and ceded ground to Olympus and Roche, and in the mid-volume market to Beckman and Siemens. Of course, DuPont and Kodak, pioneering companies in IVD, both left the market.
The biggest challenge in the long run is the ability to eliminate, in advance, the many treatments that would fail for a large number of patients. Proof of concept has already been achieved. However, looking at the size of the subgroups, we are nowhere near a large-scale endeavor. In addition, much remains to be worked out that is not related to genomic expression by the “classic” model but must take into account emerging knowledge and a greater understanding of the regulation of cell metabolism, not only in cancer but also in chronic inflammatory diseases.