
Posts Tagged ‘Clinical decision support system’

NCCN Shares Latest Expert Recommendations for Prostate Cancer in Spanish and Portuguese

Reporter: Stephen J. Williams, Ph.D.

Currently, many biomedical texts and US government agency guidelines are offered only in English, or in other languages only upon request. Yet Spanish is one of the most widely spoken languages in the world, and medical texts in Spanish would address an under-served need. In addition, Portuguese is the main language of Brazil, the largest country in South America.

The LPBI Group and others have noticed this need for medical translation into other languages. Currently LPBI Group is translating its medical e-book offerings into Spanish (for more details, see https://pharmaceuticalintelligence.com/vision/).

Below is an article on The National Comprehensive Cancer Network’s decision to offer their cancer treatment guidelines in Spanish and Portuguese.

Source: https://www.nccn.org/home/news/newsdetails?NewsId=2871

PLYMOUTH MEETING, PA [8 September, 2021] — The National Comprehensive Cancer Network® (NCCN®)—a nonprofit alliance of leading cancer centers in the United States—announces recently-updated versions of evidence- and expert consensus-based guidelines for treating prostate cancer, translated into Spanish and Portuguese. NCCN Clinical Practice Guidelines in Oncology (NCCN Guidelines®) feature frequently updated cancer treatment recommendations from multidisciplinary panels of experts across NCCN Member Institutions. Independent studies have repeatedly found that following these recommendations correlates with better outcomes and longer survival.

“Everyone with prostate cancer should have access to care that is based on current and reliable evidence,” said Robert W. Carlson, MD, Chief Executive Officer, NCCN. “These updated translations—along with all of our other translated and adapted resources—help us to define and advance high-quality, high-value, patient-centered cancer care globally, so patients everywhere can live better lives.”

Prostate cancer is the second most commonly occurring cancer in men, impacting more than a million people worldwide every year.[1] In 2020, the NCCN Guidelines® for Prostate Cancer were downloaded more than 200,000 times by people outside of the United States. Approximately 47 percent of registered users for NCCN.org are located outside the U.S., with Brazil, Spain, and Mexico among the top ten countries represented.

“NCCN Guidelines are incredibly helpful resources in the work we do to ensure cancer care across Latin America meets the highest standards,” said Diogo Bastos, MD, and Andrey Soares, MD, Chair and Scientific Director of the Genitourinary Group of The Latin American Cooperative Oncology Group (LACOG). The organization has worked with NCCN in the past to develop Latin American editions of the NCCN Guidelines for Breast Cancer, Colon Cancer, Non-Small Cell Lung Cancer, Prostate Cancer, Multiple Myeloma, and Rectal Cancer, and co-hosted a webinar on “Management of Prostate Cancer for Latin America” earlier this year. “We appreciate all of NCCN’s efforts to make sure these gold-standard recommendations are accessible to non-English speakers and applicable for varying circumstances.”

NCCN also publishes NCCN Guidelines for Patients®, containing the same treatment information in non-medical terms, intended for patients and caregivers. The NCCN Guidelines for Patients: Prostate Cancer were found to be among the most trustworthy sources of information online according to a recent international study. These patient guidelines have been divided into two books, covering early and advanced prostate cancer; both have been translated into Spanish and Portuguese as well.

NCCN collaborates with organizations across the globe on resources based on the NCCN Guidelines that account for local accessibility, consideration of metabolic differences in populations, and regional regulatory variation. They can be downloaded free-of-charge for non-commercial use at NCCN.org/global or via the Virtual Library of NCCN Guidelines App. Learn more and join the conversation with the hashtag #NCCNGlobal.


[1] Bray F, Ferlay J, Soerjomataram I, Siegel RL, Torre LA, Jemal A. Global Cancer Statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin, in press. The online GLOBOCAN 2018 database is accessible at http://gco.iarc.fr/, as part of IARC’s Global Cancer Observatory.

About the National Comprehensive Cancer Network

The National Comprehensive Cancer Network® (NCCN®) is a not-for-profit alliance of leading cancer centers devoted to patient care, research, and education. NCCN is dedicated to improving and facilitating quality, effective, efficient, and accessible cancer care so patients can live better lives. The NCCN Clinical Practice Guidelines in Oncology (NCCN Guidelines®) provide transparent, evidence-based, expert consensus recommendations for cancer treatment, prevention, and supportive services; they are the recognized standard for clinical direction and policy in cancer management and the most thorough and frequently-updated clinical practice guidelines available in any area of medicine. The NCCN Guidelines for Patients® provide expert cancer treatment information to inform and empower patients and caregivers, through support from the NCCN Foundation®. NCCN also advances continuing education, global initiatives, policy, and research collaboration and publication in oncology. Visit NCCN.org for more information and follow NCCN on Facebook @NCCNorg, Instagram @NCCNorg, and Twitter @NCCN.

Please see LPBI Group’s efforts in medical text translation and Natural Language Processing of Medical Text at

Read Full Post »

These twelve artificial intelligence innovations are expected to start impacting clinical care by the end of the decade.

Reporter: Gail S. Thornton, M.A.

This article is excerpted from Health IT Analytics, April 11, 2019.

 By Jennifer Bresnick

3.4.14   These twelve artificial intelligence innovations are expected to start impacting clinical care by the end of the decade, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 3: AI in Medicine

April 11, 2019 – There’s no question that artificial intelligence is moving quickly in the healthcare industry.  Even just a few months ago, AI was still a dream for the next generation: something that would start to enter regular care delivery in a couple of decades – maybe ten or fifteen years for the most advanced health systems.

Even Partners HealthCare, the Boston-based giant on the very cutting edge of research and reform, set a ten-year timeframe for artificial intelligence during its 2018 World Medical Innovation Forum, identifying a dozen AI technologies that had the potential to revolutionize patient care within the decade.

But over the past twelve months, research has progressed so rapidly that Partners has blown up that timeline. 

Instead of viewing AI as something still lingering on the distant horizon, this year’s Disruptive Dozen panel was tasked with assessing which AI innovations will be ready to fundamentally alter the delivery of care by 2020 – now less than a year away.

Sixty members of the Partners faculty participated in nominating and narrowing down the tools they think will have an almost immediate benefit for patients and providers, explained Erica Shenoy, MD, PhD, an infectious disease specialist at Massachusetts General Hospital (MGH).

“These are innovations that have a strong potential to make significant advancement in the field, and they are also technologies that are pretty close to making it to market,” she said.

The results include everything from mental healthcare and clinical decision support to coding and communication, offering patients and their providers a more efficient, effective, and cost-conscious ecosystem for improving long-term outcomes.

In order from least to greatest potential impact, here are the twelve artificial intelligence innovations poised to become integral components of the next decade’s data-driven care delivery system.

NARROWING THE GAPS IN MENTAL HEALTHCARE

Nearly twenty percent of US patients struggle with a mental health disorder, yet treatment is often difficult to access and expensive to use regularly.  Reducing barriers to access for mental and behavioral healthcare, especially during the opioid abuse crisis, requires a new approach to connecting patients with services.

AI-driven applications and therapy programs will be a significant part of the answer.

“The promise and potential for digital behavioral solutions and apps is enormous to address the gaps in mental healthcare in the US and across the world,” said David Ahern, PhD, a clinical psychologist at Brigham & Women’s Hospital (BWH). 

Smartphone-based cognitive behavioral therapy and integrated group therapy are showing promise for treating conditions such as depression, eating disorders, and substance abuse.

While patients and providers need to be wary of commercially available applications that have not been rigorously validated and tested, more and more researchers are developing AI-based tools that have the backing of randomized clinical trials and are showing good results.

A panel of experts from Partners HealthCare presents the Disruptive Dozen at WMIF19.

Source: Partners HealthCare

STREAMLINING WORKFLOWS WITH VOICE-FIRST TECHNOLOGY

Natural language processing is already a routine part of many behind-the-scenes clinical workflows, but voice-first tools are expected to make their way into the patient-provider encounter in a new way. 

Smart speakers in the clinic are prepping to relieve clinicians of their EHR burdens, capturing free-form conversations and translating the content into structured documentation.  Physicians and nurses will be able to collect and retrieve information more quickly while spending more time looking patients in the eye.

Patients may benefit from similar technologies at home as the consumer market for virtual assistants continues to grow.  With companies like Amazon achieving HIPAA compliance for their consumer-facing products, individuals may soon have more robust options for voice-first chronic disease management and patient engagement.

IDENTIFYING INDIVIDUALS AT HIGH RISK OF DOMESTIC VIOLENCE

Underreporting makes it difficult to know just how many people suffer from intimate partner violence (IPV), says Bharti Khurana, MD, an emergency radiologist at BWH.  But the symptoms are often hiding in plain sight for radiologists.

Using artificial intelligence to flag worrisome injury patterns or mismatches between patient-reported histories and the types of fractures present on x-rays can alert providers to when an exploratory conversation is called for.

“As a radiologist, I’m very excited because this will enable me to provide even more value to the patient instead of simply evaluating their injuries.  It’s a powerful tool for clinicians and social workers that will allow them to approach patients with confidence and with less worry about offending the patient or the spouse,” said Khurana.

REVOLUTIONIZING ACUTE STROKE CARE

Every second counts when a patient experiences a stroke.  In far-flung regions of the United States and in the developing world, access to skilled stroke care can take hours, drastically increasing the likelihood of significant long-term disability or death.

Artificial intelligence has the potential to close the gaps in access to high-quality imaging studies that can identify the type of stroke and the location of the clot or bleed.  Research teams are currently working on AI-driven tools that can automate the detection of stroke and support decision-making around the appropriate treatment for the individual’s needs.  

In rural or low-resource care settings, these algorithms can compensate for the lack of a specialist on-site and ensure that every stroke patient has the best possible chance of treatment and recovery.

REDUCING ADMINISTRATIVE BURDENS FOR PROVIDERS

The costs of healthcare administration are off the charts. Recent data from the Center for American Progress indicate that providers spend about $282 billion per year on insurance and medical billing, and the burdens are only going to keep getting bigger.

Medical coding and billing is a perfect use case for natural language processing and machine learning.  NLP is well-suited to translating free-text notes into standardized codes, which can move the task off the plates of physicians and reduce the time and effort spent on complying with convoluted regulations.

“The ultimate goal is to help reduce the complexity of the coding and billing process through automation, thereby reducing the number of mistakes – and, in turn, minimizing the need for such intense regulatory oversight,” Partners says.

NLP is already in relatively wide use for this task, and healthcare organizations are expected to continue adopting this strategy as a way to control costs and speed up their billing cycles.

UNLEASHING HEALTH DATA THROUGH INFORMATION EXCHANGE

AI will combine with another game-changing technology, known as FHIR, to unlock siloes of health data and support broader access to health information.

Patients, providers, and researchers will all benefit from a more fluid health information exchange environment, especially since artificial intelligence models are extremely data-hungry.

Stakeholders will need to pay close attention to maintaining the privacy and security of data as it moves across disparate systems, but the benefits have the potential to outweigh the risks.

“It completely depends on how everyone in the medical community advocates for, builds, and demands open interfaces and open business models,” said Samuel Aronson, Executive Director of IT at Partners Personalized Medicine.

“If we all row in the same direction, there’s a real possibility that we will see fundamental improvements to the healthcare system in 3 to 5 years.”

OFFERING NEW APPROACHES FOR EYE HEALTH AND DISEASE

Image-heavy disciplines have started to see early benefits from artificial intelligence since computers are particularly adept at analyzing patterns in pixels.  Ophthalmology is one area that could see major changes as AI algorithms become more accurate and more robust.

From glaucoma to diabetic retinopathy, millions of patients experience diseases that can lead to irreversible vision loss every year.  Employing AI for clinical decision support can extend access to eye health services in low-resource areas while giving human providers more accurate tools for catching diseases sooner.

REAL-TIME MONITORING OF BRAIN HEALTH

The brain is still the body’s most mysterious organ, but scientists and clinicians are making swift progress unlocking the secrets of cognitive function and neurological disease.  Artificial intelligence is accelerating discovery by helping providers interpret the incredibly complex data that the brain produces.

From predicting seizures by reading EEG tests to identifying the beginnings of dementia earlier than any human, artificial intelligence is allowing providers to access more detailed, continuous measurements – and helping patients improve their quality of life.

Seizures can happen in patients with other serious illnesses, such as kidney or liver failure, explained Brandon Westover, MD, PhD, executive director of the Clinical Data Animation Center at MGH, but many providers simply don’t know about it.

“Right now, we mostly ignore the brain unless there’s a special need for suspicion,” he said.  “In a year’s time, we’ll be catching a lot more seizures and we’ll be doing it with algorithms that can monitor patients continuously and identify more ambiguous patterns of dysfunction that can damage the brain in a similar manner to seizures.”

AUTOMATING MALARIA DETECTION IN DEVELOPING REGIONS

Malaria is a daily threat for approximately half the world’s population.  Nearly half a million people died from the mosquito-borne disease in 2017, according to the World Health Organization, and the majority of the victims were children under the age of five.

Deep learning tools can automate the process of quantifying malaria parasites in blood samples, a challenging task for providers working without pathologist partners.  One such tool achieved 90 percent accuracy and specificity, putting it on par with pathology experts.

This type of software can be run on a smartphone hooked up to a camera on a microscope, dramatically expanding access to expert-level diagnosis and monitoring.

AUGMENTING DIAGNOSTICS AND DECISION-MAKING

Artificial intelligence has made especially swift progress in diagnostic specialties, including pathology. AI will continue to speed down the road to maturity in this area, predicts Annette Kim, MD, PhD, associate professor of pathology at BWH and Harvard Medical School.

“Pathology is at the center of diagnosis, and diagnosis underpins a huge percentage of all patient care.  We’re integrating a huge amount of data that funnels through us to come to a diagnosis.  As the number of data points increases, it negatively impacts the time we have to synthesize the information,” she said.

AI can help automate routine, high-volume tasks, prioritize and triage cases to ensure patients are getting speedy access to the right care, and make sure that pathologists don’t miss key information hidden in the enormous volumes of clinical and test data they must comb through every day.

“This is where AI can have a huge impact on practice by allowing us to use our limited time in the most meaningful manner,” Kim stressed.

PREDICTING THE RISK OF SUICIDE AND SELF-HARM

Suicide is the tenth leading cause of death in the United States, claiming 45,000 lives in 2016.  Suicide rates are on the rise due to a number of complex socioeconomic and mental health factors, and identifying patients at the highest risk of self-harm is a difficult and imprecise science.

Natural language processing and other AI methodologies may help providers identify high-risk patients earlier and more reliably.  AI can comb through social media posts, electronic health record notes, and other free-text documents to flag words or concepts associated with the risk of harm.

Researchers also hope to develop AI-driven apps to provide support and therapy to individuals likely to harm themselves, especially teenagers who commit suicide at higher rates than other age groups.

Connecting patients with mental health resources before they reach a time of crisis could save thousands of lives every year.

REIMAGINING THE WORLD OF MEDICAL IMAGING

Radiology is already one of AI’s early beneficiaries, but providers are just at the beginning of what they will be able to accomplish in the next few years as machine learning explodes into the imaging realm.

AI is predicted to bring earlier detection, more accurate assessment of complex images, and less expensive testing for patients across a huge number of clinical areas.

But as leaders in the AI revolution, radiologists also have a significant responsibility to develop and deploy best practices in terms of trustworthiness, workflow, and data protection.

“We certainly feel the onus on the radiology community to make sure we do deliver and translate this into improved care,” said Alexandra Golby, MD, a neurosurgeon and radiologist at BWH and Harvard Medical School.

“Can radiology live up to the expectations?  There are certainly some challenges, including trust and understanding of what the algorithms are delivering.  But we desperately need it, and we want to equalize care across the world.”

Radiologists have been among the first to overcome their trepidation about the role of AI in a changing clinical world, and are eagerly embracing the possibilities of this transformative approach to augmenting human skills.

“All of the imaging societies have opened their doors to the AI adventure,” Golby said.  “The community is very anxious to learn, codevelop, and work with all of the industry partners to turn this technology into truly valuable tools. We’re very optimistic and very excited, and we look forward to learning more about how AI can improve care.”

Source:

https://healthitanalytics.com/news/top-12-artificial-intelligence-innovations-disrupting-healthcare-by-2020

 

Read Full Post »

10:15AM 11/13/2014 – 10th Annual Personalized Medicine Conference at the Harvard Medical School, Boston

Reporter: Aviva Lev-Ari, PhD, RN

 

REAL TIME Coverage of this Conference by Dr. Aviva Lev-Ari, PhD, RN – Director and Founder of LEADERS in PHARMACEUTICAL BUSINESS INTELLIGENCE, Boston http://pharmaceuticalintelligence.com

10:15 a.m. Panel Discussion — IT/Big Data

IT/Big Data

The human genome is composed of 6 billion nucleotides (using the genetic alphabet of T, C, G and A). As the cost of sequencing the human genome is decreasing at a rapid rate, it might not be too far into the future that every human being will be sequenced at least once in their lifetime. The sequence data together with the clinical data are going to be used more and more frequently to make clinical decisions. If that is true, we need to have secure methods of storing, retrieving and analyzing all of these data.  Some people argue that this is a tsunami of data that we are not ready to handle. The panel will discuss the types and volumes of data that are being generated and how to deal with it.

IT/Big Data

   Moderator:

Amy Abernethy, M.D.
Chief Medical Officer, Flatiron

Role of Informatics, SW and HW in PM. Big data and Healthcare

How Lab and Clinics can be connected. Oncologist, Hematologist use labs in clinical setting, Role of IT and Technology in the environment of the Clinicians

Compare Stanford Medical Center and Harvard Medical Center and Duke Medical Center — THREE different models in Healthcare data management

Create novel solutions: Capture the voice of the patient for integration of component: Volume, Veracity, Value

Decisions need to be made in short time frame, documentation added after the fact

No system can be perfect in all aspects

Understanding clinical record for conversion into data bases – keeping quality of data collected

Key Topics

Panelists:

Stephen Eck, M.D., Ph.D.
Vice President, Global Head of Oncology Medical Sciences,
Astellas, Inc.

Small data expert, great advantage to small data. Populations data allows for longitudinal studies,

Big Mac Big Data – Big is Good — Is the data being collected suitable for what it is used for, is it robust, what are its limitations, what does the data analysis mean

Data analysis in Chemical Libraries – now annotated

Diversity data in NOTED by MDs, nuances are very great, Using Medical Records for building Billing Systems

Cases when the data needed is not known or not available — use data that is available — limits the scope of what Valuable solution can be arrived at

In Clinical Trial: needs of researchers, billing clinicians — in one system

Translation of data on disease to data object

Signal to Noise Problem — Thus Big data provided validity and power

 

J. Michael Gaziano, M.D., M.P.H., F.R.C.P.
Scientific Director, Massachusetts Veterans Epidemiology Research
and Information Center (MAVERIC), VA Boston Healthcare System;
Chief Division of Aging, Brigham and Women’s Hospital;
Professor of Medicine, Harvard Medical School

at BWH since 1987 at 75% – push forward the Genomics Agenda, VA system 25% – VA is horizontally data integrated embed research and knowledge — baseline questionnaire 200,000 phenotypes – questionnaire and Genomics data to be integrated, Data hierarchical way to be curated, Simple phenotypes, validate phenotypes, Probability to have susceptibility for actual disease, Genomics Medicine will benefit Clinicians

Data must be of visible quality, collect data via Telephone VA – on Med compliance study, on Ability to tolerate medication

–>>Annotation assisted in building a tool for Neurologist on Alzheimer’s Disease (AlzSWAN knowledge base) (see also Genotator , a Disease-Agnostic Tool for Annotation)

–>>Curation of data is very different than statistical analysis of Clinical Trial Data

–>>Integration of data at the VA and at BWH are two different SUCCESSFUL data integration models; accessing the data also uses a different model

–>>Data extraction from the Big data — an issue

–>>Where the answers are in the data, build algorithms that will pick up causes of disease: Alzheimer’s – very difficult to do

–>>system around all stakeholders: investment in connectivity, moving data, individual silo, HR, FIN, Clinical Research

–>>Biobank data and data quality

 

Krishna Yeshwant, M.D.
General Partner, Google Ventures;
Physician, Brigham and Women’s Hospital

Computer Scientist and Medical Student. Where is the technology going?

Messy situation, interaction IT and HC, Boston and Silicon Valley are focusing on Consumers, Google Engineers interested in developing Medical and HC applications — HUGE interest. Application or Wearable – new companies in this space, from Computer Science world to Medicine – Enterprise level – EMR or Consumer level – Wearable — both areas are very active in Silicon Valley

IT stuff in the hospital is HARDER than IT in any other environment, great progress in last 5 years, security of data, privacy. Sequencing data cost of big data management with highest security

Constrained data vs non-constrained data

Opportunities for Government cooperation as a Lead needed for standardization of data objects

 

Questions from the Podium:

  • Where is the Truth: do we have all the tools or we don’t for Genomic data usage
  • Question on Interoperability
  • Big Valuable data — vs Big data
  • quality, uniform, large cohort, comprehensive Cancer Centers
  • Volume of data can compensate for quality of data
  • Data from Imaging – Quality and interpretation – THREE radiologists will read cancer screening

 

 

 

– See more at: http://personalizedmedicine.partners.org/Education/Personalized-Medicine-Conference/Program.aspx

 

@HarvardPMConf

#PMConf

@SachsAssociates

@Duke_Medicine

@AstellasUS

@GoogleVentures

@harvardmed

@BrighamWomens

@kyeshwant

Read Full Post »

Treatment, Prevention and Cost of Cardiovascular Disease: Current & Predicted Cost of Care and the Potential for Improved Individualized Care Using Clinical Decision Support Systems

Author, and Content Consultant to e-SERIES A: Cardiovascular Diseases: Justin Pearlman, MD, PhD, FACC

Author and Curator: Larry H Bernstein, MD, FACP

and

Curator: Aviva Lev-Ari, PhD, RN

This article has the following FIVE parts:

1. Forecasting the Impact of Heart Failure in the United States : A Policy Statement From the American Heart Association

2. A Case Study from the GENETIC CONNECTIONS — In The Family: Heart Disease Seeking Clues to Heart Disease in DNA of an Unlucky Family

3. Arterial Stiffness and Cardiovascular Events : The Framingham Heart Study

4. Arterial Elasticity in Quest for a Drug Stabilizer: Isolated Systolic Hypertension
caused by Arterial Stiffening Ineffectively Treated by Vasodilatation Antihypertensives

5. Clinical Decision Support Systems: Realtime Clinical Expert Support — Biomarkers of Cardiovascular Disease : Molecular Basis and Practical Considerations

 

1. Forecasting the Impact of Heart Failure in the United States : A Policy Statement From the American Heart Association

PA Heidenreich, NM Albert, LA Allen, DA Bluemke, J Butler, et al. Circulation: Heart Failure 2013;6.
Print ISSN: 1941-3289, Online ISSN: 1941-3297.

Heart failure (HF) poses a major burden on productivity and on national healthcare expenditures

  • among older Americans, more are hospitalized for HF than for any other medical condition.

As the population ages, the prevalence of HF is expected to increase.

The purpose of this report is to

  • provide an in-depth look at how the changing demographics in the United States will impact the prevalence and cost of care for HF for different US populations.

 Projections of HF Prevalence

Prevalence estimates for HF were determined from

 Projections of the US Population With HF From 2010 to 2030 for Different Age Groups

Year | All ages | 18–44 y | 45–64 y | 65–79 y | ≥80 y
2012 | 5,813,262 | 396,578 | 1,907,141 | 2,192,233 | 1,317,310
2015 | 6,190,606 | 402,926 | 1,949,669 | 2,483,853 | 1,354,158
2020 | 6,859,623 | 417,600 | 1,974,585 | 3,004,002 | 1,463,436
2025 | 7,644,674 | 434,635 | 1,969,852 | 3,526,347 | 1,713,840
2030 | 8,489,428 | 450,275 | 2,000,896 | 3,857,729 | 2,180,528

Future Costs of HF

The future costs of HF were estimated by methods developed by the American Heart Association to

  • project the prevalence and costs of HF from 2012 to 2030
  • factor out the costs attributable to comorbid conditions.

The model does this by assuming that

(1) HF prevalence percentages will remain constant by age, sex, and race/ethnicity;

(2) the costs of technological innovation will rise at the current rate.

HF prevalence and costs (direct and indirect) were projected using the following steps:

1. HF prevalence and average cost per person were estimated by age group (18–44, 45–64, 65–79, ≥80 years), gender (male, female), and race/ethnicity (white non-Hispanic, white Hispanic, black, other) [32]. The initial HF cost per person and rate of increase in cost was determined for each demographic group, as a percentage of total healthcare expenditures.

2. Inflation is separately addressed by correcting dollar values from Medical Expenditure Panel Survey (MEPS) to 2010 dollars.

3. Nursing home spending triggered an adjustment. The estimates project the incremental cost of care attributable to heart failure (HF).

4. Total HF population prevalence and costs were projected by multiplying the US Census–projected population of each demographic group by the percentage prevalence and average cost

5. The total work loss and home productivity loss costs were generated by multiplying per capita work days lost attributable to HF by (1) prevalence of HF, (2) the probability of employment given HF (for work loss costs only), (3) mean per capita daily earnings, and (4) US Census population projection counts.
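To make the arithmetic in steps 1 through 5 concrete, here is a minimal Python sketch of the stratified multiply-and-sum; the demographic strata, prevalence fractions, per-person costs, and population counts are illustrative placeholders, not the inputs of the AHA model.

```python
# Minimal sketch of the stratified projection arithmetic in steps 1-5 above.
# All numbers are illustrative placeholders, NOT the AHA model's inputs.

# One entry per age x sex x race/ethnicity stratum:
# (HF prevalence as a fraction, direct cost per HF patient in 2010 dollars, projected population)
strata = {
    ("65-79", "female", "white non-Hispanic"): (0.065, 9_000, 13_500_000),
    ("65-79", "male",   "white non-Hispanic"): (0.080, 9_500, 12_000_000),
    (">=80",  "female", "black"):              (0.120, 11_000, 1_100_000),
    # ... the real model enumerates every demographic group from step 1
}

def project_direct_costs(strata):
    """Step 4: prevalence x population gives patients; patients x per-person cost gives dollars."""
    patients = sum(prev * pop for prev, _, pop in strata.values())
    dollars = sum(prev * pop * cost for prev, cost, pop in strata.values())
    return patients, dollars

def project_work_loss(days_lost_per_capita, prevalence, p_employed, daily_earnings, population):
    """Step 5: per capita work days lost x prevalence x P(employed) x mean daily earnings x population."""
    return days_lost_per_capita * prevalence * p_employed * daily_earnings * population

patients, direct = project_direct_costs(strata)
indirect_work = project_work_loss(days_lost_per_capita=2.0, prevalence=0.024,
                                  p_employed=0.55, daily_earnings=180.0,
                                  population=255_000_000)
print(f"Projected HF patients: {patients:,.0f}")
print(f"Direct medical cost: ${direct/1e9:.2f}B; work-loss cost: ${indirect_work/1e9:.2f}B")
```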

Projections of Indirect Costs

Indirect costs of lost productivity from morbidity and premature mortality were estimated as detailed below.
Morbidity costs represent the value of lost earnings attributable to HF and include loss of work among

  • currently employed individuals and those too sick to work, as well as
  • home productivity loss, which is the value of household services performed by household members who do not receive pay for the services.

Total Costs Attributable to Heart Failure (HF)

Projections of Total Cost of Care ($ Billions) for HF for Different Age Groups of the US Population

Year | Cost category | All | 18–44 | 45–64 | 65–79 | ≥80
2012 | Medical | 20.9 | 0.33 | 3.67 | 8.46 | 8.42
2012 | Indirect: Morbidity | 5.42 | 0.52 | 1.92 | 2.05 | 0.93
2012 | Indirect: Mortality | 4.35 | 0.66 | 2.53 | 0.98 | 0.18
2012 | Total | 30.7 | 1.51 | 8.12 | 11.5 | 9.53
2020 | Medical | 31.1 | 0.43 | 4.58 | 14.2 | 11.8
2020 | Indirect: Morbidity | 7.09 | 0.66 | 2.20 | 3.11 | 1.12
2020 | Indirect: Mortality | 5.39 | 0.79 | 2.89 | 1.49 | 0.22
2020 | Total | 43.6 | 1.88 | 9.67 | 18.8 | 13.2
2030 | Medical | 53.1 | 0.59 | 5.86 | 23.3 | 23.4
2030 | Indirect: Morbidity | 9.80 | 0.91 | 2.54 | 4.48 | 1.87
2030 | Indirect: Mortality | 6.84 | 0.98 | 3.32 | 2.16 | 0.37
2030 | Total | 69.7 | 2.48 | 11.7 | 29.9 | 25.6

Excludes HF care costs that have been attributed to comorbid conditions.

Cost of Care

Total medical costs are projected to increase from $20.9 billion in 2012 to $53.1 billion in 2030, a 2.5-fold increase. Assuming continuation of current hospitalization practices, the majority (80%) of these costs stem from hospitalization, and most of the increase comes from direct costs. Indirect costs are expected to rise as well, but at a lower rate, from $9.8 billion to $16.6 billion, an increase of 69%.

Direct costs (the cost of medical care) are expected to increase at a faster rate than indirect costs (the costs of premature deaths and lost productivity).

The total cost of HF (direct and indirect costs) is expected to increase in 2030 from the current $30.7 billion to at least $69.8 billion. This will amount to $244 for every US adult in 2030.

Thus the burden of HF for the US healthcare system will grow substantially during the next 18 years if current trends continue.

It is estimated that

  • by 2030, the prevalence of HF in the United States will increase by 25%, to 3.0%.
  • >8 million people in the US (1 in every 33) will have HF by 2030.
  • the projected total direct medical costs of HF between 2012 and 2030 (in 2010 dollars) will increase from $21 billion to $53 billion.
  • Total costs, including indirect costs for HF, are estimated to increase from $31 billion in 2012 to $70 billion in 2030.
  • If one assumes all costs of cardiac care for HF patients are attributable to HF
    (no cost attribution to comorbid conditions), the 2030 projected cost estimates of treating patients with HF will be 3-fold higher ($160 billion in direct costs).

Projections can be lowered if action is taken to reduce the health and economic burden of HF. Strategies, plans, and implementation to prevent HF and improve the efficiency of care are needed.

Causes and Stages of HF

If the projections for accelerating HF costs are to be avoided, attention to the different causes of HF and their risk factors is warranted.
HF is a clinical syndrome that results from a variety of cardiac disorders

  1. idiopathic dilated cardiomyopathy
  2. cardiac valvular disease
  3. pericarditis or pericardial effusion
  4. ischemic heart disease
  5. primary or secondary hypertension
  6. renovascular disease
  7. advanced liver disease with decreased venous return
  8. pulmonary hypertension
  9. prolonged hypoalbuminemia with generalized interstitial edema
  10. diabetic nephropathy
  11. heart muscle infiltration disease such as primary or secondary amyloidosis
  12. myocarditis
  13. rhythm disorders
  14. congenital diseases
  15. accidental trauma (war, chest trauma)
  16. toxicities (methamphetamine, cocaine, heavy metals, chemotherapy)

HF generally causes symptoms:

  • shortness of breath
  • fatigue
  • swelling (edema)
  • inability to lay flat (orthopnea, paroxysmal nocturnal dyspnea)
  • possibly cough, wheezing

In the Western world the predominant causes of HF are:

  • coronary artery disease
  • valvular disease
  • hypertension
  • viral, alcohol, methamphetamine or other drug  toxicity cardiomyopathy
  • stress (catechol toxicity, takotsubo “broken heart” cardiomyopathy)
  • atrial fibrillation/rapid heart rates
  • thyroid disease

In 2001, the American College of Cardiology and AHA practice guidelines for chronic HF promoted a classification system that encompasses 4 stages of HF.

  • Stage A: Patients at high risk for developing HF in the future but no functional or structural heart disorder.
  • Stage B: a structural heart disorder but no symptoms.
  • Stage C: previous or current symptoms of heart failure, manageable with medical treatment.
  • Stage D: advanced disease requiring hospital-based support, a heart transplant or palliative care.

Stages A and B are considered precursors to clinical HF and are meant

  1. to alert healthcare providers to known risk factors for HF and
  2. the available therapies aimed at mitigating disease progression.

Stage A patients have risk factors for HF such as hypertension, atherosclerotic heart disease, and/or diabetes mellitus.

Patients with stage B are asymptomatic patients who have  developed structural heart disease from a variety of potential insults to the heart muscle such as myocardial infarction or valvular heart disease.

Stages C and D represent the symptomatic phases of HF, with stage C manageable and stage D failing medical management, resulting in marked symptoms at rest or with minimal activity despite optimal medical therapy.

Therapeutic interventions include:

  • dietary salt restriction and diuretics
  • medications known to prolong survival (beta blockers, ACE inhibitors, aldosterone inhibitors)
  • implantable devices such as pacemakers and defibrillators
  • stoppage of tobacco, toxic drugs, excess alcohol

Classic demographic risk factors for the development of HF include

  • older age, male gender, ethnicity, and low socioeconomic status.
  • comorbid disease states contribute to the development of HF
    • Ischemic heart disease
    • Hypertension

Diabetes mellitus, insulin resistance, and obesity are also linked to HF development,

  • with diabetes mellitus increasing the risk of HF by ≈2-fold in men and up to 5-fold in women.

Smoking remains the single largest preventable cause of disease and premature death in the United States.

Translation of Scientific Evidence into Clinical Practice

In multiple studies, failures to apply evidence-based management strategies are blamed for avoidable hospitalizations and/or deaths from HF.

Improved implementation of guidelines can delay, mitigate or prevent the onset of HF, and improve survival. Performance improvement programs have facilitated the implementation of evidence-based therapies in both hospital and ambulatory care settings.

Care transition programs by hospitals have become more widespread

  • in an effort to reduce avoidable readmissions.

The interventions used by these programs include

  • initiating discharge planning early in the course of hospital care,
  • actively involving patients and families or caregivers in the plan of care,
  • providing new processes and systems that ensure patient understanding of the plan of care before discharge from the hospital, and
  • improving quality of care by continually monitoring adherence to national evidence-based guidelines with appropriate adaptations for individual differences in needs and responses.

In multiple studies, adherence to the HF plan of care was associated with reduced all-cause mortality as well as HF hospitalization.

It is anticipated that care transition programs may increase appropriate admissions while decreasing inappropriate admissions

This would have a potentially beneficial impact on the 30-day all-cause readmission rate that has become

  • a focus of public reporting in pay for performance.

More than a quarter of Medicare spending occurs in the last year of life, and

  • the costs of care during the last 6 months for a patient with HF have been increasing (11% from 2000 to 2007).

Improving end-of-life care cost effectiveness for patients with stage D HF will require ongoing

  • improved prediction of outcomes
  • integration of multiple aspects of care
  • educated examination of alternatives and priorities
  • improved decision-making
  • unbiased allocation of resources and coverage for this process rather than unbalanced coverage favoring catastrophic care

Palliative care, including formal hospice care, is increasingly advocated for patients with advanced HF.
Offering palliative care to patients with HF may lead to

  • more conservative (and less expensive) treatment
  • consistent with many patients’ goals for care

The use of hospice services is growing among the HF population,

  • HF is now the second most common reason for entering hospice
  • but hospice declaration may impose automated restrictions on care that can be an impediment to electing hospice

A recent study of patients in hospice care found that

  • patients with HF were more likely than patients with cancer to use hospice services longer than 6 months or to be discharged from hospice care alive.

Highlights:

1. Increasing incidence and costs of care for heart failure projected from 2012 to 2030

2. Direct costs rising at greater rate than indirect costs

3. American Heart Association has defined 4 stages of HF, the last 2 of which are advanced

4. Stages C & D are clinically overt and contribute to rehospitalization

5. Stage D accounts for a significant use of end-of-life hospice care

6. There are evidence-based guidelines for the provision of coordinated care that are not widely applied at present

Basic questions raised:

1. If stages A & B are under the radar, then what measures can best trigger the use of evidence-based guidelines for care?
2. Why are evidence-based guidelines commonly not deployed?

  • Flaws in the “evidence” due to bias, design errors, limited ability to extrapolate to the patients it should address
  • Delays in education, convincing of caretakers, and deployment
  • Inadequate resources
  • Financial or other disincentives

The arguments for introducing coordinated care and for evidence-based guidelines are strong.

Arguments AGAINST slavish imposition of evidence-based medicine include genetic individuality (what is best on average is not necessarily best for each genetically and behaviorally distinct individual). Strict adherence to evidence-based guidelines also stifles innovative exploration. Nonetheless, deviations from evidence-based plans should be cautious, well-documented, and well-informed, not due to mal-aligned incentives, ignorance, carelessness or error.

The question of when and how to intervene most cost effectively is unanswered. If some patients are salt-sensitive as a contribution to the prevalence of hypertension and heart failure, should EVERYONE be salt restricted or should there be a more concerted effort to define who is salt sensitive? What if it proved more cost-effective to restrict salt intake for everyone, even though many might be fine with high sodium intake, and some might even benefit from or require high sodium intake? Is it reasonable to impose costs, hurdles, even possible harm on some as a cheaper way to achieve “greater good”?
These issues are highly relevant to the proposed emphasis on holistic solutions.

2. A Case Study from the GENETIC CONNECTIONS — In The Family: Heart Disease Seeking Clues to Heart Disease in DNA of an Unlucky Family

By GINA KOLATA   2013.05.13  New York Times

Scientists are studying the genetic makeup of the Del Sontro family for

  • telltale mutations or aberrations in the DNA.

Robin Ashwood, one of Mr. Del Sontro’s sisters, found out she had extensive heart disease even though her electrocardiogram was normal. Six of her seven siblings also have heart disease, despite not having any of the traditional risk factors. Then, after a sister, just 47 years old, found out she had advanced heart disease, Mr. Del Sontro, then 43, went to a cardiologist. An X-ray of his arteries revealed the truth. Like his grandfather, his mother, his four brothers and two sisters, he had heart disease.

Now he and his extended family have joined an extraordinary federal research project that is using genetic sequencing to find factors that increase the risk of heart disease beyond the usual suspects — high cholesterol, high blood pressure, smoking and diabetes. “We don’t know yet how many pathways there are to heart disease,” said Dr. Leslie Biesecker, who directs the study Mr. Del Sontro joined. “That’s the power of genetics. To try and dissect that.”

“I had bought the dream: if you just do the right things and eat the right things, you will be O.K.,” said Mr. Del Sontro, whose cholesterol and blood pressure are reassuringly low.

3. Arterial Stiffness and Cardiovascular Events : The Framingham Heart Study

GF Mitchell, Shih-Jen Hwang, RS Vasan, MG Larson.

Circulation. 2010;121:505-511.  http://circ.ahajournals.org/content/121/4/505
http://dx.doi.org/10.1161/CIRCULATIONAHA.109.886655

Various measures of arterial stiffness and wave reflection have been proposed as cardiovascular risk markers.
Prior studies have not assessed relations of a comprehensive panel of stiffness measures to prognosis.
First-onset major cardiovascular disease events in relation to arterial stiffness

  • pulse wave velocity [PWV]
  • wave reflection
    • augmentation index
    • carotid-brachial pressure amplification)
  • central pulse pressure

were analyzed  in 2232 participants (mean age, 63 years; 58% women) in the Framingham Heart Study by a proportional hazards model. During median follow-up of 7.8 (range, 0.2 to 8.9) years,

  • 151 of 2232 participants (6.8%) experienced an event.

In multivariable models adjusted for

  • age
  • sex
  • systolic blood pressure
  • use of antihypertensive therapy
  • total and high-density lipoprotein cholesterol concentrations
  • smoking
  • presence of diabetes mellitus

higher aortic PWV was associated with a 48% increase in cardiovascular disease risk (hazard ratio, 1.48 per SD; 95% confidence interval, 1.16 to 1.91; P=0.002).

After PWV was added to a standard risk factor model, the integrated discrimination improvement was 0.7% (95% confidence interval, 0.05% to 1.3%; P=0.05).

In contrast,

  • augmentation index,
  • central pulse pressure, and
  • pulse pressure amplification

were not related to cardiovascular disease outcomes in multivariable models.

Higher aortic stiffness assessed by PWV

  • is associated with increased risk for a first cardiovascular event.

Aortic PWV improves risk prediction when added to standard risk factors and may represent

  • a valuable biomarker of cardiovascular disease risk
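For readers who want to see what such an analysis looks like in code, the following is a minimal sketch of a covariate-adjusted Cox proportional hazards fit using the lifelines library; the file name, column names, and per-SD standardization are assumptions for illustration, and the data are hypothetical, not the Framingham cohort.

```python
# Sketch of a covariate-adjusted Cox proportional hazards model of the kind described
# above (risk of a first CVD event per SD of aortic PWV).  Hypothetical data and columns.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # hypothetical file: one row per participant

# Express aortic PWV per standard deviation so the hazard ratio reads "per SD".
df["pwv_per_sd"] = (df["aortic_pwv"] - df["aortic_pwv"].mean()) / df["aortic_pwv"].std()

# sex, antihypertensive_rx, smoker, and diabetes are assumed to be 0/1 coded.
covariates = ["pwv_per_sd", "age", "sex", "systolic_bp", "antihypertensive_rx",
              "total_chol", "hdl_chol", "smoker", "diabetes"]

cph = CoxPHFitter()
cph.fit(df[covariates + ["followup_years", "cvd_event"]],
        duration_col="followup_years", event_col="cvd_event")
cph.print_summary()  # hazard ratios and 95% CIs, e.g. the per-SD hazard ratio for PWV
```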

We shall here visit a recent article by Justin D. Pearlman and Aviva Lev-Ari, PhD, RN, on

Pros and Cons of Drug Stabilizers for Arterial  Elasticity as an Alternative or Adjunct to Diuretics and Vasodilators in the Management of Hypertension, titled

4. Hypertension and Vascular Compliance: 2013 Thought Frontier – An Arterial Elasticity Focus

http://pharmaceuticalintelligence.com/2013/05/11/arterial-elasticity-in-quest-for-a-drug-stabilizer-isolated-systolic-hypertension-caused-by-arterial-stiffening-ineffectively-treated-by-vasodilatation-antihypertensives/

Speaking at the 2013 International Conference on Prehypertension and Cardiometabolic Syndrome, meeting cochair Dr Reuven Zimlichman (Tel Aviv University, Israel) argued that there is a growing number of patients for whom the conventional approaches are inappropriate:

  • the definitions of hypertension
  • the risk-factor tables used to guide treatment

Most antihypertensives today work by producing vasodilation or decreasing blood volume, which may be

  • ineffective treatments for patients in whom average arterial diameter and circulating volume are not the causes of hypertension and as targets of therapy may promote decompensation

In the future, he predicts, “we will have to start looking for a totally different medication that will aim to

  • improve or at least to stabilize arterial elasticity: medication that might affect factors that determine the stiffness of the arteries, like collagen, like fibroblasts.

Those are not the aim of any group of antihypertensive medications today.”

Zimlichman believes existing databases could be used to develop algorithms that focus on

  • inelasticity as a mechanism of hypertensive disease

He also points out that

  • ambulatory blood-pressure-monitoring devices can measure elasticity

http://www.theheart.org/article/1502067.do

A related article was published on the relationship between arterial stiffening and primary hypertension.

Arterial stiffening provides sufficient explanation for primary hypertension.

KH Pettersen, SM Bugenhagen, J Nauman, DA Beard, SW Omholt.

By use of empirically well-constrained computer models describing the coupled function of the baroreceptor reflex and mechanics of the circulatory system, we demonstrate quantitatively that

  • arterial stiffening seems sufficient to explain age-related emergence of hypertension.

Specifically, the model reproduces

  • the empirically observed chronic changes in pulse pressure with age
  • the impaired capacity of hypertensive individuals to regulate short-term changes in blood pressure

The results suggest that a major target for treating chronic hypertension in the elderly  may include

  • the reestablishment of a proper baroreflex response.

http://arxiv.org/abs/1305.0727v2

5. Clinical Decision Support Systems: Realtime Clinical Expert Support: Biomarkers of Cardiovascular Disease — Molecular Basis and Practical Considerations

RS Vasan.  Circulation. 2006;113:2335-2362

http://dx.doi.org/10.1161/CIRCULATIONAHA.104.482570

http://circ.ahajournals.org/content/113/19/2335

Substantial data indicate that CVD is a life course disease that begins with the evolution of risk factors that contribute to

  • subclinical atherosclerosis.

Subclinical disease culminates in overt CVD. The onset of CVD itself portends an adverse prognosis with greater

  • risks of recurrent adverse cardiovascular events, morbidity, and mortality.

Clinical assessment alone has limitations. Clinicians have used additional tools to aid clinical assessment and to enhance their ability to identify the “vulnerable” patient at risk for CVD, as suggested by a recent National Institutes of Health (NIH) panel.

Biomarkers are one such tool, used to better identify high-risk individuals, to diagnose disease conditions promptly, and to guide prognosis and treatment.

Biological marker (biomarker): A laboratory test value that is objectively measured and evaluated as an indicator of

  1. normal biological processes,
  2. pathogenic processes, or
  3. pharmacological responses to a therapeutic intervention.

Type 0 biomarker: A marker of the natural history of a disease

  • Type 0 correlates longitudinally with known clinical indices/predicts outcomes.

Type I biomarker: A marker that captures the effects of a therapeutic intervention

  • Type I assesses an aspect of treatment mechanism of action.

Type 2 biomarker (surrogate end point):  A marker intended to predict outcomes on the basis of

  • epidemiologic
  • therapeutic
  • pathophysiologic or
  • other scientific evidence.

With biomarkers monitoring disease progression or response to therapy, the patient can serve as  his or her own control (follow-up values may be compared to baseline  values).

Costs may be less important for prognostic markers when they are largely restricted to people with disease (total cost = cost per person × number to be tested, plus downstream costs). Some biomarkers (e.g., an exercise stress test) may be used for both diagnostic and prognostic purposes.
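As a toy illustration of that cost expression (all figures below are hypothetical):

```python
# Toy illustration of: total cost = cost per person x number to be tested + downstream costs.
# All figures are hypothetical.
cost_per_test = 45.00                  # dollars per assay
number_tested = 2_000_000              # prognostic use restricted to people with disease
positivity_rate = 0.08                 # fraction of tests that trigger downstream workup
downstream_cost_per_positive = 300.00  # follow-up imaging, visits, repeat testing

total = cost_per_test * number_tested \
        + downstream_cost_per_positive * positivity_rate * number_tested
print(f"Total program cost: ${total / 1e6:.1f} million")  # ~$138.0 million in this example
```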

Generally there are cost differences in establishing a prognostic value versus diagnostic value of a biomarker:

  • prognostic utility typically requires a large sample and a prospective design, whereas
  • diagnostic value often can be determined with a smaller sample in a cross-sectional design

Regardless of the intended use, it is important to remember that biomarkers that do not change disease management

  • cannot affect patient outcome and therefore
  • are unlikely to be cost-effective (judged in terms of quality-adjusted life-years gained).

Typically, for a biomarker to change management, it is important to have evidence that risk reduction strategies should vary with biomarker levels, and/or biomarker-guided management achieves advantages over a management scheme that ignores the biomarker levels.

Typically it means that biomarker levels should be modifiable by therapy.

Gil David and Larry Bernstein have developed, in consultation with Prof. Ronald Coifman, in the Yale University Applied Mathematics Program, a software system that is the equivalent of an intelligent Electronic Health Records Dashboard that

  • provides empirical medical reference and
  • suggests quantitative diagnostics options.

The current design of the Electronic Medical Record (EMR) is a
linear presentation of portions of the record

  • by services
  • by diagnostic method, and
  • by date

to cite examples.

This allows perusal through a graphical user interface (GUI) that

  • partitions the information or necessary reports in a workstation entered by keying to icons.
  • presents decision support

Examples of data partitions include:

  • history
  • medications
  • laboratory reports
  • imaging
  • EKGs

The introduction of a DASHBOARD adds presentation of

  • drug reactions
  • allergies
  • primary and secondary diagnoses, and
  • critical information

about any patient whose record the caregiver needs to access.

A basic issue for such a tool is what information is presented and how it is displayed.

A determinant of the success of this endeavor is whether it

  • facilitates workflow
  • facilitates decision-making process
  • reduces medical error.

Continuing work is in progress in extending the capabilities with model datasets, and sufficient data based on the assumption that computer extraction of data from disparate sources will, in the long run, further improve this process.

For instance, there is synergistic value in finding coincidence of:

  • ST shift on EKG
  • elevated cardiac biomarker (troponin)
  • in the absence of substantially reduced renal function.
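A coincidence rule of that kind is straightforward to express as a small decision-support check. The sketch below is illustrative only; the field names, troponin cutoff, and eGFR threshold are assumptions, not a validated algorithm.

```python
# Sketch of a "coincidence" rule over disparate data elements, as described above:
# ST shift on EKG plus elevated troponin, in the absence of substantially reduced renal function.
# Field names and thresholds are illustrative assumptions, not a validated algorithm.

TROPONIN_UPPER_REF_NG_L = 14.0     # assumed assay-specific upper reference limit
EGFR_SUBSTANTIALLY_REDUCED = 30.0  # assumed cutoff for "substantially reduced" renal function

def flag_coincident_finding(ekg_st_shift: bool, troponin_ng_l: float, egfr_ml_min: float) -> bool:
    """True when ST shift and troponin elevation coincide with preserved renal function."""
    return (
        ekg_st_shift
        and troponin_ng_l > TROPONIN_UPPER_REF_NG_L
        and egfr_ml_min >= EGFR_SUBSTANTIALLY_REDUCED
    )

# Example: the dashboard surfaces the coincidence for clinician review.
if flag_coincident_finding(ekg_st_shift=True, troponin_ng_l=52.0, egfr_ml_min=78.0):
    print("ST shift + troponin elevation with preserved renal function: review for acute ischemia")
```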

Similarly, the conversion of hematology based data into useful clinical information requires the establishment of problem-solving constructs based on the measured data.

The most commonly ordered test used for managing patients worldwide is the hemogram that often incorporates

  • morphologic review of a peripheral smear
  • descriptive statistics

While the hemogram has undergone progressive modification of the measured features over time, the subsequent expansion of the panel of tests has provided a window into the cellular changes in the

  • production
  • release
  • or suppression

of the formed elements from the blood-forming organ into the circulation. In the hemogram one can view data reflecting the characteristics of a broad spectrum of medical conditions.

Progressive modification of the measured features of the hemogram has delineated characteristics expressed as measurements of

  • size
  • density, and
  • concentration

resulting in many characteristic features for classification. In the diagnosis of hematological disorders, the key findings include

  • proliferation of marrow precursors
  • domination of a cell line
  • suppression of hematopoiesis

Other dimensions are created by considering

  • the maturity and size of the circulating cells.

The application of rules-based, automated problem solving should provide a valid approach to

  • the classification and interpretation of the data used to determine a knowledge-based clinical opinion.

The exponential growth of knowledge since the mapping of the human genome has been enabled by parallel advances in applied mathematics that have not been a part of traditional clinical problem solving.

As the complexity of statistical models has increased

  • the dependencies have become less clear to the individual.

Contemporary statistical modeling has a primary goal of finding an underlying structure in studied data sets.
The development of an evidence-based inference engine that can substantially interpret the data at hand and

  • convert it in real time to a “knowledge-based opinion”

could improve clinical decision-making by incorporating into the model

  • multiple complex clinical features as well as onset and duration.

An example of a difficult area for clinical problem solving is found in the diagnosis of Systemic Inflammatory Response Syndrome (SIRS) and associated sepsis. SIRS is a costly diagnosis in hospitalized patients. Failure to diagnose it in a timely manner increases the financial and safety hazard. The early diagnosis of SIRS/sepsis is made by the application of defined criteria by the clinician:

  • temperature
  • heartrate
  • respiratory rate and
  • WBC count

The application of those clinical criteria, however, defines the condition after it has developed, leaving unanswered the hope for

  • a reliable method for earlier diagnosis of SIRS.
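As a point of comparison with the biomarker approaches discussed next, the classic rules-based criteria themselves can be written directly as code. The thresholds below follow the widely published SIRS definition (two or more of abnormal temperature, tachycardia, tachypnea, abnormal WBC count or left shift); the structure is a sketch, not a validated screening tool.

```python
# Sketch of the rules-based SIRS criteria (>= 2 of 4) applied to vital signs and WBC count.
# Thresholds follow the commonly published SIRS definition; this is an illustration of
# rules-based detection, not a validated screening tool.

def sirs_criteria_met(temp_c: float, heart_rate: float, resp_rate: float,
                      wbc_per_uL: float, band_fraction: float = 0.0) -> int:
    """Count how many of the four SIRS criteria are satisfied."""
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,                                     # temperature
        heart_rate > 90,                                                    # heart rate
        resp_rate > 20,                                                     # respiratory rate
        wbc_per_uL > 12_000 or wbc_per_uL < 4_000 or band_fraction > 0.10,  # WBC / left shift
    ]
    return sum(criteria)

def meets_sirs(**observations) -> bool:
    return sirs_criteria_met(**observations) >= 2

# Example: fever plus tachycardia satisfies two criteria.
print(meets_sirs(temp_c=38.6, heart_rate=104, resp_rate=18, wbc_per_uL=9_500))  # True
```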

The early diagnosis of SIRS may possibly be enhanced by the measurement of proteomic biomarkers, including

  • transthyretin
  • C-reactive protein
  • procalcitonin
  • mean arterial pressure

Immature granulocyte (IG) measurement has been proposed as a

  • readily available indicator of the presence of granulocyte precursors (left shift).

The use of such markers, obtained by automated systems in conjunction with innovative statistical modeling, provides

  • a promising support to early accurate decision making.

Such a system aims to reduce medical error by utilizing

  • the conjoined syndromic features of disparate data elements.

How we frame our expectations is important. It determines

  • the data we collect to examine the process.

In the absence of data to support an assumed benefit, there is no proof of validity at whatever cost.

Potential arenas of benefit include:

  • hospital operations
  • nonhospital laboratory studies
  • companies in the diagnostic business
  • planners of health systems

The problem was stated by L.L. Weed in “Idols of the Mind” (Dec 13, 2006):
“a root cause of a major defect in the health care system is that, while we falsely admire and extol the intellectual powers of highly educated physicians, we do not search for the external aids their minds require.” Hospital information technology (HIT) use has been focused on information retrieval, leaving

  • the unaided mind burdened with information processing.

We deal with problems in the interpretation of data presented to the physician, and how the situation could be improved through better

  • design of the software that presents the data.

The computer architecture that the physician uses to view the results is more often than not presented

  • as the designer would prefer, and not as the end-user would like.

In order to optimize the interface for the physician, the system could have a “front-to-back” design, with the call-up for any patient consisting of

  • A dashboard design that presents the crucial information that the physician would likely act on in an easily accessible manner.
  • Each item used has to be closely related to a corresponding criterion needed for a decision (see the hypothetical mapping sketch below).
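A hypothetical sketch of that mapping follows; the decisions and data items named here are illustrative placeholders, not the authors' actual dashboard content.

```python
# Hypothetical "front-to-back" mapping: each dashboard item is tied to the
# decision criterion it supports, so nothing is displayed without a decision
# it informs. Decisions and items are illustrative only.
DASHBOARD = {
    "rule out acute coronary syndrome": ["troponin I", "ECG ST segments", "chest pain onset"],
    "screen for SIRS/sepsis":           ["temperature", "heart rate", "respiratory rate",
                                         "WBC", "immature granulocytes"],
    "assess renal function":            ["creatinine", "eGFR", "urine output"],
}

def panel_for(decision):
    """Return the data items needed for a given decision."""
    return DASHBOARD.get(decision, [])

print(panel_for("screen for SIRS/sepsis"))
```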

Feature Extraction.

Eugene Rypka contributed greatly to clarifying the extraction of features in a series of articles, which

  • set the groundwork for the methods used today in clinical microbiology.

The method he describes is termed S-clustering, and

  • will have a significant bearing on how we can view laboratory data.

He describes S-clustering as extracting features from endogenous data that

  • amplify or maximize structural information to create distinctive classes.

The method classifies by taking the number of features with sufficient variety to generate maps.

The mapping is done by

  • an N-by-N truth table of messages and choices, in which
  • each variable is scaled to assign values for each message choice.

For example, the message for an antibody titer would be converted from 0 + ++ +++ to 0 1 2 3.
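As a minimal sketch of that ordinal compression (the test panel below is illustrative, not one of Rypka's original tables), semi-quantitative readings can be mapped to scaled message values:

```python
# Minimal sketch of the ordinal compression described above: semi-quantitative
# results (0, +, ++, +++) are mapped to scaled message values 0-3 so that many
# raw readings collapse into a small set of classes. The panel is hypothetical.
ORDINAL_SCALE = {"0": 0, "+": 1, "++": 2, "+++": 3}

def compress(results):
    """Convert a dict of test -> semi-quantitative reading into scaled values."""
    return {test: ORDINAL_SCALE[reading] for test, reading in results.items()}

panel = {"antibody_titer": "++", "agglutination": "+", "precipitin": "0"}
print(compress(panel))   # {'antibody_titer': 2, 'agglutination': 1, 'precipitin': 0}
```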

Even though there may be a large number of measured values, the variety is reduced by this compression, at the risk of representing less information.

The main issue is

  • how a combination of variables falls into a table to convey meaningful information.

We are concerned with

  • accurate assignment into uniquely variable groups by information in test relationships.

One determines the effectiveness of each variable by its contribution to information gain in the system. The reference or null set is the class having no information.  Uncertainty in assigning to a classification can be countered by providing sufficient information.
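As a toy illustration of "contribution to information gain" (the data below are hypothetical, not from the article), the Shannon entropy of the class labels can be compared before and after splitting on a single test variable:

```python
# Toy illustration of information gain: entropy of class labels before and
# after splitting on one test variable; the reduction is the information
# gained from that variable. The eight "patients" are hypothetical.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, feature_values):
    base = entropy(labels)
    remainder = 0.0
    for v in set(feature_values):
        subset = [l for l, f in zip(labels, feature_values) if f == v]
        remainder += len(subset) / len(labels) * entropy(subset)
    return base - remainder

labels  = ["yes", "yes", "yes", "no", "no", "no", "no", "yes"]   # class: SIRS?
ig_high = ["yes", "yes", "yes", "no", "no", "no", "yes", "yes"]  # feature: IG elevated?
print(round(information_gain(labels, ig_high), 3))
```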

The possibility for realizing a good model for approximating the effects of factors supported by data used

  • for inference owes much to the discovery of the Kullback-Leibler distance, or “information”, and Akaike
  • found a simple relationship between K-L information and Fisher’s maximized log-likelihood function.

In the last 60 years the application of entropy comparable to

  • the entropy of physics, information, noise, and signal processing,
  • has been fully developed by Shannon, Kullback, and others, and has been integrated with modern statistics,
  • as a result of the seminal work of Akaike, Leo Goodman, Magidson and Vermunt, and work by Coifman.

Akaike pioneered the recognition that the choice of model influences results in a measurable manner. In particular, a larger number of variables promotes further explanation of variance, so a model selection criterion is needed that penalizes for the number of variables when success is measured by explained variance.
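As a brief illustration of this penalty (not taken from the article), Akaike's information criterion can be computed as AIC = 2k - 2 ln(L), where k is the number of fitted parameters and L the maximized likelihood; the toy log-likelihood values below are hypothetical.

```python
# Illustration of the model-selection penalty described above: AIC = 2k - 2*ln(L),
# lower is better. The log-likelihoods are hypothetical toy values.
def aic(k, log_likelihood):
    return 2 * k - 2 * log_likelihood

candidates = {
    "3-variable model":  aic(k=3, log_likelihood=-120.4),
    "10-variable model": aic(k=10, log_likelihood=-118.9),  # fits slightly better...
}
best = min(candidates, key=candidates.get)
print(candidates, "->", best)   # ...but the extra parameters are penalized
```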

Gil David et al. introduced an AUTOMATED processing of the data available to the ordering physician, for which one

  • can anticipate an enormous impact in diagnosis and treatment of perhaps half of the top 20 most common
  • causes of hospital admission that carry a high cost and morbidity.

For example:

  1. anemias (iron deficiency, vitamin B12 and folate deficiency, and hemolytic anemia or myelodysplastic syndrome);
  2. pneumonia; systemic inflammatory response syndrome (SIRS) with or without bacteremia;
  3. multiple organ failure and hemodynamic shock;
  4. electrolyte/acid base balance disorders;
  5. acute and chronic liver disease;
  6. acute and chronic renal disease;
  7. diabetes mellitus;
  8. protein-energy malnutrition;
  9. acute respiratory distress of the newborn;
  10. acute coronary syndrome;
  11. congestive heart failure;
  12. hypertension
  13. disordered bone mineral metabolism;
  14. hemostatic disorders;
  15. leukemia and lymphoma;
  16. malabsorption syndromes; and
  17. cancer(s)[breast, prostate, colorectal, pancreas, stomach, liver, esophagus, thyroid, and parathyroid].
  18. endocrine disorders
  19. prenatal and perinatal diseases

Rudolph RA, Bernstein LH, Babb J: Information-Induction for the diagnosis of
myocardial infarction. Clin Chem 1988;34:2031-2038.

Bernstein LH (Chairman). Prealbumin in Nutritional Care Consensus Group.

Measurement of visceral protein status in assessing protein and energy
malnutrition: standard of care. Nutrition 1995; 11:169-171.

Bernstein LH, Qamar A, McPherson C, Zarich S, Rudolph R. Diagnosis of myocardial infarction:
integration of serum markers and clinical descriptors using information theory.
Yale J Biol Med 1999; 72: 5-13.

Kaplan L.A.; Chapman J.F.; Bock J.L.; Santa Maria E.; Clejan S.; Huddleston D.J.; Reed R.G.;
Bernstein L.H.; Gillen-Goldstein J. Prediction of Respiratory Distress Syndrome using the
Abbott FLM-II amniotic fluid assay. The National Academy of Clinical Biochemistry (NACB)
Fetal Lung Maturity Assessment Project.  Clin Chim Acta 2002; 326(8): 61-68.

Bernstein LH, Qamar A, McPherson C, Zarich S. Evaluating a new graphical ordinal logit method
(GOLDminer) in the diagnosis of myocardial infarction utilizing clinical features and laboratory
data. Yale J Biol Med 1999; 72:259-268.

Bernstein L, Bradley K, Zarich SA. GOLDmineR: Improving models for classifying patients with
chest pain. Yale J Biol Med 2002; 75, pp. 183-198.

Ronald Raphael Coifman and Mladen Victor Wickerhauser. Adapted Waveform Analysis as a Tool for Modeling, Feature Extraction, and Denoising. Optical Engineering, 33(7):2170–2174, July 1994.

R. Coifman and N. Saito. Constructions of local orthonormal bases for classification and regression.
C. R. Acad. Sci. Paris, 319 Série I:191-196, 1994.

Realtime Clinical Expert Support and validation System

We have developed a software system that is the equivalent of an intelligent Electronic Health Records Dashboard that provides empirical medical reference and suggests quantitative diagnostics options. The primary purpose is to gather medical information, generate metrics, analyze them in realtime and provide a differential diagnosis, meeting the highest standard of accuracy. The system builds its unique characterization and provides a list of other patients that share this unique profile, therefore

  • utilizing the vast aggregated knowledge (diagnosis, analysis, treatment, etc.) of the medical community.
  • The main mathematical breakthroughs are provided by accurate patient profiling and inference methodologies
  • in which anomalous subprofiles are extracted and compared to potentially relevant cases (a minimal matching sketch follows below).
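The authors' profiling and inference methods are far more sophisticated, but a minimal, hypothetical sketch of the underlying idea of finding previously seen patients with similar profiles might look like this:

```python
# Hypothetical illustration of profile matching: find previously seen patients
# whose standardized lab profiles lie closest to the new patient's. This is only
# a sketch of the idea, not the system's actual profiling method.
import numpy as np

def nearest_profiles(new_profile, historical_profiles, k=3):
    """Return indices of the k historical profiles closest to new_profile."""
    X = np.asarray(historical_profiles, dtype=float)
    mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-9
    z_hist = (X - mu) / sigma                                  # standardize each analyte
    z_new = (np.asarray(new_profile, dtype=float) - mu) / sigma
    dists = np.linalg.norm(z_hist - z_new, axis=1)
    return np.argsort(dists)[:k]

# Columns (hypothetical): WBC, immature granulocytes %, CRP, lactate
historical = [[7.2, 0.3, 5, 1.1], [15.8, 3.1, 180, 3.9], [12.4, 1.8, 95, 2.2]]
print(nearest_profiles([14.9, 2.7, 160, 3.5], historical, k=2))
```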

As the model grows and its knowledge database is extended, the diagnostic and the prognostic become more accurate and precise.
We anticipate that the effect of implementing this diagnostic amplifier would result in

  • higher physician productivity at a time of great human resource limitations,
  • safer prescribing practices,
  • rapid identification of unusual patients,
  • better assignment of patients to observation, inpatient beds,
    intensive care, or referral to clinic,
  • shortened length of patients ICU and bed days.

The main benefit is a

  1. real time assessment as well as
  2. diagnostic options based on comparable cases,
  3. flags for risk and potential problems

as illustrated in the following case acquired on 04/21/10. The patient was diagnosed by our system with severe SIRS at a grade of 0.61.

Graphical presentation of patient status

The patient was treated for SIRS and the blood tests were repeated during the following week. The full combined record of our system’s assessment of the patient, as derived from the further hematology tests, is illustrated below. The yellow line shows the diagnosis that corresponds to the first blood test (as also shown in the image above). The red line shows the next diagnosis that was performed a week later.

Progression changes in patient ICU stay with SIRS

The MISSIVE(c) system, by Justin Pearlman, is an alternative approach that includes not only automated data retrieval and reformatting of data for decision support, but also an integrated set of tools to speed up analysis, structured for quality and error reduction, coupled to facilitated report generation, incorporation of just-in-time knowledge and group expertise, standards of care, evidence-based planning, and both physician and patient instruction.

See also in Pharmaceutical Intelligence:

The Cost Burden of Disease: U.S. and Michigan. CHRT Brief. January 2010. www.chrt.org

The National Hospital Bill: The Most Expensive Conditions by Payer, 2006. HCUP Brief #59.

Rudolph RA, Bernstein LH, Babb J: Information-Induction for the diagnosis of myocardial infarction. Clin Chem 1988;34:2031-2038.

Bernstein LH, Qamar A, McPherson C, Zarich S, Rudolph R. Diagnosis of myocardial infarction:
integration of serum markers and clinical descriptors using information theory.
Yale J Biol Med 1999; 72: 5-13.

Kaplan L.A.; Chapman J.F.; Bock J.L.; Santa Maria E.; Clejan S.; Huddleston D.J.; Reed R.G.;
Bernstein L.H.; Gillen-Goldstein J. Prediction of Respiratory Distress Syndrome using the Abbott FLM-II amniotic fluid assay. The National Academy of Clinical Biochemistry (NACB) Fetal Lung Maturity Assessment Project.  Clin Chim Acta 2002; 326(8): 61-68.

Bernstein LH, Qamar A, McPherson C, Zarich S. Evaluating a new graphical ordinal logit method (GOLDminer) in the diagnosis of myocardial infarction utilizing clinical features and laboratory
data. Yale J Biol Med 1999; 72:259-268.

Bernstein L, Bradley K, Zarich SA. GOLDmineR: Improving models for classifying patients with chest pain. Yale J Biol Med 2002; 75, pp. 183-198.

Ronald Raphael Coifman and Mladen Victor Wickerhauser. Adapted Waveform Analysis as a Tool for Modeling, Feature Extraction, and Denoising. Optical Engineering 1994; 33(7):2170–2174.

R. Coifman and N. Saito. Constructions of local orthonormal bases for classification and regression. C. R. Acad. Sci. Paris, 319 Série I:191-196, 1994.

W Ruts, S De Deyne, E Ameel, W Vanpaemel, T Verbeemen, and G Storms. Dutch norm data for 13 semantic categories and 338 exemplars. Behavior Research Methods, Instruments, & Computers 2004; 36(3): 506–515.

De Deyne, S Verheyen, E Ameel, W Vanpaemel, MJ Dry, W Voorspoels, and G Storms. Exemplar by feature applicability matrices and other Dutch normative data for semantic concepts. Behavior Research Methods 2008; 40(4): 1030-1048.

Landauer, T. K., Ross, B. H., & Didner, R. S. (1979). Processing visually presented single words: A reaction time analysis [Technical memorandum]. Murray Hill, NJ: Bell Laboratories.

Lewandowsky, S. (1991).

Weed L. Automation of the problem oriented medical record. NCHSR Research Digest Series DHEW. 1977;(HRA)77-3177.

Naegele TA. Letter to the Editor. Amer J Crit Care 1993;2(5):433.

Sheila Nirenberg/Cornell and Chethan Pandarinath/Stanford, “Retinal prosthetic strategy with the capacity to restore normal vision,” Proceedings of the National Academy of Sciences.

Other related articles published in this Open Access Online Scientific Journal include the following:

http://pharmaceuticalintelligence.com/2012/08/13/the-automated-second-opinion-generator/

http://pharmaceuticalintelligence.com/2012/09/21/the-electronic-health-record-how-far-we-have-travelled-and-where-is-journeys-end/

http://pharmaceuticalintelligence.com/2013/02/18/the-potential-contribution-of-informatics-to-healthcare-is-more-than-currently-estimated/

http://pharmaceuticalintelligence.com/2013/05/04/cardiovascular-diseases-decision-support-systems-for-disease-management-decision-making/?goback=%2Egde_4346921_member_239739196

http://pharmaceuticalintelligence.com/2012/08/13/demonstration-of-a-diagnostic-clinical-laboratory-neural-network-agent-applied-to-three-laboratory-data-conditioning-problems/

http://pharmaceuticalintelligence.com/2012/12/17/big-data-in-genomic-medicine/

http://pharmaceuticalintelligence.com/2013/02/13/cracking-the-code-of-human-life-the-birth-of-bioinformatics-and-computational-genomics/

http://pharmaceuticalintelligence.com/2013/04/28/genetics-of-conduction-disease-atrioventricular-av-conduction-disease-block-gene-mutations-transcription-excitability-and-energy-homeostasis/

http://pharmaceuticalintelligence.com/2012/12/10/identification-of-biomarkers-that-are-related-to-the-actin-cytoskeleton/

http://pharmaceuticalintelligence.com/2012/08/14/regression-a-richly-textured-method-for-comparison-and-classification-of-predictor-variables/

http://pharmaceuticalintelligence.com/2012/08/02/diagnostic-evaluation-of-sirs-by-immature-granulocytes/

http://pharmaceuticalintelligence.com/2012/08/01/automated-inferential-diagnosis-of-sirs-sepsis-septic-shock/

http://pharmaceuticalintelligence.com/2012/08/12/1815/

http://pharmaceuticalintelligence.com/2012/08/15/1946/

http://pharmaceuticalintelligence.com/2013/05/13/vinod-khosla-20-doctor-included-speculations-musings-of-a-technology-optimist-or-technology-will-replace-80-of-what-doctors-do/

http://pharmaceuticalintelligence.com/2013/05/05/bioengineering-of-vascular-and-tissue-models/

The Heart: Vasculature Protection – A Concept-based Pharmacological Therapy including THYMOSIN
Aviva Lev-Ari, PhD, RN 2/28/2013
http://pharmaceuticalintelligence.com/2013/02/28/the-heart-vasculature-protection-a-concept-based-pharmacological-therapy-including-thymosin/

FDA Pending 510(k) for The Latest Cardiovascular Imaging Technology
Aviva Lev-Ari, PhD, RN 1/28/2013
http://pharmaceuticalintelligence.com/2013/01/28/fda-pending-510k-for-the-latest-cardiovascular-imaging-technology/

PCI Outcomes, Increased Ischemic Risk associated with Elevated Plasma Fibrinogen not Platelet Reactivity
Aviva Lev-Ari, PhD, RN 1/10/2013
http://pharmaceuticalintelligence.com/2013/01/10/pci-outcomes-increased-ischemic-risk-associated-with-elevated-plasma-fibrinogen-not-platelet-reactivity/

The ACUITY-PCI score: Will it Replace Four Established Risk Scores — TIMI, GRACE, SYNTAX, and Clinical SYNTAX
Aviva Lev-Ari, PhD, RN 1/3/2013
http://pharmaceuticalintelligence.com/2013/01/03/the-acuity-pci-score-will-it-replace-four-established-risk-scores-timi-grace-syntax-and-clinical-syntax/

Coronary artery disease in symptomatic patients referred for coronary angiography: Predicted by Serum Protein Profiles
Aviva Lev-Ari, PhD, RN 12/29/2012
http://pharmaceuticalintelligence.com/2012/12/29/coronary-artery-disease-in-symptomatic-patients-referred-for-coronary-angiography-predicted-by-serum-protein-profiles/

New Definition of MI Unveiled, Fractional Flow Reserve (FFR)CT for Tagging Ischemia
Aviva Lev-Ari, PhD, RN 8/27/2012
http://pharmaceuticalintelligence.com/2012/08/27/new-definition-of-mi-unveiled-fractional-flow-reserve-ffrct-for-tagging-ischemia/

Herceptin Fab (antibody) – light and heavy chains (Photo credit: Wikipedia)

Personalized Medicine (Photo credit: Wikipedia)

Diagnostic of pathogenic mutations. A diagnostic complex is a dsDNA molecule resembling a short part of the gene of interest, in which one of the strands is intact (diagnostic signal) and the other bears the mutation to be detected (mutation signal). In case of a pathogenic mutation, the transcribed mRNA pairs to the mutation signal and triggers the release of the diagnostic signal (Photo credit: Wikipedia)

Read Full Post »

Diagnostics and Biomarkers: Novel Genomics Industry Trends vs Present Market Conditions and Historical Scientific Leaders Memoirs

Larry H Bernstein, MD, FCAP, Author and Curator

This article has two parts:

  • Part 1: Novel Genomics Industry Trends in Diagnostics and Biomarkers vs Present Market Transient Conditions

and

  • Part 2: Historical Scientific Leaders Memoirs

 

Part 1: Novel Genomics Industry Trends in Diagnostics and Biomarkers vs Present Market Transient Conditions

 

Based on “Forging a path from companion diagnostics to holistic decision support”, L.E.K.

Executive Insights, 2013;14(12). http://www.LEK.com

Companion diagnostics and their companion therapies are defined here as a method for identifying

  • LIKELY responders to therapies that are specific for patients with a specific molecular profile.

The result of this statement is that the diagnostics permitted for specific patient types give access to

  • novel therapies that may otherwise not be approved or reimbursed in other, perhaps “similar” patients
  • who lack a matching identification of the key identifier(s) needed to permit that therapy,
  • thus entailing a poor expected response.

The concept is new because:

(1) The diagnoses may be closely related by classical criteria, but at the same time they are
not alike with respect to efficacy of treatment with a standard therapy.
(2) The companion diagnostics is restricted to dealing with a targeted drug-specific question
without regard to other clinical issues.
(3) The efficacy issue it clarifies is reliant on a deep molecular/metabolic insight that was not previously available, except through
emergent genomic/proteomic analysis, which has become available at a rapidly declining cost.

The limitation example given is HER2 testing for use of Herceptin in therapy for non-candidates (HER2 negative patients).
The problem is that the current format is a “one test/one drug” match, but decision support may require a combination of

  • validated biomarkers obtained on a small biopsy sample (technically manageable) with confusing results.

While HER2 negative patients are more likely to be pre-menopausal with a more aggressive tumor than postmenopausal,

  • the HER2 negative designation does not preclude treatment with Herceptin.

So the Herceptin would be given in combination, but with what other drug in a non-candidate?

The point that L.E.K. makes is that, beyond providing highly validated biomarkers linked to approved therapies, it is necessary to pursue more holistic decision support tests that interrogate multiple biomarkers (panels of companion diagnostic markers) and discover signatures for treatments, used together with a broad range of information, such as

  • traditional tests,
  • imaging,
  • clinical trials,
  • outcomes data,
  • EMR data,
  • reimbursement and coverage data.

A comprehensive solution of this nature appears to be some distance from realization. However, is this the direction that will lead to tomorrow’s treatment decision support approaches?

 Surveying the Decision Support Testing Landscape

As a starting point, L.E.K. characterized the landscape of available tests in the U.S. that inform treatment decisions compiled from ~50 leading diagnostics companies operating in the U.S. between 2004-2011. L.E.K. identified more than 200 decision support tests that were classified by test purpose, and more specifically,  whether tests inform treatment decisions for a single drug/class (e.g., companion diagnostics) vs. more holistic treatment decisions across multiple drugs/classes (i.e., multiagent response tests).

 Treatment Decision Support Tests

Companion Diagnostics

  • Single drug/class – predict response/safety or guide dosing of a single drug or class
    – HercepTest (Dako): determines HER2 protein overexpression for Herceptin treatment selection
  • Multiple drugs/classes
    – Vysis ALK Break Apart FISH (Abbott Labs): predicts the NSCLC patient response to Xalkori

Other Decision Support – provide prognostic and predictive information on the benefit of treatment

  • Oncotype Dx (Genomic Health, Inc.): predicts both recurrence of breast cancer and potential patient benefit from chemotherapy regimens
  • PML-RARα (Clarient, Inc.): predicts response to all-trans retinoic acid (ATRA) and other chemotherapy agents
  • TRUGENE (Siemens): measures resistance to multiple HIV-1 anti-retroviral agents

Multi-agent Response – inform targeted therapy class selection by interrogating a panel of biomarkers

  • Target Now (Caris Life Sciences): examines the tumor’s molecular profile to tailor treatment options
  • ResponseDX: Lung (Response Genetics, Inc.): examines multiple biomarkers to guide therapeutic treatment decisions for NSCLC patients

Source: L.E.K. Analysis

Includes IVD and LDT tests from

  1. top-15 IVD test suppliers,
  2. top-four large reference labs,
  3. top-five AP labs, and
  4. top-20 specialty reference labs.

For descriptive purposes only, may not map to exact regulatory labeling

Most tests are companion diagnostics and other decision support tests that provide guidance on

  • single drug/class therapy decisions.

However, holistic decision support tests (e.g., multi-agent response) are growing the fastest at 56% CAGR.
The emergence of multi-agent response tests suggests diagnostics companies are already seeing the need to aggregate individual tests (e.g., companion diagnostics) into panels of appropriate markers addressing a given clinical decision need. L.E.K. believes this trend is likely to continue as

  • increasing numbers of biomarkers become validated for diseases, and
  • multiplexing tools enabling the aggregation of multiple biomarker interrogations into a single test

become deployed in the clinic.

Personalized Medicine Partnerships

L.E.K. also completed an assessment of publicly available personalized medicine partnership activity from 2009-2011 for ~150 leading organizations operating in the U.S. to look at broader decision support trends and emergence of more holistic solutions beyond diagnostic tests.

Survey of partnerships deals was conducted for

  • top-10 academic medical centers research institutions,
  • top-25 biopharma,
  • top-four healthcare IT companies,
  • top-three healthcare imaging companies,
  • top-20 IVD manufacturers,
  • top-20 laboratories,
  • top-10 payers/PBMs,
  • top-15 personalized healthcare companies,
  • top-10 regulatory/guideline entities, and
  • top-20 tools vendors for the period of 01/01/2009 – 12/31/2011.
    Source: Company websites, GenomeWeb, L.E.K. analysis

Across the sample we identified 189 publicly announced partnerships of which ~65% focused on more traditional areas (biomarker discovery, companion diagnostics and targeted therapies). However, a significant portion (~30%) included elements geared towards creating more holistic decision support models.

Partnerships categorized as holistic decision support by L.E.K. were focused on

  • mining large patient datasets (e.g., from payers or providers),
  • molecular profiling (e.g., deploying next-generation sequencing),
  • creating information technology (IT) infrastructure needed to enable holistic decision support models and
  • integrating various datasets to create richer decision support solutions.

Interestingly, holistic decision support partnerships often included stakeholders outside of biopharma and diagnostics such as

  • research tools,
  • payers/PBMs,
  • healthcare IT companies as well as
  • emerging personalized healthcare (PHC) companies (e.g., Knome, Foundation Medicine and 23andMe).

This finding suggests that these new stakeholders will be increasingly important in influencing care decisions going forward.

Holistic Treatment Decision Support

(Each entry lists the technology provider partner, the stakeholder deploying the solution, and the holistic decision support activity, grouped by holistic decision support focus.)

Molecular Profiling

  • Life Technologies with TGEN/US Oncology: sequencing of triple-negative breast cancer patients to identify potential treatment strategies
  • Foundation Medicine with Novartis: deployment of cancer genomics analysis platform to support Novartis clinical research efforts

Predictive genomics

  • Clarient, Inc. (GE Healthcare) with Acorn Research: biomarker profiling of patients within Acorn’s network of providers to support clinical research efforts
  • GenomeQuest with Beth Israel Deaconess Medical Center: whole genome analysis to guide patient management

Outcomes Data Mining

  • AstraZeneca with WellPoint: evaluate comparative effectiveness of selected marketed therapies
  • 23andMe with NIH: leverage information linking drug response and CYP2C9/CYP2C19 variation
  • Pfizer with Medco: leverage patient genotype, phenotype and outcome for treatment decisions and target therapeutics

Healthcare IT Infrastructure

  • IBM with WellPoint: deploy IBM’s Watson-based solution for evidence-based healthcare decision-making support
  • Oracle with Moffitt Cancer Center: deploy Oracle’s informatics platform to store and manage patient medical information

Data Integration

  • Siemens Diagnostics with Susquehanna Health: integration of imaging and laboratory diagnostics
  • Cernostics with Geisinger Health: integration of advanced tissue diagnostics, digital pathology, annotated biorepository and EMR to create next-generation treatment decision support solutions
  • CardioDx with GE Healthcare: integration of genomics with imaging data in CVD

Implications

L.E.K. believes the likely debate won’t center on which models and companies will prevail. It appears that the industry is now moving along the continuum to a truly holistic capability.
The mainstay of personalized medicine today will become integrated and enhanced by other data.

The companies that succeed will be able to capture vast amounts of information

  • and synthesize it for personalized care.

Holistic models will be powered by increasingly larger datasets and sophisticated decision-making algorithms.
This will require the participation of an increasingly broad range of participants to provide the

  • science, technologies, infrastructure and tools necessary for deployment.

There are a number of questions posed by this study, but only some are of interest to this discussion:

Group A.    Pharmaceuticals and Devices

  •  How will holistic decision support impact the landscape?
    (e.g., treatment/testing algorithms, decision making, clinical trials)

Group B.     Diagnostics and   Decision Support

  •   What components will be required to build out holistic solutions?

– Testing technologies

– Information (e.g., associations, outcomes, trial databases, records)

– IT infrastructure for data integration and management, simulation and reporting

  •  How can various components be brought together to build seamless holistic  decision support solutions?

Group C.      Providers and Payers

  •  In which areas should models be deployed over time?
  • Where are clinical and economic arguments  most compelling?

Part 2: Historical Scientific Leaders Memoirs – Realtime Clinical Expert Support

Gil David and Larry Bernstein have developed, in consultation with Prof. Ronald Coifman,
in the Yale University Applied Mathematics Program,

A software system that is the equivalent of an intelligent Electronic Health Records Dashboard that

  • provides empirical medical reference and
  • suggests quantitative diagnostics options.

The current design of the Electronic Medical Record (EMR) is a linear presentation of portions of the record

  • by services
  • by diagnostic method, and
  • by date, to cite examples.

This allows perusal through a graphical user interface (GUI) that partitions the information or necessary reports

  • in a workstation entered by keying to icons.

This requires that the medical practitioner finds the

  • history,
  • medications,
  • laboratory reports,
  • cardiac imaging and
  • EKGs, and
  • radiology in different workspaces.

The introduction of a DASHBOARD has allowed a presentation of

  • drug reactions
  • allergies
  • primary and secondary diagnoses, and
  • critical information

about any patient for the caregiver needing access to the record.

The advantage of this innovation is obvious.  The startup problem is what information is presented and

  • how it is displayed, which is a source of variability and a key to its success.

We are proposing an innovation that supersedes the main design elements of a DASHBOARD and utilizes

  • the conjoined syndromic features of the disparate data elements.

So the important determinant of the success of this endeavor is that

  • it facilitates both the workflow and the decision-making process with a reduction of medical error.

Continuing work is in progress in extending the capabilities with model datasets, and sufficient data because

  • the extraction of data from disparate sources will, in the long run, further improve this process.

For instance, consider the finding of ST depression on EKG coincident with an elevated cardiac biomarker (troponin), particularly in the absence of substantially reduced renal function. Similarly, the conversion of hematology-based data into useful clinical information requires the establishment of problem-solving constructs based on the measured data.

The most commonly ordered test used for managing patients worldwide is the hemogram that often incorporates

  • the review of a peripheral smear.

While the hemogram has undergone progressive modification of the measured features over time, the subsequent expansion of the panel of tests has provided a window into the cellular changes in the

  • production
  • release
  • or suppression

of the formed elements from the blood-forming organ into the circulation. In the hemogram one can view

  • data reflecting the characteristics of a broad spectrum of medical conditions.

Progressive modification of the measured features of the hemogram has delineated characteristics expressed as measurements of

  • size
  • density, and
  • concentration,

resulting in many characteristic features of classification. In the diagnosis of hematological disorders

  • proliferation of marrow precursors, the
  • domination of a cell line, and features of
  • suppression of hematopoiesis

provide a two dimensional model.  Other dimensions are created by considering

  • the maturity of the circulating cells.

The application of rules-based, automated problem solving should provide a valid approach to

  • the classification and interpretation of the data used to determine a knowledge-based clinical opinion.

The exponential growth of knowledge since the mapping of the human genome has been enabled by parallel advances in applied mathematics that have not been a part of traditional clinical problem solving.

As the complexity of statistical models has increased

  • the dependencies have become less clear to the individual.

Contemporary statistical modeling has a primary goal of finding an underlying structure in studied data sets.
The development of an evidence-based inference engine that can substantially interpret the data at hand and

  • convert it in real time to a “knowledge-based opinion”

could improve clinical decision-making by incorporating

  • multiple complex clinical features as well as duration of onset into the model.

An example of a difficult area for clinical problem solving is found in the diagnosis of SIRS and associated sepsis. SIRS (and associated sepsis) is a costly diagnosis in hospitalized patients. Failure to diagnose sepsis in a timely manner creates a potential financial and safety hazard. The early diagnosis of SIRS/sepsis is made by the application of defined criteria by the clinician:

  • temperature
  • heart rate
  • respiratory rate and
  • WBC count

The application of those clinical criteria, however, defines the condition after it has developed and

  • has not provided a reliable method for the early diagnosis of SIRS.

The early diagnosis of SIRS may possibly be enhanced by the measurement of proteomic biomarkers, including

  • transthyretin
  • C-reactive protein
  • procalcitonin
  • mean arterial pressure

Immature granulocyte (IG) measurement has been proposed as a

  • readily available indicator of the presence of granulocyte precursors (left shift).

The use of such markers, obtained by automated systems

  • in conjunction with innovative statistical modeling, provides
  • a promising approach to enhance workflow and decision making.

Such a system utilizes the conjoined syndromic features of

  • disparate data elements with an anticipated reduction of medical error.

How we frame our expectations is so important that it determines

  • the data we collect to examine the process.

In the absence of data to support an assumed benefit, there is no proof of validity at whatever cost.
This has meaning for

  • hospital operations,
  • for nonhospital laboratory operations,
  • for companies in the diagnostic business, and
  • for planning of health systems.

The problem was stated by L.L. Weed in “Idols of the Mind” (Dec 13, 2006): “a root cause of a major defect in the health care system is that, while we falsely admire and extol the intellectual powers of highly educated physicians, we do not search for the external aids their minds require”. HIT use has been

  • focused on information retrieval, leaving
  • the unaided mind burdened with information processing.

We deal with problems in the interpretation of data presented to the physician, and how through better

  • design of the software that presents this data the situation could be improved.

The computer architecture that the physician uses to view the results is more often than not presented

  • as the designer would prefer, and not as the end-user would like.

In order to optimize the interface for the physician, the system would have a “front-to-back” design, with
the call up for any patient ideally consisting of a dashboard design that presents the crucial information

  • that the physician would likely act on in an easily accessible manner.

The key point is that each item used has to be closely related to a corresponding criterion needed for a decision.

Feature Extraction.

This further breakdown in the modern era is determined by genetically characteristic gene sequences
that are transcribed into what we measure.  Eugene Rypka contributed greatly to clarifying the extraction
of features in a series of articles, which

  • set the groundwork for the methods used today in clinical microbiology.

The method he describes is termed S-clustering, and

  • will have a significant bearing on how we can view laboratory data.

He describes S-clustering as extracting features from endogenous data that

  • amplify or maximize structural information to create distinctive classes.

The method classifies by taking the number of features

  • with sufficient variety to map into a theoretic standard.

The mapping is done by

  • a truth table, and each variable is scaled to assign values for each message choice.

The number of messages and the number of choices forms an N-by-N table. He points out that the message

  • choice in an antibody titer would be converted from 0 + ++ +++ to 0 1 2 3.

Even though there may be a large number of measured values, the variety is reduced

  • by this compression, even though there is risk of loss of information.

Yet the real issue is how a combination of variables falls into a table with meaningful information. We are concerned with accurate assignment into uniquely variable groups by information in test relationships. One determines the effectiveness of each variable by

  • its contribution to information gain in the system.

The reference or null set is the class having no information.  Uncertainty in assigning to a classification is

  • only relieved by providing sufficient information.

The possibility for realizing a good model for approximating the effects of factors supported by data used

  • for inference owes much to the discovery of the Kullback-Leibler distance or “information”, and Akaike
  • found a simple relationship between K-L information and Fisher’s maximized log-likelihood function.

In the last 60 years the application of entropy comparable to

  • the entropy of physics, information, noise, and signal processing,
  • has been fully developed by Shannon, Kullback, and others, and has been integrated with modern statistics,
  • as a result of the seminal work of Akaike, Leo Goodman, Magidson and Vermunt, and work by Coifman.

Gil David et al. introduced an AUTOMATED processing of the data available to the ordering physician and

  • can anticipate an enormous impact in diagnosis and treatment of perhaps half of the top 20 most common
  • causes of hospital admission that carry a high cost and morbidity.

For example: anemias (iron deficiency, vitamin B12 and folate deficiency, and hemolytic anemia or myelodysplastic syndrome); pneumonia; systemic inflammatory response syndrome (SIRS) with or without bacteremia; multiple organ failure and hemodynamic shock; electrolyte/acid base balance disorders; acute and chronic liver disease; acute and chronic renal disease; diabetes mellitus; protein-energy malnutrition; acute respiratory distress of the newborn; acute coronary syndrome; congestive heart failure; disordered bone mineral metabolism; hemostatic disorders; leukemia and lymphoma; malabsorption syndromes; and cancer(s)[breast, prostate, colorectal, pancreas, stomach, liver, esophagus, thyroid, and parathyroid].

Rudolph RA, Bernstein LH, Babb J: Information-Induction for the diagnosis of myocardial infarction. Clin Chem 1988;34:2031-2038.

Bernstein LH (Chairman). Prealbumin in Nutritional Care Consensus Group.

Measurement of visceral protein status in assessing protein and energy malnutrition: standard of care. Nutrition 1995; 11:169-171.

Bernstein LH, Qamar A, McPherson C, Zarich S, Rudolph R. Diagnosis of myocardial infarction: integration of serum markers and clinical descriptors using information theory. Yale J Biol Med 1999; 72: 5-13.

Kaplan L.A.; Chapman J.F.; Bock J.L.; Santa Maria E.; Clejan S.; Huddleston D.J.; Reed R.G.; Bernstein L.H.; Gillen-Goldstein J. Prediction of Respiratory Distress Syndrome using the Abbott FLM-II amniotic fluid assay. The National Academy of Clinical Biochemistry (NACB) Fetal Lung Maturity Assessment Project.  Clin Chim Acta 2002; 326(8): 61-68.

Bernstein LH, Qamar A, McPherson C, Zarich S. Evaluating a new graphical ordinal logit method (GOLDminer) in the diagnosis of myocardial infarction utilizing clinical features and laboratory data. Yale J Biol Med 1999; 72:259-268.

Bernstein L, Bradley K, Zarich SA. GOLDmineR: Improving models for classifying patients with chest pain. Yale J Biol Med 2002; 75, pp. 183-198.

Ronald Raphael Coifman and Mladen Victor Wickerhauser. Adapted Waveform Analysis as a Tool for Modeling, Feature Extraction, and Denoising. Optical Engineering, 33(7):2170–2174, July 1994.

R. Coifman and N. Saito. Constructions of local orthonormal bases for classification and regression. C. R. Acad. Sci. Paris, 319 Série I:191-196, 1994.

Realtime Clinical Expert Support and validation System

We have developed a software system that is the equivalent of an intelligent Electronic Health Records Dashboard that provides empirical medical reference and suggests quantitative diagnostics options.

The primary purpose is to

  1. gather medical information,
  2. generate metrics,
  3. analyze them in realtime and
  4. provide a differential diagnosis,
  5. meeting the highest standard of accuracy.

The system builds its unique characterization and provides a list of other patients that share this unique profile, therefore utilizing the vast aggregated knowledge (diagnosis, analysis, treatment, etc.) of the medical community. The

  • main mathematical breakthroughs are provided by accurate patient profiling and inference methodologies
  • in which anomalous subprofiles are extracted and compared to potentially relevant cases.

As the model grows and its knowledge database is extended, the diagnostic and the prognostic become more accurate and precise. We anticipate that the effect of implementing this diagnostic amplifier would result in

  • higher physician productivity at a time of great human resource limitations,
  • safer prescribing practices,
  • rapid identification of unusual patients,
  • better assignment of patients to observation, inpatient beds,
    intensive care, or referral to clinic,
  • shortened length of patients ICU and bed days.

The main benefit is a real time assessment as well as diagnostic options based on

  • comparable cases,
  • flags for risk and potential problems

as illustrated in the following case acquired on 04/21/10. The patient was diagnosed by our system with severe SIRS at a grade of 0.61 .

Graphical presentation of patient status

The patient was treated for SIRS and the blood tests were repeated during the following week. The full combined record of our system’s assessment of the patient, as derived from the further hematology tests, is illustrated below. The yellow line shows the diagnosis that corresponds to the first blood test (as also shown in the image above). The red line shows the next diagnosis that was performed a week later.

Progression changes in patient ICU stay with SIRS

Chemistry of Herceptin [Trastuzumab] is explained with images in

http://www.chm.bris.ac.uk/motm/herceptin/index_files/Page450.htm

 

REFERENCES

The Cost Burden of Disease: U.S. and Michigan. CHRT Brief. January 2010. www.chrt.org

The National Hospital Bill: The Most Expensive Conditions by Payer, 2006. HCUP Brief #59.

Rudolph RA, Bernstein LH, Babb J: Information-Induction for the diagnosis of myocardial infarction. Clin Chem 1988;34:2031-2038.

Bernstein LH, Qamar A, McPherson C, Zarich S, Rudolph R. Diagnosis of myocardial infarction: integration of serum markers and clinical descriptors using information theory. Yale J Biol Med 1999; 72: 5-13.

Kaplan L.A.; Chapman J.F.; Bock J.L.; Santa Maria E.; Clejan S.; Huddleston D.J.; Reed R.G.; Bernstein L.H.; Gillen-Goldstein J. Prediction of Respiratory Distress Syndrome using the Abbott FLM-II amniotic fluid assay. The National Academy of Clinical Biochemistry (NACB) Fetal Lung Maturity Assessment Project.  Clin Chim Acta 2002; 326(8): 61-68.

Bernstein LH, Qamar A, McPherson C, Zarich S. Evaluating a new graphical ordinal logit method (GOLDminer) in the diagnosis of myocardial infarction utilizing clinical features and laboratory data. Yale J Biol Med 1999; 72:259-268.

Bernstein L, Bradley K, Zarich SA. GOLDmineR: Improving models for classifying patients with chest pain. Yale J Biol Med 2002; 75, pp. 183-198.

Ronald Raphael Coifman and Mladen Victor Wickerhauser. Adapted Waveform Analysis as a Tool for Modeling, Feature Extraction, and Denoising. Optical Engineering 1994; 33(7):2170–2174.

R. Coifman and N. Saito. Constructions of local orthonormal bases for classification and regression. C. R. Acad. Sci. Paris, 319 Série I:191-196, 1994.

W Ruts, S De Deyne, E Ameel, W Vanpaemel,T Verbeemen, And G Storms. Dutch norm data for 13 semantic categories and 338 exemplars. Behavior Research Methods, Instruments, & Computers 2004; 36 (3): 506–515.

De Deyne, S Verheyen, E Ameel, W Vanpaemel, MJ Dry, WVoorspoels, and G Storms.  Exemplar by feature applicability matrices and other Dutch normative data for semantic concepts.  Behavior Research Methods 2008; 40 (4): 1030-1048

Landauer, T. K., Ross, B. H., & Didner, R. S. (1979). Processing visually presented single words: A reaction time analysis [Technical memorandum].  Murray Hill, NJ: Bell Laboratories. Lewandowsky, S. (1991).

Weed L. Automation of the problem oriented medical record. NCHSR Research Digest Series DHEW. 1977;(HRA)77-3177.

Naegele TA. Letter to the Editor. Amer J Crit Care 1993;2(5):433.

Retinal prosthetic strategy with the capacity to restore normal vision, Sheila Nirenberg and Chethan Pandarinath

http://www.pnas.org/content/109/37/15012

 

Other related articles published in http://pharmaceuticalintelligence.com include the following:

 

  • The Automated Second Opinion Generator

Larry H Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2012/08/13/the-automated-second-opinion-generator/

 

  • The electronic health record: How far we have travelled and where is journeys end

Larry H Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2012/09/21/the-electronic-health-record-how-far-we-have-travelled-and-where-is-journeys-end/

 

  • The potential contribution of informatics to healthcare is more than currently estimated.

Larry H Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2013/02/18/the-potential-contribution-of-informatics-to-healthcare-is-more-than-currently-estimated/

 

  • Clinical Decision Support Systems for Management Decision Making of Cardiovascular Diseases

Justin Pearlman, MD, PhD, FACC and Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2013/05/04/cardiovascular-diseases-decision-support-systems-for-disease-management-decision-making/

 

  • Demonstration of a diagnostic clinical laboratory neural network applied to three laboratory data conditioning problems

Larry H Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2012/08/13/demonstration-of-a-diagnostic-clinical-laboratory-neural-network-agent-applied-to-three-laboratory-data-conditioning-problems/

 

  • CRACKING THE CODE OF HUMAN LIFE: The Birth of BioInformatics & Computational Genomics

Larry H Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2014/08/30/cracking-the-code-of-human-life-the-birth-of-bioinformatics-computational-genomics/

 

  • Genetics of conduction disease atrioventricular AV conduction disease block gene mutations transcription excitability and energy homeostasis

Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2013/04/28/genetics-of-conduction-disease-atrioventricular-av-conduction-disease-block-gene-mutations-transcription-excitability-and-energy-homeostasis/

 

  • Identification of biomarkers that are related to the actin cytoskeleton

Larry H Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2012/12/10/identification-of-biomarkers-that-are-related-to-the-actin-cytoskeleton/

 

  • Regression: A richly textured method for comparison of predictor variables

Larry H Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2012/08/14/regression-a-richly-textured-method-for-comparison-and-classification-of-predictor-variables/

 

  • Diagnostic evaluation of SIRS by immature granulocytes

Larry H Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2012/08/02/diagnostic-evaluation-of-sirs-by-immature-granulocytes/

 

  • Big data in genomic medicine

Larry H Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2012/12/17/big-data-in-genomic-medicine/

 

  • Automated inferential diagnosis of SIRS, sepsis, septic shock

Larry H Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2012/08/01/automated-inferential-diagnosis-of-sirs-sepsis-septic-shock/

 

  • A Software Agent for Diagnosis of ACUTE MYOCARDIAL INFARCTION

Isaac E. Mayzlin, Ph.D., David Mayzlin and Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2012/08/12/1815/

 

  • Artificial Vision: Cornell and Stanford Researchers crack Retinal Code

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2012/08/15/1946/

 

  • Vinod Khosla: 20 doctor included speculations, musings of a technology optimist or technology will replace 80 percent of what doctors do

Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2013/05/13/vinod-khosla-20-doctor-included-speculations-musings-of-a-technology-optimist-or-technology-will-replace-80-of-what-doctors-do/

 

  • Biomaterials Technology: Models of Tissue Engineering for Reperfusion and Implantable Devices for Revascularization

Larry H Bernstein, MD, FACP and Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2013/05/05/bioengineering-of-vascular-and-tissue-models/

 

  • The Heart: Vasculature Protection – A Concept-based Pharmacological Therapy including THYMOSIN

Aviva Lev-Ari, PhD, RN 2/28/2013

https://pharmaceuticalintelligence.com/2013/02/28/the-heart-vasculature-protection-a-concept-based-pharmacological-therapy-including-thymosin/

 

  • FDA Pending 510(k) for The Latest Cardiovascular Imaging Technology

Aviva Lev-Ari, PhD, RN 1/28/2013

https://pharmaceuticalintelligence.com/2013/01/28/fda-pending-510k-for-the-latest-cardiovascular-imaging-technology/

 

  • PCI Outcomes, Increased Ischemic Risk associated with Elevated Plasma Fibrinogen not Platelet Reactivity

Aviva Lev-Ari, PhD, RN 1/10/2013

https://pharmaceuticalintelligence.com/2013/01/10/pci-outcomes-increased-ischemic-risk-associated-with-elevated-plasma-fibrinogen-not-platelet-reactivity/

 

  • The ACUITY-PCI score: Will it Replace Four Established Risk Scores — TIMI, GRACE, SYNTAX, and Clinical SYNTAX

Aviva Lev-Ari, PhD, RN 1/3/2013

https://pharmaceuticalintelligence.com/2013/01/03/the-acuity-pci-score-will-it-replace-four-established-risk-scores-timi-grace-syntax-and-clinical-syntax/

 

  • Coronary artery disease in symptomatic patients referred for coronary angiography: Predicted by Serum Protein Profiles

Aviva Lev-Ari, PhD, RN 12/29/2012

https://pharmaceuticalintelligence.com/2012/12/29/coronary-artery-disease-in-symptomatic-patients-referred-for-coronary-angiography-predicted-by-serum-protein-profiles

 

  • New Definition of MI Unveiled, Fractional Flow Reserve (FFR)CT for Tagging Ischemia

Aviva Lev-Ari, PhD, RN 8/27/2012

https://pharmaceuticalintelligence.com/2012/08/27/new-definition-of-mi-unveiled-fractional-flow-reserve-ffrct-for-tagging-ischemia/
 

Additional Related articles

  • Hospital EHRs Inadequate for Big Data; Need for Specialized -Omics Systems(labsoftnews.typepad.com)
  • Apple Inc. (AAPL), QUALCOMM, Inc. (QCOM): Disruptions Needed(insidermonkey.com)
  • Netsmart Names Dr. Ian Chuang Senior Vice President, Healthcare Informatics and Chief Medical Officer(prweb.com)
  • Strategic partnership signals new age of stratified medicine(prweb.com)
  • Personalized breast cancer therapeutic with companion diagnostic poised for clinical trials in H2(medcitynews.com)

Read Full Post »

Clinical Decision Support Systems for Management Decision Making of Cardiovascular Diseases

Author, and Content Consultant to e-SERIES A: Cardiovascular Diseases: Justin Pearlman, MD, PhD, FACC

and

Curator: Aviva Lev-Ari, PhD, RN

Clinical Decision Support Systems (CDSS)

Clinical decision support system (CDSS) is an interactive decision support system (DSS). It generally relies on computer software designed to assist physicians and other health professionals with decision-making tasks, such as when to apply a particular diagnosis, further specific tests or treatments. A functional definition proposed by Robert Hayward of the Centre for Health Evidence defines CDSS as follows:  “Clinical Decision Support systems link health observations with health knowledge to influence health choices by clinicians for improved health care”. CDSS is a major topic in artificial intelligence in medicine.

Vinod Khosla of Khosla Ventures, in a Fortune Magazine article, “Technology will replace 80% of what doctors do” (December 4, 2012), wrote about CDSS as a harbinger of science in medicine.

Computer-assisted decision support is in its infancy, but we have already begun to see meaningful impact on healthcare. Meaningful use of computer systems is now rewarded under the Affordable Care Act. Studies have demonstrated the ability of computerized clinical decision support systems to lower diagnostic errors of omission significantly, by directly countering cognitive bias. Isabel is a differential diagnosis tool and, according to a Stony Brook study, matched the diagnoses of experienced clinicians in 74% of complex cases. The system improved to a 95% match after a more rigorous entry of patient data. The IBM supercomputer, Watson, after beating all humans at the intelligence-based task of playing Jeopardy, is now turning its attention to medical diagnosis. It can process natural language questions and is fast at parsing high volumes of medical information, reading and understanding 200 million pages of text in 3 seconds.

Examples of CDSS

  1. CADUCEUS
  2. DiagnosisPro
  3. Dxplain
  4. MYCIN
  5. RODIA

VIEW VIDEO

“When Should a Physician Deviate from the Diagnostic Decision Support Tool and What Are the Associated Risks?”

Introduction

Justin D. Pearlman, MD, PhD

A Decision Support System consists of one or more tools to help achieve good decisions. For example, decisions that can benefit from DSS include whether or not to undergo surgery, whether or not to undergo a stress test first, whether or not to have an annual mammogram starting at a particular age, or a computed tomography (CT) to screen for lung cancer, whether or not to utilize intensive care support such as a ventilator, chest shocks, chest compressions, forced feeding, strong antibiotics and so on versus care directed to comfort measures only without regard to longevity.

Any DSS can be viewed like a digestive tract, chewing on input, and producing output, and like the digestive tract, the output may only be valuable to a farmer. A well designed DSS is efficient in the input, timely in its processing and useful in the output. Mathematically, a DSS is a model with input parameters and an output variable or set of variables that can be used to determine an action. The input can be categorical (alive, dead), semi-quantitative (cold-warm-hot), or quantitative (temperature, systolic blood pressure, heart rate, oxygen saturation). The output can be binary (yes-no) or it can express probabilities or confidence intervals.

The process of defining specifications for a function and then deriving a useful function is called mathematical modeling. We will derive the function for “average” as an example. By way of specifications, we want to take a list of numbers as input, and come out with a single number that represents the middle of the pack or “central tendency.”   The order of the list should not matter, and if we change scales, the output should scale the same way. For example, if we use centimeters instead of inches, and we apply 2.54 centimeters to an inch, then the output should increase by the multiplier 2.54. If the list of numbers are all the same then the output should be the consistent value. Representing these specifications symbolically:

1. order doesn’t matter: f(a,b) = f(b,a), where “a” and “b” are input values, “f” is the function.

2. multipliers pass through (linearity):  f(ka,kb)=k f(a,b), where k is a scalar e.g. 2.54 cm/inch.

3. identity:  f(a,a,a,…) = a

Properties 1 and 2 lead us to consider linear functions consisting of sums and multipliers: f(a,b,c)=Aa+Bb+Cc …, where the capital letters are multipliers by “constants” – numbers that are independent of the list values a,b,c, and since the order should not matter, we simplify to f(a,b,c)=K (a+b+c+…) because a constant multiplier K makes order not matter. Property 3 forces us to pick K = 1/N where N is the length of the list. These properties lead us to the mathematical solution: average = sum of list of numbers divided by the length of the list.
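A quick numerical check of the derivation, assuming nothing beyond the three specifications above, can be run as follows:

```python
# Numerical check (illustrative only) that the arithmetic mean satisfies the
# three specifications above: order invariance, linearity under scaling, and
# identity on constant lists.
def f(values):                      # the derived function: K = 1/N times the sum
    return sum(values) / len(values)

a, b, c = 3.0, 5.0, 10.0
k = 2.54                            # e.g., converting inches to centimeters

assert f([a, b, c]) == f([c, a, b])                        # 1. order doesn't matter
assert abs(f([k*a, k*b, k*c]) - k*f([a, b, c])) < 1e-12    # 2. multipliers pass through
assert f([a, a, a]) == a                                   # 3. identity
print("mean =", f([a, b, c]))
```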

A coin flip is a simple DSS: heads I do it, tails I don’t. The challenge of a good DSS is to perform better than random choice and also perform better (more accurately, more efficiently, more reliably, more timely and/or under more adverse conditions) than unassisted human decision making.

Therefore, I propose the following guiding principles for DSS design: choose inputs wisely (accessible, timely, efficient, relevant), determine to what you want output to be sensitive AND to what you want output to be insensitive, and be very clear about your measures of success.

For example, consider designing a DSS to determine whether a patient should receive the full range of support capabilities of an intensive care unit (ICU), or not. Politicians have cited the large bump in the cost of the last year of life as an opportunity to reduce costs of healthcare, and now pay primary care doctors to encourage patients to establish advanced directives not to use ICU services. From the DSS standpoint, the reasoning is flawed because the decision not to use ICU services should be sensitive to benefit as well as cost, commonly called cost-benefit analysis. If we measure success of ICU services by the benefit of quality life net gain (QLNG, “quailing”), measured in quality life-years (QuaLYs) and achieve 50% success with that, then the cost per QuaLY measures the cost-benefit of ICU services. In various cost-benefit decisions, the US Congress has decided to proceed if the cost is under $20-$100,000/QuaLY. If ICU services are achieving such a cost-benefit, then it is not logical to summarily block such services in advance. Rather, the ways to reduce those costs include improving the cost efficiency of ICU care, and improving the decision-making of who will benefit.
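To make the cost-benefit arithmetic concrete, here is a small hypothetical calculation; the dollar amount and the QuaLY gain per success are illustrative assumptions, and only the 50% success rate comes from the text above.

```python
# Hypothetical cost-benefit arithmetic for the ICU example above. With a 50%
# success rate and an assumed 2 quality life-years gained per success, the
# expected gain is 1 QuaLY, so an assumed $80,000 ICU course works out to
# $80,000/QuaLY, inside the $20,000-$100,000/QuaLY range cited for proceeding.
icu_cost = 80_000            # dollars per ICU course (hypothetical)
p_success = 0.50             # probability of quality life net gain (from the text)
qualys_if_success = 2.0      # quality life-years gained when successful (hypothetical)

expected_qualys = p_success * qualys_if_success
cost_per_qualy = icu_cost / expected_qualys
print(f"${cost_per_qualy:,.0f} per QuaLY")   # $80,000 per QuaLY
```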

An example of a DSS is the prediction of airplane failure from a thousand measurements of strain and function of various parts of an airplane. The desired output is the probability of failing to complete the next flight safely. Cost-benefit analysis then establishes the threshold, or operating point, at which that probability of failure merits grounding the plane for further inspection and preventive maintenance.

The notion of an operating point brings up another important concept in decision support. At first blush, one might think the success of a DSS is determined by its ability to correctly identify a predicted outcome, such as futility of ICU care (when will the end result be no quality life net gain). The flaw in that measure of success is that it depends on prevalence in the study group. As an extreme example, if you study a group of patients with fatal gunshot wounds to the head, none will benefit and the DSS requirement is trivial and any DSS that says no for that group has performed well. At the other extreme, if all patients become healthy, the DSS requirement is also trivial, just say yes. Therefore the proper assessment of a DSS should pay attention to the prevalence and the operating point.

The impact of prevalence and operating point on decision-making is addressed by receiver-operator curves. Consider looking at the blood concentration of Troponin-I (TnI) as the sole determinant of whether a patient is having a heart attack. If one plots the troponin level on the horizontal axis against the ultimately confirmed presence of a heart attack on the vertical axis, the percentage of hits will generally be higher for higher values of TnI. To create such a graph, we compute a “truth table,” which records whether the test was above or below a decision threshold (operating point) and whether the disease (heart attack) was in fact present:

TRUTH TABLE

                 Disease      Not Disease
Test Positive    TP           FP
Test Negative    FN           TN
Total            TP+FN        FP+TN

The sensitivity to the disease is the true positive rate (TPR), the percentage of all disease cases that are ranked by the decision support as positive: TPR = TP/(TP+FN). 100% sensitivity can be achieved trivially by lowering the threshold for a positive test to zero, but at a cost. While sensitivity is necessary for success, it is not sufficient. In addition to wanting sensitivity to disease, we want to avoid labeling non-disease as disease. That is often measured by specificity, the true negative rate (TNR), the percentage of those without disease who are correctly identified as not having disease: TNR = TN/(FP+TN). I propose we also define the complement of specificity, the anti-sensitivity, as the false positive rate (FPR): FPR = FP/(FP+TN) = 1 – TNR. Anti-sensitivity is the penalty for lowering the diagnostic threshold to boost sensitivity, as the concomitant rise in anti-sensitivity means a growing number of non-disease subjects are labeled as having disease. We want high sensitivity to true disease without high anti-sensitivity, and we want to be insensitive to common distractors. In these formulas, note that false negatives (FN) are cases that truly have the disease and false positives (FP) are cases that truly do not, so the denominator TP+FN counts everyone with the disease and FP+TN counts everyone without it.
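
Expressed in code, the three rates fall directly out of the four cells of the truth table; the counts below are made-up numbers used only to exercise the formulas:

def rates(tp, fp, fn, tn):
    tpr = tp / (tp + fn)   # sensitivity: fraction of true disease called positive
    tnr = tn / (fp + tn)   # specificity: fraction of non-disease called negative
    fpr = fp / (fp + tn)   # anti-sensitivity: 1 - specificity
    return tpr, tnr, fpr

# Hypothetical truth table at one troponin threshold
sensitivity, specificity, anti_sensitivity = rates(tp=90, fp=30, fn=10, tn=170)
print(sensitivity, specificity, anti_sensitivity)   # 0.9, 0.85, 0.15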

The graph in figure 1 justifies the definition of anti-sensitivity. It is an ROC or “Receiver-Operator Curve” which is a plot of sensitivity versus anti-sensitivity for different diagnostic thresholds of a test (operating points). Note, higher sensitivity comes at the cost of higher anti-sensitivity. Where to operate (what threshold to use for diagnosis) can be selected according to cost-benefit analysis of sensitivity versus anti-sensitivity (and specificity).

Figure 1 ROC (Receiver-Operator Curve): Graph of sensitivity (true positive rate) versus anti-sensitivity (false positive rate), computed by changing the operating point (the threshold for declaring a numeric test value positive for disease). A high area under the curve (AUC) is favorable because it means less anti-sensitivity at high sensitivity (the curve bends toward the upper left corner of the shaded area). The dots on the curve are operating points. An inclusive operating point (high on the curve, high sensitivity) is used for screening tests, whereas an exclusive operating point (low on the curve, low anti-sensitivity) is used for definitive diagnosis.
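
Such a curve can be traced by sweeping the decision threshold across the observed test values and recording (anti-sensitivity, sensitivity) at each operating point; the sketch below uses a small synthetic data set (made-up troponin values and outcomes) and estimates the AUC with the trapezoidal rule:

# Synthetic troponin values (arbitrary units) and heart-attack labels, for illustration only.
values = [0.1, 0.2, 0.3, 0.5, 0.8, 1.2, 2.0, 3.5, 5.0, 8.0]
labels = [0,   0,   0,   0,   1,   0,   1,   1,   1,   1  ]

def roc_points(values, labels):
    points = []
    thresholds = sorted(set(values)) + [float("inf")]
    for t in thresholds:                      # each threshold is one operating point
        tp = sum(1 for v, y in zip(values, labels) if v >= t and y == 1)
        fp = sum(1 for v, y in zip(values, labels) if v >= t and y == 0)
        fn = sum(1 for v, y in zip(values, labels) if v < t and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < t and y == 0)
        points.append((fp / (fp + tn), tp / (tp + fn)))   # (anti-sensitivity, sensitivity)
    return sorted(points)

pts = roc_points(values, labels)
auc = sum((x2 - x1) * (y1 + y2) / 2 for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
print(pts)
print("AUC =", auc)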

Cost-benefit analysis is generally based on a semi-lattice, or upside-down branching tree, which represents all choices and outcomes. It is important to include all branches down to final outcomes. For example, if the test is a mammogram to screen for breast cancer, the cost is not just the cost of the test, nor is the benefit simply “early diagnosis.” The cost-benefit calculation forces us to put a numerical value on the impact, such as a financial cost assigned to an avoidable death, or a result expressed in expected quality life-years. The cost, moreover, is not just the cost of the mammogram, but also of downstream events such as the needle biopsies for the suspicious “positives,” and so on.

Figure 2 Semi-lattice Decision Tree: Starting from all patients, create a branch point for the test result, and add further branch points for any subsequent step-wise outcomes until you reach the “bottom line.” Assign a value to each, resulting in a numerical net cost and net benefit. If a test carries risks (for example, needle biopsy of the lung can collapse a lung and require hospitalization for a chest tube), then insert branch points for whether the complication occurs or not, since treating a complication counts as part of the cost. The intermediary nodes carry the probability of occurrence as their numeric factor, and the bottom line applies the net probability of the path as a multiplier to the dollar value (a 10% chance of costing $10,000 counts as an expected cost of 0.1 x 10,000 = $1,000).
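
A minimal sketch of rolling such a tree up to its bottom line: each leaf carries a path probability and a dollar cost, and the expected cost is the probability-weighted sum. The branch probabilities and costs below are hypothetical placeholders:

# Each leaf of the decision tree: (path probability, total dollar cost along that path).
# Example: a screening-mammogram branch with invented numbers.
leaves = [
    (0.900, 150),                          # negative screen: cost of the mammogram only
    (0.080, 150 + 2_000),                  # suspicious screen, biopsy negative
    (0.015, 150 + 2_000 + 30_000),         # biopsy positive, early treatment
    (0.005, 150 + 2_000 + 30_000 + 10_000) # biopsy complication requiring hospitalization
]

expected_cost = sum(p * cost for p, cost in leaves)
print(f"Expected cost per screened patient: ${expected_cost:,.2f}")
# e.g. a 10% chance of costing $10,000 contributes 0.1 * 10,000 = $1,000 to the expectation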

A third area of discussion is the statistical power of a DSS – how reliable is it in the application that you care about? DSS design commonly runs contrary to standard statistical practice, which addresses the significance of a deviation in a small number of variables measured many times in a large population. A DSS instead often uses many variables to fully describe or characterize the status of a small population. For example, thousands of different measurements may be performed on a few dozen airplanes, aiming to predict when a plane should be grounded for repairs. A similar inversion of numbers – numerous variables, a small number of cases – is common in genomics studies.

The success of a DSS is measured by its predictive value compared to outcomes or other measures of success. Thus measures of success include positive predictive value, negative predictive value, and confidence. A major problem with DSS is the inversion of the usually desired ratio of repetitions to measurement variables. When you get a single medical lab test, you have a single measurement value, such as a potassium level, and a large number of normal subjects for comparison. If we knew the mean μ and standard deviation σ that describe the distribution of normal values in the population at large, then we could compute the confidence in the decision to call our observed value abnormal based on the normal distribution:
f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{ -\frac{(x-\mu)^2}{2\sigma^2} }.

A value may be deemed distinctive at the 95% confidence level if it falls far enough outside the norm, say more than twice the standard deviation σ from the mean, since roughly 95% of the normal distribution lies within that distance and such a value is therefore unlikely to be random.
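
A sketch of that decision rule, assuming the population mean and standard deviation of the lab value are known; the potassium numbers are purely illustrative:

mu, sigma = 4.2, 0.4          # hypothetical population mean and SD for serum potassium (mmol/L)
observed = 5.3

z = (observed - mu) / sigma   # distance from the mean in units of sigma
flag_abnormal = abs(z) > 2    # roughly the 95% two-sided confidence threshold
print(z, flag_abnormal)       # 2.75, True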

The determination of confidence in an observed set of results stems from maximum likelihood estimation. Earlier in this article we described how to derive the mean, or center, of a set of measurements. A similar analysis can derive the standard deviation (the square root of the variance) as a measure of spread around the mean, as well as other descriptive statistics based on sample values. These formulas describe the distribution of sample values about the mean. The calculation is based on a simple inversion. If we knew the mean and variance of a population of values for a measurement, we could calculate the likelihood of each new measurement falling a particular distance from the mean, and we could calculate the combined likelihood of a set of observed values. Maximum Likelihood Estimation (MLE) simply inverts that calculation. Instead of treating the mean and variance as known, we treat the N observed samples x1, …, xN as the known data and estimate the unknown mean and spread of the distribution they came from; applying calculus to maximize the joint likelihood of the observations under the frequency distribution above yields the following formulas:

\sigma = \sqrt{\frac{1}{N}\left[(x_1-\mu)^2 + (x_2-\mu)^2 + \cdots + (x_N - \mu)^2\right]}, {\rm \ \ where\ \ } \mu = \frac{1}{N} (x_1 + \cdots + x_N),

The frequency distribution (a function of the mean and spread) reports the frequency of observing x if it is drawn from a population with the specified mean μ and standard deviation σ. We invert that by treating the observations x as known and the mean μ and standard deviation σ as unknown, then calculating the values of μ and σ that maximize the likelihood that our sample set came from the population so described.
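
A minimal numerical sketch of this inversion, using a handful of made-up samples; it reproduces the formulas above, including the 1/N factor that maximum likelihood prescribes:

import math

samples = [3.9, 4.1, 4.4, 4.0, 4.6, 4.2]   # hypothetical observations

# Maximum-likelihood estimates for a normal distribution (note 1/N, not 1/(N-1))
N = len(samples)
mu_hat = sum(samples) / N
sigma_hat = math.sqrt(sum((x - mu_hat) ** 2 for x in samples) / N)

def likelihood(x, mu, sigma):
    # the frequency distribution f(x) for a single observation
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

joint = math.prod(likelihood(x, mu_hat, sigma_hat) for x in samples)
print(mu_hat, sigma_hat, joint)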

In a DSS there is typically an inversion of the usual requirements: the number of samples is small where it would usually be large, and the number of variables is large where it would usually be small. This inversion has major consequences for confidence in the data. If you measure 14 independent variables instead of one, each at 95% confidence, the net confidence drops exponentially to less than 50%: 0.95^14 ≈ 49%. In the airplane grounding screen, 1,000 independent variables at 95% confidence each yield a net confidence of only about 5 × 10^-23, which is ten sextillion times less than 50% confidence. The same problem arises in genomics research, in which we have a large array of gene product measurements on a small number of patients. Standard statistical tools are problematic at high variable counts. One can turn to qualitative grouping tools such as exploratory factor analysis, or recover statistical robustness with HykGene, a combined clustering and ranking method devised by the author to improve dramatically the ability to identify distinctions with confidence when the number of variables is high.
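
The erosion of joint confidence is a simple power law, as the short calculation below shows:

print(0.95 ** 1)      # 0.95    -- one variable at 95% confidence
print(0.95 ** 14)     # ~0.488  -- fourteen independent variables: below a coin flip
print(0.95 ** 1000)   # ~5e-23  -- a thousand independent variables, as in the airplane example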

Evolution of DSS

Aviva Lev-Ari, PhD, RN

The examples provided above refer to sets of binary models, one family of DSS. Another type of DSS is multivariate in nature, in which multivariate scenarios constitute the alternative choice options. Development in the DSS field over the last decade has involved the design of Recommendation Engines built on manifested preference functions, trading off simultaneous objectives against a cost function. A game-theoretic context is embedded in Recommendation Engines. The output mentioned above is in fact an array of options, each with a probability of reward assigned by the Recommendation Engine.

Underlying Computation Engines

Methodological Basis of Clinical DSS

There are many different methodologies that can be used by a CDSS in order to provide support to the health care professional.[7]

The basic components of a CDSS include a dynamic (medical) knowledge base and an inference mechanism (usually a set of rules derived from experts and evidence-based medicine), often implemented through medical logic modules based on a language such as Arden syntax. It could be based on expert systems or artificial neural networks or both (connectionist expert systems).

Bayesian Network

The Bayesian network is a knowledge-based graphical representation of a set of variables and the probabilistic relationships between them, for example between diseases and symptoms. It is based on conditional probabilities, the probability of an event given the occurrence of another event, such as the interpretation of diagnostic tests. Bayes’ rule helps us compute the probability of an event from more readily available information, and it consistently re-weights the options as new evidence is presented. In the context of CDSS, the Bayesian network can be used to compute the probabilities of the possible diseases being present given the observed symptoms.
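
As a toy illustration of the kind of update a Bayesian CDSS performs, Bayes’ rule combines the pre-test probability of disease with a test’s sensitivity and specificity to give the post-test probability; all numbers below are hypothetical:

def posterior(prior, sensitivity, specificity):
    # P(disease | positive test) by Bayes' rule
    p_pos = sensitivity * prior + (1 - specificity) * (1 - prior)   # total probability of a positive test
    return sensitivity * prior / p_pos

# Hypothetical: 5% pre-test probability, test with 90% sensitivity and 85% specificity
print(posterior(prior=0.05, sensitivity=0.90, specificity=0.85))    # ~0.24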

Some of the advantages of a Bayesian network include the ability to encode the knowledge and conclusions of experts in the form of probabilities, assistance in decision making as new information becomes available, and reliance on unbiased probabilities that are applicable to many models.

Some of the disadvantages of Bayesian networks include the difficulty of obtaining probability estimates for every possible diagnosis and their impracticality for large, complex systems with multiple symptoms. The Bayesian calculations over many simultaneous symptoms can be overwhelming for users.

An example of a Bayesian network in the CDSS context is the Iliad system, which uses Bayesian reasoning to calculate posterior probabilities of possible diagnoses based on the symptoms provided. The system now covers about 1,500 diagnoses based on thousands of findings.

Another example is the DXplain system that uses a modified form of the Bayesian logic. This CDSS produces a list of ranked diagnoses associated with the symptoms.

A third example is SimulConsult, which began in the area of neurogenetics. By the end of 2010 it covered ~2,600 diseases in neurology and genetics, or roughly 25% of known diagnoses. It addresses the core issue of Bayesian systems, that of a scalable way to input data and calculate probabilities, by focusing specialty by specialty and achieving completeness. Such completeness allows the system to calculate the relative probabilities, rather than the person inputting the data. Using the peer-reviewed medical literature as its source, and applying two levels of peer-review to the data entries, SimulConsult can add a disease with less than a total of four hours of clinician time. It is widely used by pediatric neurologists today in the US and in 85 countries around the world.

Neural Network

An Artificial Neural Network (ANN) is a non-knowledge-based, adaptive CDSS that uses a form of artificial intelligence, also known as machine learning, allowing the system to learn from past experiences and examples and to recognize patterns in clinical information. It consists of nodes called neurons and weighted connections that transmit signals between the neurons in a forward or looped fashion. An ANN has three main layers: Input (receives data or findings), Output (communicates results or possible diseases) and Hidden (processes data). The system becomes more efficient as it is trained on known results for large amounts of data.
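
The three-layer structure can be sketched in a few lines of code; the toy network below is illustrative only (random weights, no training loop), showing findings flowing from an input layer through a hidden layer to an output layer:

import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_hidden, n_outputs = 4, 6, 2          # e.g. 4 findings in, 2 candidate diagnoses out

W1 = rng.normal(size=(n_inputs, n_hidden))       # weighted connections: input -> hidden
W2 = rng.normal(size=(n_hidden, n_outputs))      # weighted connections: hidden -> output

def forward(findings):
    hidden = np.tanh(findings @ W1)              # hidden layer processes the data
    scores = hidden @ W2
    return np.exp(scores) / np.exp(scores).sum() # output layer: probabilities over diagnoses

print(forward(np.array([1.0, 0.0, 0.3, 0.7])))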

The advantages of ANNs include eliminating the need to program the rules explicitly or to obtain direct input from experts. An ANN-based CDSS can process incomplete data by making educated guesses about missing values, and it improves with every use through adaptive system learning. Additionally, ANN systems do not require large databases to store outcome data with associated probabilities. Some of the disadvantages are that the training process may be time-consuming, leading users not to make effective use of the systems. The ANN derives its own formulas for weighting and combining data based on statistical pattern recognition over time, which may be difficult to interpret and can cast doubt on the system’s reliability.

Examples include the diagnosis of appendicitis, back pain, myocardial infarction, psychiatric emergencies and skin disorders. The ANN’s diagnostic predictions of pulmonary embolism were in some cases even better than physicians’ predictions. Additionally, ANN-based applications have been useful in the analysis of ECG (a.k.a. EKG) waveforms.

Genetic Algorithms

A Genetic Algorithm (GA) is a nonknowledge-based method inspired by Darwin’s evolutionary principle of survival of the fittest; the approach was developed by John Holland and colleagues at the University of Michigan in the 1960s and 1970s. These algorithms recombine candidate solutions to form new combinations that are better than the previous solutions. Similar to neural networks, genetic algorithms derive their information from patient data.

An advantage of genetic algorithms is that they iterate toward an optimal solution. The fitness function determines which solutions are good and which can be eliminated. A disadvantage is the lack of transparency in the reasoning of the resulting decision support, which makes it less appealing to physicians. The main challenge in using genetic algorithms is defining the fitness criteria. To use a genetic algorithm, a problem must offer many components, such as multiple drugs, symptoms and treatment therapies, available for recombination. Genetic algorithms have proved useful in the diagnosis of female urinary incontinence.
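
A skeletal genetic algorithm, to make the fitness, selection, recombination and mutation loop concrete; the bit-string target and all parameters below are invented for illustration and carry no clinical meaning:

import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 1]                 # stand-in for an "ideal" combination of factors

def fitness(candidate):
    # the fitness function determines which solutions survive
    return sum(1 for c, t in zip(candidate, TARGET) if c == t)

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # selection: keep the fittest half
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(TARGET))
        child = a[:cut] + b[cut:]                  # recombination (crossover)
        if random.random() < 0.1:
            i = random.randrange(len(TARGET))
            child[i] = 1 - child[i]                # mutation
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print(best, fitness(best))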

Rule-Based System

A rule-based expert system attempts to capture the knowledge of domain experts in expressions that can be evaluated, known as rules; an example rule might read, “If the patient has high blood pressure, he or she is at risk for a stroke.” Once enough of these rules have been compiled into a rule base, the current working knowledge is evaluated against the rule base by chaining rules together until a conclusion is reached. Some of the advantages of a rule-based expert system are that it makes it easy to store a large amount of information, and that formulating the rules helps to clarify the logic used in the decision-making process. However, it can be difficult for an expert to transfer their knowledge into distinct rules, and many rules can be required for a system to be effective.
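
A minimal forward-chaining sketch of such a rule base; the rules and facts are invented for illustration:

# Each rule: (set of conditions that must already be known, conclusion to add)
rules = [
    ({"high blood pressure"}, "at risk for stroke"),
    ({"at risk for stroke", "atrial fibrillation"}, "consider anticoagulation"),
]

facts = {"high blood pressure", "atrial fibrillation"}   # current working knowledge

changed = True
while changed:                       # chain rules together until no new conclusions appear
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)   # includes "at risk for stroke" and "consider anticoagulation"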

Rule-based systems can aid physicians in many different areas, including diagnosis and treatment. An example of a rule-based expert system in the clinical setting is MYCIN. Developed at Stanford University by Edward Shortliffe in the 1970s, MYCIN was based on around 600 rules and was used to help identify the type of bacteria causing an infection. While useful, MYCIN also demonstrates how large such systems can become: a rule base of roughly 600 rules was needed for a comparatively narrow problem space.

The Stanford AI group subsequently developed ONCOCIN, another rule-based expert system, coded in Lisp in the early 1980s.[8] The system was intended to reduce the number of clinical trial protocol violations and to reduce the time required to make decisions about the timing and dosing of chemotherapy in late-phase clinical trials. As with MYCIN, the domain of medical knowledge addressed by ONCOCIN was limited in scope and consisted of a series of eligibility criteria, laboratory values, and diagnostic testing and chemotherapy treatment protocols that could be translated into unambiguous rules. ONCOCIN was put into production in the Stanford Oncology Clinic.

Logical Condition

The methodology behind the logical condition is fairly simple: given a variable and a bound, check whether the variable is within or outside the bound and take action based on the result. An example statement might be “Is the patient’s heart rate less than 50 BPM?” It is possible to link multiple statements together to form more complex conditions. Technology such as a decision table can be used to provide an easy-to-analyze representation of these statements.
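
In code, a logical-condition alert is simply a bound check, and compound conditions are conjunctions of such checks; the thresholds below are placeholders, not clinical recommendations:

def bradycardia_alert(heart_rate_bpm, threshold=50):
    # "Is the patient's heart rate less than 50 BPM?"
    return heart_rate_bpm < threshold

def isolation_reminder(has_resistant_infection, on_open_ward):
    # multiple statements linked together form a more complex condition
    return has_resistant_infection and on_open_ward

print(bradycardia_alert(44))                        # True -> warn the anesthesiologist
print(isolation_reminder(True, on_open_ward=True))  # True -> remind the nurse to isolate the patient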

In the clinical setting, logical conditions are primarily used to provide alerts and reminders to individuals across the care domain. For example, an alert may warn an anesthesiologist that their patient’s heart rate is too low; a reminder could tell a nurse to isolate a patient based on their health condition; finally, another reminder could tell a doctor to make sure he discusses smoking cessation with his patient. Alerts and reminders have been shown to help increase physician compliance with many different guidelines; however, the risk exists that creating too many alerts and reminders could overwhelm doctors, nurses, and other staff and cause them to ignore the alerts altogether.

Causal Probabilistic Network

The primary basis of the causal network methodology is cause and effect. In a clinical causal probabilistic network, nodes represent items such as symptoms, patient states or disease categories, and connections between nodes indicate cause-and-effect relationships. A system based on this logic attempts to trace a path from symptom nodes all the way to disease-classification nodes, using probability to determine which path is the best fit. Among the advantages of this approach is that it helps to model the progression of a disease over time and the interaction between diseases; however, medical knowledge does not always establish exactly what causes certain symptoms, and it can be difficult to choose the level of detail at which to build the model.

The first clinical decision support system to use a causal probabilistic network was CASNET, used to assist in the diagnosis of glaucoma. CASNET featured a hierarchical representation of knowledge, splitting all of its nodes into one of three separate tiers: symptoms, states and diseases.

  1. “Decision support systems.” OpenClinical, 26 July 2005. Accessed 17 Feb. 2009. http://www.openclinical.org/dss.html
  2. Berner, Eta S., ed. Clinical Decision Support Systems. New York, NY: Springer, 2007.
  3. Khosla, Vinod (December 4, 2012). “Technology will replace 80% of what doctors do.” Retrieved April 25, 2013.
  4. Garg AX, Adhikari NK, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J, et al. (2005). “Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review.” JAMA 293 (10): 1223–38. doi:10.1001/jama.293.10.1223. PMID 15755945.
  5. Kawamoto K, Houlihan CA, Balas EA, Lobach DF (2005). “Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success.” BMJ 330 (7494): 765. doi:10.1136/bmj.38398.500764.8F. PMC 555881. PMID 15767266.
  6. Gluud C, Nikolova D (2007). “Likely country of origin in publications on randomised controlled trials and controlled clinical trials during the last 60 years.” Trials 8: 7. doi:10.1186/1745-6215-8-7. PMC 1808475. PMID 17326823.
  7. Wagholikar K. “Modeling Paradigms for Medical Diagnostic Decision Support: A Survey and Future Directions.” Journal of Medical Systems. Retrieved 2012.
  8. Shortliffe EH, Scott AC, Bischoff MB, Campbell AB, Melle WV, Jacobs CD. “ONCOCIN: An expert system for oncology protocol management.” Proceedings of the Seventh International Joint Conference on Artificial Intelligence, Vancouver, B.C., 1981.

SOURCE for Computation Engines Section and REFERENCES:

http://en.wikipedia.org/wiki/Clinical_decision_support_system

Cardiovascular Diseases: Decision Support Systems (DSS) for Disease Management Decision Making – DSS analyzes information from hospital cardiovascular patients in real time and compares it with a database of thousands of previous cases to predict the most likely outcome.

Can aviation technology reduce heart surgery complications?

Algorithm for real-time analysis of data holds promise for forecasting
August 13, 2012

British researchers are working to adapt technology from the aviation industry to help prevent complications among heart patients after surgery. Up to 1,000 sensors aboard aircraft help airlines determine when a plane requires maintenance, reports The Engineer, serving as a model for the British risk-prediction system.

The system analyzes information from hospital cardiovascular patients in real time and compares it with a database of thousands of previous cases to predict the most likely outcome.

“There are vast amounts of clinical data currently collected which is not analyzed in any meaningful way. This tool has the potential to identify subtle early signs of complications from real-time data,” Stuart Grant, a research fellow in surgery at University Hospital of South Manchester, says in a hospital statement. Grant is part of the Academic Surgery Unit working with Lancaster University on the project, which is still in its early stages.

The software predicts the patient’s condition over a 24-hour period using four metrics: systolic blood pressure, heart rate, respiration rate and peripheral oxygen saturation, explains EE Times.
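
The article does not publish the algorithm, but the approach described, comparing a patient’s real-time vital signs against thousands of previous cases, resembles a nearest-neighbour lookup; the sketch below is only a guess at that idea, with entirely fabricated records:

import math

# Hypothetical historical records: (systolic BP, heart rate, respiration rate, SpO2 fraction, had_complication)
history = [
    (110, 72, 14, 0.98, 0),
    (95, 105, 22, 0.91, 1),
    (120, 80, 16, 0.97, 0),
    (100, 98, 20, 0.93, 1),
]

def predicted_risk(patient, k=3):
    # crude nearest-neighbour estimate: fraction of the k most similar past cases with complications
    # (a real system would normalize each vital sign so no single unit dominates the distance)
    nearest = sorted(history, key=lambda rec: math.dist(patient, rec[:4]))[:k]
    return sum(rec[4] for rec in nearest) / k

print(predicted_risk((105, 95, 19, 0.94)))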

As a comparison tool, the researchers obtained a database of 30,000 patient records from the Massachusetts Institute of Technology and combined it with a smaller, more specialized database from Manchester.

In six months of testing, its accuracy is about 75 percent, The Engineer reports. More data and an improved algorithm could boost that rate to 85 percent, the researchers believe. Making the software web-based would allow physicians to access the data anywhere, even on tablets or phones, and could enable remote consultation with specialists.

In their next step, the researchers are applying for more funding and for ethical clearance for a large-scale trial.

U.S. researchers are working on a similar crystal ball, but one covering an array of conditions. Researchers from the University of Washington, MIT and Columbia University are using a statistical model that can predict future ailments based on a patient’s history–and that of thousands of others.

And the U.S. Department of Health & Human Services is using mathematical modeling to analyze effects of specific healthcare interventions.

Predictive modeling also holds promise for making clinical research easier by using algorithms to examine multiple scenarios based on different kinds of patient populations, specified health conditions and various treatment regimens.

To learn more:
– here’s the Engineer article
– check out the hospital report
– read the EE Times article

Related Articles:
Algorithm looks to past to predict future health conditions
HHS moves to mathematical modeling for research, intervention evaluation
Decision support, predictive modeling may speed clinical research

SOURCE:

Can aviation technology reduce heart surgery complications? – FierceHealthIT http://www.fiercehealthit.com/story/can-aviation-technology-reduce-heart-surgery-complications/2012-08-13#ixzz2SITHc61J

http://www.fiercehealthit.com/story/study-decision-support-systems-must-be-flexible-adaptable-transparent/2012-08-20

Medical Decision Making Tools: Overview of DSS available to date  

http://www.openclinical.org/dss.html

Clinical Decision Support Systems – used for Cardiovascular Medical Decisions

Stud Health Technol Inform. 2010;160(Pt 2):846-50.

AALIM: a cardiac clinical decision support system powered by advanced multi-modal analytics.

Amir A, Beymer D, Grace J, Greenspan H, Gruhl D, Hobbs A, Pohl K, Syeda-Mahmood T, Terdiman J, Wang F.

Source

IBM Almaden Research Center, San Jose, CA, USA.

Abstract

Modern Electronic Medical Record (EMR) systems often integrate large amounts of data from multiple disparate sources. To do so, EMR systems must align the data to create consistency between these sources. The data should also be presented in a manner that allows a clinician to quickly understand the complete condition and history of a patient’s health. We develop the AALIM system to address these issues using advanced multimodal analytics. First, it extracts and computes multiple features and cues from the patient records and medical tests. This additional metadata facilitates more accurate alignment of the various modalities, enables consistency check and empowers a clear, concise presentation of the patient’s complete health information. The system further provides a multimodal search for similar cases within the EMR system, and derives related conditions and drugs information from them. We applied our approach to cardiac data from a major medical care organization and found that it produced results with sufficient quality to assist the clinician making appropriate clinical decisions.

PMID: 20841805 [PubMed – indexed for MEDLINE]

DSS development for Enhancement of Heart Drug Compliance by Cardiac Patients 

A good example of a thorough and effective CDSS development process is an electronic checklist developed by Riggio et al. at Thomas Jefferson University Hospital (TJUH) [12]. TJUH had a computerized physician order-entry system in place. To meet congestive heart failure and acute myocardial infarction quality measures (e.g., use of aspirin, beta blockers, and angiotensin-converting enzyme (ACE) inhibitors), a multidisciplinary team including a focus group of residents developed a checklist, embedded in the computerized discharge instructions, that required resident physicians to prescribe the recommended medications or choose from a drop-down list of contraindications. The checklist was vetted by several committees, including the medical executive committee, and presented at resident conferences for feedback and suggestions. Implementation resulted in a dramatic improvement in compliance.

http://virtualmentor.ama-assn.org/2011/03/medu1-1103.html

Early DSS Development at Stanford Medical Center in the 70s

MYCIN (1976)     MYCIN was a rule-based expert system designed to diagnose and recommend treatment for certain blood infections (antimicrobial selection for patients with bacteremia or meningitis). It was later extended to handle other infectious diseases. Clinical knowledge in MYCIN is represented as a set of IF-THEN rules with certainty factors attached to diagnoses. It was a goal-directed system, using a basic backward-chaining reasoning strategy (resulting in exhaustive depth-first search of the rule base for relevant rules, though with additional heuristic support to control the search for a proposed solution). MYCIN was developed in the mid-1970s by Ted Shortliffe and colleagues at Stanford University. It is probably the most famous early expert system, described by Mark Musen as being “the first convincing demonstration of the power of the rule-based approach in the development of robust clinical decision-support systems” [Musen, 1999].
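
For contrast with the forward-chaining sketch earlier, a goal-directed (backward-chaining) strategy starts from a hypothesis and recursively searches the rule base for rules that could establish it; the rules and facts below are invented and are not MYCIN’s:

# Each goal maps to a list of alternative condition sets, any one of which proves the goal.
rules = {
    "bacteremia likely": [{"positive blood culture", "fever"}],
    "fever": [{"temperature above 38C"}],
}
facts = {"positive blood culture", "temperature above 38C"}

def prove(goal):
    # depth-first, goal-directed search: a goal holds if it is a known fact,
    # or if every condition of some rule concluding it can itself be proved
    if goal in facts:
        return True
    return any(all(prove(c) for c in conditions) for conditions in rules.get(goal, []))

print(prove("bacteremia likely"))   # True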

The EMYCIN (Essential MYCIN) expert system shell, employing MYCIN’s control structures was developed at Stanford in 1980. This domain-independent framework was used to build diagnostic rule-based expert systems such as PUFF, a system designed to interpret pulmonary function tests for patients with lung disease.

http://www.bmj.com/content/346/bmj.f657

ECG for Detection of MI: DSS use in Cardiovascular Disease Management

http://faculty.ksu.edu.sa/AlBarrak/Documents/Clinical%20Decision%20Support%20Systems_Ch01.pdf

The chapter linked above also showed that neural networks did a better job than two experienced cardiologists in detecting acute myocardial infarction in electrocardiograms with concomitant left bundle branch block.

Olsson SE, Ohlsson M, Ohlin H, Edenbrandt L. Neural networks—a diagnostic tool in acute myocardial infarction with concomitant left bundle branch block. Clin Physiol Funct Imaging 2002;22:295–299.

Sven-Erik Olsson, Hans Öhlin, Mattias Ohlsson and Lars Edenbrandt
Neural networks – a diagnostic tool in acute myocardial infarction with concomitant left bundle branch block
Clinical Physiology and Functional Imaging 22, 295-299 (2002) 

Abstract
The prognosis of acute myocardial infarction (AMI) improves by early revascularization. However the presence of left bundle branch block (LBBB) in the electrocardiogram (ECG) increases the difficulty in recognizing an AMI and different ECG criteria for the diagnosis of AMI have proved to be of limited value. The purpose of this study was to detect AMI in ECGs with LBBB using artificial neural networks and to compare the performance of the networks to that of six sets of conventional ECG criteria and two experienced cardiologists. A total of 518 ECGs, recorded at an emergency department, with a QRS duration > 120 ms and an LBBB configuration, were selected from the clinical ECG database. Of this sample 120 ECGs were recorded on patients with AMI, the remaining 398 ECGs being used as a control group. Artificial neural networks of feed-forward type were trained to classify the ECGs as AMI or not AMI. The neural network showed higher sensitivities than both the cardiologists and the criteria when compared at the same levels of specificity. The sensitivity of the neural network was 12% (P = 0.02) and 19% (P = 0.001) higher than that of the cardiologists. Artificial neural networks can be trained to detect AMI in ECGs with concomitant LBBB more effectively than conventional ECG criteria or experienced cardiologists.

http://home.thep.lu.se/~mattias/publications/papers/lu_tp_00_38_abs.html

Additional SOURCES:

http://www.implementationscience.com/content/6/1/92

http://www.fiercehealthit.com/story/study-decision-support-systems-must-be-flexible-adaptable-transparent/2012-08-20

 Comment of Note

During 1979–1983 Dr. Aviva Lev-Ari was part of Prof. Ronald A. Howard’s study team at Stanford University, the consulting group to Stanford Medical Center during the MYCIN feature-enhancement development.

Professor Howard is one of the founders of the decision analysis discipline. His books on probabilistic modeling, decision analysis, dynamic programming, and Markov processes serve as major references for courses and research in these fields.

https://engineering.stanford.edu/profile/rhoward

It was Prof. Howard of EES, Prof. Amos Tversky of Behavioral Science (advisor of Dr. Lev-Ari’s Master’s thesis at HUJ), and Prof. Kenneth Arrow of Economics who, with 15 doctoral students in the early 1980s, formed the Interdisciplinary Decision Analysis Core Group at Stanford. Students of Prof. Howard, chiefly James E. Matheson, started the Decision Analysis Practice at Stanford Research Institute (SRI International) in Menlo Park, CA.

http://www.sri.com/

Dr. Lev-Ari was hired in March 1985 to head SRI’s effort in algorithm-based DSS development. The models she developed were applied in problem solving for SRI clients, among them pharmaceutical manufacturers: Ciba-Geigy (now Novartis), DuPont, FMC, and Rhone-Poulenc (now Sanofi-Aventis).

