

The Automated Second Opinion Generator

Larry H. Bernstein, MD

Gil David and Larry Bernstein have developed a first-generation software agent, under the supervision of Prof. Ronald Coifman in the Yale University Applied Mathematics Program, that is the equivalent of an intelligent EHR Dashboard that learns.  What is a Dashboard?  A Dashboard is a visual display of essential metrics; its primary purpose is to gather information, generate the metrics relatively quickly, and analyze them to the highest standard of accuracy.  This invention is a leap across traditional boundaries of Health Information Technology in that it integrates and digests extractable information from the medical record (laboratory results, extractable vital signs, the EKG, for instance, and documented clinical descriptors) to form one or more provisional diagnoses describing the patient's status by inference from a nonparametric network algorithm.  This is the first generation of a "convergence" of medicine and information science.  The diagnoses are complete only after reviewing thousands of records to which diagnoses have first been assigned, training the algorithm on them, and then validating the software by applying it to a second data set and reviewing the accuracy of the resulting diagnoses.
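The train-then-validate workflow described above can be sketched in miniature. The snippet below is an illustrative assumption, not the authors' actual algorithm: it uses a simple nearest-neighbor rule (one common nonparametric classifier) on invented `(MCV, Hgb, RBC)` vectors, trains on labeled records, and then measures accuracy on a held-out validation set.

```python
import math

def nearest_neighbor_predict(train, query):
    """Classify a feature vector by the label of its closest training record
    (a minimal nonparametric classifier; purely illustrative)."""
    best_label, best_dist = None, math.inf
    for features, label in train:
        d = math.dist(features, query)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Invented labeled records: (MCV, Hgb, RBC) -> provisional diagnosis.
# Values are made up for the sketch, not clinical reference data.
train = [
    ((90.0, 15.0, 5.0), "normal"),
    ((88.0, 14.5, 4.8), "normal"),
    ((70.0, 9.0, 3.5), "iron deficiency"),
    ((68.0, 9.5, 3.3), "iron deficiency"),
]

# A second, held-out set is used to check accuracy, as described above.
validation = [
    ((89.0, 14.8, 4.9), "normal"),
    ((69.0, 9.2, 3.4), "iron deficiency"),
]

correct = sum(nearest_neighbor_predict(train, x) == y for x, y in validation)
accuracy = correct / len(validation)
print(accuracy)  # 1.0 on this toy set
```

In a real system the labeled training corpus would be the thousands of reviewed records, and the validation step would report accuracy against clinician-assigned diagnoses.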

The only limitation of the algorithm is sparsity of data in some subsets, which does not permit a probability calculation until sufficient data are obtained.  The limitation is not serious, because it does not prevent the system from recognizing at least 95 percent of the information used in medical decision-making, and it adequately covers the top 15 medical diagnoses.  An example of such an exception is the diagnosis of alpha or beta thalassemia, which presents a microcytic picture (low MCV) with a high RBC count despite a low Hgb.  The accuracy is very high because the anomaly detection used for classifying the data creates aggregates that share common features, and the aggregates themselves are consistent with the separatory rules that pertain to any class.  As the model grows, however, there is unknown potential for prognostic as well as diagnostic information within classes (subclasses), and a further potential to uncover therapeutic differences within classes, which will be made coherent with the new classes of drugs (personalized medicine) emerging from the "convergence" of genomics, metabolomics, and translational biology.
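The thalassemia picture mentioned above (low MCV, high RBC, low Hgb) lends itself to a simple separatory rule. The sketch below is a hedged illustration of that pattern; the numeric cutoffs are assumptions chosen for the example, not validated clinical thresholds, and a real system would learn such boundaries from the data rather than hard-code them.

```python
def thalassemia_pattern(mcv, hgb, rbc):
    """Flag the classic thalassemia-trait picture described in the text:
    microcytosis (low MCV) with a high RBC count despite a low Hgb.
    Cutoffs are illustrative assumptions, not validated thresholds."""
    return mcv < 80.0 and rbc > 5.5 and hgb < 12.0

print(thalassemia_pattern(65.0, 11.0, 6.0))  # True: fits the pattern
print(thalassemia_pattern(70.0, 9.0, 3.5))   # False: low RBC points elsewhere
```

The contrast in the second call matters: a microcytic, low-Hgb sample with a *low* RBC count would fall into a different aggregate, which is exactly the kind of separation the classifier's rules encode.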

Such algorithms have already been applied to limited data sets with unencumbered diagnoses, typically using the inclusion and exclusion criteria common to clinical trials; that approach has proved ever more costly when used outside the study environment.  The elephant in the room is age-related co-morbidity: the coexistence of obesity, lipid derangements, renal function impairment, and genetic and environmental factors that are hidden from view.  The approach envisioned here is manageable, overcomes these obstacles, and handles both inputs and outputs with considerable ease.

We anticipate that implementing this artificial-intelligence diagnostic amplifier would result in higher physician productivity at a time of great human-resource limitation, safer prescribing practices, rapid identification of unusual patients, better assignment of patients to observation, inpatient beds, intensive care, or clinic referral, and shortened ICU stays and bed days.  If the observation of systemic issues in "To Err Is Human" is now ten years old with only marginal improvement at great cost, this should be a quantum leap forward for the patient, the physician, the caregiving team, and the society that adopts it.
