Archive for the ‘Electronic Health Record’ Category

Artificial Intelligence and Cardiovascular Disease

Reporter and Curator: Dr. Sudipta Saha, Ph.D.


3.3.18   Artificial Intelligence and Cardiovascular Disease, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 2: CRISPR for Gene Editing and DNA Repair

Cardiology is a vast field covering a large number of diseases of the heart and the circulatory system. Because many of these conditions present with similar symptoms and diagnostic features, it can be difficult for a doctor to isolate the actual heart-related problem. The use of artificial intelligence aims to relieve doctors of this hurdle and to deliver better quality of care to patients. It has long been proposed that the results of screening tests such as echocardiograms, MRIs, and CT scans be analyzed with more advanced computational techniques. While artificial intelligence is not yet widely used in clinical practice, it is seen as the future of healthcare.

The continuous development of the technology sector has enabled the industry to merge with medicine to create new integrated, reliable, and efficient methods of providing quality healthcare. One ongoing trend in cardiology is the proposed use of artificial intelligence (AI) to augment and extend the effectiveness of the cardiologist, because machine learning can provide accurate measures of patient functioning and diagnosis from the beginning to the end of the therapeutic process. In particular, the use of artificial intelligence in cardiology focuses on research and development, clinical practice, and population health. Conceived as an all-in-one mechanism for cardiac healthcare, AI technologies incorporate complex algorithms to determine the steps needed for a successful diagnosis and treatment. The role of artificial intelligence specifically extends to the identification of novel drug therapies, disease stratification and statistics, continuous remote monitoring and diagnostics, integration of multi-omic data, and extension of physician effectiveness and efficiency.

Artificial intelligence – specifically a branch of it called machine learning – is being used in medicine to help with diagnosis. Computers might, for example, be better at interpreting heart scans. Computers can be ‘trained’ to make these predictions by feeding them information from hundreds or thousands of patients, together with instructions (an algorithm) on how to use that information. This information includes heart scans, genetic and other test results, and how long each patient survived. The scans are in exquisite detail, and the computer may be able to spot differences that are beyond human perception. It can also combine information from many different tests to give as accurate a picture as possible. The computer gradually works out which factors affected the patients’ outlook, so it can make predictions about other patients.
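As a rough illustration of the training process described above, here is a minimal sketch in Python using scikit-learn on synthetic data. The feature values, outcome definition, model choice, and sample size are all assumptions for illustration, not the actual pipeline of any study discussed here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients = 1000
X = rng.normal(size=(n_patients, 20))      # stand-in for scan and test features
risk = X[:, 0] + 0.5 * X[:, 1]             # hidden factors driving the outcome
y = (risk + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)

# "Training" is exactly the feeding-in of past patients' data described above;
# the held-out set plays the role of new patients the model has never seen.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```

The point of the sketch is only the workflow: past records plus outcomes go in, a fitted model comes out, and it is then judged on patients it was not trained on.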

In current medical practice, doctors use risk scores to make treatment decisions for their cardiac patients. These scores are based on a series of variables such as weight, age, and lifestyle, but they do not always reach the desired levels of accuracy. A particular example of the use of artificial intelligence in cardiology is an experimental study on heart disease patients published in 2017. The researchers used cardiac MRI-based algorithms coupled with a 3D systolic cardiac motion pattern to accurately predict the health outcomes of patients with pulmonary hypertension. The experiment proved successful, with the technology able to pick up 30,000 points of heart activity in 250 patients. With the success of this study, as well as the promise of other research on artificial intelligence, cardiology is seemingly moving towards a more technological practice.

One study was conducted in Finland, where researchers enrolled 950 patients complaining of chest pain who underwent the centre’s usual scanning protocol to check for coronary artery disease. Their outcomes were tracked for six years following the initial scans, over the course of which 24 of the patients had heart attacks and 49 died from all causes. The patients first underwent a coronary computed tomography angiography (CCTA) scan, which yielded 58 pieces of data on the presence of coronary plaque, vessel narrowing, and calcification. Patients whose scans were suggestive of disease underwent a positron emission tomography (PET) scan, which produced 17 variables on blood flow. Ten clinical variables were also obtained from medical records, including sex, age, smoking status, and diabetes. These 85 variables were then entered into an artificial intelligence (AI) programme called LogitBoost. The AI repeatedly analysed the imaging variables and learned how the imaging data interacted, identifying the patterns preceding death and heart attack with over 90% accuracy. The predictive performance using the ten clinical variables alone was the lowest of the three models, with an accuracy of 90%. When PET scan data was added, accuracy increased to 92.5%. Performance increased significantly again when CCTA scan data was added to the clinical and PET data, reaching an accuracy of 95.4%.
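The study's design – fitting one model per nested feature set and comparing accuracies – can be sketched as follows. This is a hedged illustration on synthetic data: scikit-learn's GradientBoostingClassifier stands in for LogitBoost, and the signal structure, seed, and resulting accuracies are invented; only the feature-group sizes (10 clinical, 17 PET, 58 CCTA) come from the study:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 950
clinical = rng.normal(size=(n, 10))   # stand-ins for sex, age, smoking, ...
pet = rng.normal(size=(n, 17))        # stand-ins for blood-flow variables
ccta = rng.normal(size=(n, 58))       # stand-ins for plaque/calcification data

# Synthetic outcome: every modality carries some signal, imaging the most,
# so adding feature groups should improve cross-validated accuracy.
risk = clinical[:, 0] + pet[:, 0] + 2.0 * ccta[:, 0]
y = (risk + rng.normal(size=n) > 0).astype(int)

feature_sets = {
    "clinical": clinical,
    "clinical + PET": np.hstack([clinical, pet]),
    "clinical + PET + CCTA": np.hstack([clinical, pet, ccta]),
}
accuracy = {}
for name, X in feature_sets.items():
    clf = GradientBoostingClassifier(random_state=0)  # LogitBoost stand-in
    accuracy[name] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:22s} accuracy: {accuracy[name]:.3f}")
```

On this synthetic data, as in the study, the model with all 85 variables outperforms the clinical-only model; the absolute numbers are meaningless here.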

Another study showed that applying artificial intelligence (AI) to the electrocardiogram (ECG) enables early detection of left ventricular dysfunction and can identify individuals at increased risk of developing it in the future. Asymptomatic left ventricular dysfunction (ALVD) is characterised by a weak heart pump and carries a risk of overt heart failure. It is present in three to six percent of the general population and is associated with reduced quality of life and longevity, but it is treatable when found. Currently, there is no inexpensive, noninvasive, painless screening tool for ALVD available for diagnostic use. When tested on an independent set of 52,870 patients, the network model yielded an area under the curve, sensitivity, specificity, and accuracy of 0.93, 86.3 percent, 85.7 percent, and 85.7 percent, respectively. Furthermore, among patients without ventricular dysfunction, those with a positive AI screen were at four times the risk of developing future ventricular dysfunction compared with those with a negative screen.
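The reported screening metrics (AUC, sensitivity, specificity, accuracy) can be computed from a model's scores as sketched below. The labels and scores here are synthetic and the decision threshold is an arbitrary assumption; this is not the study's network or its 52,870-patient test set:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix, accuracy_score

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, size=1000)                    # toy ground truth
scores = y_true * 0.8 + rng.normal(scale=0.4, size=1000)  # toy network output
y_pred = (scores > 0.4).astype(int)                       # assumed threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"AUC:         {roc_auc_score(y_true, scores):.2f}")  # threshold-free
print(f"sensitivity: {tp / (tp + fn):.2f}")  # true-positive rate
print(f"specificity: {tn / (tn + fp):.2f}")  # true-negative rate
print(f"accuracy:    {accuracy_score(y_true, y_pred):.2f}")
```

Note that AUC is computed from the raw scores, while sensitivity, specificity, and accuracy all depend on where the screening threshold is set.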

In recent years, the analysis of big databases combined with deep learning has come to play an important role in biomedical technology, with research on the development and application of artificial intelligence observed extensively across medical record analysis, image analysis, single nucleotide polymorphism difference analysis, and more. In the clinic, patients may receive a variety of routine cardiovascular examinations and treatments, such as cardiac ultrasound, multi-lead ECG, cardiovascular and peripheral angiography, intravascular ultrasound and optical coherence tomography, and electrophysiology studies. By using deep learning systems, investigators hope not only to improve the rate of correct diagnosis but also to predict patients’ recovery more accurately, improving medical quality in the near future.

The primary issue with using artificial intelligence in cardiology, or in any field of medicine, is the ethical questions it raises. Physicians and healthcare professionals swear the Hippocratic Oath before entering practice, a promise to do their best for the welfare and betterment of their patients. Many physicians have argued that the use of artificial intelligence in medicine breaks the Hippocratic Oath, since patients are technically left under the care of machines rather than doctors. Furthermore, because machines can malfunction, patient safety is also on the line at all times. As such, while medical practitioners see the promise of artificial intelligence, they remain cautious about its use, safety, and appropriateness in medical practice.

The issues and challenges faced by technological innovation in cardiology are being addressed by current research aiming to make artificial intelligence easily accessible and available to all. With that in mind, various projects are under study. For example, wearable AI technology aims to provide a mechanism by which patients and doctors can easily access and monitor cardiac activity remotely, with real-time updates, monitoring, and evaluation. Another direction for AI in cardiology is the use of technology to record and validate empirical data for further analysis of symptomatology, biomarkers, and treatment effectiveness. With AI technology, researchers in cardiology aim to simplify and expand the field’s knowledge base for better patient care and treatment outcomes.


The Journey of Antibiotic Discovery

Reporter and Curator: Dr. Sudipta Saha, Ph.D.


The term ‘antibiotic’ was introduced by Selman Waksman to mean any small molecule, produced by a microbe, with antagonistic properties on the growth of other microbes. An antibiotic interferes with bacterial survival via a specific mode of action; more importantly, at therapeutic concentrations it is sufficiently potent to be effective against infection while presenting minimal toxicity. Infectious diseases have been a challenge throughout the ages. From 1347 to 1350, approximately one-third of Europe’s population perished from bubonic plague. Advances in sanitary and hygienic conditions sufficed to control further plague outbreaks, though the disease persisted as a recurrent public health issue. Infectious diseases in general remained the leading cause of death up to the early 1900s. The mortality rate shrank after the commercialization of antibiotics, which, given their impact on the fate of mankind, were regarded as a ‘medical miracle’. Moreover, the non-therapeutic application of antibiotics has also greatly affected humanity, for instance their use as livestock growth promoters to increase food production after World War II.


Currently, more than 2 million North Americans acquire infections associated with antibiotic resistance every year, resulting in 23,000 deaths. In Europe, nearly 700,000 cases of antibiotic-resistant infections lead directly to over 33,000 deaths yearly, with an estimated cost of over €1.5 billion. Despite a 36% increase in human antibiotic use from 2000 to 2010, approximately 20% of deaths worldwide are still related to infectious diseases today. Future perspectives are no brighter: a government-commissioned study in the United Kingdom estimated 10 million deaths per year from antibiotic-resistant infections by 2050.


With the increase in antibiotic-resistant bacteria, alongside the alarmingly low rate of newly approved antibiotics for clinical use, we are on the verge of not having effective treatments for many common infectious diseases. Historically, antibiotic discovery has been crucial in outpacing resistance, and success has been closely tied to systematic procedures – platforms – that catalyzed the antibiotic golden age, namely the Waksman platform, followed by the platforms of semi-synthesis and fully synthetic antibiotics. These platforms produced the major antibiotic classes: aminoglycosides, amphenicols, ansamycins, beta-lactams, lipopeptides, diaminopyrimidines, fosfomycins, imidazoles, macrolides, oxazolidinones, streptogramins, polymyxins, sulphonamides, glycopeptides, quinolones and tetracyclines.


The increase in drug-resistant pathogens is a consequence of multiple factors, including but not limited to high rates of antimicrobial prescription, antibiotic mismanagement in the form of self-medication or interrupted therapy, and large-scale antibiotic use as growth promoters in livestock farming. For example, 60% of the antibiotics sold to the USA food industry are also used as therapeutics in humans. To further complicate matters, it is estimated that $200 million is required for a molecule to reach commercialization, with the risk that antimicrobial resistance develops rapidly and cripples its clinical application, or, at the opposite extreme, that a new antibiotic proves so effective it is reserved as a last-resort therapeutic and thus never widely commercialized.


Besides more efficient management of antibiotic use, there is a pressing need for new platforms capable of consistently and efficiently delivering new lead substances, improving on the impressively low success rates of their precursors, in today’s scenario of increasing drug resistance. Antibiotic discovery platforms are aiming to screen large libraries, for instance the reservoir of untapped natural products, which is likely the next antibiotic ‘gold mine’. There is a void between phenotypic screening (high-throughput) and omics-centered assays (high-information), in which some mechanistic and molecular information could complement antimicrobial activity data without the laborious and extensive application of multiple omics assays. The increasing need for antibiotics drives relentless and continuous research at the frontier of antibiotic discovery. This is likely to expand our knowledge of the biological events underlying infectious diseases and, hopefully, result in better therapeutics that can swing the war on infectious diseases back in our favor.


The genomics era brought the target-based platform, mostly considered a failure due to limitations in translating drugs to the clinic. Cell-based platforms were therefore re-instituted and remain of the utmost importance in the fight against infectious diseases. Although the antibiotic pipeline is still lackluster, especially in new classes and novel mechanisms of action, the post-genomic era offers an increasingly large body of information on microbial metabolism. The translation of such knowledge into novel platforms will hopefully result in the discovery of new and better therapeutics that can sway the war on infectious diseases back in our favor.


Digital Therapeutics: A Threat or Opportunity to Pharmaceuticals

Reporter and Curator: Dr. Sudipta Saha, Ph.D.


3.3.7   Digital Therapeutics: A Threat or Opportunity to Pharmaceuticals, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 2: CRISPR for Gene Editing and DNA Repair

Digital Therapeutics (DTx) have been defined by the Digital Therapeutics Alliance (DTA) as “delivering evidence based therapeutic interventions to patients, that are driven by software to prevent, manage or treat a medical disorder or disease”. They might come in the form of a smartphone or tablet app, or some form of cloud-based service connected to a wearable device. DTx tend to fall into three groups. Firstly, developers and mental health researchers have built digital solutions that typically provide a form of software-delivered Cognitive Behaviour Therapy (CBT), helping patients change behaviours and develop coping strategies around their condition. Secondly, there is the group of Digital Therapeutics that target lifestyle issues, such as diet, exercise and stress, associated with chronic conditions, and that work by offering personalized support for goal setting and target achievement. Lastly, DTx can be designed to work in combination with existing medication or treatments, helping patients manage their therapies and ensuring the therapy delivers the best possible outcomes.

Pharmaceutical companies are clearly trying to understand what DTx will mean for them, and whether it will be a threat or an opportunity to their business. For a long time, they have provided additional support services to patients who take relatively expensive drugs for chronic conditions; a nurse-led service might, for example, provide visits and telephone support to diabetics who self-inject insulin therapies. DTx will broaden the scope of such support services because they can be delivered cost-effectively and, importantly, can capture real-world evidence on patient outcomes. They will no longer be reserved for the most expensive drugs or therapies but could apply to a whole range of common treatments to boost their efficacy. Faced with the arrival of Digital Therapeutics either replacing drugs or playing an important role alongside therapies, pharmaceutical firms have three options. They can ignore DTx and focus on developing drug therapies as they always have; they can partner with the growing number of DTx companies to develop software and services complementing their drugs; or they can start to build their own Digital Therapeutics to work with their products.

Digital Therapeutics will have knock-on effects across the health industries that may be as great as the introduction of therapeutic apps and services themselves. Together with connected health monitoring devices, DTx will offer a near-constant stream of data about an individual’s behavior, the real-world context affecting their treatment in everyday life, and emotional and physiological data such as blood pressure and blood sugar levels. Analysis of the resulting data will help create support services tailored to each patient. But who stores and analyses this data is an important question. Strong data governance will be paramount to maintaining trust, and the highly regulated pharmaceutical industry may not be best placed to handle individual patient data. Meanwhile, the health sector (payers and healthcare providers) is becoming more focused on patient outcomes, and on payment for value rather than volume. Time will tell whether pharmaceutical firms enhance the effectiveness of drugs with DTx or, in some cases, replace drugs with DTx.

Digital Therapeutics have the potential to change what the pharmaceutical industry sells: rather than a drug, it will sell a package of drugs and digital services. But they will also alter who the industry sells to. Pharmaceutical firms have traditionally marketed drugs to doctors, pharmacists and other health professionals based on the efficacy of a specific product. Soon they could be paid on the outcome of a bundle of digital therapies, medicines and services, with a closer connection to both providers and patients. Apart from a notable few, most pharmaceutical firms have taken a cautious approach towards Digital Therapeutics. It remains to be seen how pharmaceutical companies will use DTx to their own benefit as well as for the benefit of the general population.


Kaiser Permanente collecting patient data for DNA Research Bank

Reporter: Aviva Lev-Ari, PhD, RN


Kaiser Permanente this week launched a new database that enables researchers to examine participants’ DNA in conjunction with environmental and behavioral health.

Sourced through Scoop.it from: www.healthcareitnews.com


Healthcare conglomeration to access Big Data and lower costs, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

Healthcare conglomeration to access Big Data and lower costs

Curator: Larry H. Bernstein, MD, FCAP 



UPDATED on 3/17/2019


Medicare Advantage plans may be driving up quality of care in terms of preventive treatment for coronary artery disease patients, but that has had little impact on outcomes compared with fee-for-service Medicare, researchers reported in JAMA Cardiology.


The expected benefits are not as easily realized as anticipated. Gaining access to data sources is proving less difficult than assembling the content needed for evaluation.


Healthcare Big Data Drives a New Round of Collaborations between Hospitals, Health Systems, and Care Management Companies

DARK DAILY


January 13, 2016

Recently-announced partnerships want to use big data to improve patient outcomes and lower costs; clinical laboratory test data will have a major role in these efforts

In the race to use healthcare big data to improve patient outcomes, several companies are using acquisitions and joint ventures to beef up and gain access to bigger pools of data. Pathologists and clinical laboratory managers have an interest in this trend, because medical laboratory test data will be a large proportion of the information that resides in these huge healthcare databases.

For health systems that want to be players in the healthcare big data market, one strategy is to enter a risk-sharing venture with third-party care-management companies. This allows the health systems to leverage their extensive patient data while benefiting from the expertise of their venture partners.

Cardinal Health Acquires 71% Interest in naviHealth

One company that wants to work with hospitals and health systems in these types of arrangements is Cardinal Health. It recently acquired a 71% interest in Nashville-based naviHealth. This company partners with health plans, health systems, physicians, and post-acute providers to manage the entire continuum of post-acute care (PAC), according to a news release on the naviHealth website. NaviHealth’s business model involves sharing the financial risk with its clients and leveraging big data to predict best outcomes and lower costs.

“We created an economic model to take on the entire post-acute-care episode,” declared naviHealth CEO and President Clay Richards in a company news release. “It’s leveraging the technology and analytics to create individual care protocols.”


“The most basic, and the most important, thing is … they [Cardinal Health] share the same core values as we do, which is to be on the right side of healthcare,” naviHealth CEO Clay Richards told The Tennessean. “It’s about how you deliver better outcomes for patients with lower costs: How do you solve the problems [with growing costs]? That’s what we and Cardinal define as being on the right side of healthcare.” (Caption and image copyright: The Tennessean.)

Provider Investments Signal Continuation of Trend

Cardinal Health intends to combine its ability to reduce costs while providing effective care with naviHealth’s evidence-based, personalized post-acute-care plans. This is one approach to harnessing the power of big data to improve patient care. One goal is to focus this expertise on post-acute care, which is one of Medicare’s quality measures.

Patients and their families are often unsure of what to expect after discharge. And, according to an article published in Kaiser Health News, a 2013 Institute of Medicine (IOM) report noted a link between the quality of post-acute care and healthcare spending following the discharge of Medicare patients.

However, maximizing the use of healthcare big data requires the participation of multiple stakeholders. Information scientists, hospital administrators, software developers, insurers, clinicians, and patients themselves must all perform a role in order for big data to reach its full potential. No single sector will be able to bring the benefits of big data to fruition; rather collaboration and partnerships will be necessary.

Other Collaborations and Alliances Target Healthcare Big Data

Two other organizations engaged in a similar collaboration are the Mayo Clinic and Optum360, a revenue management services company focused on simplifying and streamlining the revenue cycle process. In a press release, the companies announced that they were partnering to “develop new revenue management services capabilities aimed at improving patient experiences and satisfaction while reducing administrative costs for healthcare providers.” (See Dark Daily, “When It Comes to Mining Healthcare Big Data, Including Medical Laboratory Test Results, Optum Labs Is the Company to Watch,” December 14, 2015.)

In order to accomplish this, Mayo will have to share its revenue cycle management (RCM) data with Optum360, which will use the data to devise improved revenue cycle processes and systems.

“What we’re trying to find out, if we can, is what does healthcare cost, and what of that spend really adds value to a patient’s outcome over time, especially with these high-impact diseases,” stated Mayo Clinic President and CEO John Noseworthy, MD, in a story published by the Star Tribune. He was referencing another big data project Mayo is engaged in with UnitedHealth Group. “Ultimately, we as a country have to figure this out, so people can have access to high-quality care and it doesn’t bankrupt them or the country.”


Mayo Clinic President and CEO John Noseworthy, MD, believes big data may be the key to transforming healthcare costs by informing clinical decision-making and altering patient outcomes. (Photo copyright: Mayo Clinic.)

Another interesting healthcare big data partnership is the Pittsburgh Health Data Alliance (The Alliance). It involves a collaboration between Carnegie Mellon University (CMU), the University of Pittsburgh (PITT), and the University of Pittsburgh Medical Center (UPMC). The aim of The Alliance is to take raw data from wearable devices, insurance records, medical appointments, as well as other common sources, and develop ways to improve the health of individuals and the wider community.

The common thread among all these collaborative efforts is a desire to improve outcomes while reducing costs. This is the promise of healthcare big data. And no matter which direction the effort takes, clinical laboratories, which generate a vast amount of critical health data, are in a good position to play important roles involving the contribution of lab test data and identifying ways to use healthcare big data projects to improve patient care.
—Dava Stewart


N3xt generation carbon nanotubes, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

N3xt generation carbon nanotubes

Larry H. Bernstein, MD, FCAP, Curator



Skyscraper-style carbon-nanotube chip design ‘boosts electronic performance by factor of a thousand’





A new revolutionary high-rise architecture for computing (credit: Stanford University)


Researchers at Stanford and three other universities are creating a revolutionary new skyscraper-like high-rise architecture for computing based on carbon nanotube materials instead of silicon.

In Rebooting Computing, a special issue (in press) of the IEEE Computer journal, the team describes its new approach as “Nano-Engineered Computing Systems Technology,” or N3XT.

Suburban-style chip layouts create long commutes and regular traffic jams in electronic circuits, wasting time and energy, they note.

N3XT will break data bottlenecks by integrating processors and memory like floors in a skyscraper, and by connecting these components with millions of “vias,” which play the role of tiny electronic elevators.

The N3XT high-rise approach will move more data, much faster, using far less energy, than would be possible using low-rise circuits, according to the researchers.

Stanford researchers including Associate Professor Subhasish Mitra and Professor H.-S. Philip Wong have “assembled a group of top thinkers and advanced technologies to create a platform that can meet the computing demands of the future,” Mitra says.

“When you combine higher speed with lower energy use, N3XT systems outperform conventional approaches by a factor of a thousand,” Wong claims.

Carbon nanotube transistors

Engineers have previously tried to stack silicon chips but with limited success, the researchers suggest. Fabricating a silicon chip requires temperatures close to 1,800 degrees Fahrenheit, making it extremely challenging to build a silicon chip atop another without damaging the first layer. The current approach to what are called 3-D, or stacked, chips is to construct two silicon chips separately, then stack them and connect them with a few thousand wires.

But conventional 3-D silicon chips are still prone to traffic jams and it takes a lot of energy to push data through what are a relatively few connecting wires.

The N3XT team is taking a radically different approach: building layers of processors and memory directly atop one another, connected by millions of vias that can move more data over shorter distances than traditional wires, using less energy, and merging computation and memory storage into an electronic super-device.

The key is the use of non-silicon materials that can be fabricated at much lower temperatures than silicon, so that processors can be built on top of memory without the new layer damaging the layer below. As in IBM’s recent chip breakthrough (see “Method to replace silicon with carbon nanotubes developed by IBM Research“), N3XT chips are based on carbon nanotube transistors.

Transistors are fundamental units of a computer processor, the tiny on-off switches that create digital zeroes and ones. CNTs are faster and more energy-efficient than silicon processors, and much thinner. Moreover, in the N3XT architecture, they can be fabricated and placed over and below other layers of memory.

Among the N3XT scholars working at this nexus of computation and memory are Christos Kozyrakis and Eric Pop of Stanford, Jeffrey Bokor and Jan Rabaey of the University of California, Berkeley, Igor Markov of the University of Michigan, and Franz Franchetti and Larry Pileggi of Carnegie Mellon University.

New storage technologies 

Team members also envision using data storage technologies that rely on materials other than silicon. This would allow for the new materials to be manufactured on top of CNTs, using low-temperature fabrication processes.

One such data storage technology is called resistive random-access memory, or RRAM (see “‘Memristors’ based on transparent electronics offer technology of the future“). Resistance slows down electrons, creating a zero, while conductivity allows electrons to flow, creating a one. Tiny jolts of electricity switch RRAM memory cells between these two digital states. N3XT team members are also experimenting with a variety of nanoscale magnetic storage materials.
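The zero/one behaviour described for RRAM cells can be captured in a toy model. This is an illustrative sketch with made-up resistance values and a hypothetical class name, not a device simulation:

```python
class RRAMCell:
    """Toy resistive memory cell: high resistance reads as 0, low as 1."""
    HIGH_R, LOW_R = 1e6, 1e3  # ohms; illustrative values only

    def __init__(self):
        self.resistance = self.HIGH_R  # start in the high-resistance '0' state

    def pulse(self, volts):
        """A small jolt of electricity switches the cell between states."""
        if volts > 0:            # SET pulse: cell becomes conductive
            self.resistance = self.LOW_R
        elif volts < 0:          # RESET pulse: cell becomes resistive again
            self.resistance = self.HIGH_R

    def read(self):
        return 1 if self.resistance == self.LOW_R else 0

cell = RRAMCell()
cell.pulse(+1.5)
print(cell.read())  # 1: conductivity lets electrons flow, creating a one
cell.pulse(-1.5)
print(cell.read())  # 0: resistance slows electrons, creating a zero
```

The essential point the paragraph makes is exactly this pair of stable states toggled by voltage pulses; real devices add filament physics, variability, and endurance limits that this sketch ignores.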

Just as skyscrapers have ventilation systems, N3XT high-rise chip designs incorporate thermal cooling layers. This work, led by Stanford mechanical engineers Kenneth Goodson and Mehdi Asheghi, ensures that the heat rising from the stacked layers of electronics does not degrade overall system performance.

Mitra and Wong have already demonstrated a working prototype of a high-rise chip. At the International Electron Devices Meeting in December 2014 they unveiled a four-layered chip made up of two layers of RRAM memory sandwiched between two layers of CNTs (see “Stanford engineers invent radical ‘high-rise’ 3D chips“).

In their N3XT paper they ran simulations showing how their high-rise approach was a thousand times more efficient in carrying out many important and highly demanding industrial software applications.




  • M. Aly, M. Gao, G. Hills, C-S Lee, G. Pitner, M. Shulaker, T. Wu, M. Asheghi, J. Bokor, F. Franchetti, K. Goodson, C. Kozyrakis, I. Markov, K. Olukotun, L. Pileggi, E. Pop, J. Rabaey, C. Ré, H.-S.P. Wong and S. Mitra, Energy-Efficient Abundant-Data Computing: The N3XT 1,000X. IEEE Computer, Special Issue on Rebooting Computing, Dec. 2015 (in press)

This is a little off-thread, but it does involve carbon, so this is the only place I can squeeze this prediction in…
The big news tonight is that the lithium batteries in electric skateboards called “hoverboards” are catching on fire.
This is going to take a lot of them off the market. But…they will rise again when carbon batteries with graphene and CNTs and boron nitride and I don’t know what else make batteries run as cool as these high-rise CNT chips.
In fact, these high-rise CNT chips will most likely be part of the power control and distribution in the N3XT generation of hoverboards.


I am really curious what the next computing paradigm will be. Personally, I see more than one type of computer being used at the same time: neuromorphic, optical, electronic von Neumann 3D nanotube-transistor chips, quantum for some purposes, and also magnetic storage (memristors).
What is even more interesting, Mr. Kurzweil wrote and said that nanotube high-rise computers like the ones described in the article here will be the dominant type of computer in the 2020s. It may or may not be true.


Oh for sure, Kynareth, the first von Neumann machines will do their work with these processors. But memristors should also be a part of them.
I suspect that high-rise CNT chips will indeed be dominant as soon as they come to market. This will give you the N3XT “Field of Dreams.”
If you build it they will come. As soon as it is produced in mass quantities, everybody will come and beat down the doors to get them.


This is fantastic. I want the next, or the third next, generation of this to be a molecularly built-up block of circuitry. This article shows advances using the nanotube connector that has been experimented with before, as well as its use throughout the circuitry. I only suggest that further generation because it resembles the “molybloc circuits” (bricks and tiles of densely packed circuitry, molybloc standing for molecular block circuitry) found in the Honor Harrington sci-fi series (David Weber).


Stanford engineers invent radical ‘high-rise’ 3D chips



A four-layer prototype high-rise chip built by Stanford engineers. The bottom and top layers are logic transistors. Sandwiched between them are two layers of memory. The vertical tubes are nanoscale electronic “elevators” that connect logic and memory, allowing them to work together efficiently. (Credit: Max Shulaker, Stanford)


Stanford engineers have built 3D “high-rise” chips that could leapfrog the performance of the single-story logic and memory chips on today’s circuit cards, which are subject to frequent data traffic jams between logic and memory.

The Stanford approach would attempt to end these jams by building layers of logic atop layers of memory to create a tightly interconnected high-rise chip. Many thousands of nanoscale electronic “elevators” would move data between the layers much faster, using less electricity, than the bottleneck-prone wires connecting single-story logic and memory chips today.

The work is led by Subhasish Mitra, a Stanford associate professor of electrical engineering and of computer science, and H.-S. Philip Wong, the Williard R. and Inez Kerr Bell Professor in Stanford’s School of Engineering. They describe their new high-rise chip architecture in a paper being presented at the IEEE International Electron Devices Meeting Dec. 15–17.

The researchers’ innovation leverages three breakthroughs: a new technology for creating transistors using nanotubes, a new type of computer memory that lends itself to multi-story fabrication, and a technique to build these new logic and memory technologies into high-rise structures in a radically different way than previous efforts to stack chips.

“This research is at an early stage, but our design and fabrication techniques are scalable,” Mitra said. “With further development this architecture could lead to computing performance that is much, much greater than anything available today.” Wong said the prototype chip unveiled at IEDM shows how to put logic and memory together into three-dimensional structures that can be mass-produced.

“Paradigm shift is an overused concept, but here it is appropriate,” Wong said. “With this new architecture, electronics manufacturers could put the power of a supercomputer in your hand.”

Overcoming silicon heat

Researchers have been trying to solve a major problem with chip-generated heat by creating carbon nanotube (CNT) transistors. Mitra and Wong are presenting a second paper at the conference showing how their team made some of the highest-performance CNT transistors ever built.


Image of a CNT-based field-effect transistor (FET) using a new high-density process (credit: Mitra/Wong Lab, Stanford)


Until now, the standard process used to grow CNTs did not produce sufficient density. The Stanford engineers solved this problem with an ingenious technique. They started by growing CNTs the standard way, on round quartz wafers. Then they created a metal film that acts like a tape. Using this adhesive process, they lifted an entire crop of CNTs off the quartz growth medium and placed it onto a silicon wafer that would become the foundation of their high-rise chip.

They repeated this process 13 times, achieving some of the highest density, highest performance CNTs ever made. Moreover, the Stanford team showed that they could perform this technique on more than one layer of logic as they created their high-rise chip.
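Since each transfer deposits another full crop of tubes onto the same target wafer, density should grow roughly linearly with the number of transfers. A back-of-the-envelope sketch (the per-growth density figure is invented for illustration; the article gives no numbers):

```python
# Each transfer adds one more "crop" of CNTs to the same wafer,
# so density accumulates roughly linearly with transfer count.
base_density = 5.0   # tubes per micrometer per growth (invented figure)
transfers = 13       # number of transfer steps reported in the article

final_density = base_density * transfers
print(final_density)  # 65.0 tubes per micrometer under these assumptions
```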


RRAM memory

Left: Today’s single-story electronic circuit cards, where logic and memory chips exist as separate structures connected by wires, can get jammed with digital traffic between logic and memory. Right: layers of logic and memory create skyscraper chips where data would move up and down on nanoscale “elevators” to avoid traffic jams. (Credit: Mitra/Wong Lab, Stanford)

Wong is a world leader in a new memory technology called “resistive random access memory” (RRAM) which he unveiled at last year’s IEDM conference.

Unlike today’s memory chips, this new storage technology is not based on silicon but on titanium nitride, hafnium oxide, and platinum, which together form a metal/oxide/metal sandwich. Applying a voltage across the sandwich in one direction causes it to resist the flow of electricity; reversing the polarity causes the structure to conduct electricity again.

The change from resistive to conductive states is how this new memory technology creates digital zeroes and ones.

RRAM uses less energy than current memory, leading to prolonged battery life in mobile devices. Inventing this new memory technology was also the key to creating the high-rise chip because RRAM can be made at much lower temperatures than silicon memory.

Interconnected layers

Max Shulaker and Tony Wu, Stanford graduate students in electrical engineering, created the techniques behind the four-story high-rise chip unveiled at the conference.

The low-heat process for making RRAM and CNTs enabled them to fabricate each layer of memory directly atop each layer of CNT logic. While making each memory layer, they were able to drill thousands of interconnections into the logic layer below. This multiplicity of connections is what enables the high-rise chip to avoid the traffic jams on conventional circuit cards.

There is no way to tightly interconnect layers using today’s conventional silicon-based logic and memory. That’s because it takes so much heat to build a layer of silicon memory — about 1,000 degrees Celsius — that any attempt to do so would melt the logic below.

Previous efforts to stack silicon chips could save space but not avoid the digital traffic jams. That’s because each layer would have to be built separately and connected by wires — which would still be prone to traffic jams, unlike the nanoscale elevators in the Stanford design.



‘Memristors’ based on transparent electronics offer technology of the future

Memristors are faster, smaller, and use less power than non-volatile flash memory

Transparent electronics (pioneered at Oregon State University) may find one of their newest applications as a next-generation replacement for some uses of non-volatile flash memory, a multi-billion dollar technology nearing its limit of small size and information storage capacity.

Researchers at OSU have confirmed that zinc tin oxide, an inexpensive and environmentally benign compound, could provide a new, transparent technology in which computer memory is based on resistance instead of an electron charge.

This resistive random access memory, or RRAM, is referred to by some researchers as a “memristor.” Products using this approach could become even smaller, faster and cheaper than the silicon transistors that have revolutionized modern electronics — and transparent as well.

Transparent electronics offer potential for innovative products that don’t yet exist, like information displayed on an automobile windshield, or surfing the web on the glass top of a coffee table.

“Flash memory has taken us a long way with its very small size and low price,” said John Conley, a professor in the OSU School of Electrical Engineering and Computer Science. “But it’s nearing the end of its potential, and memristors are a leading candidate to continue performance improvements.”

Memristors: faster than flash  

Memristors have a simple structure, are able to program and erase information rapidly, and consume little power. They accomplish a function similar to transistor-based flash memory, but with a different approach. Whereas traditional flash memory stores information with an electrical charge, RRAM accomplishes this with electrical resistance. Like flash, it can store information as long as it’s needed.

Flash memory computer chips are ubiquitous in almost all modern electronic products, ranging from cell phones and computers to video games and flat panel televisions.

Thin-film transistors that control liquid crystal displays

Some of the best opportunities for these new amorphous oxide semiconductors are not so much for memory chips, but with thin-film, flat panel displays, researchers say. Private industry has already shown considerable interest in using them for the thin-film transistors that control liquid crystal displays, and one compound approaching commercialization is indium gallium zinc oxide.

But indium and gallium are getting increasingly expensive, and zinc tin oxide — also a transparent compound — appears to offer good performance with lower cost materials. The new research also shows that zinc tin oxide can be used not only for thin-film transistors, but also for memristive memory, Conley said, an important factor in its commercial application.

More work is needed to understand the basic physics and electrical properties of the new compounds, researchers said.

This research was supported by the U.S. Office of Naval Research, the National Science Foundation and the Oregon Nanoscience and Microtechnologies Institute.


Resistive switching in zinc–tin-oxide

Santosh Murali, Jaana S. Rajachidambaram, Seung-Yeol Han, Chih-Hung Chang, Gregory S. Herman, John F. Conley Jr.

Bipolar resistive switching is demonstrated in the amorphous oxide semiconductor zinc–tin-oxide (ZTO). A gradual forming process produces improved switching uniformity. Al/ZTO/Pt crossbar devices show switching ratios greater than 10³, long retention times, and good endurance. The resistive switching in these devices is consistent with a combined filamentary/interfacial mechanism. Overall, ZTO shows great potential as a low-cost material for embedding memristive memory with thin-film transistor logic for large-area electronics.
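The figure of merit here, the switching ratio, is simply the ratio of the two resistance states (equivalently, on-current over off-current at a fixed read voltage). A minimal sketch; the resistance values below are invented for illustration, since the paper reports only that the ratio exceeds 10³:

```python
# Switching ratio of a resistive memory device: R_off / R_on.
r_on = 2.0e3    # ohms, low-resistance state (invented value)
r_off = 4.0e6   # ohms, high-resistance state (invented value)

ratio = r_off / r_on
print(ratio)        # 2000.0
assert ratio > 1e3  # comfortably exceeds the 10^3 threshold reported
```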


  • We present the first report of resistive switching in zinc–tin-oxide (ZTO).
  • ZTO is the leading alternative material to IGZO for TFTs for LCDs.
  • ZTO has an advantage over IGZO of lower cost due to the absence of In and Ga.
  • Al/ZTO/Pt crossbar RRAM devices show switching ratios greater than 10³.
  • ZTO shows promise for embedding RRAM with TFT logic for large area electronics.

Keywords: Memristor; RRAM; Resistive switching; Amorphous oxide semiconductors; Transparent electronics; Zinc–tin-oxide

Plot of log|current| vs. top electrode voltage for a 50μm×50μm device with an ...



Method to replace silicon with carbon nanotubes developed by IBM Research

Could work down to the 1.8 nanometer node in the future
Schematic of a set of molybdenum (Mo) end-contacted nanotube transistors (credit: Qing Cao et al./Science)

IBM Research has announced a “major engineering breakthrough” that could lead to carbon nanotubes replacing silicon transistors in future computing technologies.

As transistors shrink in size, electrical resistance increases within the contacts, which impedes performance. So IBM researchers invented a metallurgical process similar to microscopic welding that chemically binds the contact’s metal (molybdenum) atoms to the carbon atoms at the ends of nanotubes.

The new method promises to shrink transistor contacts without reducing performance of carbon-nanotube devices, opening a pathway to dramatically faster, smaller, and more powerful computer chips beyond the capabilities of traditional silicon semiconductors.

“This is the kind of breakthrough that we’re committed to making at IBM Research via our $3 billion investment over 5 years in research and development programs aimed at pushing the limits of chip technology,” said Dario Gil, VP, Science & Technology, IBM Research. “Our aim is to help IBM produce high-performance systems capable of handling the extreme demands of new data analytics and cognitive computing applications.”

The development was reported today in the October 2 issue of the journal Science.


Overcoming contact resistance


Schematic of carbon nanotube transistor contacts. Left: High-resistance side-bonded contact, where the single-wall nanotube (SWNT) (black tube) is partially covered by the metal molybdenum (Mo) (purple dots). Right: low-resistance end-bonded contact, where the SWNT is attached to the molybdenum electrode through carbide bonds, while the carbon atoms (black dots) from the originally covered portion of the SWNT uniformly diffuse out into the Mo electrode (credit: Qing Cao et al./Science)


The new “end-bonded contact scheme” allows carbon-nanotube contacts to be shrunk to below 10 nanometers without deteriorating performance. IBM says the scheme could overcome contact resistance challenges all the way to the 1.8 nanometer node and replace silicon with carbon nanotubes.

Silicon transistors have been made smaller year after year, but they are approaching a point of physical limitation. With Moore’s Law running out of steam, shrinking the size of the transistor — including the channels and contacts — without compromising performance has been a challenge for researchers for decades.



Single wall carbon nanotube (credit: IBM)


IBM has previously shown that carbon nanotube transistors can operate as excellent switches at channel dimensions of less than ten nanometers, which is less than half the size of today’s leading silicon technology. Electrons in carbon transistors can move more easily than in silicon-based devices and use less power.

Carbon nanotubes are also flexible and transparent, making them useful for flexible and stretchable electronics or sensors embedded in wearables.

IBM acknowledges that several major manufacturing challenges still stand in the way of commercial devices based on nanotube transistors.

Earlier this summer, IBM unveiled the first 7 nanometer node silicon test chip, pushing the limits of silicon technologies.


Science 2 Oct 2015; 350(6256):68–72      http://dx.doi.org/10.1126/science.aac8006

Moving beyond the limits of silicon transistors requires both a high-performance channel and high-quality electrical contacts. Carbon nanotubes provide high-performance channels below 10 nanometers, but as with silicon, the increase in contact resistance with decreasing size becomes a major performance roadblock. We report a single-walled carbon nanotube (SWNT) transistor technology with an end-bonded contact scheme that leads to size-independent contact resistance to overcome the scaling limits of conventional side-bonded or planar contact schemes. A high-performance SWNT transistor was fabricated with a sub–10-nanometer contact length, showing a device resistance below 36 kilohms and on-current above 15 microampere per tube. The p-type end-bonded contact, formed through the reaction of molybdenum with the SWNT to form carbide, also exhibited no Schottky barrier. This strategy promises high-performance SWNT transistors, enabling future ultimately scaled device technologies.
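As a back-of-the-envelope consistency check (not from the paper), Ohm's law gives the voltage drop implied by the two reported bounds, a device resistance below 36 kilohms carrying an on-current above 15 microamperes per tube:

```python
# Ohm's law sanity check on the reported SWNT transistor figures.
device_resistance = 36e3   # ohms (reported upper bound)
on_current = 15e-6         # amperes per tube (reported lower bound)

implied_drop = device_resistance * on_current  # V = I * R
print(implied_drop)  # about 0.54 volts across the device at that current
```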





Read Full Post »

Biomarker Development

Biomarker Development, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

Biomarker Development

Curator: Larry H. Bernstein, MD, FCAP



NBDA’s Biomarker R&D Modules


“collaboratively creating the NBDA Standards* required for end-to-end, evidence-based biomarker development to advance precision (personalized) medicine”




Successful biomarkers should move systematically and seamlessly through specific R&D “modules” – from early discovery to clinical validation. NBDA’s end-to-end systems approach is based on working with experts from all affected multi-sector stakeholder communities to build an in-depth understanding of the existing barriers in each of these “modules” to support decision making at each juncture. Following extensive “due diligence,” the NBDA works with all stakeholders to assemble and/or create the enabling standards (guidelines, best practices, SOPs) needed to support clinically relevant and robust biomarker development.

Mission: Collaboratively creating the NBDA Standards* required for end-to-end, evidence-based biomarker development to advance precision (personalized) medicine.
NBDA Standards include but are not limited to: “official existing standards”, guidelines, principles, standard operating procedures (SOP), and best practices.



“The NBDA’s vision is not to just relegate the current biomarker development processes to history, but also to serve as a working example of what convergence of purpose, scientific knowledge and collaboration can accomplish.”

NBDA Workshop VII   December 14-15, 2015   Washington Court Hotel, Washington, DC

The upcoming meeting was preceded by an NBDA workshop held on December 1-2, 2014, “The Promising but Elusive Surrogate Endpoint: What Will It Take?” where we explored in depth with FDA leadership and experts in the field the current status and future vision for achieving success in surrogate endpoint development. Through panels and workgroups, the attendees extended their efforts to pursue the FDA’s biomarker qualification pathway through the creation of sequential contexts-of-use models to support qualification of drug development tools – and ultimately surrogate endpoints.

Although the biomarker (drug development tools) qualification pathway (http://www.fda.gov/Drugs/DevelopmentApprovalProcess/DrugDevelopmentTools…) represents an opportunity to increase the value of predictive biomarkers, animal models, and clinical outcomes across the drug (and biologics) development continuum, there are myriad challenges.  In that regard, the lack of evidentiary standards to support contexts of use-specific biomarkers emerged from the prior NBDA workshop as the major barrier to achieving the promise of biomarker qualification.  It also became clear that overall, the communities do not understand the biomarker qualification process; nor do they fully appreciate that it is up to the stakeholders in the field (academia, non-profit foundations, pharmaceutical and biotechnology companies, and patient advocate organizations) to develop these evidentiary standards.

This NBDA workshop will feature a unique approach to address these problems. Over the past two years, the NBDA has worked with experts in selected disease areas to develop specific case studies that feature a systematic approach to identifying the evidentiary standards needed for sequential contexts of use for specific biomarkers to drive biomarker qualification. These constructs and accompanying whitepapers are now the focus of collaborative discussions with FDA experts.

The upcoming meeting will feature in-depth panel discussions of 3-4 of these cases, including the case leader, additional technical contributors, and a number of FDA experts.  Each of the panels will analyze their respective case for strengths and weaknesses – including suggestions for making the biomarker qualification path for the specific biomarker more transparent and efficient. In addition, the discussions will highlight the problem of poor reproducibility of biomarker discovery results, and its impact on the qualification process.


Health Care in the Digital Age

Mobile, big data, the Internet of Things and social media are leading a revolution that is transforming opportunities in health care and research. Extraordinary advancements in mobile technology and connectivity have provided the foundation needed to dramatically change the way health care is practiced today and research is done tomorrow. While we are still in the early innings of using mobile technology in the delivery of health care, evidence supporting its potential to impact the delivery of better health care, lower costs and improve patient outcomes is apparent. Mobile technology for health care, or mHealth, can empower doctors to more effectively engage their patients and provide secure information on demand, anytime and anywhere. Patients demand safety, speed and security from their providers. What are the technologies that are allowing this transformation to take place?


https://youtu.be/WeXEa2cL3oA    Monday, April 27, 2015  Milken Institute


Michael Milken, Chairman, Milken Institute



Anna Barker, Fellow, FasterCures, a Center of the Milken Institute; Professor and Director, Transformative Healthcare Networks, and Co-Director, Complex Adaptive Systems Network, Arizona State University
Atul Butte, Director, Institute of Computational Health Sciences, University of California, San Francisco
John Chen, Executive Chairman and CEO, BlackBerry
Victor Dzau, President, Institute of Medicine, National Academy of Sciences; Chancellor Emeritus, Duke University
Patrick Soon-Shiong, Chairman and CEO, NantWorks, LLC



Read Full Post »

Better Bioinformatics, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

Better bioinformatics

Larry H. Bernstein, MD, FCAP, Curator



Big data in biomedicine: 4 big questions

Eric Bender. Nature Nov 2015; 527:S19.     http://dx.doi.org/10.1038/527S19a


Gathering and understanding the deluge of biomedical research and health data poses huge challenges. But this work is rapidly changing the face of medicine.




1. How can long-term access to biomedical data that are vital for research be improved?

Why it matters
Data storage may be getting cheaper, particularly in cloud computing, but the total costs of maintaining biomedical data are too high and climbing rapidly. Current models for handling these tasks are only stopgaps.

Next steps
Researchers, funders and others need to analyse data usage and look at alternative models, such as ‘data commons’, for providing access to curated data in the long term. Funders also need to incorporate resources for doing this.

“Our mission is to use data science to foster an open digital ecosystem that will accelerate efficient, cost-effective biomedical research to enhance health, lengthen life and reduce illness and disability.” Philip Bourne, US National Institutes of Health.


2. How can the barriers to using clinical trial results and patients’ health records for research be lowered?

Why it matters
‘De-identified’ data from clinical trials and patients’ medical records offer opportunities for research, but the legal and technical obstacles are immense. Clinical study data are rarely shared, and medical records are walled off by privacy and security regulations and by legal concerns.

Next steps
Patient advocates are lobbying for access to their own health data, including genomic information. The European Medicines Agency is publishing clinical reports submitted as part of drug applications. And initiatives such as CancerLinQ are gathering de-identified patient data.

“There’s a lot of genetic information that no one understands yet, so is it okay or safe or right to put that in the hands of a patient? The flip side is: it’s my information — if I want it, I should get it.” Megan O’Boyle, Phelan-McDermid Syndrome Foundation.


3. How can knowledge from big data be brought into point-of-care health-care delivery?

Why it matters
Delivering precision medicine will immensely broaden the scope of electronic health records. This massive shift in health care will be complicated by the introduction of new therapies, requiring ongoing education for clinicians who need detailed information to make clinical decisions.

Next steps
Health systems are trying to bring up-to-date treatments to clinics and build ‘health-care learning systems’ that integrate with electronic health records. For instance, the CancerLinQ project provides recommendations for patients with cancer whose treatment is hard to optimize.

“Developing a standard interface for innovators to access the information in electronic health records will connect the point of care to big data and the full power of the web, spawning an ‘app store’ for health.” Kenneth Mandl, Harvard Medical School.


4. Can academia create better career tracks for bioinformaticians?

Why it matters
The lack of attractive career paths in bioinformatics has led to a shortage of scientists who have both strong statistical skills and biological understanding. The loss of data scientists to other fields is slowing the pace of medical advances.

Next steps
Research institutions will take steps, including setting up formal career tracks, to reward bioinformaticians who take on multidisciplinary collaborations. Funders will find ways to better evaluate contributions from bioinformaticians.

“Perhaps the most promising product of big data, that labs will be able to explore countless and unimagined hypotheses, will be stymied if we lack the bioinformaticians that can make this happen.” Jeffrey Chang, University of Texas.


Eric Bender is a freelance science writer based in Newton, Massachusetts.

Read Full Post »

Inadequacy of EHRs

Larry H. Bernstein, MD, FCAP, Curator



EHRs need better workflows, less ‘chartjunk’

By Marla Durben Hirsch

Electronic health records currently handle data poorly and should be enhanced to better collect, display and use it to support clinical care, according to a new study published in JMIR Medical Informatics.

The authors, from Beth Israel Deaconess Medical Center and elsewhere, state that the next generation of EHRs needs to improve workflow, clinical decision-making and clinical notes. They decry some of the problems with existing EHRs, including data that are poorly displayed, undernetworked, underutilized and wasted. The lack of available data causes errors, creates inefficiencies and increases costs. Data are also “thoughtlessly carried forward or copied and pasted into the current note,” creating “chartjunk,” the researchers say.

They suggest ways that future EHRs can be improved, including:

  • Integrating bedside and telemetry monitoring systems with EHRs to provide data analytics that could support real time clinical assessments
  • Formulating notes in real-time using structured data and natural language processing on the free text being entered
  • Formulating treatment plans using information in the EHR plus a review of population databases to identify similar patients, their treatments and outcomes
  • Creating a more “intelligent” design that capitalizes on the note writing process as well as the contents of the note.
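The second suggestion, structuring free text in real time, can be sketched with even very lightweight extraction. The patterns and field names below are hypothetical illustrations, not taken from the study:

```python
import re

# Hypothetical sketch: pull structured vitals out of free text as it is
# typed, the kind of lightweight extraction a next-generation EHR could
# run in real time alongside fuller natural language processing.
BP_RE = re.compile(r"BP\s*(\d{2,3})/(\d{2,3})")
HR_RE = re.compile(r"HR\s*(\d{2,3})")
TEMP_RE = re.compile(r"[Tt]emp\s*(\d{2}(?:\.\d)?)")

def extract_vitals(note_text):
    structured = {}
    m = BP_RE.search(note_text)
    if m:
        structured["systolic"] = int(m.group(1))
        structured["diastolic"] = int(m.group(2))
    m = HR_RE.search(note_text)
    if m:
        structured["heart_rate"] = int(m.group(1))
    m = TEMP_RE.search(note_text)
    if m:
        structured["temperature"] = float(m.group(1))
    return structured

print(extract_vitals("Pt alert. BP 128/82, HR 76, temp 37.2, lungs clear."))
```

A production system would of course rely on clinical NLP rather than regular expressions, but the principle is the same: structured fields populated as the note is written, not after.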

“We have begun to recognize the power of data in other domains and are beginning to apply it to the clinical space, applying digitization as a necessary but insufficient tool for this purpose,” the researchers say. “The vast amount of information and clinical choices demands that we provide better supports for making decisions and effectively documenting them.”

Many have pointed out the flaws in current EHR design that impede the optimum use of data and hinder workflow. Researchers have suggested that EHRs can be part of a learning health system to better capture and use data to improve clinical practice, create new evidence, educate, and support research efforts.


Disrupting Electronic Health Records Systems: The Next Generation

1Beth Israel Deaconess Medical Center, Division of Pulmonary, Critical Care, and Sleep Medicine, Boston, MA, US; 2Yale University, Yale-New Haven Hospital, Department of Pulmonary and Critical Care, New Haven, CT, US; 3Center for Urban Science and Progress, New York University, New York, NY, US; 4Center for Wireless Health, Departments of Anesthesiology and Neurological Surgery, University of Virginia, Charlottesville, VA, US

*these authors contributed equally

JMIR Med Inform 23.10.15, Vol 3, No 4 (2015): Oct-Dec

This paper is in the following e-collection/theme issue:
Electronic Health Records
Clinical Information and Decision Making
Viewpoints on and Experiences with Digital Technologies in Health
Decision Support for Health Professionals

The health care system suffers from both inefficient and ineffective use of data. Data are suboptimally displayed to users, undernetworked, underutilized, and wasted. Errors, inefficiencies, and increased costs occur on the basis of unavailable data in a system that does not coordinate the exchange of information, or adequately support its use. Clinicians’ schedules are stretched to the limit and yet the system in which they work exerts little effort to streamline and support carefully engineered care processes. Information for decision-making is difficult to access in the context of hurried real-time workflows. This paper explores and addresses these issues to formulate an improved design for clinical workflow, information exchange, and decision making based on the use of electronic health records.   JMIR Med Inform 2015;3(4):e34   http://dx.doi.org/10.2196/medinform.4192


Celi LA, Marshall JD, Lai Y, Stone DJ. Disrupting Electronic Health Records Systems: The Next Generation. JMIR Med Inform 2015;3(4):e34  DOI: 10.2196/medinform.4192  PMID: 26500106

Weed introduced the “Subjective, Objective, Assessment, and Plan” (SOAP) note in the late 1960s [1]. This note entails a high-level structure that supports the thought process that goes into decision-making: subjective data followed by ostensibly more reliable objective data employed to formulate an assessment and subsequent plan. The flow of information has not fundamentally changed since that time, but the complexities of the information, possible assessments, and therapeutic options certainly have greatly expanded. Clinicians have not heretofore created anything like an optimal data system for medicine [2,3]. Such a system is essential to streamline workflow and support decision-making rather than adding to the time and frustration of documentation [4].
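The SOAP structure Weed introduced maps naturally onto a simple data model. A minimal sketch (the class, field names, and `render` helper are illustrative, not from the paper):

```python
from dataclasses import dataclass

@dataclass
class SOAPNote:
    subjective: str   # patient-reported symptoms and history
    objective: str    # measured findings: vitals, labs, imaging
    assessment: str   # clinician's diagnostic interpretation
    plan: str         # treatment and follow-up steps

    def render(self):
        # Emit the note in the conventional S/O/A/P order.
        return "\n".join(
            f"{label}: {text}" for label, text in [
                ("S", self.subjective), ("O", self.objective),
                ("A", self.assessment), ("P", self.plan),
            ]
        )

note = SOAPNote(
    subjective="Patient reports chest tightness on exertion.",
    objective="BP 150/95, HR 88, troponin negative.",
    assessment="Stable angina, hypertension.",
    plan="Start beta-blocker; stress test within 2 weeks.",
)
print(note.render())
```

A well-engineered EHR could pre-populate the subjective and objective fields from prior notes and live data feeds, leaving the clinician to concentrate on assessment and plan.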

What this optimal data system offers is not a radical departure from the traditional thought processes that go into the production of a thoughtful and useful note. However, in the current early stage digitized medical system, it is still incumbent on the decision maker/note creator to capture the relevant priors, and to some extent, digitally scramble to collect all the necessary updates. The capture of these priors is a particular challenge in an era where care is more frequently turned over among different caregivers than ever before. Finally, based on a familiarity of the disease pathophysiology, the medical literature and evidence-based medicine (EBM) resources, the user is tasked with creating an optimal plan based on that assessment. In this so-called digital age, the amount of memorization, search, and assembly can be minimized and positively supported by a well-engineered system purposefully designed to assist clinicians in note creation and, in the process, decision-making.

Since 2006, use of electronic health records (EHRs) by US physicians increased by over 160% with 78% of office-based physicians and 59% of hospitals having adopted an EHR by 2013 [5,6]. With implementation of federal incentive programs, a majority of EHRs were required to have some form of built-in clinical decision support tools by the end of 2012 with further requirements mandated as the Affordable Care Act (ACA) rolls out [7]. These requirements recognize the growing importance of standardization and systematization of clinical decision-making in the context of the rapidly changing, growing, and advancing field of medical knowledge. There are already EHRs and other technologies that exist, and some that are being implemented, that integrate clinical decision support into their functionality, but a more intelligent and supportive system can be designed that capitalizes on the note writing process itself. We should strive to optimize the note creation process as well as the contents of the note in order to best facilitate communication and care coordination. The following sections characterize the elements and functions of this decision support system (Figure 1).


Figure 1. Clinician documentation with fully integrated data systems support. Prior notes and data are input for the following note and decisions. Machine analyzes input and displays suggested diagnoses and problem list, and test and treatment recommendations based on various levels of evidence: CPG – clinical practice guidelines, UTD – Up to Date®, DCDM – Dynamic Clinical Data Mining.

Incorporating Data

Overwhelmingly, the most important characteristic of the electronic note is its potential for the creation and reception of what we term “bidirectional data streams” to inform both decision-making and research. By bidirectional data exchange, we mean that electronic notes have the potential to provide data streams to the entirety of the EHR database and vice versa. The data from the note can be recorded, stored, accessed, retrieved, and mined for a variety of real-time and future uses. This process should be an automatic and intrinsic property of clinical information systems. The incoming data stream is currently produced by the data that is slated for import into the note according to the software requirements of the application and the locally available interfaces [8]. The provision of information from the note to the system has both short- and long-term benefits: in the short term, this information provides essential elements for functions such as benchmarking and quality reporting; and in the long term, the information provides the afferent arm of the learning system that will identify individualized best practices that can be applied to individual patients in future formulations of plans.

Current patient data should include all the electronically interfaced elements that are available and pertinent. In addition to the usual elements that may be imported into notes (eg, laboratory results and current medications), the data should include the immediate prior diagnoses and treatment items, so far as available (especially an issue for the first note in a care sequence such as in the ICU), the active problem list, as well as other updates such as imaging, other kinds of testing, and consultant input. Patient input data should be included after verification (eg, updated reviews of systems, allergies, actual medications being taken, past medical history, family history, substance use, social/travel history, and medical diary that may include data from medical devices). These data priors provide a starting point that is particularly critical for those note writers who are not especially (or at all) familiar with the patient. They represent historical (and yet dynamic) evidence intended to inform decision-making rather than “text” to be thoughtlessly carried forward or copied and pasted into the current note.

Although the amount and types of data collected are extremely important, how they are used and displayed is paramount. Many historical elements of note writing are inexcusably costly in terms of clinician time and effort when viewed at a level throughout the entire health care system. Redundant items such as laboratory results and copy-and-pasted nursing flow sheet data introduce a variety of “chartjunk” that clutters documentation, makes the identification of truly important information more difficult, and potentially even introduces errors that are then propagated throughout the chart [9,10]. Electronic systems are poised to automatically capture the salient components of care so far as these values are interfaced into the system and can even generate an active problem list for the providers. With significant amounts of free text and “unstructured data” being entered, EHRs will need to incorporate more sophisticated processes such as natural language processing and machine learning to provide accurate interpretation of text entered by a variety of different users, from different sources, and in different formats, which can then be translated into structured data that can be analyzed by the system.
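To make the free-text-to-structured-data step concrete, here is a minimal sketch of the simplest possible approach, rule-based pattern extraction. The field names and patterns are hypothetical illustrations, not part of the system the authors describe; a production EHR would use far more sophisticated natural language processing.

```python
import re

# Hypothetical rule-based extractor: a toy stand-in for the NLP pipeline
# the paper envisions, mapping free-text note fragments to structured fields.
# Field names and regex patterns are illustrative assumptions.
PATTERNS = {
    "temperature_c": re.compile(r"temp(?:erature)?\s*[:=]?\s*(\d{2}\.?\d?)\s*c", re.I),
    "heart_rate":    re.compile(r"(?:hr|heart rate)\s*[:=]?\s*(\d{2,3})", re.I),
    "spo2":          re.compile(r"(?:spo2|o2 sat)\s*[:=]?\s*(\d{2,3})\s*%", re.I),
}

def extract_structured(note_text: str) -> dict:
    """Return structured numeric fields found in a free-text note."""
    result = {}
    for field, pattern in PATTERNS.items():
        match = pattern.search(note_text)
        if match:
            result[field] = float(match.group(1))
    return result

print(extract_structured("Febrile overnight, temp 38.5 C, HR 112, SpO2 91 %."))
```

Even this crude version shows why the paper argues for some standardization within free text: the extractor succeeds only when clinicians use recognizable terminology.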

Optimally, a fully functional EHR would be able to provide useful predictive data analytics including the identification of patterns that characterize a patient’s normal physiologic state (thereby enabling detection of significant change from that state), as well as mapping of the predicted clinical trajectory, such as prognosis of patients with sepsis under a number of different clinical scenarios, and with the ability to suggest potential interventions to improve morbidity or mortality [11]. Genomic and other “-omic” information will eventually be useful in categorizing certain findings on the basis of individual susceptibilities to various clinical problems such as sepsis, auto-immune disease, and cancer, and in individualizing diagnostic and treatment recommendations. In addition, an embedded data analytic function will be able to recognize a constellation of relatively subtle changes that are difficult or impossible to detect, especially in the presence of chronic co-morbidities (eg, changes consistent with pulmonary embolism, which can be a subtle and difficult diagnosis in the presence of long standing heart and/or lung disease) [12,13].

The data presentation section must be thoughtfully displayed so that the user is not overwhelmed, but is still aware of what elements are available, and directed to those aspects that are most important. The user then has the tools at hand to construct the truly cognitive sections of the note: the assessment and plan. Data should be displayed in a fashion that efficiently and effectively provides a maximally informationally rich and minimally distracting graphic display. The fundamental principle should result in a thoughtfully planned data display created on the ethos of “just enough and no more,” as well as the incorporation of clinical elements such as severity, acuity, stability, and reversibility. In addition to the now classic teachings of Edward Tufte in this regard, a number of new data artists have entered the field [14]. There is room for much innovation and improvement in this area, as medicine transitions from paper to a digital format that provides enormous potential and capability for new types of displays.

Integrating the Monitors

Bedside and telemetry monitoring systems have become an element of the clinical information system but they do not yet interact with the EHR in a bidirectional fashion to provide decision support. In addition to the raw data elements, the monitors can provide data analytics that could support real-time clinical assessment as well as material for predictive purposes apart from the traditional noisy alarms [15,16]. It may be less apparent how the reverse stream (EHR to bedside monitor) would work, but the EHR can set the context for the interpretation of raw physiologic signals based on previously digitally captured vital signs, patient co-morbidities and current medications, as well as the acute clinical context.

In addition, the display could provide an indication of whether technically “out of normal range” vital signs (or labs in the emergency screen described below) are actually “abnormal” for this particular patient. For example, a particular laboratory value for a patient may have been chronically out of normal range and not represent a change requiring acute investigation and/or treatment. This might be accomplished by displaying these types of “normally abnormal” values in a purple or green rather than red font, or via some other designating graphic. The purple font (or whatever display mode was utilized) would designate the value as technically abnormal, but perhaps not contextually abnormal. Such designations are particularly important for caregivers who are not familiar with the patient.
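The display rule just described can be sketched in a few lines. This is an illustrative sketch only, assuming a patient baseline range is derivable from prior results; the color choices follow the example in the text.

```python
# Sketch of the "normally abnormal" display rule described above, assuming a
# patient baseline range is available from prior results. Values outside the
# population reference range but inside the patient's own historical range
# are flagged in purple rather than red.

def display_color(value, reference_range, patient_baseline_range=None):
    ref_lo, ref_hi = reference_range
    if ref_lo <= value <= ref_hi:
        return "black"                      # within population normal range
    if patient_baseline_range:
        base_lo, base_hi = patient_baseline_range
        if base_lo <= value <= base_hi:
            return "purple"                 # abnormal, but normal for this patient
    return "red"                            # contextually abnormal: flag it

# Creatinine chronically elevated, eg, in a dialysis patient:
print(display_color(3.1, (0.6, 1.2), patient_baseline_range=(2.8, 3.5)))  # purple
```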

It also might be desirable to use a combination of accumulated historical data from the monitor and the EHR to formulate personalized alarm limits for each patient. Such personalized alarm limits would provide a smarter range of acceptable values for each patient and perhaps also act to reduce the unacceptable number of false positive alarms that currently plague bedside caregivers (and patients) [17]. These alarm limits would be dynamically based on the input data and subject to reformulation as circumstances changed. We realize that any venture into alarm settings becomes a regulatory and potentially medico-legal issue, but these intimidating factors should not be allowed to grind potentially beneficial innovations to a halt. For example, “hard” limits could be built into the alarm machine so that the custom alarm limits could not fall outside certain designated values.
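A minimal sketch of such personalized alarm limits follows. The k-sigma band and the hard safety bounds are illustrative assumptions, not clinically validated values; the point is only that dynamically derived limits can be clamped so they never fall outside designated hard limits.

```python
import statistics

# Sketch of dynamically personalized alarm limits, clamped to fixed "hard"
# safety bounds so custom limits can never drift outside designated values.
# The k-sigma band and the hard bounds here are illustrative, not clinical.

def personalized_limits(history, hard_lo, hard_hi, k=3.0):
    """Derive alarm limits from a patient's recent measurements."""
    mean = statistics.fmean(history)
    sd = statistics.stdev(history)
    lo = max(mean - k * sd, hard_lo)   # never below the hard floor
    hi = min(mean + k * sd, hard_hi)   # never above the hard ceiling
    return lo, hi

# Heart-rate history for a chronically mildly tachycardic patient:
hr_history = [104, 108, 111, 106, 109, 112, 107]
print(personalized_limits(hr_history, hard_lo=40, hard_hi=150))
```

For this patient, a fixed population alarm threshold of 100 bpm would alarm continuously; the personalized band alarms only on genuine departures from the patient's own baseline, while the hard bounds still guarantee an alarm at extreme values.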

Supporting the Formulation of the Assessment

Building on both prior and new, interfaced and manually entered data as described above, the next framework element would consist of the formulation of the note in real time. This would consist of structured data so far as available and feasible, but is more likely to require real-time natural language processing performed on the free text being entered. Different takes on this kind of templated structure have already been introduced into several electronic systems. These include note templates created for specific purposes such as end-of-life discussions, or documentation of cardiopulmonary arrest. The very nature of these note types provides a robust context for the content. We also recognize that these shorter and more directed types of notes are not likely to require the kind of extensive clinical decision support (CDS) from which an admission or daily progress note may benefit.

Until the developers of EHRs find a way to fit structured data selection seamlessly and transparently into workflow, we will have to do the best we can with the free text that we have available. While this is a bit clunky in terms of data utilization purposes, perhaps it is not totally undesirable, as free text inserts a needed narrative element into the otherwise storyless EHR environment. Medical care can be described as an ongoing story and free text conveys this story in a much more effective and interesting fashion than do selected structured data bits. Furthermore, stories tend to be more distinctive than lists of structured data entries, which sometimes seem to vary remarkably little from patient to patient. But to extract the necessary information, the computer still needs a processed interpretation of that text. More complex systems are being developed and actively researched to act more analogously to our own “human” form of clinical problem solving [18], but until these systems are integrated into existing EHRs, clinicians may be able to help by being trained to minimize potential confusion: reducing completely unconstrained free text entries and/or utilizing some degree of standardization in free text terminologies and contextual modifiers.

Employing the prior data (eg, diagnoses X, Y, Z from the previous note) and new data inputs (eg, laboratory results, imaging reports, and consultants’ recommendations) in conjunction with the assessment being entered, the system would have the capability to check for inconsistencies and omissions based on analysis of both prior and new entries. For example, a patient in the ICU has increasing temperature and heart rate, and decreasing oxygen saturation. These continuous variables are referenced against other patient features and risk factors to suggest the possibility that the patient has developed a pulmonary embolism or an infectious ventilator-associated complication. The system then displays these possible diagnoses within the working assessment screen with hyperlinks to the patient’s flow sheets and other data supporting the suggested problems (Figure 2). The formulation of the assessment is clearly not as potentially evidence-based as that of the plan; however, there should still be dynamic, automatic and rapid searches performed for pertinent supporting material in the formulation of the assessment. These would include the medical literature, including textbooks, online databases, and applications such as WebMD. The relevant literature that the system has identified, supporting the associations listed in the assessment and plan, can then be screened by the user for accuracy and pertinence to the specific clinical context. Another potentially useful CDS tool for assessment formulation is a modality we have termed dynamic clinical data mining (DCDM) [19]. DCDM draws upon the power of large sets of population health data to provide differential diagnoses associated with groupings or constellations of symptoms and findings. Similar to the process just described, the clinician would then have the ability to review and incorporate these suggestions or not.
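The ICU example above, concurrent trends in vitals triggering diagnosis suggestions for the assessment screen, can be reduced to a toy rule. The trend test, thresholds, and the suggestion list here are illustrative assumptions; a real system would reference the trends against many more patient features and risk factors, as the text describes.

```python
# Illustrative version of the trend check described above: rising temperature
# and heart rate with falling oxygen saturation prompt suggested diagnoses
# on the working assessment screen. All rules here are toy assumptions.

def trend(series):
    """Crude trend: +1 rising, -1 falling, 0 flat (last vs first value)."""
    return (series[-1] > series[0]) - (series[-1] < series[0])

def suggest_diagnoses(temps, heart_rates, spo2s, on_ventilator=False):
    suggestions = []
    if trend(temps) > 0 and trend(heart_rates) > 0 and trend(spo2s) < 0:
        suggestions.append("pulmonary embolism")
        if on_ventilator:
            suggestions.append("infectious ventilator-associated complication")
    return suggestions

print(suggest_diagnoses([37.1, 37.8, 38.6],   # temperature rising
                        [88, 97, 110],        # heart rate rising
                        [97, 94, 91],         # SpO2 falling
                        on_ventilator=True))
```

In the envisioned system, each suggested item would be hyperlinked to the flow sheet data supporting it, leaving the clinician to accept or reject the suggestion.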

An optional active search function would also be provided throughout the note creation process for additional flexibility—clinicians are already using search engines, but sometimes in the absence of specific clinical search algorithms (eg, a generic search engine such as Google). This may produce search results that are not always of the highest possible quality [20,21]. The EHR-embedded search engine would have its algorithm modified to meet the task, as Google has done previously for its search engine [22]. The searchable TRIP database provides a search engine for high-quality clinical evidence, as do the search modalities within Up to Date, Dynamed, BMJ Clinical Evidence, and others [23,24].


Figure 2. Mock visualization of symptoms, signs, laboratory results, and other data input, and system suggestions for differential diagnoses.

Supporting the Formulation of the Plan

With the assessment formulated, the system would then formulate a proposed plan using EBM inputs and DCDM refinements for issues lying outside EBM knowledge. Decision support for plan formulation would include items such as randomized control trials (RCTs), observational studies, clinical practice guidelines (CPGs), local guidelines, and other relevant elements (eg, Cochrane reviews). The system would provide these supporting modalities in a hierarchical fashion using evidence of the highest quality first before proceeding down the chain to lower quality evidence. Notably, RCT data are not available for the majority of specific clinical questions, or are not applicable because the results cannot be generalized to the patient at hand due to the study’s inclusion and exclusion criteria [25]. Sufficiently reliable observational research data also may not be available, although we expect that the holes in the RCT literature will be increasingly filled by observational studies in the near future [16,26]. In the absence of pertinent evidence-based material, the system would include the functionality which we have termed DCDM, and our Stanford colleagues have termed the “green button” [19,27]. This still-theoretical process is described in detail in the references, but in brief, DCDM would utilize a search engine type of approach to examine a population database to identify similar patients on the basis of the information entered in the EHR. The prior treatments and outcomes of these historical patients would then be analyzed to present options for the care of the current patient that were, to a large degree, based on prior data. The efficacy of DCDM would depend on, among other factors, the availability of a sufficiently large population EHR database, or an open repository that would allow for the sharing of patient data between EHRs.
This possibility is quickly becoming a reality with the advent of large, deidentified clinical databases such as that being created by the Patient Centered Outcomes Research Institute [26].
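The core DCDM loop, retrieve similar prior patients, then summarize their treatments and outcomes, can be sketched as follows. The records, the similarity rule, and the outcome measure are all hypothetical stand-ins for a real de-identified population database, and a workable similarity rule would of course use far richer patient features.

```python
from collections import defaultdict

# Toy sketch of dynamic clinical data mining (DCDM): retrieve prior patients
# similar to the current one from a population database, then summarize
# outcomes by treatment. Records, features, and the similarity rule are
# hypothetical stand-ins for a real de-identified clinical database.

def similar(a, b):
    """Illustrative similarity rule: same diagnosis, age within 10 years."""
    return a["dx"] == b["dx"] and abs(a["age"] - b["age"]) <= 10

def dcdm_options(current, database):
    outcomes = defaultdict(list)
    for record in database:
        if similar(current, record):
            outcomes[record["treatment"]].append(record["survived"])
    # survival fraction per treatment among similar prior patients
    return {tx: sum(v) / len(v) for tx, v in outcomes.items()}

db = [
    {"dx": "sepsis", "age": 64, "treatment": "early vasopressors", "survived": 1},
    {"dx": "sepsis", "age": 70, "treatment": "early vasopressors", "survived": 1},
    {"dx": "sepsis", "age": 68, "treatment": "fluids only",        "survived": 0},
    {"dx": "pneumonia", "age": 66, "treatment": "fluids only",     "survived": 1},
]
print(dcdm_options({"dx": "sepsis", "age": 67}, db))
```

The clinician would then review these population-derived options and incorporate them or not, exactly as with the EBM-derived suggestions.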

The tentative plan could then be modified by the user on the basis of her or his clinical “wetware” analysis. The electronic workflow could be designed in a number of ways that were modifiable per user choice/customization. For example, the user could first create the assessment and plan which would then be subject to comment and modification by the automatic system. This modification might include suggestions such as adding entirely new items, as well as the editing of entered items. In contrast, as described, the system could formulate an original assessment and plan that was subject to final editing by the user. In either case, the user would determine the final output, but the system would record both system and final user outputs for possible reporting purposes (eg, consistency with best practices). Another design approach might be to display the user entry in toto on the left half of a computer screen and a system-formulated assessment (Figure 3) and plan on the right side for comparison. Links would be provided throughout the system formulation so that the user could drill into EHR-provided suggestions for validation and further investigation and learning. In either type of workflow, the system would comparatively evaluate the final entered plan for consistency, completeness, and conformity with current best practices. The system could display the specific items that came under question and why. Users may proceed to adopt or not, with the option to justify their decision. Data reporting analytics could be formulated on the basis of compliance with EBM care. Such analytics should be done and interpreted with the knowledge that EBM itself is a moving target and many clinical situations do not lend themselves to resolution with the current tools supplied by EBM.

Since not all notes call for this kind of extensive decision support, the CDS material could be displayed in a separate columnar window adjacent to the main part of the screen where the note contents were displayed so that workflow is not affected. Another possibility would be an “opt-out” button by which the user would choose not to utilize these system resources. This would be analogous but functionally opposite to the “green button” opt-in option suggested by Longhurst et al, and perhaps be designated the “orange button” to clearly make this distinction [27]. Later, the system would make a determination as to whether this lack of EBM utilization was justified, and provide a reminder if the care was determined to be outside the bounds of current best practices. While the goal is to keep the user on the EBM track as much as feasible, the system has to “realize” that real care will still extend outside those bounds for some time, and that some notes and decisions simply do not require such machine support.

There are clearly still many details to be worked out regarding the creation and use of a fully integrated bidirectional EHR. There currently are smaller systems that use some components of what we propose. For example, a large Boston hospital uses a program called QPID which culls all previously collected patient data and uses a Google-like search to identify specific details of relevant prior medical history which is then displayed in a user-friendly fashion to assist the clinician in making real-time decisions on admission [28]. Another organization, the American Society of Clinical Oncology, has developed a clinical Health IT tool called CancerLinQ which utilizes large clinical databases of cancer patients to trend current practices and compare the specific practices of individual providers with best practice guidelines [29]. Another hospital system is using many of the components discussed in a new, internally developed platform called Fluence that allows aggregation of patient information, and applies already known clinical practice guidelines to patients’ problem lists to assist practitioners in making evidenced-based decisions [30]. All of these efforts reflect inadequacies in current EHRs and are important pieces in the process of selectively and wisely incorporating these technologies into EHRs, but doing so universally will be a much larger endeavor.


Figure 3. Mock screenshot for the “Assessment and Plan” screen with background data analytics. Based on background analytics that are being run by the system at all times, a series of “problems” are identified and suggested by the system and displayed in the EMR in the box on the left. The clinician can then select suggested problems, or input new problems, which are then displayed in the box on the right of the EMR screen and become part of ongoing analytics for future assessment.


Medicine has finally entered an era in which clinical digitization implementations and data analytic systems are converging. We have begun to recognize the power of data in other domains and are beginning to apply it to the clinical space, applying digitization as a necessary but insufficient tool for this purpose (personal communication from Peter Szolovits, The Unreasonable Effectiveness of Clinical Data. Challenges in Big Data for Data Mining, Machine Learning and Statistics Conference, March 2014). The vast amount of information and clinical choices demand that we provide better support for making decisions and effectively documenting them. The Institute of Medicine demands a “learning health care system” where analysis of patient data is a key element in continuously improving clinical outcomes [31]. This is also an age of increasing medical complexity bound up in increasing financial and time constraints. The latter dictate that medical practice should become more standardized and evidence-based in order to optimize outcomes at the lowest cost. Current EHRs, mostly implemented over the past decade, are a first step in the digitization process, but do not support decision-making or streamline the workflow to the extent to which they are capable. In response, we propose a series of information system enhancements that we hope can be seized, improved upon, and incorporated into the next generation of EHRs.

There is already government support for these advances: The Office of the National Coordinator for Health IT recently outlined their 6-year and 10-year plans to improve EHR and health IT interoperability, so that large-scale realizations of this idea can and will exist. Within 10 years, they envision that we “should have an array of interoperable health IT products and services that allow the health care system to continuously learn and advance the goal of improved health care.” In that, they envision an integrated system across EHRs that will improve not just individual health and population health, but also act as a nationwide repository for searchable and researchable outcomes data [32]. The first step to achieving that vision is by successfully implementing the ideas and the system outlined above into a more fully functional EHR that better supports both workflow and clinical decision-making. Further, these suggested changes would also contribute to making the note writing process an educational one, thereby justifying the very significant time and effort expended, and would begin to establish a true learning system of health care based on actual workflow practices. Finally, the goal is to keep clinicians firmly in charge of the decision loop in a “human-centered” system in which technology plays an essential but secondary role. As expressed in a recent article on the issue of automating systems [33]:

In this model (human centered automation)…technology takes over routine functions that a human operator has already mastered, issues alerts when unexpected situations arise, provides fresh information that expands the operator’s perspective and counters the biases that often distort human thinking. The technology becomes the expert’s partner, not the expert’s replacement.

Key Concepts and Terminology

A number of concepts and terms were introduced throughout this paper, and some clarification and elaboration of these follows:

  • Affordable Care Act (ACA): Legislation passed in 2010 that constitutes two separate laws including the Patient Protection and Affordable Care Act and the Health Care and Education Reconciliation Act. These two pieces of legislation act together for the expressed goal of expanding health care coverage to low-income Americans through expansion of Medicaid and other federal assistance programs [34].
  • Clinical Decision Support (CDS) is defined by CMS as “a key functionality of health information technology” that encompasses a variety of tools including computerized alerts and reminders, clinical guidelines, condition-specific order sets, documentation templates, diagnostic support, and other tools that “when used effectively, increases quality of care, enhances health outcomes, helps to avoid errors and adverse events, improves efficiency, reduces costs, and boosts provider and patient satisfaction” [35].
  • Cognitive Computing is defined as “the simulation of human thought processes in a computerized model…involving self-learning systems that use data mining, pattern recognition and natural language processing to mimic the way the human brain works” [36]. IBM defines it as computer systems that “are trained using artificial intelligence and machine learning algorithms to sense, predict, infer and, in some ways, think” [37].
  • Deep learning is a form of machine learning (a more specific subgroup of cognitive computing) that utilizes multiple levels of data to make hierarchical connections and recognize more complex patterns, enabling it to infer higher level concepts from lower levels of input and previously inferred concepts [38]. Figure 3 demonstrates how this concept relates to patients: the system recognizes patterns of signs and symptoms experienced by a patient, and then infers a diagnosis (a higher level concept) from those lower level inputs. The next level concept would be recognizing response to treatment for the proposed diagnosis, and offering either alternative diagnoses or a change in therapy, with the system adapting as the patient’s course progresses.
  • Dynamic clinical data mining (DCDM): First, data mining is defined as the “process of discovering patterns, automatically or semi-automatically, in large quantities of data” [39]. DCDM describes the process of mining and interpreting the data from large patient databases that contain prior and concurrent patient information including diagnoses, treatments, and outcomes so as to make real-time treatment decisions [19].
  • Natural Language Processing (NLP) is a process based on machine learning, or deep learning, that enables computers to analyze and interpret unstructured human language input to recognize and even act upon meaningful patterns [39,40].


  1. Weed LL. Medical records, patient care, and medical education. Ir J Med Sci 1964 Jun;462:271-282. [Medline]
  2. Celi L, Csete M, Stone D. Optimal data systems: the future of clinical predictions and decision support. Curr Opin Crit Care 2014 Oct;20(5):573-580. [CrossRef] [Medline]
  3. Cook DA, Sorensen KJ, Hersh W, Berger RA, Wilkinson JM. Features of effective medical knowledge resources to support point of care learning: a focus group study. PLoS One 2013 Nov;8(11):e80318 [FREE Full text] [CrossRef] [Medline]
  4. Cook DA, Sorensen KJ, Wilkinson JM, Berger RA. Barriers and decisions when answering clinical questions at the point of care: a grounded theory study. JAMA Intern Med 2013 Nov 25;173(21):1962-1969. [CrossRef] [Medline]

more ….

The Electronic Health Record: How far we have travelled, and where is journey’s end?


A focus of the Affordable Care Act is improved delivery of quality, efficient, and effective care to patients who receive health care in the US from providers in a coordinated system.  The largest confounders in all of this are silos that are not readily crossed, handovers, communication lapses, and a heavy paperwork burden.  We can add to that a large for-profit insurance overhead that is disinterested in the patient-physician encounter.  Finally, the knowledge base of medicine has grown sufficiently that physicians are challenged by the amount of data and its presentation in the medical record.

I present a review of the problems that have become more urgent to fix in the last decade.  The administration and paperwork necessitated by health insurers, HMOs, and other parties today may account for 40% of a physician’s practice, and the formation of large physician practice groups and alliances of hospitals and hospital-staffed physicians (as well as hospital system alliances) has increased in response to the need to decrease the cost of non-patient-care overhead.  I discuss some of the points made by two innovators from the healthcare and communications sectors.

I also call attention to the front-page New York Times article reporting a sharp rise in inflation-adjusted Medicare payments for emergency-room services since 2006 due to upcoding at the highest level, partly related to physicians’ ability to overstate the claim for service provided, which could be remedied by the correctable improvements I discuss below (NY Times, 9/22/2012).  The solution still has another built-in step that requires quality control of both the input and the output, achievable today.  This also comes at a time of nationwide implementation of ICD-10 to replace ICD-9 coding.

US medical groups’ adoption of EHR (2005) (Photo credit: Wikipedia)

The first, by Robert S. Didner in “Decision Making in the Clinical Setting”, concludes that gathering information carries large costs while reimbursements for the activities provided have decreased, to the detriment of the outcomes that are measured.  He suggests that these data can be gathered and reformatted to improve their value in the clinical setting, leading to decisions with optimal outcomes, and he outlines how this can be done.

The second is a discussion by the Emergency Medicine physicians Thomas A. Naegele and Harry P. Wetzler, who developed the Foresighted Practice Guideline (FPG) model (“The Foresighted Practice Guideline Model: A Win-Win Solution”).  They focus on collecting data from similar patients, their interventions, and their treatments to better understand the value of alternative courses of treatment.  Using the FPG model enables physicians to elevate their practice to a higher level, with hard information on what works.  These two views are more than 10 years old, and they are complementary.

Didner points out that no one sequence of tests and questions can be optimal for all presenting clusters.  Even as data and test results are acquired, the optimal sequence of information gathering changes, depending on what has been gathered.  This creates a dilemma in how to collect clinical data.  Currently, the way information is requested and presented does not support the way decisions are made.  Decisions are made in a “path-dependent” way, influenced by the sequence in which the components are considered.  Ideally, a separate form would be required for each combination of presenting history and symptoms prior to ordering tests, which is unmanageable.  The blank-paper format is no better, as the data is not collected in the way it will be used, and it sits in separate clusters (vital signs; lab work, itself divided into CBC, chemistry panel, microbiology, immunology, blood bank, and special tests).  Improvements have been made in the graphical presentation of a series of tests.  Didner presents another means of gathering data in machine-manipulable form that improves the expected outcomes.  The basis for this model is that at any stage of testing and information gathering there is an expected outcome from the process, coupled with a metric, or hierarchy of values, to determine the relative desirability of the possible outcomes.

He creates a value hierarchy:

  1. Minimize the likelihood that a treatable, life-threatening disorder is not treated.
  2. Minimize the likelihood that a treatable, permanently disabling or disfiguring disorder is not treated.
  3. Minimize the likelihood that a treatable, discomfort-causing disorder is not treated.
  4. Minimize the likelihood that a risky procedure (treatment or diagnostic procedure) is inappropriately administered.
  5. Minimize the likelihood that a discomfort-causing procedure is inappropriately administered.
  6. Minimize the likelihood that a costly procedure is inappropriately administered.
  7. Minimize the time of diagnosing and treating the patient.
  8. Minimize the cost of diagnosing and treating the patient.
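One natural formalization of such a hierarchy, offered here only as an illustration and not as Didner’s own method, is a lexicographic ordering: each candidate action carries a tuple of penalty scores, one per hierarchy level, and level 1 dominates level 2, and so on.  The actions and all the numbers below are invented.

```python
# Illustrative sketch (numbers invented): ranking candidate actions by the
# value hierarchy, treated as a lexicographic ordering.  Python compares
# tuples lexicographically, so min() applies the hierarchy directly.

# Penalty tuple per action, one entry per hierarchy level (lower is better):
# (untreated life threat, untreated disability, untreated discomfort,
#  risky procedure, uncomfortable procedure, costly procedure, time, cost)
candidates = {
    "observe":         (0.30, 0.10, 0.05, 0.00, 0.00, 0.00, 1.0, 100.0),
    "stress_test":     (0.05, 0.10, 0.05, 0.01, 0.10, 0.20, 4.0, 800.0),
    "catheterization": (0.01, 0.05, 0.05, 0.15, 0.30, 0.90, 6.0, 5000.0),
}

def best_action(options):
    """Pick the action with the lexicographically smallest penalty tuple:
    level 1 of the hierarchy dominates level 2, and so on down the list."""
    return min(options, key=options.get)

print(best_action(candidates))  # catheterization: it wins on level 1 alone
```

Note how cost and time, at the bottom of the hierarchy, never override safety: they only break ties among actions identical on every higher level.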

As a way of minimizing the number, time, and cost of tests, he proposes that the optimum sequence could be found using Claude Shannon’s information theory; as a hierarchy of outcome values, he refers to the QALY scale as a starting point.  At any decision point, disparate information has to be brought together, such as weight, blood pressure, and cholesterol.  He points out, in addition, that the way clinical information is organized is not optimal for displaying information to enhance human cognitive performance in decision support.  Furthermore, citing the limit of short-term memory at about 10 chunks of information at any time, he compares recall of chess positions by a grand master, which is far better when the pieces are arranged in an order commensurate with a “line of attack.”  The information has to be ordered in the way it is to be used!  Presenting the information used for a particular decision component in a compact space reduces the load on short-term memory, and there is less strain in searching for the relevant information.
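The Shannon idea can be made concrete: choose as the next test the one with the largest expected reduction in diagnostic uncertainty (entropy).  The sketch below is not from Didner’s paper; the diagnoses, tests, and probabilities are invented for illustration.

```python
# Minimal sketch of entropy-guided test ordering (all probabilities invented).
import math

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

priors = [0.5, 0.3, 0.2]          # P(diagnosis) over three hypothetical diagnoses
tests = {                          # P(positive result | diagnosis) per test
    "troponin": [0.9, 0.1, 0.1],
    "d_dimer":  [0.5, 0.5, 0.5],   # same rate under every diagnosis: uninformative
}

def expected_information_gain(prior, lik_pos):
    """Prior entropy minus expected posterior entropy after seeing the result."""
    p_pos = sum(pr * lk for pr, lk in zip(prior, lik_pos))
    post_pos = [pr * lk / p_pos for pr, lk in zip(prior, lik_pos)]
    p_neg = 1 - p_pos
    post_neg = [pr * (1 - lk) / p_neg for pr, lk in zip(prior, lik_pos)]
    return entropy(prior) - (p_pos * entropy(post_pos) + p_neg * entropy(post_neg))

best = max(tests, key=lambda t: expected_information_gain(priors, tests[t]))
print(best)  # troponin: the d_dimer test cannot change the posterior at all
```

After each result is observed, the posterior becomes the new prior and the ranking is recomputed, which is exactly the sense in which the optimal sequence changes as information is gathered.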

He creates a Table to illustrate the point.

Correlation of weight with other cardiac risk factors

  Chol       0.759384
  HDL       -0.53908
  LDL        0.177297
  bp-syst    0.424728
  bp-dia     0.516167
  Triglyc    0.637817

The task of the information system designer is to provide or request the right information, in the best form, at each stage of the procedure.
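Coefficients like those in the table above are ordinary Pearson correlations computed from raw patient measurements.  A minimal pure-Python sketch, using invented weight and cholesterol values (the coefficients in the table came from the author’s data, not from these numbers):

```python
# Pearson correlation from scratch on invented paired measurements.
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

weight = [70, 82, 95, 110, 63]      # kg, invented
chol   = [180, 205, 240, 260, 170]  # mg/dL, invented
print(round(pearson(weight, chol), 3))  # a strong positive correlation
```

Displaying the handful of such coefficients relevant to one decision in a single compact panel is precisely the short-term-memory point Didner makes.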

The FPG concept as deployed by Naegele and Wetzler is a model for the design of a more effective health record, and it has already shown substantial proof of concept in the emergency-room setting.  In principle, every clinical encounter is viewed as a learning experience that requires the collection of data, learning from similar patients, and comparing the value of alternative courses of treatment.  The framework for standard data collection is the FPG model.  The FPG is distinguished from the hindsighted guidelines utilized by utilization- and peer-review organizations.  Over time, the data forms patient clusters and enables the physician to function at a higher level.

Hypothesis construction is experiential, and hypothesis generation and testing are required to go from art to science in the complex practice of medicine.  Every encounter has three components: patient, process, and outcome.  The key to the process is to collect data on patients, processes, and outcomes in a standard way.  The main problem with a large portion of the chart is that the description is not uniform.  This is not fully resolved even with good natural-language encoding.  The standard words and phrases that may be used for a particular complaint or condition constitute a guideline.  This type of “guided documentation” is a step toward a guided practice.  It enables physicians to gather data on patients, processes, and outcomes of care in routine settings, and the data can be reviewed and updated.  This is a higher level of methodology than basing guidelines on “consensus and opinion.”
When Lee Goldman et al. created the guideline for classifying chest pain in the emergency room, characterizing the chest pain was problematic.  In dealing with this, he determined that if the chest pain was “stabbing,” or if it radiated to the right foot, heart attack was excluded.

The IOM is intensely committed to practice guidelines for care.  The guidelines are the databases of the science of medical decision-making and disposition processing, and they are related to process flow.  However, the hindsighted, or retrospective, approach is diagnosis- or procedure-oriented; hindsighted practice guidelines (HPGs) are the tool used in utilization review.  The FPG model, in contrast, focuses on the physician-patient encounter and is problem-oriented.  We can go back further and recall the contribution of Lawrence Weed to the “structured medical record.”
Physicians today use an FPG-like framework in looking at a problem or pathology (especially in pathology, which extends the classification by use of biomarker staining).  The Standard Patient File Format (SPFF), developed by Weed, includes: 1. patient demographics; 2. front of the chart; 3. subjective; 4. objective; 5. assessment/diagnosis; 6. plan; 7. back of the chart.  The FPG retains the structure of the SPFF.  All of the words and phrases in the FPG are the database for the problem or condition.  The current construct of the chart is uninviting: nurses’ notes, medications, lab results, radiology, imaging.

Realtime Clinical Expert Support and Validation System
Gil David and Larry Bernstein have developed, in consultation with Prof. Ronald Coifman, in the Yale University Applied Mathematics Program, a software system that is the equivalent of an intelligent Electronic Health Records Dashboard that provides empirical medical reference and suggests quantitative diagnostics options.

The introduction of a DASHBOARD has allowed a presentation of drug reactions, allergies, primary and secondary diagnoses, and critical information about any patient for the caregiver needing access to the record.  The advantage of this innovation is obvious.  The startup problem is what information is presented and how it is displayed, which is a source of variability and a key to its success.  It is also imperative that extraction of data from disparate sources will, in the long run, further improve the diagnostic process; for instance, the finding of ST depression on EKG coincident with an increase of a cardiac biomarker (troponin).  Through the application of geometric clustering analysis, the data may be interpreted in a more sophisticated fashion to create a more reliable and valid knowledge-based opinion.  In the hemogram one can view data reflecting the characteristics of a broad spectrum of medical conditions: characteristics expressed as measurements of size, density, and concentration, resulting in more than a dozen composite variables, including the mean corpuscular volume (MCV), mean corpuscular hemoglobin concentration (MCHC), mean corpuscular hemoglobin (MCH), total white cell count (WBC), total lymphocyte count, neutrophil count (mature granulocyte count and bands), monocytes, eosinophils, basophils, platelet count, mean platelet volume (MPV), blasts, reticulocytes, and platelet clumps, as well as other features of classification.  This has been described in a previous post.

It is beyond comprehension that a better construct has not been created for common use.

Ruts W, De Deyne S, Ameel E, Vanpaemel W, Verbeemen T, Storms G. Dutch norm data for 13 semantic categories and 338 exemplars. Behavior Research Methods, Instruments, & Computers 2004; 36(3): 506–515.

De Deyne S, Verheyen S, Ameel E, Vanpaemel W, Dry MJ, Voorspoels W, Storms G. Exemplar by feature applicability matrices and other Dutch normative data for semantic concepts. Behavior Research Methods 2008; 40(4): 1030–1048.

Landauer TK, Ross BH, Didner RS. Processing visually presented single words: a reaction time analysis [Technical memorandum]. Murray Hill, NJ: Bell Laboratories; 1979.

Weed L. Automation of the problem oriented medical record. NCHSR Research Digest Series DHEW. 1977; (HRA)77-3177.

Naegele TA. Letter to the Editor. Amer J Crit Care 1993; 2(5): 433.

The potential contribution of informatics to healthcare is more than currently estimated


The estimated cost savings and improved diagnostic accuracy in healthcare are substantial.  I have written about the potential that remains unused.  In short, substantial investment of resources in this area is justified, as has been proposed as a critical goal.  Does this mean a reduction in staffing?  I wouldn’t look at it that way.  The two huge benefits that would accrue are:

  1. workflow efficiency, reducing stress and facilitating decision-making; and
  2. scientifically grounded, primary knowledge-based decision support by the well-developed algorithms that have been at the heart of computational genomics.
 Can computers save health care? IU research shows lower costs, better outcomes

Cost per unit of outcome was $189, versus $497 for treatment as usual

 Last modified: Monday, February 11, 2013
BLOOMINGTON, Ind. — New research from Indiana University has found that machine learning — the same computer science discipline that helped create voice recognition systems, self-driving cars and credit card fraud detection systems — can drastically improve both the cost and quality of health care in the United States.
 Physicians using an artificial intelligence framework that predicts future outcomes would have better patient outcomes while significantly lowering health care costs.
Using an artificial intelligence framework combining Markov Decision Processes and Dynamic Decision Networks, IU School of Informatics and Computing researchers Casey Bennett and Kris Hauser show how simulation modeling that understands and predicts the outcomes of treatment could
  • reduce health care costs by over 50 percent while also
  • improving patient outcomes by nearly 50 percent.
The work by Hauser, an assistant professor of computer science, and Ph.D. student Bennett improves upon their earlier work that
  • showed how machine learning could determine the best treatment at a single point in time for an individual patient.
By using a new framework that employs sequential decision-making, the previous single-decision research
  • can be expanded into models that simulate numerous alternative treatment paths out into the future;
  • maintain beliefs about patient health status over time even when measurements are unavailable or uncertain; and
  • continually plan/re-plan as new information becomes available.
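The machinery behind this sequential framework is the Markov Decision Process.  The toy value-iteration sketch below conveys the idea only: the states, transition probabilities, and costs are invented, whereas the IU models are far larger and learned from clinical data.

```python
# Toy value iteration for a two-state treatment MDP (all numbers invented).
GAMMA = 0.9                      # discount factor for future costs
STATES = ["sick", "well"]
ACTIONS = ["treat", "wait"]

# transition[(state, action)] -> {next_state: probability}
transition = {
    ("sick", "treat"): {"well": 0.8, "sick": 0.2},
    ("sick", "wait"):  {"well": 0.1, "sick": 0.9},
    ("well", "treat"): {"well": 1.0},
    ("well", "wait"):  {"well": 1.0},
}
# reward[(state, action)]: negative cost of care for that step
reward = {
    ("sick", "treat"): -5.0,   # expensive intervention
    ("sick", "wait"):  -3.0,   # ongoing cost of untreated illness
    ("well", "treat"): -5.0,
    ("well", "wait"):   0.0,
}

def q_value(V, s, a):
    """Immediate reward plus discounted expected value of the next state."""
    return reward[(s, a)] + GAMMA * sum(p * V[s2] for s2, p in transition[(s, a)].items())

V = {s: 0.0 for s in STATES}
for _ in range(200):  # iterate the Bellman optimality update to convergence
    V = {s: max(q_value(V, s, a) for a in ACTIONS) for s in STATES}

policy = {s: max(ACTIONS, key=lambda a: q_value(V, s, a)) for s in STATES}
print(policy)  # {'sick': 'treat', 'well': 'wait'}
```

Even in this tiny example the planner trades a high one-time treatment cost against the discounted stream of future illness costs, which is the "deliberate about the future" behavior Bennett describes; Dynamic Decision Networks extend the same idea to states that are only partially observed.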

In other words, it can “think like a doctor.”  (Perhaps better because of the limitation in the amount of information a bright, competent physician can handle without error!)

“The Markov Decision Processes and Dynamic Decision Networks enable the system to deliberate about the future, considering all the different possible sequences of actions and effects in advance, even in cases where we are unsure of the effects,” Bennett said.  Moreover, the approach is non-disease-specific — it could work for any diagnosis or disorder, simply by plugging in the relevant information.  (This actually raises the question of what the information input is, and the cost of inputting.)
The new work addresses three vexing issues related to health care in the U.S.:
  1. rising costs expected to reach 30 percent of the gross domestic product by 2050;
  2. a quality of care where patients receive correct diagnosis and treatment less than half the time on a first visit;
  3. and a lag time of 13 to 17 years between research and practice in clinical care.


Framework for Simulating Clinical Decision-Making

“We’re using modern computational approaches to learn from clinical data and develop complex plans through the simulation of numerous, alternative sequential decision paths,” Bennett said. “The framework here easily out-performs the current treatment-as-usual, case-rate/fee-for-service models of health care.”  (see the above)
Bennett is also a data architect and research fellow with Centerstone Research Institute, the research arm of Centerstone, the nation’s largest not-for-profit provider of community-based behavioral health care. The two researchers had access to clinical data, demographics and other information on over 6,700 patients who had major clinical depression diagnoses, of which about 65 to 70 percent had co-occurring chronic physical disorders like diabetes, hypertension and cardiovascular disease.  Using 500 randomly selected patients from that group for simulations, the two
  • compared actual doctor performance and patient outcomes against
  • sequential decision-making models

using real patient data.

They found a great disparity in the cost per unit of outcome change:
  1. the artificial intelligence model’s cost of $189, compared to the treatment-as-usual cost of $497; and
  2. a 30 to 35 percent increase in patient outcomes with the AI approach.
Bennett said that “tweaking certain model parameters could enhance the outcome advantage to about 50 percent more improvement at about half the cost.”
While most medical decisions are based on case-by-case, experience-based approaches, there is a growing body of evidence that complex treatment decisions might be effectively improved by AI modeling.  Hauser said “Modeling lets us see more possibilities out to a further point –  because they just don’t have all of that information available to them.”  (Even then, the other issue is the processing of the information presented.)
Using the growing availability of electronic health records, health information exchanges, large public biomedical databases and machine learning algorithms, the researchers believe the approach could serve as the basis for personalized treatment through integration of diverse, large-scale data passed along to clinicians at the time of decision-making for each patient. Centerstone alone, Bennett noted, has access to health information on over 1 million patients each year. “Even with the development of new AI techniques that can approximate or even surpass human decision-making performance, we believe that the most effective long-term path could be combining artificial intelligence with human clinicians,” Bennett said. “Let humans do what they do well, and let machines do what they do well. In the end, we may maximize the potential of both.”
“Artificial Intelligence Framework for Simulating Clinical Decision-Making: A Markov Decision Process Approach” was published recently in Artificial Intelligence in Medicine. The research was funded by the Ayers Foundation, the Joe C. Davis Foundation and Indiana University.
For more information or to speak with Hauser or Bennett, please contact Steve Chaplin, IU Communications, at 812-856-1896 or stjchap@iu.edu.
IBM Watson Finally Graduates Medical School
It’s been more than a year since IBM’s Watson computer appeared on Jeopardy and defeated several of the game show’s top champions. Since then the supercomputer has been furiously “studying” the healthcare literature in the hope that it can beat a far more hideous enemy: the 400-plus biomolecular puzzles we collectively refer to as cancer.
Anomaly Based Interpretation of Clinical and Laboratory Syndromic Classes

Larry H Bernstein, MD, Gil David, PhD, Ronald R Coifman, PhD.  Program in Applied Mathematics, Yale University, Triplex Medical Science.

Statement of Inferential Second Opinion
Realtime Clinical Expert Support and Validation System

Gil David and Larry Bernstein have developed, in consultation with Prof. Ronald Coifman, in the Yale University Applied Mathematics Program, a software system that is the equivalent of an intelligent Electronic Health Records Dashboard that provides
  • empirical medical reference and suggests quantitative diagnostics options.


The current design of the Electronic Medical Record (EMR) is a linear presentation of portions of the record by
  • services, by
  • diagnostic method, and by
  • date, to cite examples.

This allows perusal through a graphical user interface (GUI) that partitions the information or the necessary reports in a workstation, entered by keying on icons.  This requires that the medical practitioner find

  • the history,
  • medications,
  • laboratory reports,
  • cardiac imaging and EKGs, and
  • radiology
in different workspaces.  The introduction of a DASHBOARD has allowed a presentation of
  • drug reactions,
  • allergies,
  • primary and secondary diagnoses, and
  • critical information about any patient, for the caregiver needing access to the record.
 The advantage of this innovation is obvious.  The startup problem is what information is presented and how it is displayed, which is a source of variability and a key to its success.


We are proposing an innovation that supersedes the main design elements of a DASHBOARD and
  • utilizes the conjoined syndromic features of the disparate data elements.
So the important determinant of the success of this endeavor is that it facilitates both
  1. the workflow and
  2. the decision-making process
  • with a reduction of medical error.
 This has become extremely important and urgent in the 10 years since the publication of “To Err Is Human,” and with the newly published finding that reduction of error is as elusive as reduction in cost.  Whether such efforts are counterproductive when approached in the wrong way may be subject to debate.
We initially confine our approach to laboratory data because it is collected on all patients, ambulatory and acutely ill, because the data is objective and quality controlled, and because
  • laboratory combinatorial patterns emerge with the development and course of disease.  Work is in progress to extend these capabilities with model datasets and sufficient data.
It is true that the extraction of data from disparate sources will, in the long run, further improve this process.  For instance, the finding of both ST depression on EKG coincident with an increase of a cardiac biomarker (troponin) above a level determined by a receiver operator curve (ROC) analysis, particularly in the absence of substantially reduced renal function.
The conversion of hematology based data into useful clinical information requires the establishment of problem-solving constructs based on the measured data.  Traditionally this has been accomplished by an intuitive interpretation of the data by the individual clinician.  Through the application of geometric clustering analysis the data may be interpreted in a more sophisticated fashion in order to create a more reliable and valid knowledge-based opinion.
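The clustering idea can be conveyed with a bare-bones sketch: group patients by the geometry of their laboratory values rather than by single-test cutoffs.  The data below are invented (MCV in fL, hemoglobin in g/dL), and plain k-means stands in for the richer geometric methods of the actual system.

```python
# Bare-bones k-means (Lloyd's algorithm) on invented hemogram pairs:
# three microcytic-anemic patients and three normocytic-normal patients.
import random

patients = [
    (70, 9.5), (72, 10.1), (68, 9.0),    # microcytic, anemic
    (90, 14.8), (92, 15.2), (88, 14.1),  # normocytic, normal
]

def kmeans(points, k, iters=50, seed=0):
    """Assign each point to its nearest center, then move each center to
    its group's mean; repeat until the partition stabilizes."""
    centers = random.Random(seed).sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[nearest].append(p)
        centers = [tuple(sum(col) / len(g) for col in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

clusters = kmeans(patients, 2)
print([len(g) for g in clusters])  # [3, 3]: the two clinical groups separate
```

The point is that no single-variable threshold was supplied: the grouping emerges from the joint pattern of the measurements, which is what "combinatorial patterns" of laboratory data means in practice.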
The most commonly ordered test used for managing patients worldwide is the hemogram that often incorporates the review of a peripheral smear.  While the hemogram has undergone progressive modification of the measured features over time the subsequent expansion of the panel of tests has provided a window into the cellular changes in the production, release or suppression of the formed elements from the blood-forming organ to the circulation.  In the hemogram one can view data reflecting the characteristics of a broad spectrum of medical conditions.
Progressive modification of the measured features of the hemogram has delineated characteristics expressed as measurements of
  • size,
  • density, and
  • concentration,
resulting in more than a dozen composite variables, including the
  1. mean corpuscular volume (MCV),
  2. mean corpuscular hemoglobin concentration (MCHC),
  3. mean corpuscular hemoglobin (MCH),
  4. total white cell count (WBC),
  5. total lymphocyte count,
  6. neutrophil count (mature granulocyte count and bands),
  7. monocytes,
  8. eosinophils,
  9. basophils,
  10. platelet count, and
  11. mean platelet volume (MPV),
  12. blasts,
  13. reticulocytes and
  14. platelet clumps,
  15. perhaps the percent immature neutrophils (not bands)
  16. as well as other features of classification.
The use of such variables combined with additional clinical information including serum chemistry analysis (such as the Comprehensive Metabolic Profile (CMP)) in conjunction with the clinical history and examination complete the traditional problem-solving construct. The intuitive approach applied by the individual clinician is limited, however,
  1. by experience,
  2. memory and
  3. cognition.
The application of rules-based, automated problem solving may provide a more reliable and valid approach to the classification and interpretation of the data used to determine a knowledge-based clinical opinion.
The classification of the available hematologic data in order to formulate a predictive model may be accomplished through mathematical models that offer a more reliable and valid approach than the intuitive knowledge-based opinion of the individual clinician.  The exponential growth of knowledge since the mapping of the human genome has been enabled by parallel advances in applied mathematics that have not been a part of traditional clinical problem solving.  In a univariate universe the individual has significant control in visualizing data, because outlying values may be identified by methods that rely on distributional assumptions.  As the complexity of statistical models has increased, involving the use of several predictors for different clinical classifications, the dependencies have become less clear to the individual.  The powerful statistical tools now available are not dependent on distributional assumptions, and allow classification and prediction in a way that cannot be achieved by the individual clinician intuitively.  Contemporary statistical modeling has a primary goal of finding an underlying structure in studied data sets.
In the diagnosis of anemia, the variables MCV, MCHC, and MCH classify the disease process into microcytic, normocytic, and macrocytic categories.  Further consideration of
  • proliferation of marrow precursors,
  • the domination of a cell line, and
  • features of suppression of hematopoiesis
provides a two-dimensional model.  Several other possible dimensions are created by consideration of
  • the maturity of the circulating cells.
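The first of these classification steps is simple enough to express as a rule.  The sketch below uses the commonly quoted adult MCV reference limits (80 and 100 fL); real interpretation layers on MCHC, MCH, the marrow response, and clinical context, exactly as the text describes.

```python
# First-pass anemia classification by MCV alone (standard adult cutoffs).
def classify_anemia_by_mcv(mcv_fl: float) -> str:
    if mcv_fl < 80:
        return "microcytic"   # e.g. iron deficiency, thalassemia
    if mcv_fl <= 100:
        return "normocytic"   # e.g. acute blood loss, anemia of chronic disease
    return "macrocytic"       # e.g. B12 or folate deficiency

print(classify_anemia_by_mcv(72))   # microcytic
print(classify_anemia_by_mcv(110))  # macrocytic
```

Each additional dimension named above (marrow proliferation, cell-line domination, maturity of circulating cells) would add further branches, which is why a rules-based engine quickly outgrows what a clinician can hold in mind intuitively.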
The development of an evidence-based inference engine that can substantially interpret the data at hand and convert it in real time to a “knowledge-based opinion” may improve clinical problem solving by incorporating multiple complex clinical features as well as duration of onset into the model.
An example of a difficult area for clinical problem solving is the diagnosis of SIRS and associated sepsis, a costly diagnosis in hospitalized patients.  Failure to diagnose sepsis in a timely manner creates a potential financial and safety hazard.  The early diagnosis of SIRS/sepsis is made by the clinician’s application of defined criteria (temperature, heart rate, respiratory rate, and WBC count).  The application of those clinical criteria, however, defines the condition after it has developed and has not provided a reliable method for early diagnosis.  Early diagnosis of SIRS may possibly be enhanced by the measurement of proteomic biomarkers, including transthyretin, C-reactive protein, and procalcitonin.  Immature granulocyte (IG) measurement has been proposed as a more readily available indicator of the presence of granulocyte precursors (left shift).
The use of such markers, obtained by automated systems in conjunction with innovative statistical modeling, may provide a mechanism to enhance workflow and decision making.
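The defined SIRS criteria mentioned above can be expressed as a simple checker; the syndrome is conventionally flagged when two or more criteria are met.  This sketch omits the PaCO2 and band-count alternatives for brevity.

```python
# The classic SIRS criteria as a rule-based checker (conventional cutoffs).
def sirs_count(temp_c, heart_rate, resp_rate, wbc_per_uL):
    """Number of SIRS criteria met for one set of vital signs."""
    return sum([
        temp_c > 38.0 or temp_c < 36.0,              # temperature derangement
        heart_rate > 90,                             # tachycardia
        resp_rate > 20,                              # tachypnea
        wbc_per_uL > 12_000 or wbc_per_uL < 4_000,   # leukocytosis or leukopenia
    ])

def meets_sirs(**vitals):
    """SIRS is conventionally defined as two or more criteria met."""
    return sirs_count(**vitals) >= 2

print(meets_sirs(temp_c=38.6, heart_rate=104, resp_rate=18, wbc_per_uL=9_000))  # True
```

This checker illustrates the text’s complaint: the rule fires only once the derangements exist.  A statistical model over biomarkers and IG counts aims to raise the flag before these thresholds are crossed.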
An accurate classification based on the multiplicity of available data can be provided by an innovative system that utilizes  the conjoined syndromic features of disparate data elements.  Such a system has the potential to facilitate both the workflow and the decision-making process with an anticipated reduction of medical error.

This study is only an extension of our approach to repairing a longstanding problem in the construction of the many-sided electronic medical record (EMR).  Past history combined with the development of Diagnosis Related Groups (DRGs) in the 1980s has driven technology development in the direction of “billing capture,” which has been a focus of epidemiological studies in health services research using data mining.

In a classic study carried out at Bell Laboratories, Didner found that information technologies reflect the view of the creators, not the users, and that Front-to-Back Design (R. Didner) is needed.  He expresses the view:

“Pre-printed forms are much more amenable to computer-based storage and processing, and would improve the efficiency with which the insurance carriers process this information.  However, pre-printed forms can have a rather severe downside.  By providing pre-printed forms that a physician completes to record the diagnostic questions asked, as well as tests and results, the sequence of tests and questions might be altered from that which a physician would ordinarily follow.  This sequence change could improve outcomes in rare cases, but it is more likely to worsen outcomes.”
Decision Making in the Clinical Setting.   Robert S. Didner
 A well-documented problem in the medical profession is the level of effort dedicated to administration and paperwork necessitated by health insurers, HMOs and other parties (ref).  This effort is currently estimated at 50% of a typical physician’s practice activity.  Obviously this contributes to the high cost of medical care.  A key element in the cost/effort composition is the retranscription of clinical data after the point at which it is collected.  Costs would be reduced, and accuracy improved, if the clinical data could be captured directly at the point it is generated, in a form suitable for transmission to insurers or machine-transformable into other formats.  Such data capture could also be used to improve the form and structure of how this information is viewed by physicians, and to form the basis of a more comprehensive database linking clinical protocols to outcomes, which could improve knowledge of this relationship and hence clinical outcomes.
  How we frame our expectations is so important that
  • it determines the data we collect to examine the process.
In the absence of data to support an assumed benefit, there is no proof of validity at whatever cost.   This has meaning for
  • hospital operations, for
  • nonhospital laboratory operations, for
  • companies in the diagnostic business, and
  • for planning of health systems.
In 1983, a vision for creating the EMR was introduced by Lawrence Weed and others.  This is expressed by McGowan and Winstead-Fry.
McGowan JJ, Winstead-Fry P. Problem Knowledge Couplers: reengineering evidence-based medicine through interdisciplinary development, decision support, and research. Bull Med Libr Assoc. 1999 Oct; 87(4): 462–470. PMCID: PMC226622.

Example of Markov Decision Process (MDP) transition automaton (Photo credit: Wikipedia)

Control loop of a Markov Decision Process (Photo credit: Wikipedia)

IBM’s Watson computer, Yorktown Heights, NY (Photo credit: Wikipedia)

Increasing decision stakes and systems uncertainties entail new problem solving strategies. Image based on a diagram by Funtowicz, S. and Ravetz, J. (1993) “Science for the post-normal age” Futures 25:735–55 (http://dx.doi.org/10.1016/0016-3287(93)90022-L). (Photo credit: Wikipedia)
