Inadequacy of EHRs
Larry H. Bernstein, MD, FCAP, Curator
LPBI
EHRs need better workflows, less ‘chartjunk’
By Marla Durben Hirsch
Electronic health records currently handle data poorly and should be enhanced to better collect, display and use it to support clinical care, according to a new study published in JMIR Medical Informatics.
The authors, from Beth Israel Deaconess Medical Center and elsewhere, state that the next generation of EHRs needs to improve workflow, clinical decision-making and clinical notes. They decry some of the problems with existing EHRs, including data that is poorly displayed, under-networked, underutilized and wasted. The lack of available data causes errors, creates inefficiencies and increases costs. Data is also “thoughtlessly carried forward or copied and pasted into the current note,” creating “chartjunk,” the researchers say.
They suggest ways that future EHRs can be improved, including:
- Integrating bedside and telemetry monitoring systems with EHRs to provide data analytics that could support real time clinical assessments
- Formulating notes in real-time using structured data and natural language processing on the free text being entered
- Formulating treatment plans using information in the EHR plus a review of population data bases to identify similar patients, their treatments and outcomes
- Creating a more “intelligent” design that capitalizes on the note writing process as well as the contents of the note.
“We have begun to recognize the power of data in other domains and are beginning to apply it to the clinical space, applying digitization as a necessary but insufficient tool for this purpose,” the researchers say. “The vast amount of information and clinical choices demands that we provide better supports for making decisions and effectively documenting them.”
Many have pointed out the flaws in current EHR design that impede the optimum use of data and hinder workflow. Researchers have suggested that EHRs can be part of a learning health system to better capture and use data to improve clinical practice, create new evidence, educate, and support research efforts.
Disrupting Electronic Health Records Systems: The Next Generation
http://medinform.jmir.org/article/viewFile/4192/1/68491
Figure 2. Mock visualization of symptoms, signs, laboratory results, and other data input and systems suggestion for differential diagnoses.
Supporting the Formulation of the Plan
With the assessment formulated, the system would then formulate a proposed plan using EBM inputs and DCDM refinements for issues lying outside EBM knowledge. Decision support for plan formulation would include items such as randomized control trials (RCTs), observational studies, clinical practice guidelines (CPGs), local guidelines, and other relevant elements (eg, Cochrane reviews). The system would provide these supporting modalities in a hierarchical fashion, using evidence of the highest quality first before proceeding down the chain to lower quality evidence. Notably, RCT data are not available for the majority of specific clinical questions, or are not applicable because the results cannot be generalized to the patient at hand due to the study’s inclusion and exclusion criteria [ ]. Sufficiently reliable observational research data also may not be available, although we expect that the holes in the RCT literature will be increasingly filled by observational studies in the near future [ , ].
In the absence of pertinent evidence-based material, the system would include the functionality which we have termed DCDM, and our Stanford colleagues have termed the “green button” [ , ]. This still-theoretical process is described in detail in the references, but in brief, DCDM would utilize a search engine type of approach to examine a population database to identify similar patients on the basis of the information entered in the EHR. The prior treatments and outcomes of these historical patients would then be analyzed to present options for the care of the current patient that were, to a large degree, based on prior data. The efficacy of DCDM would depend on, among other factors, the availability of a sufficiently large population EHR database, or an open repository that would allow for the sharing of patient data between EHRs. This possibility is quickly becoming a reality with the advent of large, deidentified clinical databases such as that being created by the Patient Centered Outcomes Research Institute [ ].
The tentative plan could then be modified by the user on the basis of her or his clinical “wetware” analysis. The electronic workflow could be designed in a number of ways that were modifiable per user choice/customization. For example, the user could first create the assessment and plan, which would then be subject to comment and modification by the automatic system. This modification might include suggestions such as adding entirely new items, as well as the editing of entered items. In contrast, as described, the system could formulate an original assessment and plan that was subject to final editing by the user. In either case, the user would determine the final output, but the system would record both system and final user outputs for possible reporting purposes (eg, consistency with best practices). Another design approach might be to display the user entry in toto on the left half of a computer screen and a system-formulated assessment ( ) and plan on the right side for comparison. Links would be provided throughout the system formulation so that the user could drill into EHR-provided suggestions for validation and further investigation and learning.
In either type of workflow, the system would comparatively evaluate the final entered plan for consistency, completeness, and conformity with current best practices. The system could display the specific items that came under question and why. Users may proceed to adopt or not, with the option to justify their decision. Data reporting analytics could be formulated on the basis of compliance with EBM care. Such analytics should be done and interpreted with the knowledge that EBM itself is a moving target and many clinical situations do not lend themselves to resolution with the current tools supplied by EBM.
Since not all notes call for this kind of extensive decision support, the CDS material could be displayed in a separate columnar window adjacent to the main part of the screen where the note contents were displayed, so that workflow is not affected. Another possibility would be an “opt-out” button by which the user would choose not to utilize these system resources. This would be analogous but functionally opposite to the “green button” opt-in option suggested by Longhurst et al, and perhaps be designated the “orange button” to clearly make this distinction [ ]. Later, the system would make a determination as to whether this lack of EBM utilization was justified, and provide a reminder if the care was determined to be outside the bounds of current best practices. While the goal is to keep the user on the EBM track as much as feasible, the system has to “realize” that real care will still extend outside those bounds for some time, and that some notes and decisions simply do not require such machine support.
There are clearly still many details to be worked out regarding the creation and use of a fully integrated bidirectional EHR. There currently are smaller systems that use some components of what we propose. For example, a large Boston hospital uses a program called QPID which culls all previously collected patient data and uses a Google-like search to identify specific details of relevant prior medical history, which is then displayed in a user-friendly fashion to assist the clinician in making real-time decisions on admission [ ]. Another organization, the American Society of Clinical Oncology, has developed a clinical Health IT tool called CancerLinQ which utilizes large clinical databases of cancer patients to trend current practices and compare the specific practices of individual providers with best practice guidelines [ ]. Another hospital system is using many of the components discussed in a new, internally developed platform called Fluence that allows aggregation of patient information, and applies already known clinical practice guidelines to patients’ problem lists to assist practitioners in making evidence-based decisions [ ]. All of these efforts reflect inadequacies in current EHRs and are important pieces in the process of selectively and wisely incorporating these technologies into EHRs, but doing so universally will be a much larger endeavor.
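The DCDM step described above is, at heart, a similarity search over a population database followed by an outcome summary per prior treatment. Below is a minimal sketch of that idea; the feature names, data, and scoring are hypothetical illustrations, not the authors’ implementation or the “green button” system.

```python
# Illustrative sketch of dynamic clinical data mining (DCDM):
# find historical patients similar to the index patient, then
# summarize outcomes by the treatment those patients received.
# All field names and values are hypothetical.
import pandas as pd
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# Hypothetical deidentified population database.
population = pd.DataFrame({
    "age":          [71, 55, 63, 80, 68, 59],
    "creatinine":   [1.9, 1.1, 1.4, 2.3, 1.8, 1.0],
    "ef_percent":   [35, 55, 45, 30, 40, 60],   # ejection fraction
    "treatment":    ["A", "B", "A", "B", "A", "B"],
    "good_outcome": [1, 1, 0, 0, 1, 1],
})

features = ["age", "creatinine", "ef_percent"]
scaler = StandardScaler().fit(population[features])
index = NearestNeighbors(n_neighbors=4).fit(scaler.transform(population[features]))

# Index patient assembled from data already entered in the EHR.
current_patient = pd.DataFrame([{"age": 70, "creatinine": 2.0, "ef_percent": 38}])
_, neighbor_idx = index.kneighbors(scaler.transform(current_patient[features]))

similar = population.iloc[neighbor_idx[0]]
# Present, per prior treatment, how often similar patients did well.
print(similar.groupby("treatment")["good_outcome"].agg(["mean", "count"]))
```

In practice, the choice of similarity metric, risk adjustment, and control of confounding would matter far more than the retrieval mechanics sketched here.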
http://medinform.jmir.org/article/viewFile/4192/1/68492
Figure 3. Mock screenshot of the “Assessment and Plan” screen with background data analytics. Based on background analytics that are being run by the system at all times, a series of “problems” are identified and suggested by the system, which are then displayed in the EMR in the box on the left. The clinician can then select problems that are suggested, or input new problems, which are then displayed in the box on the right of the EMR screen and become a part of ongoing analytics for future assessment.
Conclusions
Medicine has finally entered an era in which clinical digitization implementations and data analytic systems are converging. We have begun to recognize the power of data in other domains and are beginning to apply it to the clinical space, applying digitization as a necessary but insufficient tool for this purpose (personal communication from Peter Szolovits, The Unreasonable Effectiveness of Clinical Data. Challenges in Big Data for Data Mining, Machine Learning and Statistics Conference, March 2014). The vast amount of information and clinical choices demands that we provide better supports for making decisions and effectively documenting them. The Institute of Medicine demands a “learning health care system” in which analysis of patient data is a key element in continuously improving clinical outcomes [ ]. This is also an age of increasing medical complexity bound up in increasing financial and time constraints. The latter dictate that medical practice should become more standardized and evidence-based in order to optimize outcomes at the lowest cost. Current EHRs, mostly implemented over the past decade, are a first step in the digitization process, but do not support decision-making or streamline the workflow to the extent to which they are capable. In response, we propose a series of information system enhancements that we hope can be seized, improved upon, and incorporated into the next generation of EHRs.
There is already government support for these advances: the Office of the National Coordinator for Health IT recently outlined its 6-year and 10-year plans to improve EHR and health IT interoperability, so that large-scale realizations of this idea can and will exist. Within 10 years, they envision that we “should have an array of interoperable health IT products and services that allow the health care system to continuously learn and advance the goal of improved health care.” In that, they envision an integrated system across EHRs that will improve not just individual health and population health, but also act as a nationwide repository for searchable and researchable outcomes data [ ].
The first step to achieving that vision is successfully implementing the ideas and the system outlined above into a more fully functional EHR that better supports both workflow and clinical decision-making. Further, these suggested changes would also contribute to making the note writing process an educational one, thereby justifying the very significant time and effort expended, and would begin to establish a true learning system of health care based on actual workflow practices. Finally, the goal is to keep clinicians firmly in charge of the decision loop in a “human-centered” system in which technology plays an essential but secondary role. As expressed in a recent article on the issue of automating systems [ ]: “In this model (human centered automation)…technology takes over routine functions that a human operator has already mastered, issues alerts when unexpected situations arise, provides fresh information that expands the operator’s perspective and counters the biases that often distort human thinking. The technology becomes the expert’s partner, not the expert’s replacement.”
Key Concepts and Terminology
A number of concepts and terms were introduced throughout this paper, and some clarification and elaboration of these follows:
- Affordable Care Act (ACA): Legislation passed in 2010 that constitutes two separate laws including the Patient Protection and Affordable Care Act and the Health Care and Education Reconciliation Act. These two pieces of legislation act together for the expressed goal of expanding health care coverage to low-income Americans through expansion of Medicaid and other federal assistance programs [ ].
- Clinical Decision Support (CDS) is defined by CMS as “a key functionality of health information technology” that encompasses a variety of tools including computerized alerts and reminders, clinical guidelines, condition-specific order sets, documentation templates, diagnostic support, and other tools that “when used effectively, increases quality of care, enhances health outcomes, helps to avoid errors and adverse events, improves efficiency, reduces costs, and boosts provider and patient satisfaction” [ ].
- Cognitive Computing is defined as “the simulation of human thought processes in a computerized model…involving self-learning systems that use data mining, pattern recognition and natural language processing to mimic the way the human brain works” [ ]. It is defined by IBM as computer systems that “are trained using artificial intelligence and machine learning algorithms to sense, predict, infer and, in some ways, think” [ ].
- Deep learning is a form of machine learning (a more specific subgroup of cognitive computing) that utilizes multiple levels of data to make hierarchical connections and recognize more complex patterns, so as to infer higher level concepts from lower levels of input and previously inferred concepts [ ]. Figure 2 (above) illustrates how this concept relates to patients: the system recognizes patterns of signs and symptoms experienced by a patient, and then infers a diagnosis (a higher level concept) from those lower level inputs. The next level concept would be recognizing the response to treatment for the proposed diagnosis, and offering either alternative diagnoses or a change in therapy, with the system adapting as the patient’s course progresses.
- Dynamic clinical data mining (DCDM): First, data mining is defined as the “process of discovering patterns, automatically or semi-automatically, in large quantities of data” [ ]. DCDM describes the process of mining and interpreting the data from large patient databases that contain prior and concurrent patient information including diagnoses, treatments, and outcomes so as to make real-time treatment decisions [ ].
- Natural Language Processing (NLP) is a process based on machine learning, or deep learning, that enables computers to analyze and interpret unstructured human language input to recognize and even act upon meaningful patterns [ , ].
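As a toy illustration of how NLP-assisted structuring might turn free-text note content into structured problem entries (as proposed above for real-time note formulation), the sketch below uses simple pattern matching; a production system would rely on trained clinical language models, and the vocabulary and mappings here are hypothetical.

```python
# Toy sketch: map free-text note fragments to structured problem entries.
# Real clinical NLP would use trained language models; the terms and
# mappings below are illustrative only.
import re

SYMPTOM_MAP = {
    r"\bchest pain\b": "Chest pain",
    r"\bshort(ness)? of breath\b|\bdyspnea\b": "Dyspnea",
    r"\bst depression\b": "ST depression on EKG",
    r"\btroponin (rise|elevation|increase)\b": "Elevated troponin",
}

def extract_problems(note_text: str) -> list[str]:
    """Return structured problem labels found in free text."""
    found = []
    lowered = note_text.lower()
    for pattern, label in SYMPTOM_MAP.items():
        if re.search(pattern, lowered):
            found.append(label)
    return found

note = ("Patient reports chest pain and shortness of breath. "
        "EKG shows ST depression; troponin rise noted.")
print(extract_problems(note))
# ['Chest pain', 'Dyspnea', 'ST depression on EKG', 'Elevated troponin']
```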
References
- Weed LL. Medical records, patient care, and medical education. Ir J Med Sci 1964 Jun;462:271-282. [Medline]
- Celi L, Csete M, Stone D. Optimal data systems: the future of clinical predictions and decision support. Curr Opin Crit Care 2014 Oct;20(5):573-580. [CrossRef] [Medline]
- Cook DA, Sorensen KJ, Hersh W, Berger RA, Wilkinson JM. Features of effective medical knowledge resources to support point of care learning: a focus group study. PLoS One 2013 Nov;8(11):e80318 [FREE Full text] [CrossRef] [Medline]
- Cook DA, Sorensen KJ, Wilkinson JM, Berger RA. Barriers and decisions when answering clinical questions at the point of care: a grounded theory study. JAMA Intern Med 2013 Nov 25;173(21):1962-1969. [CrossRef] [Medline]
more ….
The Electronic Health Record: How far we have travelled, and where is journey’s end?
A focus of the Affordable Care Act is improved delivery of quality, efficiency and effectiveness to the patients who receive healthcare in the US from providers in a coordinated system. The largest confounders in all of this are silos that are not readily crossed, handovers, communication lapses, and a heavy paperwork burden. We can add to that a large for-profit insurance overhead that is uninterested in the patient-physician encounter. Finally, the knowledge base of medicine has grown sufficiently that physicians are challenged by the amount of data and its presentation in the medical record.
I present a review of the problems that have become more urgent to fix in the last decade. The administration and paperwork necessitated by health insurers, HMOs and other parties today may account for 40% of a physician’s practice, and the formation of large physician practice groups, alliances of hospitals and hospital-staffed physicians, and hospital system alliances has increased in response to the need to decrease the cost of non-patient-care overhead. I discuss some of the points made by two innovators from the healthcare and communications sectors.
I also call attention to the New York Times front-page article reporting a sharp rise in inflation-adjusted Medicare payments for emergency-room services since 2006 due to upcoding at the highest level, partly related to physicians’ ability to overstate the claim for service provided, which is correctible by the improvements I discuss below (NY Times, 9/22/2012). The solution still has another built-in step, achievable today, that requires quality control of both the input and the output. This also comes at a time of nationwide implementation of ICD-10 to replace ICD-9 coding.
The first finding, by Robert S Didner in “Decision Making in the Clinical Setting,” is that gathering information carries large costs while reimbursements for the activities provided have decreased, to the detriment of the outcomes that are measured. He suggests that these data can be gathered and reformatted to improve their value in the clinical setting by leading to decisions with optimal outcomes, and he outlines how this can be done.
The second is a discussion by Emergency Medicine physicians Thomas A Naegele and Harry P Wetzler, who have developed the Foresighted Practice Guideline (FPG) (“The Foresighted Practice Guideline Model: A Win-Win Solution”). They focus on collecting data from similar patients, their interventions, and treatments to better understand the value of alternative courses of treatment. Using the FPG model will enable physicians to elevate their practice to a higher level, and they will have hard information on what works. These two views are more than 10 years old, and they are complementary.
Didner points out that there is no one sequence of tests and questions that can be optimal for all presenting clusters. Even as data and test results are acquired, the optimal sequence of information gathering changes, depending on the gathered information. Thus, the dilemma is created of how to collect clinical data. Currently, the way information is requested and presented does not support the way decisions are made. Decisions are made in a “path-dependent” way, influenced by the sequence in which the components are considered. Ideally, a separate form would be required for each combination of presenting history and symptoms, prior to ordering tests, which is unmanageable. The blank-paper format is no better, as the data are not collected in the way they would be used, and they fall into separate clusters (vital signs, lab work [itself divided into CBC, chemistry panel, microbiology, immunology, blood bank, special tests]). Improvements have been made in the graphical presentation of a series of tests. Didner presents another means of gathering data in machine-manipulable form that improves the expected outcomes. The basis for this model is that at any stage of testing and information gathering there is an expected outcome from the process, coupled with a metric, or hierarchy of values, to determine the relative desirability of the possible outcomes.
He creates a value hierarchy:
- Minimize the likelihood that a treatable, life-threatening disorder is not treated.
- Minimize the likelihood that a treatable, permanently-disabling or disfiguring disorder is not treated.
- Minimize the likelihood that a treatable, discomfort causing disorder is not treated.
- Minimize the likelihood that a risky procedure (treatment or diagnostic) is inappropriately administered.
- Minimize the likelihood that a discomfort causing procedure is inappropriately administered.
- Minimize the likelihood that a costly procedure is inappropriately administered.
- Minimize the time of diagnosing and treating the patient.
- Minimize the cost of diagnosing and treating the patient.
In reference to a way of minimizing the number, time and cost of tests, he determines that the optimum sequence could be found using Claude Shannon’s information theory. As to a hierarchy of outcome values, he refers to the QALY scale as a starting point. At any point where a determination is made, disparate information has to be brought together, such as weight, blood pressure, and cholesterol. He points out, in addition, that the way clinical information is organized is not optimal for displaying information to enhance human cognitive performance in decision support. Furthermore, he treats the limit of short-term memory as 10 chunks of information at any time, and compares it with the performance of a chess grand master evaluating the positions of pieces on the board when the pieces are in an order commensurate with a “line of attack.” The information has to be ordered in the way it is to be used! By presenting the information used for a particular decision component in a compact space, the load on short-term memory is reduced, and there is less strain in searching for the relevant information.
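Didner’s suggestion that the optimum test sequence could be found with Shannon’s information theory amounts to choosing, at each step, the test whose result is expected to reduce diagnostic uncertainty (entropy) the most. A minimal sketch of that greedy criterion follows; the diagnoses, tests, and probabilities are entirely hypothetical.

```python
# Sketch: greedy test selection by expected information gain
# (expected reduction in entropy over the differential diagnosis).
# Diagnoses, tests, and conditional probabilities are hypothetical.
import math

def entropy(p):
    return -sum(x * math.log2(x) for x in p.values() if x > 0)

# Prior over a toy differential diagnosis.
prior = {"MI": 0.3, "PE": 0.2, "GERD": 0.5}

# P(test positive | diagnosis) for each candidate test.
likelihood = {
    "troponin": {"MI": 0.90, "PE": 0.20, "GERD": 0.05},
    "d_dimer":  {"MI": 0.40, "PE": 0.95, "GERD": 0.10},
}

def expected_information_gain(test):
    p_pos = sum(prior[d] * likelihood[test][d] for d in prior)
    expected_posterior_entropy = 0.0
    for result, p_result in (("pos", p_pos), ("neg", 1 - p_pos)):
        if p_result == 0:
            continue
        post = {}
        for d in prior:
            p_r_given_d = likelihood[test][d] if result == "pos" else 1 - likelihood[test][d]
            post[d] = prior[d] * p_r_given_d / p_result   # Bayes update
        expected_posterior_entropy += p_result * entropy(post)
    return entropy(prior) - expected_posterior_entropy

for t in likelihood:
    print(t, round(expected_information_gain(t), 3))
# The test with the largest expected gain would be ordered first.
```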
He creates a Table to illustrate the point.
Correlation of weight with other cardiac risk factors:
- Cholesterol (Chol): 0.759384
- HDL: -0.53908
- LDL: 0.177297
- Systolic blood pressure (bp-syst): 0.424728
- Diastolic blood pressure (bp-dia): 0.516167
- Triglycerides (Triglyc): 0.637817
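A correlation column of this kind is straightforward to derive from raw measurements; a brief sketch with hypothetical values:

```python
# Sketch: correlating weight with other cardiac risk factors from a
# table of patient measurements (all values are hypothetical).
import pandas as pd

df = pd.DataFrame({
    "weight":  [82, 95, 70, 110, 88, 76],
    "chol":    [190, 240, 170, 260, 210, 185],
    "hdl":     [55, 38, 60, 32, 45, 58],
    "ldl":     [110, 160, 95, 175, 130, 105],
    "bp_syst": [120, 145, 115, 155, 135, 118],
    "bp_dia":  [78, 92, 75, 98, 85, 76],
    "triglyc": [130, 210, 110, 240, 170, 125],
})

# Pearson correlation of every column with weight, as in the table above.
print(df.corr()["weight"].drop("weight").round(3))
```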
The task of the information system designer is to provide or request the right information, in the best form, at each stage of the procedure.
The FPG concept as deployed by Naegele and Wetzler is a model for the design of a more effective health record that has already shown substantial proof of concept in the emergency room setting. In principle, every clinical encounter is viewed as a learning experience that requires the collection of data, learning from similar patients, and comparing the value of alternative courses of treatment. The framework for standard data collection is the FPG model. The FPG is distinguished from hindsighted guidelines, which are utilized by utilization and peer review organizations. Over time, the data form patient clusters and enable the physician to function at a higher level.
Hypothesis construction is experiential, and hypothesis generation and testing are required to go from art to science in the complex practice of medicine. In every encounter there are three components: patient, process, and outcome. The key to the process is to collect data on patients, processes and outcomes in a standard way. The main problem with a large portion of the chart is that the description is not uniform, and this is not fully resolved even with good natural language processing. The standard words and phrases that may be used for a particular complaint or condition constitute a guideline. This type of “guided documentation” is a step toward a guided practice. It enables physicians to gather data on patients, processes and outcomes of care in routine settings, which can then be reviewed and updated. This is a higher level of methodology than basing guidelines on “consensus and opinion”.
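Collecting “data on patients, processes and outcomes in a standard way” implies, in data terms, a fixed encounter schema. A minimal sketch of such a record is shown below; the field names are illustrative, not a published FPG schema.

```python
# Sketch of a standardized encounter record in the spirit of the FPG
# model: patient, process, and outcome captured uniformly.
# Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Encounter:
    # Patient: who was seen.
    patient_id: str
    age: int
    presenting_problem: str          # drawn from a controlled vocabulary
    # Process: what was done.
    tests_ordered: list[str] = field(default_factory=list)
    treatments: list[str] = field(default_factory=list)
    # Outcome: what happened.
    disposition: str = ""            # e.g. "discharged", "admitted"
    resolved_at_followup: bool | None = None

visit = Encounter(
    patient_id="anon-0042",
    age=54,
    presenting_problem="chest pain, non-radiating",
    tests_ordered=["EKG", "troponin"],
    treatments=["aspirin"],
    disposition="discharged",
    resolved_at_followup=True,
)
print(visit)
```

Because every encounter is recorded against the same fields, similar patients can later be clustered and their processes and outcomes compared, which is the learning loop the FPG model describes.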
When Lee Goldman, et al., created the guideline for classifying chest pain in the emergency room, the characterization of the chest pain was problematic. In dealing with this, he determined that if the chest pain was “stabbing,” or if it radiated to the right foot, heart attack was excluded.
The IOM is intensely committed to practice guidelines for care. The guidelines are the databases of the science of medical decision making and disposition processing, and they relate to process flow. However, the hindsighted or retrospective approach is diagnosis- or procedure-oriented; hindsighted practice guidelines (HPGs) are the tool used in utilization review. The FPG model focuses on the physician-patient encounter and is problem-oriented. We can go back further and remember the contribution of Lawrence Weed to the “structured medical record”.
Physicians today use an FPG framework in looking at a problem or pathology (especially in pathology, which extends the classification by use of biomarker staining). The Standard Patient File Format (SPPF) was developed by Weed and includes: 1. patient demographics; 2. front of the chart; 3. subjective; 4. objective; 5. assessment/diagnosis; 6. plan; 7. back of the chart. The FPG retains the structure of the SPPF. All of the words and phrases in the FPG are the database for the problem or condition. The current construct of the chart is uninviting: nurses’ notes, medications, lab results, radiology, imaging.
Realtime Clinical Expert Support and Validation System
Gil David and Larry Bernstein have developed, in consultation with Prof. Ronald Coifman, in the Yale University Applied Mathematics Program, a software system that is the equivalent of an intelligent Electronic Health Records Dashboard that provides empirical medical reference and suggests quantitative diagnostics options.
The introduction of a DASHBOARD has allowed a presentation of drug reactions, allergies, primary and secondary diagnoses, and critical information about any patient whose record the caregiver needs to access. The advantage of this innovation is obvious. The startup problem is what information is presented and how it is displayed, which is a source of variability and a key to its success. It is also imperative that the extraction of data from disparate sources will, in the long run, further improve the diagnostic process; for instance, the finding of ST depression on EKG coincident with an increase of a cardiac biomarker (troponin). Through the application of geometric clustering analysis, the data may be interpreted in a more sophisticated fashion in order to create a more reliable and valid knowledge-based opinion. In the hemogram one can view data reflecting the characteristics of a broad spectrum of medical conditions: characteristics expressed as measurements of size, density, and concentration, resulting in more than a dozen composite variables, including the mean corpuscular volume (MCV), mean corpuscular hemoglobin concentration (MCHC), mean corpuscular hemoglobin (MCH), total white cell count (WBC), total lymphocyte count, neutrophil count (mature granulocyte count and bands), monocytes, eosinophils, basophils, platelet count, mean platelet volume (MPV), blasts, reticulocytes and platelet clumps, as well as other features of classification. This has been described in a previous post.
It is beyond comprehension that a better construct has not been created for common use.
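The geometric clustering analysis of hemogram variables described above can be approximated in spirit with standard unsupervised clustering. The sketch below uses k-means on hypothetical values; it is not the actual David–Bernstein–Coifman method.

```python
# Sketch: unsupervised clustering of hemogram variables to group
# patients with similar hematologic patterns. Values are hypothetical,
# and this is not the cited geometric clustering implementation.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

hemogram = pd.DataFrame({
    "wbc": [6.2, 14.8, 3.1, 7.0, 16.5, 2.8],
    "hgb": [14.1, 11.2, 9.8, 13.8, 10.9, 9.5],
    "mcv": [90, 88, 72, 91, 86, 70],
    "plt": [250, 480, 95, 270, 510, 88],
    "mpv": [9.8, 8.5, 11.2, 9.9, 8.3, 11.5],
})

X = StandardScaler().fit_transform(hemogram)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
hemogram["cluster"] = labels

# Each cluster's mean profile is the "syndromic" pattern a clinician
# (or a dashboard) could inspect against known conditions.
print(hemogram.groupby("cluster").mean().round(1))
```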
W Ruts, S De Deyne, E Ameel, W Vanpaemel, T Verbeemen, and G Storms. Dutch norm data for 13 semantic categories and 338 exemplars. Behavior Research Methods, Instruments, & Computers 2004; 36(3): 506–515.
S De Deyne, S Verheyen, E Ameel, W Vanpaemel, MJ Dry, W Voorspoels, and G Storms. Exemplar by feature applicability matrices and other Dutch normative data for semantic concepts. Behavior Research Methods 2008; 40(4): 1030–1048.
Landauer TK, Ross BH, Didner RS. Processing visually presented single words: a reaction time analysis [Technical memorandum]. Murray Hill, NJ: Bell Laboratories; 1979.
Lewandowsky S. (1991).
Weed L. Automation of the problem oriented medical record. NCHSR Research Digest Series DHEW. 1977;(HRA)77-3177.
Naegele TA. Letter to the Editor. Amer J Crit Care 1993;2(5):433.
The potential contribution of informatics to healthcare is more than currently estimated
The potential cost savings in healthcare and the improvement in diagnostic accuracy are estimated to be substantial. I have written about the unused potential that we have not yet seen. In short, there is justification for substantial investment of resources in this, as has been proposed as a critical goal. Does this mean a reduction in staffing? I wouldn’t look at it that way. The two huge benefits that would accrue are:
- workflow efficiency, reducing stress and facilitating decision-making; and
- scientifically, primary knowledge-based decision support by well-developed algorithms of the kind that have been at the heart of computational genomics.
In a recent simulation study of machine learning for treatment selection (see the related articles below), the cost per unit of outcome was $189, versus $497 for treatment as usual. The modeling approach was projected to:
- reduce health care costs by over 50 percent while also
- improving patient outcomes by nearly 50 percent.
The work showed how machine learning could determine the best treatment at a single point in time for an individual patient, and how such models:
- can be expanded to simulate numerous alternative treatment paths out into the future;
- maintain beliefs about patient health status over time even when measurements are unavailable or uncertain; and
- continually plan and re-plan as new information becomes available.
In other words, it can “think like a doctor” (and perhaps better, because of the limit on the amount of information a bright, competent physician can handle without error!).
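The capabilities just listed (maintaining beliefs about patient status under uncertain or missing measurements, and continually re-planning) describe a partially observable sequential decision process. The sketch below shows only the belief-update core of such a loop; the states, observation model, and probabilities are hypothetical and are not taken from the study.

```python
# Sketch: Bayesian belief update over a hidden patient state, the core
# of the "maintain beliefs ... and continually re-plan" loop described
# above. States, tests, and probabilities are hypothetical.
STATES = ["improving", "stable", "deteriorating"]

# P(next state | current state) under the current treatment.
TRANSITION = {
    "improving":     {"improving": 0.70, "stable": 0.25, "deteriorating": 0.05},
    "stable":        {"improving": 0.20, "stable": 0.60, "deteriorating": 0.20},
    "deteriorating": {"improving": 0.05, "stable": 0.25, "deteriorating": 0.70},
}

# P(observation | state) for a single noisy measurement.
OBSERVATION = {
    "normal_lab":   {"improving": 0.8, "stable": 0.5, "deteriorating": 0.2},
    "abnormal_lab": {"improving": 0.2, "stable": 0.5, "deteriorating": 0.8},
}

def predict(belief):
    """Push the belief forward one time step (no new measurement)."""
    return {s2: sum(belief[s1] * TRANSITION[s1][s2] for s1 in STATES) for s2 in STATES}

def update(belief, observation):
    """Incorporate a measurement when one is available."""
    unnorm = {s: belief[s] * OBSERVATION[observation][s] for s in STATES}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

belief = {"improving": 1 / 3, "stable": 1 / 3, "deteriorating": 1 / 3}
belief = predict(belief)                  # time passes, no data
belief = update(belief, "abnormal_lab")   # a lab result arrives
print({s: round(p, 2) for s, p in belief.items()})
```

A planner layered on top of this update would score candidate treatments against the current belief and re-score them each time the belief changes.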
The motivating context includes:
- rising costs expected to reach 30 percent of the gross domestic product by 2050;
- a quality of care in which patients receive a correct diagnosis and treatment less than half the time on a first visit; and
- a lag time of 13 to 17 years between research and practice in clinical care.
Framework for Simulating Clinical Decision-Making
Using real patient data, the study:
- compared actual doctor performance and patient outcomes against sequential decision-making models;
- found that the simulated cost of $189 compared with the treatment-as-usual cost of $497; and
- found that the AI approach obtained a 30 to 35 percent increase in patient outcomes.
Larry H Bernstein, MD, Gil David, PhD, Ronald R Coifman, PhD. Program in Applied Mathematics, Yale University, Triplex Medical Science.
Their proposed system provides empirical medical reference and suggests quantitative diagnostics options.
Background
The existing record is typically partitioned, to cite examples, by:
- services,
- diagnostic method, and
- date.
This allows perusal through a graphical user interface (GUI) that partitions the information or necessary reports in a workstation, entered by keying to icons. This requires that the medical practitioner find:
- the history,
- medications,
- laboratory reports,
- cardiac imaging and EKGs, and
- radiology
- drug reactions,
- allergies,
- primary and secondary diagnoses, and
- critical information about any patient whose record the caregiver needs to access.
Proposal
The proposed system utilizes the conjoined syndromic features of the disparate data elements, improving:
- the workflow and
- the decision-making process,
- with a reduction of medical error.
Laboratory combinatorial patterns emerge with the development and course of disease. Continuing work is in progress to extend the capabilities with model data sets and sufficient data.
As noted above, the hemogram reflects characteristics of a broad spectrum of medical conditions, expressed as measurements of cell:
- size,
- density, and
- concentration,
resulting in more than a dozen composite variables, including:
- mean corpuscular volume (MCV),
- mean corpuscular hemoglobin concentration (MCHC),
- mean corpuscular hemoglobin (MCH),
- total white cell count (WBC),
- total lymphocyte count,
- neutrophil count (mature granulocyte count and bands),
- monocytes,
- eosinophils,
- basophils,
- platelet count,
- mean platelet volume (MPV),
- blasts,
- reticulocytes,
- platelet clumps,
- perhaps the percent immature neutrophils (not bands),
- as well as other features of classification.
Clinicians have traditionally integrated these findings:
- by experience,
- memory and
- cognition.
Within the data,
- the domination of a cell line, and
- features of suppression of hematopoiesis
provide a two-dimensional model. Several other possible dimensions are created by consideration of
- the maturity of the circulating cells and
- granulocyte precursors (left shift).
This study is only an extension of our approach to repairing a longstanding problem in the construction of the many-sided electronic medical record (EMR). Past history, combined with the development of Diagnosis Related Groups (DRGs) in the 1980s, has driven technology development in the direction of “billing capture,” which has been a focus of epidemiological studies in health services research using data mining.
In a classic study carried out at Bell Laboratories, Didner found that information technologies reflect the view of their creators, not of their users, and that Front-to-Back Design (R Didner) is needed. He expresses the view that the design of the record determines the questions asked,
- as well as the tests and results,
- the sequence of tests and questions;
- it determines the data we collect to examine the process.
This has consequences for
- hospital operations, for
- nonhospital laboratory operations, for
- companies in the diagnostic business, and
- for planning of health systems.

Figure: Increasing decision stakes and systems uncertainties entail new problem-solving strategies. Image based on a diagram by Funtowicz S and Ravetz J (1993), “Science for the post-normal age,” Futures 25:735–55 (http://dx.doi.org/10.1016/0016-3287(93)90022-L). (Photo credit: Wikipedia)
Related articles
- Artificial intelligence system diagnoses illnesses better than doctors (sott.net)
- Wow of the Week: AI that “thinks like a doctor” shows lower costs, better outcomes in study (medcitynews.com)
- AI system diagnoses illnesses better than doctors (rawstory.com)
- Artificial Intelligence Smarter than Doctors? (americanlivewire.com)
- Can computers save health care? IU research shows lower costs, better outcomes (binaryhealthcare.wordpress.com)
- CRACKING THE CODE OF HUMAN LIFE: The Birth of BioInformatics and Computational Genomics (pharmaceuticalintelligence.com)
- Where Does Informatics Belong? Part II (himss.org)
- Kaiser Health News/Philadelphia Inquirer on MedInformaticsMD: “The flaws of electronic records” (hcrenewal.bl