wearables and patient-centric data
Larry H. Bernstein, MD, FCAP, Curator
LPBI

Article ID #190: wearables and patient-centric data. Published on 11/1/2015
WordCloud Image Produced by Adam Tubman
The Science of the Art of Medicine Author Interviewed
Interview with Robert A. Harrington, MD
http://www.medscape.com/viewarticle/852124
Robert A. Harrington, MD: Hi. This is Bob Harrington from Stanford University on Medscape Cardiology and theheart.org. Thanks for joining us today. Over the last couple of months, I have interviewed a couple of physician authors who have commented on some of the issues of the time, whether that’s related to electronic health records, as in our discussion with Bob Wachter, or the revolution that’s taking place with wearables and patient-centric data, as in my conversation with Eric Topol. These shows have been well received, and those conversations gave the community a useful frame of reference, not just for the daily practice of medicine but also for how the bigger issues in society around digital health, wearables, electronic records, and so on might influence the practice of medicine.
Today we’re also going to interview a physician author but with a different approach to some of the issues within medicine. He is trying to get at how physicians and other clinical providers take what we learn from the literature, from scientific studies that have been done, and use some combination of statistics coupled with the art of caring for other human beings in a way that ends up with first-rate medical care. And this author not only wondered how we do that but also how we begin to think about teaching it to our trainees: our students, our residents, our fellows, and so on. I’m really pleased to have with me today the author of the book, The Science of the Art of Medicine, which is just coming out in its second edition. The author is Dr John Brush. John is a professor of medicine at Eastern Virginia Medical School and a practicing cardiologist in Norfolk, Virginia. John and I are friends and colleagues through his recent service as a member of the Board of Trustees of the American College of Cardiology (ACC). John, thanks for joining us here today on Medscape Cardiology.
John E. Brush, MD: Thanks very much, Bob. It’s a pleasure being here, and I look forward to our conversation.
Cognitive Psychology
Dr Harrington: You heard my opening remarks reflecting on you as a physician author. What made you write this book? What were some of the issues that compelled you to put pen to paper, so to speak, and try to formulate some view of the world in medicine as you saw it?
Dr Brush: One of the origins of this book was actually my work with the ACC related to quality of care and quality improvement initiatives. I was chairman of the Quality Strategic Direction Committee for 3 years and really got to thinking about quality and why people make errors. What leads to good judgment? What leads to bad judgment? And that led me to stumble upon the cognitive psychology literature. When I was in college, there was no such thing as cognitive psychology; it was mostly behavioral psychology. The field of cognitive psychology has arisen in the past 30 years. Cognitive psychologists are interested in understanding how people make decisions. Doctors make decisions every single day, and yet we don’t teach anything about cognitive psychology, and we don’t learn anything about it in medical school. It seemed like a relevant area to think about, and so I jumped in. I taught myself cognitive psychology. I bought book after book after book and just decided I was going to read about it.
What cognitive psychologists have figured out is that we’re not purely rational; we think we are, but we make mistakes. We fall into typical traps. We have biases because we use heuristics, or mental shortcuts, to try to make rapid decisions under conditions of uncertainty. If you think about it, that’s what doctors do every single day: make rapid decisions under conditions of uncertainty. I wanted to learn more about it myself. And as I read more about it, I jokingly said to my wife, “You know, I could write a book on this stuff.” Then I stumbled on this iBooks authoring tool and figured out that writing an iBook is actually easier than I thought. And so I said, “I am going to write a book about it.” I thought it was going to take me 5 years; it took me about 3 months because I was so consumed by this.
Dr Harrington: Before you dive further, let me ask you a couple of things about cognitive psychology because it is interesting that you took this observation of people making errors and then did the deeper dive, which I applaud you for. I’m thinking of some other authors in this area that some of our listeners might know about, such as Dan Ariely from Duke and the book, Predictably Irrational. Would you put that in the context of cognitive psychology?
Dr Brush: Yes, he’s a well-known cognitive psychologist. There’s also Daniel Kahneman, who wrote Thinking, Fast and Slow. Daniel Kahneman is from Princeton and a very prominent writer in cognitive psychology. A guy named Gerd Gigerenzer from Germany has written books on risk and how to evaluate risk—basically how to think about probability. His books were very influential for me. There’s Herbert Simon, a computer scientist at Carnegie Mellon in Pittsburgh, who is generally thought of as the father of cognitive psychology. Cognitive psychology is also referred to as behavioral economics, by the way, and both Herbert Simon and Daniel Kahneman have won the Nobel Prize in Economics.
Natural Bayesians Placing Bets
Dr Brush: The topic is too broad to cover in a single lecture or single commentary. I did write a series of blog posts that were posted on cardioexchange.org. Harlan Krumholz invited me to do that, and it was very helpful because it enabled me to collect my thoughts and to test out ways of explaining this. One of the challenges is explaining a subject matter that is outside medicine. You’re bringing a new area to people’s attention, and so you have to figure out how to go about explaining it. I’d done this series of blog posts; I did a series of lectures for our residents here in Norfolk. And I have to tell you that the first lectures I did on this utterly failed because they were dull, they were boring. I had to figure out how to make it interesting because if you discuss it in the setting of a patient and you study it on rounds, it’s absolutely fascinating. But if you talk about it in isolation, it’s like a statistics course in isolation where the subject matter is too dry. The way you make it interesting is by relating it to patients, so I started collecting examples, and there are simple examples in cardiology.
Cardiology is the ideal place to teach medical reasoning because we almost get immediate feedback. You can put your money down and make your bets as to whether somebody is having a myocardial infarction or whether somebody has heart failure or whether somebody has a pulmonary embolism. Generally by the next day you find out, and so you get this immediate feedback where you can train your intuition. We have so many numerical things from troponin levels, D-dimer, stress tests, and what have you. You can begin to develop estimates of probability so that you can start to calibrate your intuition. And there’s no better place in medicine than cardiology to do that as an exercise.
Dr Harrington: One of the things I love about the book is that you spend time, particularly at the beginning of the book, going through the basics of statistics that help us in the practice of medicine. And one area you spend time talking about is Bayes’ theorem and the notion that while we use both frequentist and Bayesian statistics in medicine, in many ways clinicians are natural Bayesians, aren’t we?
Dr Brush: We are, but we don’t think hard enough about what that means. We wander into being Bayesians, and we pick that up in our training. It’s a good idea to step back, dig a little bit deeper, and ask exactly what that means. The cognitive psychologists tell us that we make mistakes by deviating from a normative mode of thinking in one of two ways. We either do things that are illogical, or we make bad probability estimates. I wanted to think about the logic of what we do as well as the probability.
We think like Bayesians, but we never do the calculations. Instead we use a heuristic called anchoring and adjusting. Anchoring and adjusting is a natural way that you use Bayesian reasoning where you come up with an anchor, which is your pretest probability, and you adjust that anchor (that pretest probability) based on new information to give you a posttest probability. We intuitively know how to use this heuristic called anchoring and adjusting, and that’s actually what we do in practice, rather than pulling out our calculator and calculating the Bayesian probabilities.
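To make the arithmetic behind anchoring and adjusting concrete, here is a minimal sketch of Bayesian updating in odds form; the pretest probability and likelihood ratio used below are illustrative assumptions, not figures taken from the interview.

```python
def posttest_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Bayes' theorem in odds form: posttest odds = pretest odds x likelihood ratio."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)  # convert the anchor (pretest probability) to odds
    posttest_odds = pretest_odds * likelihood_ratio     # adjust the anchor with the new evidence
    return posttest_odds / (1.0 + posttest_odds)        # convert the adjusted odds back to a probability

# Hypothetical example: a 30% pretest probability of myocardial infarction,
# updated by a test result carrying a likelihood ratio of 8.
print(round(posttest_probability(0.30, 8.0), 2))  # 0.77
```

This one-line update is what the anchoring-and-adjusting heuristic approximates intuitively at the bedside, without pulling out a calculator.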
Dr Harrington: What you’re describing in part is what makes the great clinician, and the great clinician as you described has to have a grounding in statistical methods, has to understand what probability is and how one thinks about it. At the same time, the great clinician has to have a body of experience upon which to anchor that probability; and then finally, the great clinician has to be observant and able to trust his or her intuition. Is that a reasonable way to put it?
Teaching Good Decision-Making
Dr Brush: When you look back on your colleagues or mentors who just seemed to get it, one characteristic they shared is that they made good bets. They had a good sense of what’s most likely, what’s least likely. They were savvy about how they made those calculations. They had good intuition about how they made those judgments.
People get all balled up with those definitions; saying that sensitivity is the true-positive rate and specificity is the true-negative rate is a little more descriptive of what you’re actually talking about. But there’s something even better, and that’s likelihood ratios, because a positive likelihood ratio is the probability of a finding when the disease is present divided by the probability of that finding when the disease is absent. It’s the evidence for something divided by the evidence against it, and it’s a dimensionless number, so you don’t even have to keep track of what’s in the numerator or the denominator.
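As a companion to that definition, here is a short sketch, using assumed sensitivity and specificity values rather than anything quoted in the interview, of how the positive and negative likelihood ratios are computed:

```python
def likelihood_ratios(sensitivity: float, specificity: float) -> tuple[float, float]:
    """Positive and negative likelihood ratios from a test's sensitivity and specificity."""
    lr_positive = sensitivity / (1.0 - specificity)  # P(positive test | disease) / P(positive test | no disease)
    lr_negative = (1.0 - sensitivity) / specificity  # P(negative test | disease) / P(negative test | no disease)
    return lr_positive, lr_negative

# Hypothetical test with 90% sensitivity and 85% specificity.
lr_pos, lr_neg = likelihood_ratios(0.90, 0.85)
print(round(lr_pos, 1), round(lr_neg, 2))  # 6.0 0.12
```

Because each ratio compares the evidence for the disease against the evidence against it, it is dimensionless and can be multiplied directly by the pretest odds, as in the earlier sketch.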