
Archive for the ‘Biomedical Measurement Science’ Category

Antibody alternatives in specific aptamer 3-D scaffold binding

Curator: Larry H. Bernstein, MD, FCAP

 

 

New Proteomics Tools to Open Up Drug Discovery

Dr. Paul Ko Ferrigno, Chief Scientific Officer, and Dr. Jane McLeod, of Avacta Life Sciences

http://www.dddmag.com/articles/2016/01/new-proteomics-tools-open-drug-discovery

 

Size matters: Smaller molecules allow a tighter packing density on solid surfaces for improved signal-to-noise in assays, such as SPR or ELISA.

http://www.dddmag.com/sites/dddmag.com/files/AvactaAffimer%20stills_00006.jpg


 

In the current genomic era of very accurate DNA analyses by in situ hybridization, DNA chip analyses, and deep sequencing, it is often assumed that antibodies have an analogous ability to identify molecular targets accurately. Nothing could be further from the truth. Proteomics as a field still lags behind its genomic counterpart in the level of detail we can achieve, the volume of data we can collect, and the overall accuracy and reliability of the data collected.

From the estimated 20,000 human genes, some 100,000 different possible proteins have been predicted. What is more, the variation achieved via the post-translational modification of these proteins brings another layer of complexity to cellular signalling. All this means that while studying the genetic blueprint can offer insights into the cell, it is only through examining the functional protein units that we can comprehensively map the dynamic interactions that occur within the cell to drive an organism or disease process.

 

Why not use antibodies?

Antibodies form the basis of molecular recognition in proteomics, whether this is to identify a protein within a complex mixture or to label a specific protein within a cell. Both their target specificity and high binding capacity have made these molecules fantastically useful tools within diagnostics. However, the large majority of commercially available antibodies are sold as reagents for research and development, where they are simply not as well validated, and issues with their manufacture have created problems that have hindered drug development.

It has proven next to impossible to develop antibodies to certain targets. This may be because of their high homology to the host protein, so the immune system fails to recognise them as different, or due to antigen processing resulting in the loss of post-translational modifications or discontinuous epitopes. However, without the necessary antibodies to investigate these targets the corresponding research avenues have remained closed and key drug targets may have been missed.

Almost worse than lacking the necessary research tools is the problem of antibody reproducibility. Matthias Uhlen revealed that, of the 5,436 antibodies tested as part of the Protein Atlas project, over 50 percent failed to recognize their target in at least one of two commonly used assays. Antibodies that are not specific to their target, or do not recognize their target at all, are responsible for increasing the cost of biological research, with an estimated $800 million spent globally every year on bad antibodies. Many published studies have had to be retracted due to antibody-derived error, and those errors that remain unidentified in the literature will continue to lead researchers down blind alleys. This level of misinformation in the published literature is not just hindering the progression of the field, but possibly even sending it backwards and costing more than is needed — an issue not to be taken lightly when so many research budgets are coming under the knife.

More recently there has been a call from a number of leading scientists for the use of polyclonal antibodies, considered to be the worst offenders in terms of batch-to-batch irreproducibility, to be abandoned. They suggest a move towards recombinant systems of production, which would remove the restrictions of the immune system on antibody production. Yet they state that a $1 billion investment would be required to re-route antibody production down this path and suggest that a period of five to ten years may be required to bring about these changes.

Simply producing recombinant antibodies rather than animal-derived affinity reagents will still leave us with a number of problems regarding their use. Antibodies are simply too large to target many smaller, hidden epitopes and the presence of key disulphide bonds within their structure makes them all too susceptible to reduction within the cell, rendering them useless for applications such as live cell imaging of molecules within the cytoplasm. Moreover, can we afford to wait another decade for a solution to this problem before we pursue protein targets for basic understanding and drug development?

Antibody alternatives for new targets and techniques

Antibody alternatives are already available to researchers in the life sciences field. They are produced from either nucleic acid or protein molecules. Aptamers are short, single-stranded RNA or DNA molecules that fold into 3D scaffolds, presenting a specific interaction surface that allows binding to their target molecules. Protein scaffolds are formed from whole proteins, or parts of proteins, modified to present a peptide sequence. This peptide sequence works in a similar manner, presenting a specific interaction surface for binding to a desired target.

These antibody alternatives are produced in recombinant systems, minimizing batch-to-batch variation and allowing them to be produced to theoretically any target. Additionally, as they do not use animals in their production they are generally less expensive to produce than traditional antibodies.

While companies such as Affibody and Avacta Life Sciences are aiming to open up the drug pipeline by offering these alternative affinity reagents against previously inaccessible targets for use in research and development, many have moved into exploiting their therapeutic potential. Noxxon produces an RNA-based scaffold, Spiegelmers, currently in phase 2 clinical trials for diabetic nephropathy, and Molecular Partners has reached phase 3 clinical trials with its protein scaffold DARPins for wet age-related macular degeneration.

These new antibody alternatives are smaller than traditional antibodies. This opens up the use of new technologies, such as super-resolution microscopy. While the diffraction limit remained at about 250 nm, the ~15 nm length of an antibody was of little importance, and molecules could be tagged as accurately as necessary. Removing this limit with super-resolution microscopy means that antibodies are now too large to provide the required level of accuracy. Instead, using an antibody alternative of only ~2 nm to tag the protein of interest opens up this new technique, offering greater precision in intracellular localization.
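To make the size argument concrete, here is a small back-of-the-envelope calculation (my own sketch, not from the article), which combines an assumed imaging precision with the probe size in quadrature. The 250 nm, 15 nm, and 2 nm figures come from the text above; the 20 nm super-resolution localization precision and the quadrature model are illustrative assumptions.

    import math

    def effective_precision(imaging_precision_nm, probe_size_nm):
        # Combine imaging precision and probe "linkage" offset in quadrature (illustrative model).
        return math.sqrt(imaging_precision_nm ** 2 + probe_size_nm ** 2)

    for label, size_nm in [("IgG antibody (~15 nm)", 15.0), ("small scaffold (~2 nm)", 2.0)]:
        conventional = effective_precision(250.0, size_nm)  # diffraction-limited imaging
        super_res = effective_precision(20.0, size_nm)      # assumed super-resolution precision
        print(f"{label}: ~{conventional:.0f} nm conventional, ~{super_res:.0f} nm super-resolution")

At the diffraction limit the probe size is negligible, but at super-resolution precision the ~15 nm antibody dominates the error budget while a ~2 nm scaffold barely affects it, which is the point made above.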

As these scaffolds have been engineered to be fit for purpose, many contain no intramolecular disulphide bonds and so are not sensitive to the reducing environment of the cell. This property enables their use as intracellular reporters of molecular conformation, as well as in standard assays like IHC or western blotting, allowing scientists to use the same reagent across both intracellular and biochemical assays and thus bridging the gap between cell biology and biochemical studies.

Offering increased reproducibility, access to an increased range of applications, and the opportunity to hit previously inaccessible targets, antibody alternatives are opening up potential new avenues of drug discovery.

 

 

About the Authors

Dr. Paul Ko Ferrigno is Chief Scientific Officer at Avacta Life Sciences and a Visiting Professor at the University of Leeds. He has been working on peptide aptamers since 2001 in Leeds and at the MRC Cancer Cell Unit in Cambridge, UK where his team developed the Affimer scaffold technology.

Dr. Jane McLeod is a Scientific Writer at Avacta Life Sciences.

Read Full Post »

TSUNAMI in HealthCare under the New Name Verily.com, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

TSUNAMI in HealthCare under the New Name Verily.com

Curator: Aviva Lev-Ari, PhD, RN

 

UPDATED on 6/8/2016

The Tricorder project was announced only 3 months after Google entered the life sciences field, according to the report, and came from the same incubator that rolled out the company’s self-driving car and the recently cancelled Google Glass.

Verily CEO Andrew Conrad said the scientific basis for the device was proven upon unveiling in 2014, but experts have presented conflicting views on the reality of such a device, STAT News reports.

“What (Verily is) really good at is physical measurements — things like temperature, pulse rate, activity level. They are not particularly good at … the chemical and the biological stuff,” Walt told STAT News.

Four former Verily employees said the Tricorder “has been seen internally more as a way to generate buzz than as a viable project,” according to the report.

SOURCE

http://www.massdevice.com/googles-star-trek-tricorder-bid-flops/?spMailingID=9031578&spUserID=MTI2MTQxNTczMjM5S0&spJobID=940786327&spReportId=OTQwNzg2MzI3S0

 

UPDATED on 4/16/2016

SOURCE

http://recode.net/2016/04/13/verily-alphabet-profitable/

Verily, Alphabet’s medical business, is profitable, Sergey Brin tells Googlers


Publicly, Alphabet has said very little about its assortment of companies not named Google.

But internally, Alphabet is a little more forthcoming.

As we reported earlier, Nest CEO Tony Fadell appeared before Google’s all-hands meeting two weeks ago to address recent criticism of his company. During that meeting, Google co-founder and Alphabet exec Sergey Brin also defended another company under the holding conglomerate: Verily, the medical tech unit previously called Google Life Sciences.

Lumped together, Alphabet’s moonshots aren’t making money yet — but Verily is, Brin said.

Verily was the target of a scathing article — in Stat, a medical publication from the Boston Globe — scrutinizing its CEO, Andy Conrad. Several former employees told Stat that Verily suffered a talent exodus due to “derisive and impulsive” leadership by Conrad.

Here’s what Brin said in response at Google’s TGIF meeting:

I have seen a smattering of articles. And, you know, it’s actually sad to see sometimes where it appeared that … former employees or soon-to-be former employees talked to the press. But, anyhow, I can tell you what’s going on with these companies, fortunately. So in Verily’s case, despite a handful of examples, their attrition rate is below Google’s and Alphabet’s as a whole. And also, there are articles that have generally said we are blowing a lot of money and so forth. It’s true that, you know, as whole our Other Bets are not yet profitable, but some of them are, including Verily on a cash basis and increasingly so. So we’re pretty excited about these efforts.

Verily makes money through

  • partnerships with pharmaceutical companies — such as Novartis, which is licensing and planning to sell Verily’s smart contact lens — and
  • medical institutions.

It is one of three units contributing to the Other Bets total revenue ($448 million) in 2015, along with

  • Google Fiber and
  • Nest.

As we reported earlier, Nest likely brought in around $340 million of that and Fiber pulled close to $100 million, meaning that Verily’s sales were somewhere around $10 million. During the year, all the moonshot units combined reported operating losses of $3.6 billion.
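As a quick sanity check of that arithmetic, the snippet below reproduces the back-of-the-envelope estimate; the $448 million total and the Nest and Fiber figures are the article's estimates, so the result inherits their uncertainty.

    # Figures (in $ millions) taken from the article above.
    other_bets_total = 448
    nest_estimate = 340
    fiber_estimate = 100

    verily_estimate = other_bets_total - nest_estimate - fiber_estimate
    print(f"Implied Verily 2015 revenue: ~${verily_estimate}M")  # roughly "somewhere around $10 million"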

Note Brin’s stipulation that Verily’s profit comes on a “cash basis.” That probably means that it’s not making a profit on the normal basis, that is, when you take into account total sales minus total costs. But “cash positive” suggests they’re booking sales faster than they’re spending money, which is a positive sign. Companies normally report financials accounting for all costs. And that’s how Alphabet will report next week, when it shares first-quarter results for Google and the Other Bets — although we almost certainly won’t see figures on Verily’s profitability.

We reached out to Alphabet and Verily reps for more clarity, but didn’t get any.

SOURCE

http://recode.net/2016/04/13/verily-alphabet-profitable/

 

Original Curation dated 12/14/2015

  1. Part 1: Verily in Action
  2. Part II: Innovations at a Different Scale: GDE Enterprises – A Case in Point of Healthcare in Focus – Work-in-Progress


 

Part 1: Verily in Action

They write @ https://verily.com/

When Google[x] embarked on a project in 2012 to put computing inside a contact lens — an immensely challenging technical problem with an important application to health — we could not have imagined where it would lead us. As a life sciences team within Google[x], we were able to combine the best of our technology heritage with expertise from across many fields. Now, as an independent company, Verily is focused on using technology to better understand health, as well as prevent, detect, and manage disease.

Andy Conrad, Ph.D.

Chief Executive Officer
Formerly the chief scientific officer of LabCorp, Andy is a cell biologist with a doctorate from UCLA. He has always been passionate about early detection and prevention of disease: Andy co-founded the National Genetics Institute, which developed the first cost-effective test to screen for HIV in the blood supply.

Brian Otis, Ph.D.

Chief Technical Officer
Brian’s team focuses on end-to-end innovation ranging from integrated circuits to biocompatible materials to sensors. He joined Google[x] as founder of the smart contact lens project and now leads our efforts across all hardware and device projects, including wearables, implanted devices, and technology like Liftware.

Jessica Mega, M.D., MPH

Chief Medical Officer
Jessica leads the clinical strategy and research team at Verily. She is a board-certified cardiologist who trained and practiced at Massachusetts General Hospital and Brigham and Women’s Hospital. As a faculty member at Harvard Medical School and a senior investigator with the TIMI Study Group, Jessica directed large, international trials evaluating novel cardiovascular therapies.

Linus Upson

Head of Engineering
A long-time Google software engineer, Linus has been a team lead in developing products that now help billions of people worldwide find the information they need on the Internet, including Chrome and Chrome OS. He now oversees our engineering teams.

Tom Stanis

Head of Software
Tom spent nine years working on core Google products before joining Google[x] in 2014 to work on the Baseline Study. He now leads all our software projects, including the development of machine learning algorithms for applications ranging from robotic-assisted surgery to diabetes management.

Vikram (Vik) Bajaj, Ph.D.

Chief Scientific Officer
Vik’s broad research interests in industry and as a former academic principal investigator have included structural and systems biology, molecular imaging, nanoscience, and bioinformatics. Vik now leads the Science team in research directions related to our mission.

What are the Dimensions of the Tsunami in Healthcare?

  • prevention,
  • detection,
  • management of disease

 

Hardware

  • contact lens with an embedded glucose sensor for measuring the glucose in human tears.

Software

  • a multiple sclerosis program, for example, combines wearable sensors with traditional clinical tests to capture signals that could lead to new knowledge about the disease and why it progresses differently among individuals.

Clinical

  • Constituencies: industry, hospitals, government, academic centers, medical societies, and patient advocacy groups
  • The Baseline Study is one of these dedicated efforts, a multi-year initiative that aims to identify the traits of a healthy human by closely observing the transition to disease.

Science

  • Understand processes that lead to conditions like cancer, heart disease, and diabetes
  • computational systems biology platforms and life sciences tools
  • bio-molecular nanotechnology for precision diagnostics and therapeutic delivery
  • advanced imaging methods for applications ranging from early diagnosis to surgical robotics.

 

FOLLOW the LEADER of Parish in the Tsunami

 

Google[x] searches for ways to boost cancer immunotherapy | Science/AAAS | News

http://news.sciencemag.org/math/2015/01/googlex-searches-ways-boost-cancer-immunotherapy

 

Google Life Sciences and American Heart Association commit $50M to study heart disease | VentureBeat

http://venturebeat.com/2015/11/08/google-life-sciences-and-american-heart-association-commit-50m-to-study-heart-disease/

 

Google Life Sciences Division Is Now Called… Verily?

http://gizmodo.com/google-life-sciences-division-is-now-called-verily-1746729894

 

WIRED: Google’s Verily Is Spinning Off ‘Verb,’ a Secretive Robot-Surgery Startup

Alphabet’s Verily, née Google Life Sciences, has announced its first spinoff, a brand new robot-assisted surgery company.

http://www.wired.com/2015/12/googles-verily-is-spinning-off-verb-a-secretive-robot-surgery-startup/

 

Google Life Sciences Rebrands as Verily under Alphabet – Fortune

Vik Bajaj, CSO

http://fortune.com/2015/12/08/google-alphabet-verily/

Verily, I Swear, Google Life Sciences debuts a New Name

By CHARLES PILLER  DECEMBER 7, 2015

http://www.statnews.com/2015/12/07/verily-google-life-sciences-name/

 

Why biomedical superstars are signing on with Google
Tech firm’s ambitious goals and abundant resources attract life scientists.

Erika Check Hayden 21 October 2015

http://www.nature.com/news/why-biomedical-superstars-are-signing-on-with-google-1.18600

 

GOOGLE LIFE SCIENCES MAKES DIABETES ITS FIRST BIG TARGET

http://www.wired.com/2015/08/google-life-sciences-makes-diabetes-first-big-target/

 

GOOGLE WON THE INTERNET. NOW IT WANTS TO CURE DISEASES

http://www.wired.com/2015/08/google-won-internet-now-wants-cure-diseases/

 

Google Reveals Health-Tracking Wristband

Caroline Chen and Brian Womack

June 23, 2015 — 9:30 AM EDT

http://www.bloomberg.com/news/articles/2015-06-23/google-developing-health-tracking-wristband-for-health-research

 

Google Moves to the Operating Room in Robotics Deal With J&J

ALISTAIR BARR and JOSEPH WALKER

http://blogs.wsj.com/digits/2015/03/27/google-moves-to-the-operating-room-in-robotics-deal-with-jj/

 

Google, Biogen Seek Reasons for Advance of Multiple Sclerosis

Caroline Chen

January 27, 2015 — 9:00 AM EST

http://www.bloomberg.com/news/articles/2015-01-27/google-biogen-seek-reasons-for-advance-of-multiple-sclerosis

 

Google’s Newest Search: Cancer Cells

Google X Team Hopes to Develop Nanoparticles to Provide Early Detection of Cancer, Other Diseases

ALISTAIR BARR and RON WINSLOW

Updated Oct. 29, 2014 11:17 a.m. ET

http://www.wsj.com/articles/google-designing-nanoparticles-to-patrol-human-body-for-disease-1414515602

 

A Spoon That Shakes To Counteract Hand Tremors

Updated May 14, 2014 11:43 AM ET

INA JAFFE

http://www.npr.org/sections/health-shots/2014/05/13/310399325/a-spoon-that-shakes-to-counteract-hand-tremors

 

Google’s New Moonshot Project: the Human Body

Baseline Study to Try to Create Picture From the Project’s Findings

ALISTAIR BARR

Updated July 27, 2014 7:24 p.m. ET

http://www.wsj.com/articles/google-to-collect-data-to-define-healthy-human-1406246214

 

Novartis Joins With Google to Develop Contact Lens That Monitors Blood Sugar

MARK SCOTT JULY 15, 2014

http://www.nytimes.com/2014/07/16/business/international/novartis-joins-with-google-to-develop-contact-lens-to-monitor-blood-sugar.html

 


 

SOURCE

https://verily.com/

Part II: Innovations at a Different Scale: GDE Enterprises – A Case in Point of Healthcare in Focus – Work-in-Progress

 

 

Read Full Post »

Rheumatoid arthritis update

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Innovation update: Advancing the standard of care in rheumatoid arthritis 

Old innovation makes way for new innovation

Twenty years ago, the standard of care for RA was some combination of basic NSAIDs along with methotrexate. Caregivers focused on symptom relief, and it was widely understood that many patients would fail to achieve remission. Patients would eventually develop severely life-limiting disabilities as their disease progressed.

During this period, researchers presenting at conferences grew excited about data on a new class of drugs known as anti-tumor necrosis factor (TNF) antibodies. In an article published in Acta Orthopaedica Scandinavica in 1995, two physician-researchers wrote the following:

“Primary results have recently been published on the use of anti-TNF monoclonal antibodies. In a controlled trial these antibodies were able to significantly influence a number of disease-activity variables in RA. An important observation was that the clinical effect lasted from weeks to, in some cases, months.  Although the potential of these agents for clinical use is still uncertain, these observations suggest that interfering with certain targets of the immune-inflammatory process is possible, effective and so far without side effects.”

About four years after Drs. Van de Putte and Van Riel extolled the virtues of disease-modifying biologics in clinical trials, the first anti-TNF antibody, Remicade (infliximab) was approved in 1999. At that point, the standard of care for RA improved significantly, forever changing the treatment paradigm for patients with RA.

 

The expanding class of JAK inhibitors

At this year’s ACR meeting, researchers focused on anti-inflammatory antibodies and a relatively new class of oral drugs known as Janus kinase (JAK) inhibitors. Interest in JAK inhibitors has spiked since the approval of Pfizer’s oral medication Xeljanz (tofacitinib) — the first, and currently the only, JAK inhibitor approved for the treatment of moderate-to-severe RA. JAK inhibitors have garnered interest because of the role they can play in expanding a treatment area dominated by synthetic and biologic disease-modifying anti-rheumatic drugs (DMARDs). Could JAK inhibitors provide the breakthrough in RA that the anti-TNF antibodies provided almost 20 years ago?

Currently, Eli Lilly and Incyte are in late-stage development of baricitinib, a JAK1/JAK2 inhibitor for the treatment of RA. Until last December, Johnson & Johnson (J&J) and Astellas were working jointly on another JAK inhibitor, known as ASP015K, but J&J exercised its opt-out option and left the partnership. Astellas vowed to go it alone or look for a new partner, but there have not been many updates on ASP015K within the last year.

 

Innovation means understanding and responding to unmet needs

Like many other therapeutic areas, RA treatments are often used in combination. For some patients, the combination of methotrexate and a powerful biologic, such as Remicade (infliximab), will help achieve remission. Yet others will either not respond to methotrexate and Remicade, or will have a negative reaction. Understanding how to help nonresponders achieve relief has become a key area of research in RA.

According to Terence Rooney, MD, Medical Director at Lilly Bio-Medicines, “A substantial proportion of patients treated with methotrexate – commonly used across the disease continuum for 25 years – do not achieve satisfactory disease control, signaling a need for more effective RA treatment options. In addition, studies have shown that some patients who initially respond to biologics lose response over time, and approximately 40 percent of patients with high disease activity never respond adequately to TNF antagonist biologics.”

 

Innovative clinical trial design

As Lilly and Incyte approach the end of the development process for baricitinib, they have been collecting results from clinical trials designed to both establish basic efficacy and safety in placebo-controlled and comparator trials, and to obtain data on targeted patient populations.

According to Rooney, “The baricitinib phase three program investigated the benefit of baricitinib across the spectrum of patients with rheumatoid arthritis, including newly diagnosed patients, patients who had failed to respond to conventional DMARDs, and patients who had failed multiple injectable biologic DMARD therapies.”

“In addition, the phase 3 program included two 52-week studies that incorporated either methotrexate or adalimumab as active comparators to provide useful information for therapeutic positioning of baricitinib. In these studies, baricitinib was statistically superior to methotrexate and to adalimumab in improving signs and symptoms, physical function, and important patient-reported outcomes including pain, fatigue and stiffness.”

Rooney also pointed out that there is additional data establishing baricitinib as a DMARD that significantly inhibits progressive radiographic joint damage.

 

Experience plus evidence equals more innovation

As has become the norm, companies at ACR often highlight new data confirming the efficacy and safety of already approved drugs in larger patient populations and in real-world settings.

Lilly currently has data on more than 40,000 patients worldwide, reflecting its global ambitions. Assuming that baricitinib is approved next year (the goal is to file at the end of the year), Lilly will continue to present data at ACR in the coming years highlighting the results of its long-term extension study, RA-BEYOND.

 

Pfizer’s up-to-date Xeljanz data presentation at ACR

Although Xeljanz has been on the market for three years in more than 40 countries, Pfizer continues to focus on collecting new data and using it to expand use of Xeljanz. In fact, Pfizer had 20 abstracts focused solely on Xeljanz at ACR 2015.

According to Rory O’Connor, MD, Senior Vice President and Head of Global Medical Affairs, Global Innovative Pharmaceuticals Business, Pfizer, “Ongoing clinical trials and long-term extension studies provide important information about the safety and efficacy of Xeljanz in RA. We are focused on continuing to build on our knowledge of the clinical application of Xeljanz in real-world settings.”

Pfizer was also able to highlight new data that supports their recent NDA for Xeljanz XR, a once-daily formulation of Xeljanz, which is currently approved as a twice-daily dosing formulation.

 

JAK inhibition beyond RA

One of the most exciting things about the progress with JAK inhibitors is the possibility to innovate treatments beyond RA. Lilly has been exploring the role of JAK-dependent cytokines in the pathogenesis of numerous inflammatory and autoimmune diseases. The company also plans to meet with regulatory authorities to develop a pediatric program for juvenile RA and idiopathic arthritis.

Meanwhile, Pfizer has developed a broad portfolio of various JAK inhibitors and therapies with new modes of action. Already, Pfizer researchers have completed two phase three studies in ulcerative colitis and the top-line results have been positive.

Medical meetings are exciting because they provide a forum for discussing breakthroughs and portending a future in which the standard of care improves. For companies like Lilly, Incyte, and Pfizer, continual development of more novel approaches to serious diseases is like a call-and-response echo chamber in which innovation drives more innovation, resulting in better long-term outcomes for patients.

 

 

The JAK/STAT signaling pathway

http://d1dvw62tmnyoft.cloudfront.net/content/joces/117/8/1281/F1.large.jpg

 

In addition to the principal components of the pathway, other effector proteins have been identified that contribute to at least a subset of JAK/STAT signaling events. STAMs (signal-transducing adapter molecules) are adapter molecules with conserved VHS and SH3 domains (Lohi and Lehto, 2001). STAM1 and STAM2A can be phosphorylated by JAK1-JAK3 in a manner that is dependent on a third domain present in some STAMs, the ITAM (immunoreceptor tyrosine-based activation motif). Through a poorly understood mechanism, the STAMs facilitate the transcriptional activation of specific target genes, including MYC. A second adapter that facilitates JAK/STAT pathway activation is StIP (stat-interacting protein), a WD40 protein. StIPs can associate with both JAKs and unphosphorylated STATs, perhaps serving as a scaffold to facilitate the phosphorylation of STATs by JAKs. A third class of adapter with function in JAK/STAT signaling is the SH2B/Lnk/APS family. These proteins contain both pleckstrin homology and SH2 domains and are also substrates for JAK phosphorylation. Both SH2-Bβ and APS associate with JAKs, but the former facilitates JAK/STAT signaling while the latter inhibits it. The degree to which each of these adapter families contributes to JAK/STAT signaling is not yet well understood, but it is clear that various proteins outside the basic pathway machinery influence JAK/STAT signaling.

In addition to JAK/STAT pathway effectors, there are three major classes of negative regulator: SOCS (suppressors of cytokine signaling), PIAS (protein inhibitors of activated stats) and PTPs (protein tyrosine phosphatases) (reviewed by Greenhalgh and Hilton, 2001). Perhaps the simplest are the tyrosine phosphatases, which reverse the activity of the JAKs. The best characterized of these is SHP-1, the product of the mouse motheaten gene. SHP-1 contains two SH2 domains and can bind to either phosphorylated JAKs or phosphorylated receptors to facilitate dephosphorylation of these activated signaling molecules. Other tyrosine phosphatases, such as CD45, appear to have a role in regulating JAK/STAT signaling through a subset of receptors.

SOCS proteins are a family of at least eight members containing an SH2 domain and a SOCS box at the C-terminus (reviewed by Alexander, 2002). In addition, a small kinase inhibitory region located N-terminal to the SH2 domain has been identified for SOCS1 and SOCS3. The SOCS complete a simple negative feedback loop in the JAK/STAT circuitry: activated STATs stimulate transcription of the SOCS genes and the resulting SOCS proteins bind phosphorylated JAKs and their receptors to turn off the pathway. The SOCS can exert their negative regulation by three means. First, by binding phosphotyrosines on the receptors, SOCS physically block the recruitment of signal transducers, such as STATs, to the receptor. Second, SOCS proteins can bind directly to JAKs or to the receptors to specifically inhibit JAK kinase activity. Third, SOCS interact with the elongin BC complex and cullin 2, facilitating the ubiquitination of JAKs and, presumably, the receptors. Ubiquitination of these targets decreases their stability by targeting them for proteasomal degradation.
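The feedback logic described in this paragraph (STATs induce SOCS transcription, and SOCS then shuts the pathway down) can be pictured as a simple dynamical system. The toy Python model below is my own illustration, not taken from the review, and every rate constant in it is an arbitrary, purely illustrative value.

    def simulate_socs_feedback(steps=200, dt=0.1):
        # Toy negative feedback loop: JAK activity -> pSTAT -> SOCS, and SOCS suppresses JAK.
        jak_active, stat_p, socs = 1.0, 0.0, 0.0
        history = []
        for _ in range(steps):
            d_stat = 1.0 * jak_active - 0.2 * stat_p   # phosphorylation minus dephosphorylation
            d_socs = 0.5 * stat_p - 0.1 * socs         # STAT-driven transcription minus decay
            jak_active = 1.0 / (1.0 + 2.0 * socs)      # SOCS suppresses JAK/receptor activity
            stat_p += dt * d_stat
            socs += dt * d_socs
            history.append((jak_active, stat_p, socs))
        return history

    final_jak, final_stat, final_socs = simulate_socs_feedback()[-1]
    print(f"steady state (toy units): JAK {final_jak:.2f}, pSTAT {final_stat:.2f}, SOCS {final_socs:.2f}")

The qualitative behaviour, an initial burst of STAT phosphorylation that is then damped as SOCS accumulates, is the feedback loop the text describes; the numbers themselves mean nothing.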

The third class of negative regulator is the PIAS proteins: PIAS1, PIAS3, PIASx and PIASy. These proteins have a Zn-binding RING-finger domain in the central portion, a well-conserved SAP (SAF-A/Acinus/PIAS) domain at the N-terminus, and a less-well-conserved carboxyl domain. The latter domains are involved in target protein binding. The PIAS proteins bind to activated STAT dimers and prevent them from binding DNA. The mechanism by which PIAS proteins act remains unclear. However, PIAS proteins have recently been demonstrated to associate with the E2 conjugase Ubc9 and to have E3 conjugase activity for sumoylation that is mediated by the RING finger domain (reviewed by Jackson, 2001). Although there is evidence that STATs can be modified by sumoylation (Rogers et al., 2003), the function of that modification in negative regulation is not yet known.

Although the mechanism of JAK/STAT signaling is relatively simple in theory, the biological consequences of pathway activation are complicated by interactions with other signaling pathways (reviewed by Heinrich et al., 2003; Rane and Reddy, 2000; Shuai, 2000). An understanding of this cross-talk is only beginning to emerge, but the best characterized interactions of the JAK/STAT pathway are with the receptor tyrosine kinase (RTK)/Ras/MAPK (mitogen-activated protein kinase) pathway. The relationship between these cascades is complex and their paths cross at multiple levels, each enhancing activation of the other. First, activated JAKs can phosphorylate tyrosines on their associated receptors that can serve as docking sites for SH2-containing adapter proteins from other signaling pathways. These include SHP-2 and Shc, which recruit the GRB2 adapter and stimulate the Ras cascade. The same mechanism stimulates other cascades, such as the recruitment and JAK phosphorylation of insulin receptor substrate (IRS) and p85, which results in the activation of the phosphoinositide 3-kinase (PI3K) pathway [for more on PI3K signaling, see Foster et al. (Foster et al., 2003)]. JAK/STAT signaling also indirectly promotes Ras signaling through the transcriptional activation of SOCS3. SOCS3 binds RasGAP, a negative regulator of Ras signaling, and reduces its activity, thereby promoting activation of the Ras pathway. Reciprocally, RTK pathway activity promotes JAK/STAT signaling by at least two mechanisms. First, the activation of some RTKs, including EGFR and PDGFR, results in the JAK-independent tyrosine phosphorylation of STATs, probably by the Src kinase. Second, RTK/Ras pathway stimulation causes the downstream activation of MAPK. MAPK specifically phosphorylates a serine near the C-terminus of most STATs. While not absolutely necessary for STAT activity, this serine phosphorylation dramatically enhances transcriptional activation by STAT. In addition to RTK and PI3K interactions with JAK/STAT signaling, multiple levels of cross-talk with the TGF-β signaling pathway have been recently reported [for a review of TGF-β, see (Moustakas, 2002)]. Furthermore, the functions of activated STATs can be altered through association with other transcription factors and cofactors that are regulated by other signaling pathways. Thus the integration of input from many signaling pathways must be considered if we are to understand the biological consequences of cytokine stimulation.

References

…..

 

https://youtu.be/9JHBHSHaBeI

Published on 27 Feb 2014

The JAK/STAT secondary messenger signalling pathway.
Presented by: Joseph Farahany, M.D.

 

Jak/Stat Signaling Pathway

 

Jaks and Stats are critical components of many cytokine receptor systems, regulating growth, survival, differentiation, and pathogen resistance. An example of these pathways is shown for the IL-6 (or gp130) family of receptors, which coregulate B cell differentiation, plasmacytogenesis, and the acute phase reaction. Cytokine binding induces receptor dimerization, activating the associated Jaks, which phosphorylate themselves and the receptor. The phosphorylated sites on the receptor and Jaks serve as docking sites for the SH2-containing Stats, such as Stat3, and for SH2-containing proteins and adaptors that link the receptor to MAP kinase, PI3K/Akt, and other cellular pathways.

Phosphorylated Stats dimerize and translocate into the nucleus to regulate target gene transcription. Members of the suppressor of cytokine signaling (SOCS) family dampen receptor signaling via homologous or heterologous feedback regulation. Jaks or Stats can also participate in signaling through other receptor classes, as outlined in the Jak/Stat Utilization Table. Researchers have found Stat3 and Stat5 to be constitutively activated by tyrosine kinases other than Jaks in several solid tumors.

The Jak/Stat pathway mediates the effects of cytokines, like erythropoietin, thrombopoietin, and G-CSF, which are protein drugs for the treatment of anemia, thrombocytopenia, and neutropenia, respectively. The pathway also mediates signaling by interferons, which are used as antiviral and antiproliferative agents. Researchers have found that dysregulated cytokine signaling contributes to cancer. Aberrant IL-6 signaling contributes to the pathogenesis of autoimmune diseases, inflammation, and cancers such as prostate cancer and multiple myeloma. Jak inhibitors currently are being tested in models of multiple myeloma. Stat3 can act as an oncogene and is constitutively active in many tumors. Crosstalk between cytokine signaling and EGFR family members is seen in some cancer cells. Research has shown that in glioblastoma cells overexpressing EGFR, resistance to EGFR kinase inhibitors is induced by Jak2 binding to EGFR via the FERM domain of the former [Sci. Signal. (2013) 6, ra55].

Activating Jak mutations are major molecular events in human hematological malignancies. Researchers have found a unique somatic mutation in the Jak2 pseudokinase domain (V617F) that commonly occurs in polycythemia vera, essential thrombocythemia, and idiopathic myelofibrosis. This mutation results in the pathologic activation of Jak2, associated with receptors for erythropoietin, thrombopoietin, and G-CSF, which control erythroid, megakaryocytic, and granulocytic proliferation and differentiation. Researchers have also shown that somatic acquired gain-of-function mutations of Jak1 are found in adult T cell acute lymphoblastic leukemia. Somatic activating mutations in Jak1, Jak2, and Jak3 have also been identified in pediatric acute lymphoblastic leukemia (ALL). Furthermore, Jak2 mutations have been detected around pseudokinase domain R683 (R683G or DIREED) in Down syndrome childhood B-ALL and pediatric B-ALL.

Selected Reviews:

See more at: http://www.cellsignal.com/contents/science-pathway-research-immunology-and-inflammation/jak-stat-signaling-pathway/pathways-il6

 

The JAK-STAT Signaling Pathway: Input and Output Integration

  1. Peter J. Murray

The Journal of Immunology, Mar 1, 2007; 178(5): 2623-2629.   http://dx.doi.org/10.4049/jimmunol.178.5.2623

Universal and essential to cytokine receptor signaling, the JAK-STAT pathway is one of the best understood signal transduction cascades. Almost 40 cytokine receptors signal through combinations of four JAK and seven STAT family members, suggesting commonality across the JAK-STAT signaling system. Despite intense study, there remain substantial gaps in understanding how the cascades are activated and regulated. Using the examples of the IL-6 and IL-10 receptors, I will discuss how diverse outcomes in gene expression result from regulatory events that effect the JAK1-STAT3 pathway, common to both receptors. I also consider receptor preferences by different STATs and interpretive problems in the use of STAT-deficient cells and mice. Finally, I consider how the suppressor of cytokine signaling (SOCS) proteins regulate the quality and quantity of STAT signals from cytokine receptors. New data suggests that SOCS proteins introduce additional diversity into the JAK-STAT pathway by adjusting the output of activated STATs that alters downstream gene activation.

 

 

The mammalian JAK and STAT family members have been extensively, and seemingly exhaustively, analyzed in the mouse and human systems. All four JAK and seven STAT family members have been deleted in the mouse, in addition to the creation of conditional alleles for genes whose loss of function leads to embryonic or perinatal lethality (Stat3, combined deficiency of Stat5a and Stat5b, and Jak2). In humans, detailed genetic studies have been performed in people bearing mutant Jak or Stat genes. Specific Abs to phospho-forms of each protein are used to study how the JAK-STAT cascade is activated by cytokine receptors. Crystallographic studies have illuminated structural information for multiple STAT family members in different forms. Pharmacological inhibitors have been developed for clinical use where JAK-STAT signaling is implicated in disease pathology and progression. Finally, in most cases, a specific JAK-STAT combination has been paired with each cytokine receptor, and this information translated into cell-type specific patterns of cytokine responsiveness and gene expression.

Major questions remain concerning how the JAK-STAT cascade functions to control specific gene expression patterns, and how the cascades are regulated. I will describe three elements of JAK-STAT signaling that require experimental investigation. First, I will address an unexpected experimental complication that arises from the analysis of mice and cells that lack one or more STAT family member. Second, I will use JAK1-STAT3 signaling from the IL-10R and IL-6R systems to illustrate that we lack detailed understanding of how specificity in gene expression is generated by receptors that use identical JAK-STAT members. Third, we have yet to explain how STAT activation is negatively regulated. Although the suppressor of cytokine signaling (SOCS) proteins are the best understood negative regulators of the JAK-STAT pathway, the biochemical mechanism of SOCS-mediated inhibition is unexplained. Moreover, additional inhibitory pathways have also been proposed to block the production of activated STATs. Collectively, I will argue that our understanding of the pathway from cytokine receptor to gene expression profile is in its infancy, but remains one of the best opportunities to dissect signal transduction.

Overview of the proximal JAK-STAT activation mechanism

The current model of JAK-STAT signaling holds that cytokine receptor engagement activates the associated JAK combination, which in turn phosphorylates the receptor cytoplasmic domain to allow recruitment of a STAT, which in turn is phosphorylated, dimerizes and moves to the nucleus to bind specific sequences in the genome and activate gene expression. Cytoplasmic domains of cytokine receptors associate with JAKs via JAK binding sites located close to the membrane (1). The postulated role of JAKs in trafficking or chaperoning the receptors to the cell surface is debated (2, 3, 4, 5, 6). Regardless of when and where cytokine receptors and JAKs associate, their close apposition at the membrane is required to stimulate the kinase activity of the JAK following cytokine binding. At this stage in the activation of the pathway, we understand next to nothing about the structural basis of the JAK-receptor interaction, or how receptor intracellular domains reorient upon cytokine binding and physically contact the JAK to receive the phosphorylation modification.

JAK-mediated phosphorylation of the receptor creates binding sites for the Src homology 2 (SH2) domains of the STATs. STAT recruitment is followed by tyrosine, and in some cases, serine phosphorylation on key residues (by the JAKs and other closely associated kinases) that leads to transit into the nucleus. This brief summary of the activation of the JAK-STAT pathway omits numerous unresolved details: the STAT monomer to dimer transition has been questioned, as has the role of phosphorylation in dimerization and nuclear transit (7). Furthermore, it is unclear how many configurations of STAT homo- and heterocomplexes are present in cells before, during, and after cytokine stimulation (8, 9,10). We do not understand the detailed structural basis for the preference of one SH2 domain for a given receptor, and we have little knowledge of how other non-JAK kinases are recruited to the receptors and phosphorylate the STATs.

Many receptors signal through a small number of JAKs

Cytokine receptors signal through two types of pathways: the JAK-STAT pathway and other pathways that usually involve the activation of the MAP kinase cascade. Although the latter will not be discussed here, it is worth noting that elegant genetic studies have demonstrated the importance of these pathways in various pathological systems (11, 12,13, 14). There are now ∼36 cytokine receptor combinations that respond to ∼38 cytokines (counting the type I IFNs as one because they all signal through the IFN-αβR). Different cells and tissues express distinct receptor combinations that respond to cytokine combinations unique to the microenvironment or systemic response of the organism. Hence, at any given time, a single cell may integrate signals from multiple cytokine receptors. Genetic studies have established that the cytokine receptor system is restrictive in that different classes of receptors preferentially use one JAK or JAK combination (7): receptors required for hemopoietic cell development and proliferation use JAK2, common γ-chain receptors use JAK1 and JAK3 whereas other receptors use only JAK1 (Fig. 1). Unexplained is the selective use of these combinations: why the IFN-γR rigidly uses the JAK1, JAK2 combination is unknown as is the restricted use of TYK2. Compared with JAK1–3, TYK2 is unusual in that loss of function mutations in the mouse have shown obligate, but not absolute, requirements in IFN-αβR and IL-12R signaling (15, 16). In contrast, human TYK2 seems to be essential for signaling through a broader range of cytokine receptors (17).

 

FIGURE 1.

The majority of cytokine receptors use three JAK combinations. Shown are well-studied cases where JAK usage by each cytokine receptor has been established by genetic and biochemical studies. Exceptions shown are the G-CSFR (∗) where it is currently unclear whether both JAK1 and JAK2 are required together. Additionally, the IL-12R (†) and IL-23R (†) require TYK2 but the requirement for JAK2 has not been definitively determined. Receptors that use JAK2 and JAK3, JAK3 alone, TYK2 alone, or JAK3 and TYK2 have not been described.
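To make Figure 1 easier to scan, the snippet below encodes the receptor-class-to-JAK groupings stated in the paragraph and legend above as a plain Python mapping. The groupings follow the wording of the text; the example receptor names in parentheses are my own illustrative additions, not an exhaustive or authoritative table.

    # Reading aid for Figure 1: which JAKs each receptor class is described as using.
    RECEPTOR_JAK_USAGE = {
        "hemopoietic development/proliferation receptors (e.g. EpoR)": {"JAK2"},
        "common gamma-chain receptors (e.g. IL-2R, IL-7R)": {"JAK1", "JAK3"},
        "other receptors, e.g. IL-6R / IL-10R": {"JAK1"},
        "IFN-gamma receptor": {"JAK1", "JAK2"},
        "type I IFN receptor (IFN-alpha/beta R)": {"JAK1", "TYK2"},
        "IL-12R / IL-23R": {"TYK2"},  # JAK2 requirement not definitively established (per the legend)
    }

    def jaks_for(receptor_class):
        # Return the JAK set reported for a receptor class, or an empty set if not listed.
        return RECEPTOR_JAK_USAGE.get(receptor_class, set())

    print(jaks_for("IFN-gamma receptor"))  # {'JAK1', 'JAK2'} (set ordering may vary)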

The preferential association of JAKs to certain receptor classes raises several issues. First, how did the JAK-receptor combinations evolve? Because the number of receptors is relatively large, why has the number of JAKs remained small? Why have the combinations of JAK pairs also remained small given that there are 10 possible combinations that can be used (Fig. 1)? Second, how flexible is the cytokine receptor-JAK pair? That is, can receptors be engineered for interchangeable JAK use, or is a given JAK combination fixed for a specific receptor class? For example, can JAK1, JAK3, or TYK2 activate erythropoietin receptor (EpoR) signaling (if so engineered) or is JAK2 obligatory for signaling? These questions allude to a fundamental issue that concerns the function of the JAK in cytokine receptor activation: if the only function of the JAKs is to phosphorylate tyrosine residues on the cytoplasmic domain of the receptors, then it should be possible to trade JAK-receptor pairs. If these receptors retain identical downstream gene expression profiles, then the signal generated by the JAK is generic and functions primarily to activate the receptor (6). Conversely, it is also possible that each receptor-JAK combination retains crucial specificity functions and swapping, for example, JAK1 for JAK2 on the EpoR will modify or destroy a specific function in erythrogenesis. These questions can be addressed experimentally by replacing one preferred JAK binding site for another in genes encoding different receptors. The EpoR is a good test example because the activity of the receptor and its signaling pathway is essential for life and erythropoiesis is readily assayed.

Core versus cell-type specific STAT signaling

Microarray experiments designed to monitor changes in gene expression induced by JAK-STAT signaling have revealed that both cell-type specific transcription and core, or stereotypic, mRNA profiles are induced by activated cytokine receptors in different cell types (Fig. 2). For example, IFN-γ, via STAT1, induces the expression of a similar cohort of genes regardless of the cell type tested (18). These genes are often termed the “IFN signature” and overlap with the gene expression pattern induced by IFN-αβ signaling that also involves STAT1, in cooperation with STAT2 and IRF9. The IFN signature is readily observed in microarray experiments and is indicative of STAT1 activity. The STAT6 pathway activated by IL-4 or IL-13 provides an example of a cell-type specific response. IL-4-regulated genes in T cells have a distinct signature compared with IL-4/IL-13 signaling in macrophages or other non-lymphocytes (19, 20, 21, 22). In the latter, genes such as Arg1 (encoding arginase 1) are often induced >100-fold but are silent in T cells (23, 24, 25, 26, 27). Collectively these data argue that STATs activate defined gene sets, depending on their genomic accessibility, and possibly on cofactors that further refine gene expression profiles. STAT3 signaling illustrates a more complex system and will be discussed below to illustrate the distinctions between IL-6 and IL-10 signaling.

 

FIGURE 2.

Core signaling by STATs. Representative examples of gene expression induced by STAT signaling in different tissues. The examples were extracted and edited from numerous microarray and empirical studies.

Interpreting experiments using STAT loss-of-function systems

Experiments with the different STAT knockout mice, and cells derived from these animals, have been critical for understanding specific requirements of individual STATs in gene expression following cytokine receptor signaling. The interpretation of these experiments is generally straightforward. For example, STAT5a and STAT5b are essential for the expression of genes that promote hemopoietic survival (28, 29, 30) whereas STAT1 is required for the expression of IFN-regulated genes that are involved in the protection against pathogens (18). However, by EMSA and immunoblotting experiments, most cytokines have been shown to activate multiple STATs, prompting experiments to determine transcriptional responses that can be activated in the absence of a given STAT. An initial example of this type of approach was performed by Schreiber and colleagues who interrogated gene expression profiles induced by IFN-γ signaling in the absence of STAT1 (31, 32). In these experiments, IFN-γ was used to stimulate STAT1-deficient bone marrow-derived macrophages and fibroblasts. Numerous genes were induced by IFN-γ in the absence of STAT1, leading to the conclusion that the IFN-γR activates a STAT1-independent gene expression program. However, inspection of the genes induced by IFN-γ signaling in STAT1-deficient cells shows many to be STAT3-regulated genes such as Socs3, Gadd45, and Cebpb. STAT3 phosphorylation is normally induced by IFN-γ in wild-type cells but in the absence of STAT1, STAT3 signaling is dominant. What is the mechanism of this effect? We now know from experiments using STAT-deficient cells that receptor occupancy, or lack of occupancy by the dominant STAT that binds the receptor, causes a switch from one activated STAT to another (33). A converse example is the conversion of IL-6R signaling to a dominant STAT1 activation in STAT3-deficient cells (34). This switch causes the downstream induction of the IFN gene expression pathway just as IFN-γ would cause in wild-type cells.

A related example is observed when IL-6 signaling is tested in the absence of SOCS3. SOCS3 is induced by STAT signaling from different cytokine receptors and functions as a feedback inhibitor of the IL-6R (and the G-CSFR, LIFR, and leptinR) by binding to phosphorylated Y757 on the gp130 cytoplasmic domain (see below). However in the absence of SOCS3, STAT3 phosphorylation is greatly increased (35, 36, 37). At the same time however, STAT1 phosphorylation is also induced, leading to a dominant IFN-like gene expression signature (35, 36). Thus SOCS3 regulates both the quantity and type of STAT signal generated from the IL-6R. Although the mechanism of the SOCS3 effect is unclear, the promiscuity of different receptors for different STATs argues that loss-of-function experiments must be carefully examined for the activation of other STAT molecules that fill the “hole” created by the loss of one STAT. These data also suggest that different cytokine receptors have evolved selectivity for different classes of STATs. Although STAT1 and STAT3 can apparently interchangeably bind the IL-6R or IFN-γR when either molecule is missing, signaling in wild-type cells shows a strong preference for one STAT over the other. Likewise, other receptors may have evolved to bind only one STAT, and in the absence of the key STAT, the other STATs cannot bind and/or be activated by the receptor.

The above examples primarily describe experiments using STAT1–STAT3-activating receptors but these are not isolated cases. In T cells stimulated by IL-12, STAT4 is activated and drives IFN-γ production. This pathway is a central regulatory event in the development of the Th1 type T cell responses. IFN-αβ, via the IFN-αβR, also activates STAT4 (in addition to STAT1 and STAT2 that forms a complex with IRF-9 to mediate anti-viral gene expression) but cannot activate strong IFN-γ production and therefore cannot drive Th1 development (38). However, in the absence of STAT1, IFN-αβ causes a large increase in IFN-γ production, especially in vivo during viral infection (39, 40). These data were originally interpreted to mean that STAT1 normally suppressed IFN-γ production. However, the data can just as easily be resolved when we consider that STAT4 activation from the IFN-αβR is increased in the absence of STAT1. Recent data confirm this interpretation but also show that STAT4 activation by the IFN-αβR, although increased, cannot sustain IFN-γ production from T cells when compared with IL-12 (38). This is probably because of the stronger differential activity of SOCS1 on the IFN-αβR versus the IL-12R (discussed below). I would predict that an IFN-αβR that is refractory to SOCS1 (or active in a Socs1−/− background) would behave identically to the IL-12R in the absence of STAT1.

Although loss of gene expression may be observed in a given STAT knockout, a corresponding increase in the ectopic activation of another STAT pathway may confound the interpretation of results in both in vitro and in vivo systems. Because specific Abs are available for each tyrosine-phosphorylated STAT molecule, a simple solution is to first measure which other STATs are activated by a given receptor in the absence of the STAT of interest. Experiments using STAT knockout systems should also be supported by additional data that uses complementary mutations in the receptor that ablate STAT recruitment, or complete loss of the receptor. Finally, it is worth noting that the loss of a STAT pathway from a receptor signaling system can cause additional loss of key negative regulatory systems including feedback loops such as SOCS induction as presently debated for G-CSFR signaling and receptor systems discussed below (41, 42, 43, 44, 45).

  1. Negative regulation of the JAK-STAT signal
  2. Is there functional equivalence in signaling from receptors using the same JAK-STAT combination in the same cell?
  3. Future directions

 

FIGURE 3.

Proposed differential STAT activation by IL-10 or IL-6. Shown are three classes of genes activated by STAT3 where Socs3 is a representative “common” gene induced by both receptors. In the absence of SOCS3, the IL-6R can activate the anti-inflammatory genes in the same way as the IL-10R. The mechanism of this effect remains to be established.

 

JAK/STAT Activation Inhibitors

The JAK/STAT pathway plays an important role in cytokine receptor-mediated signal transduction via activation of downstream signal transducers and activators of transcription (STAT), phosphatidylinositol 3-kinase (PI3K), and mitogen-activated protein kinase (MAPK) pathways.
These inhibitors are useful tools for exploring the contribution of JAK/STAT-mediated signaling.

Pathways of inhibition of JAK/STAT activation

JAK/STAT Activation Inhibitors

  • AG490: JAK2 inhibitor, 10 mg
  • AZD1480: JAK1 and JAK2 inhibitor, 5 mg
  • CP-690550: JAK3 inhibitor, 5 mg
  • CYT387: JAK1/JAK2 and TBK1/IKK-ε inhibitor, 10 mg
  • Ruxolitinib: JAK1 and JAK2 inhibitor, 5 mg

 

Methotrexate Is a JAK/STAT Pathway Inhibitor

Sally Thomas, Katherine H. Fisher, John A. Snowden, Sarah J. Danson, Stephen Brown, Martin P. Zeidler

PLoS ONE. Published: July 1, 2015
DOI: http://dx.doi.org/10.1371/journal.pone.0130078
Background 

The JAK/STAT pathway transduces signals from multiple cytokines and controls haematopoiesis, immunity and inflammation. In addition, pathological activation is seen in multiple malignancies, including the myeloproliferative neoplasms (MPNs). Given this, drug development efforts have targeted the pathway with JAK inhibitors such as ruxolitinib. Although effective, its high cost and side effects have limited its adoption. Thus, a need for effective, low-cost treatments remains.

Methods & Findings        

We used the low-complexity Drosophila melanogaster JAK/STAT pathway to screen for small molecules that modulate JAK/STAT signalling. This screen identified methotrexate and the closely related aminopterin as potent suppressors of STAT activation. We show that methotrexate suppresses human JAK/STAT signalling without affecting other phosphorylation-dependent pathways. Furthermore, methotrexate significantly reduces STAT5 phosphorylation in cells expressing JAK2 V617F, a mutation associated with most human MPNs. Methotrexate acts independently of dihydrofolate reductase (DHFR), and its effect is comparable to that of the JAK1/2 inhibitor ruxolitinib. However, cells treated with methotrexate still retain their ability to respond to physiological levels of the ligand erythropoietin.

Conclusions

Aminopterin and methotrexate were the first chemotherapy agents developed and act as competitive inhibitors of DHFR. Methotrexate is also widely used at low doses to treat inflammatory and immune-mediated conditions, including rheumatoid arthritis. In this low-dose regimen, folate supplements are given to mitigate side effects by bypassing the biochemical requirement for DHFR. Although this activity is independent of DHFR, the mechanism of action underlying the low-dose effects of methotrexate is unknown. Given that multiple pro-inflammatory cytokines signal through the pathway, we suggest that suppression of the JAK/STAT pathway is likely to be the principal anti-inflammatory and immunosuppressive mechanism of action of low-dose methotrexate. In addition, we suggest that patients with JAK/STAT-associated haematological malignancies may benefit from low-dose methotrexate treatment. While the JAK1/2 inhibitor ruxolitinib is effective, its £43,200 annual cost precludes widespread adoption. With an annual methotrexate cost of around £32, our findings represent an important development with significant future potential.

Citation: Thomas S, Fisher KH, Snowden JA, Danson SJ, Brown S, Zeidler MP (2015) Methotrexate Is a JAK/STAT Pathway Inhibitor. PLoS ONE 10(7): e0130078. http://dx.doi.org/10.1371/journal.pone.0130078

 

 

 

Read Full Post »

Microscopy and Spatially Resolved Chemical Analysis

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

 

Combining the Power of Mass Spectrometry, Microscopy

Published: Monday, Nov 9, 2015    http://www.technologynetworks.com/news.aspx?ID=184983

A tool that provides world-class microscopy and spatially resolved chemical analysis shows considerable promise for advancing a number of areas of study, including chemical science, pharmaceutical development and disease progression.

The hybrid optical microscope/mass spectrometry-based imaging system developed at the Department of Energy’s Oak Ridge National Laboratory operates under ambient conditions and requires no pretreatment of samples to analyze chemical compounds with sub-micron resolution. One micron is equal to about 1/100th the width of a human hair. Results of the work were recently published by postdoctoral associate Jack Cahill, Gary Van Berkel, and Vilmos Kertesz of ORNL’s Chemical Sciences Division.

“Knowing the chemical basis of material interactions that take place at interfaces is vital for designing and advancing new functional materials that are important for DOE missions such as organic photovoltaics for solar energy,” Van Berkel said. “In addition, the new tool can be used to better understand the chemical basis of important biological processes such as drug transport, disease progression and response for treatment.”


The hybrid instrument removes tiny amounts of material, such as human tissue or an organic polymer, from a sample by laser ablation; the ablated material is captured and transported via a liquid stream to the ionization source of the mass spectrometer. In just seconds, a computer screen displays the results.

Researchers noted that the resolution of less than one micron is essential to accurately differentiate and distinguish between polymers and sub-components of similar-sized cells.

“Today’s mass spectrometry imaging techniques are not yet up to the task of reliably acquiring molecular information on a wide range of compound types,” Cahill said. “Examples include synthetic polymers used in various functional materials like light harvesting and emitting devices or biopolymers like cellulose in plants or proteins in animal tissue.”

This technology, however, provides the long-sought detailed chemical analysis through a simple interface between a hybrid optical microscope and an electrospray ionization system for mass spectrometry.

 


 

Read Full Post »

Clinical Laboratory Challenges

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

CLINICAL LABORATORY NEWS   

The Lab and CJD: Safe Handling of Infectious Prion Proteins

Body fluids from individuals with possible Creutzfeldt-Jakob disease (CJD) present distinctive safety challenges for clinical laboratories. Sporadic, iatrogenic, and familial CJD (known collectively as classic CJD), along with variant CJD, kuru, Gerstmann-Sträussler-Scheinker, and fatal familial insomnia, are prion diseases, also known as transmissible spongiform encephalopathies. Prion diseases affect the central nervous system, and from the onset of symptoms follow a typically rapid progressive neurological decline. While prion diseases are rare, it is not uncommon for the most prevalent form—sporadic CJD—to be included in the differential diagnosis of individuals presenting with rapid cognitive decline. Thus, laboratories may deal with a significant number of possible CJD cases, and should have protocols in place to process specimens, even if a confirmatory diagnosis of CJD is made in only a fraction of these cases.

The Lab’s Role in Diagnosis

Laboratory protocols for handling specimens from individuals with possible, probable, and definitive cases of CJD are important to ensure timely and appropriate patient management. When the differential includes CJD, an attempt should be made to rule in or rule out other causes of rapid neurological decline. Laboratories should be prepared to process blood and cerebrospinal fluid (CSF) specimens in such cases for routine analyses.

Definitive diagnosis requires identification of prion aggregates in brain tissue, which can be achieved by immunohistochemistry, a Western blot for proteinase K-resistant prions, and/or by the presence of prion fibrils. Thus, confirmatory diagnosis is typically achieved at autopsy. A probable diagnosis of CJD is supported by elevated concentration of 14-3-3 protein in CSF (a non-specific marker of neurodegeneration), EEG, and MRI findings. Thus, the laboratory may be required to process and send CSF samples to a prion surveillance center for 14-3-3 testing, as well as blood samples for sequencing of the PRNP gene (in inherited cases).

Processing Biofluids

Laboratories should follow standard protective measures when working with biofluids potentially containing abnormally folded prions, such as donning standard personal protective equipment (PPE); avoiding or minimizing the use of sharps; using single-use disposable items; and processing specimens to minimize formation of aerosols and droplets. An additional safety consideration is the use of single-use disposable PPE; otherwise, re-usable items must be either cleaned using prion-specific decontamination methods or destroyed.

Blood. In experimental models, infectivity has been detected in the blood; however, there have been no cases of secondary transmission of classic CJD via blood product transfusions in humans. As such, blood has been classified by the World Health Organization (WHO), on epidemiological evidence, as containing “no detectable infectivity,” which means it can be processed by routine methods. Similarly, except for CSF, all other body fluids contain no infectivity and can be processed following standard procedures.

In contrast to classic CJD, there have been four cases of suspected secondary transmission of variant CJD via transfused blood products in the United Kingdom. Variant CJD, the prion disease associated with mad cow disease, is unique in its distribution of prion aggregates outside of the central nervous system, including the lymph nodes, spleen, and tonsils. For regions where variant CJD is a concern, laboratories should consult their regulatory agencies for further guidance.

CSF. Relative to highly infectious tissues of the brain, spinal cord, and eye, infectivity has been identified less often in CSF and is considered to have “low infectivity,” along with kidney, liver, and lung tissue. Since CSF can contain infectious material, WHO has recommended that analyses not be performed on automated equipment due to challenges associated with decontamination. Laboratories should perform a risk assessment of their CSF processes, and, if deemed necessary, consider using manual methods as an alternative to automated systems.

Decontamination

The infectious agent in prion disease is unlike any other infectious pathogen encountered in the laboratory; it is formed of misfolded and aggregated prion proteins. This aggregated proteinaceous material forms the infectious unit, which is incredibly resilient to degradation. Moreover, in vitro studies have demonstrated that disrupting large aggregates into smaller aggregates increases cytotoxicity. Thus, if the aim is to abolish infectivity, all aggregates must be destroyed. Disinfection procedures used for viral, bacterial, and fungal pathogens, such as alcohol, boiling, formalin, dry heat (<300°C), autoclaving at 121°C for 15 minutes, and ionizing, ultraviolet, or microwave radiation, are either ineffective or only variably effective against aggregated prions.

The only means to ensure no risk of residual infectious prions is to use disposable materials. This is not always practical, as, for instance, a biosafety cabinet cannot be discarded if there is a CSF spill in the hood. Fortunately, there are several protocols considered sufficient for decontamination. For surfaces and heat-sensitive instruments, such as a biosafety cabinet, WHO recommends flooding the surface with 2N NaOH or undiluted NaClO, letting stand for 1 hour, mopping up, and rinsing with water. If the surface cannot tolerate NaOH or NaClO, thorough cleaning will remove most infectivity by dilution. Laboratories may derive some additional benefit by using one of the partially effective methods discussed previously. Non-disposable heat-resistant items preferably should be immersed in 1N NaOH, heated in a gravity displacement autoclave at 121°C for 30 min, cleaned and rinsed in water, then sterilized by routine methods. WHO has outlined several alternate decontamination methods. Using disposable cover sheets is one simple solution to avoid contaminating work surfaces and associated lengthy decontamination procedures.

With standard PPE—augmented by a few additional safety measures and prion-specific decontamination procedures—laboratories can safely manage biofluid testing in cases of prion disease.

 

The Microscopic World Inside Us  

Emerging Research Points to Microbiome’s Role in Health and Disease

Thousands of species of microbes—bacteria, viruses, fungi, and protozoa—inhabit every internal and external surface of the human body. Collectively, these microbes, known as the microbiome, outnumber the body’s human cells by about 10 to 1 and include more than 1,000 species of microorganisms and several million genes residing in the skin, respiratory system, urogenital, and gastrointestinal tracts. The microbiome’s complicated relationship with its human host is increasingly considered so crucial to health that researchers sometimes call it “the forgotten organ.”

Disturbances to the microbiome can arise from nutritional deficiencies, antibiotic use, and antiseptic modern life. Imbalances in the microbiome’s diverse microbial communities, which interact constantly with cells in the human body, may contribute to chronic health conditions, including diabetes, asthma and allergies, obesity and the metabolic syndrome, digestive disorders including irritable bowel syndrome (IBS), and autoimmune disorders like multiple sclerosis and rheumatoid arthritis, research shows.

While study of the microbiome is a growing research enterprise that has attracted enthusiastic media attention and venture capital, its findings are largely preliminary. But some laboratorians are already developing a greater appreciation for the microbiome’s contributions to human biochemistry and are considering a future in which they expect to measure changes in the microbiome to monitor disease and inform clinical practice.

Pivot Toward the Microbiome

Following the National Institutes of Health (NIH) Human Genome Project, many scientists noted the considerable genetic signal from microbes in the body and the existence of technology to analyze these microorganisms. That realization led NIH to establish the Human Microbiome Project in 2007, said Lita Proctor, PhD, its program director. In the project’s first phase, researchers studied healthy adults to produce a reference set of microbiomes and a resource of metagenomic sequences of bacteria in the airways, skin, oral cavities, and the gastrointestinal and vaginal tracts, plus a catalog of microbial genome sequences of reference strains. Researchers also evaluated specific diseases associated with disturbances in the microbiome, including gastrointestinal diseases such as Crohn’s disease, ulcerative colitis, IBS, and obesity, as well as urogenital conditions, those that involve the reproductive system, and skin diseases like eczema, psoriasis, and acne.

Phase 1 studies determined the composition of many parts of the microbiome, but did not define how that composition affects health or specific disease. The project’s second phase aims to “answer the question of what microbes actually do,” explained Proctor. Researchers are now examining properties of the microbiome including gene expression, protein, and human and microbial metabolite profiles in studies of pregnant women at risk for preterm birth, the gut hormones of patients at risk for IBS, and nasal microbiomes of patients at risk for type 2 diabetes.

Promising Lines of Research

Cystic fibrosis and microbiology investigator Michael Surette, PhD, sees promising microbiome research not just in terms of evidence of its effects on specific diseases, but also in what drives changes in the microbiome. Surette is Canada Research Chair in Interdisciplinary Microbiome Research in the Farncombe Family Digestive Health Research Institute at McMaster University in Hamilton, Ontario.

One type of study on factors driving microbiome change examines how alterations in composition and imbalances in individual patients relate to improving or worsening disease. “IBS, cystic fibrosis, and chronic obstructive pulmonary disease all have periods of instability or exacerbation,” he noted. Surette hopes that one day, tests will provide clinicians the ability to monitor changes in microbial composition over time and even predict when a patient’s condition is about to deteriorate. Monitoring perturbations to the gut microbiome might also help minimize collateral damage to the microbiome during aggressive antibiotic therapy for hospitalized patients, he added.

Monitoring changes to the microbiome also might be helpful for “culture negative” patients, who now may receive multiple, unsuccessful courses of different antibiotics that drive antibiotic resistance. Frustration with standard clinical biology diagnosis of lung infections in cystic fibrosis patients first sparked Surette’s investigations into the microbiome. He hopes that future tests involving the microbiome might also help asthma patients with neutrophilia, community-acquired pneumonia patients who harbor complex microbial lung communities lacking obvious pathogens, and hospitalized patients with pneumonia or sepsis. He envisions microbiome testing that would look for short-term changes indicating whether or not a drug is effective.

Companion Diagnostics

Daniel Peterson, MD, PhD, an assistant professor of pathology at Johns Hopkins University School of Medicine in Baltimore, believes the future of clinical testing involving the microbiome lies in companion diagnostics for novel treatments, and points to companies that are already developing and marketing tests that will require such assays.

Examples of microbiome-focused enterprises abound, including Genetic Analysis, based in Oslo, Norway, with its high-throughput test that uses 54 probes targeted to specific bacteria to measure intestinal gut flora imbalances in inflammatory bowel disease and irritable bowel syndrome patients. Paris, France-based Enterome is developing both novel drugs and companion diagnostics for microbiome-related diseases such as IBS and some metabolic diseases. Second Genome, based in South San Francisco, has developed an experimental drug, SGM-1019, that the company says blocks damaging activity of the microbiome in the intestine. Cambridge, Massachusetts-based Seres Therapeutics has received Food and Drug Administration orphan drug designation for SER-109, an oral therapeutic intended to correct microbial imbalances to prevent recurrent Clostridium difficile infection in adults.

One promising clinical use of the microbiome is fecal transplantation, which both prospective and retrospective studies have shown to be effective in patients with C. difficile infections who do not respond to front-line therapies, said James Versalovic, MD, PhD, director of Texas Children’s Hospital Microbiome Center and professor of pathology at Baylor College of Medicine in Houston. “Fecal transplants and other microbiome replacement strategies can radically change the composition of the microbiome in hours to days,” he explained.

But NIH’s Proctor discourages too much enthusiasm about fecal transplant. “Natural products like stool can have [side] effects,” she pointed out. “The [microbiome research] field needs to mature and we need to verify outcomes before anything becomes routine.”

Hurdles for Lab Testing

While he is hopeful that labs someday will use the microbiome to produce clinically useful information, Surette pointed to several problems that must be solved beforehand. First, the molecular methods in common use today need to become more quantitative and accurate. Additionally, research on the microbiome encompasses a wide variety of protocols, some of which are better at extracting particular types of bacteria and can therefore give biased views of the communities living in the body. Also, tests may need to distinguish between dead and live microbes. Another hurdle is that labs using varied bioinformatic methods may produce different results from the same sample, a problem that Surette sees as ripe for a solution from clinical laboratorians, who have expertise in standardizing robust protocols and in automating tests.

One way laboratorians can prepare for future, routine microbiome testing is to expand their notion of clinical chemistry to include both microbial and human biochemistry. “The line between microbiome science and clinical science is blurring,” said Versalovic. “When developing future assays to detect biochemical changes in disease states, we must consider the contributions of microbial metabolites and proteins and how to tailor tests to detect them.” In the future, clinical labs may test for uniquely microbial metabolites in various disease states, he predicted.

 

Automated Review of Mass Spectrometry Results  

Can We Achieve Autoverification?

Author: Katherine Alexander and Andrea R. Terrell, PhD  // Date: NOV.1.2015  // Source:Clinical Laboratory News

https://www.aacc.org/publications/cln/articles/2015/november/automated-review-of-mass-spectrometry-results-can-we-achieve-autoverification

 

Paralleling the upswing in prescription drug misuse, clinical laboratories are receiving more requests for mass spectrometry (MS) testing as physicians rely on its specificity to monitor patient compliance with prescription regimens. However, as volume has increased, reimbursement has declined, forcing toxicology laboratories both to increase capacity and to lower their operational costs without sacrificing quality or turnaround time. Now, new solutions are available that enable laboratories to bring automation to MS testing and help them meet the growing demand for toxicology and other testing.

What is the typical MS workflow?

A typical workflow includes a long list of manual steps. By the time a sample is loaded onto the mass spectrometer, it has been collected, logged into the lab information management system (LIMS), and prepared for analysis using a variety of wet chemistry techniques.

Most commercial clinical laboratories receive enough samples for MS analysis to batch analyze those samples. A batch consists of one or more calibrators, quality control (QC) samples, and patient/donor samples. Historically, the method would be selected (e.g., “analysis of opiates”), sample identification information would be entered manually into the MS software, and the instrument would begin analyzing each sample. Upon successful completion of the batch, the MS operator would view all of the analytical data, ensure the QC results were acceptable, and review each patient/donor specimen, looking at characteristics such as peak shape, ion ratios, retention time, and calculated concentration.
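To make those review criteria concrete, the short Python sketch below shows how a per-specimen acceptance check on retention time and ion ratio might be expressed in code. The field names, analyte, and tolerance values are hypothetical assumptions for illustration only; they are not the rules used by any particular laboratory or vendor.

```python
# Illustrative sketch only: encoding the kind of per-specimen review criteria
# described above (retention time, ion ratio, calculated concentration).
# Field names and tolerances are assumptions, not any vendor's actual rules.
from dataclasses import dataclass

@dataclass
class PeakResult:
    analyte: str
    retention_time: float   # minutes
    ion_ratio: float        # qualifier/quantifier ion ratio
    concentration: float    # calculated concentration, ng/mL

def acceptable(peak: PeakResult, expected_rt: float, expected_ratio: float,
               rt_window: float = 0.1, ratio_tolerance: float = 0.20) -> bool:
    """Return True if the peak meets basic acceptance criteria; otherwise
    the result would be routed for manual review or re-injection."""
    rt_ok = abs(peak.retention_time - expected_rt) <= rt_window
    ratio_ok = abs(peak.ion_ratio - expected_ratio) <= ratio_tolerance * expected_ratio
    return rt_ok and ratio_ok

sample = PeakResult("morphine", retention_time=3.42, ion_ratio=0.52, concentration=18.7)
print(acceptable(sample, expected_rt=3.40, expected_ratio=0.50))  # True
```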

The operator would then post acceptable results into the LIMS manually or through an interface, and unacceptable results would be rescheduled or dealt with according to lab-specific protocols. In our laboratory we perform a final certification step for quality assurance by reviewing all information about the batch again, prior to releasing results for final reporting through the LIMS.

What problems are associated with this workflow?

The workflow described above results in too many highly trained chemists performing manual data entry and reviewing perfectly acceptable analytical results. Lab managers would prefer that MS operators and certifying scientists focus on troubleshooting problem samples rather than reviewing mounds of good data. Not only is the current process inefficient, it is mundane work prone to user errors. This risks fatigue, disengagement, and complacency by our highly skilled scientists.

Importantly, manual processes also take time. In most clinical lab environments, turnaround time is critical for patient care and industry competitiveness. Lab directors and managers are looking for solutions to automate mundane, error-prone tasks to save time and costs, reduce staff burnout, and maintain high levels of quality.

How can software automate data transfer from MS systems to LIMS?

Automation is not a new concept in the clinical lab. Labs have automated processes in shipping and receiving, sample preparation, liquid handling, and data delivery to the end user. As more labs implement MS, companies have begun to develop special software to automate data analysis and review workflows.

In July 2011, AIT Labs incorporated ASCENT into our workflow, eliminating the initial manual peak review step. ASCENT is an algorithm-based peak picking and data review system designed specifically for chromatographic data. The software employs robust statistical and modeling approaches to the raw instrument data to present the true signal, which often can be obscured by noise or matrix components.

The system also uses an exponentially modified Gaussian (EMG) equation to apply a best-fit model to integrated peaks through what is often a noisy signal. In our experience, applying the EMG yields cleaner data from what might appear to be poor chromatography, which ultimately allows us to reduce the number of samples we might otherwise rerun.
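For readers unfamiliar with the EMG model, the following sketch shows how an exponentially modified Gaussian, a Gaussian peak convolved with an exponential decay that captures chromatographic tailing, can be fit to a noisy peak with SciPy. The synthetic data and starting parameters are assumptions for illustration; this is not ASCENT's implementation.

```python
# Hedged, illustrative example: fitting an exponentially modified Gaussian (EMG)
# to a noisy chromatographic peak. Synthetic data; not ASCENT's actual algorithm.
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

def emg(t, area, mu, sigma, tau):
    """EMG peak: Gaussian (mu, sigma) convolved with an exponential decay (tau)."""
    return (area / (2.0 * tau)) * np.exp(sigma**2 / (2.0 * tau**2) - (t - mu) / tau) * \
           erfc((sigma / tau - (t - mu) / sigma) / np.sqrt(2.0))

t = np.linspace(0, 10, 500)                        # retention time axis (min)
clean = emg(t, area=1.0, mu=4.0, sigma=0.15, tau=0.4)
noisy = clean + np.random.normal(0, 0.02, t.size)  # stand-in for raw detector noise

popt, _ = curve_fit(emg, t, noisy, p0=[1.0, 4.0, 0.2, 0.3])
print("fitted area, mu, sigma, tau:", popt)        # best-fit model through the noise
```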

How do you validate the quality of results?

We’ve developed a robust validation protocol to ensure that results are, at minimum, equivalent to results from our manual review. We begin by building the assay in ASCENT, entering assay-specific information from our internal standard operating procedure (SOP). Once the assay is configured, validation proceeds with parallel batch processing to compare results between software-reviewed data and staff-reviewed data. For new implementations we run eight to nine batches of 30–40 samples each; when we are modifying or upgrading an existing implementation we run a smaller number of batches. The parallel batches should contain multiple positive and negative results for all analytes in the method, preferably spanning the analytical measurement range of the assay.

The next step is to compare the results and calculate the percent difference between the data review methods. We require that two-thirds of the automated results fall within 20% of the manually reviewed result. In addition to validating patient sample correlation, we also test numerous quality assurance rules that should initiate a flag for further review.
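As a rough illustration of that acceptance rule, the comparison can be expressed in a few lines of code; the concentrations, variable names, and defaults below are invented for demonstration and are not our validation data.

```python
# Minimal sketch of the parallel-validation criterion described above: at least
# two-thirds of automated results must fall within 20% of the manually reviewed
# value. Example concentrations are invented for illustration.
def passes_parallel_validation(manual, automated, pct_limit=20.0, min_fraction=2/3):
    within = 0
    for m, a in zip(manual, automated):
        pct_diff = abs(a - m) / m * 100.0   # percent difference vs. manual review
        if pct_diff <= pct_limit:
            within += 1
    return within / len(manual) >= min_fraction

manual_ng_ml    = [12.1, 48.0, 250.0, 7.5, 95.2]
automated_ng_ml = [11.8, 50.1, 240.5, 9.4, 96.0]
print(passes_parallel_validation(manual_ng_ml, automated_ng_ml))  # True (4 of 5 within 20%)
```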

What are the biggest challenges during implementation and continual improvement initiatives?

On the technological side, our largest hurdle was loading the sequence files into ASCENT. We had created an in-house mechanism for our chemists to upload the 96-well plate map for their batch into the MS software. We had some difficulty transferring this information to ASCENT, but once we resolved this issue, the technical workflow proceeded fairly smoothly.

The greater challenge was changing our employees’ mindset from one of fear that automation would displace them, to a realization that learning this new technology would actually make them more valuable. Automating a non-mechanical process can be a difficult concept for hands-on scientists, so managers must be patient and help their employees understand that this kind of technology leverages the best attributes of software and people to create a powerful partnership.

We recommend that labs considering automated data analysis engage staff in the validation and implementation to spread the workload and the knowledge. As is true with most technology, it is best not to rely on just one or two super users. We also found it critical to add supervisor-level controls on data file manipulation, such as removing a sample that wasn’t run from the sequence table; this prevents the inadvertent deletion of a file, which would require reinjection of the entire batch!

 

Understanding Fibroblast Growth Factor 23

Author: Damien Gruson, PhD  // Date: OCT.1.2015  // Source: Clinical Laboratory News

https://www.aacc.org/publications/cln/articles/2015/october/understanding-fibroblast-growth-factor-23

What is the relationship of FGF-23 to heart failure?

Heart failure (HF) is an increasingly common syndrome associated with high morbidity, elevated hospital readmission rates, and high mortality. Improving diagnosis, prognosis, and treatment of HF requires a better understanding of its different sub-phenotypes. As researchers gained a comprehensive understanding of neurohormonal activation—one of the hallmarks of HF—they discovered several biomarkers, including natriuretic peptides, which now are playing an important role in sub-phenotyping HF and in driving more personalized management of this chronic condition.

Like the natriuretic peptides, fibroblast growth factor 23 (FGF-23) could become important in risk-stratifying and managing HF patients. Produced by osteocytes, FGF-23 is a key regulator of phosphorus homeostasis. It binds to renal and parathyroid FGF-Klotho receptor heterodimers, resulting in phosphate excretion, decreased 1-α-hydroxylation of 25-hydroxyvitamin D, and decreased parathyroid hormone (PTH) secretion. The relationship to PTH is important because impaired homeostasis of cations and decreased glomerular filtration rate might contribute to the rise of FGF-23. The amino-terminal portion of FGF-23 (amino acids 1-24) serves as a signal peptide allowing secretion into the blood, and the carboxyl-terminal portion (aa 180-251) participates in its biological action.

How might FGF-23 improve HF risk assessment?

Studies have shown that FGF-23 is related to the risk of cardiovascular diseases and mortality. It was first demonstrated that FGF-23 levels were independently associated with left ventricular mass index and hypertrophy as well as mortality in patients with chronic kidney disease (CKD). FGF-23 also has been associated with left ventricular dysfunction and atrial fibrillation in coronary artery disease subjects, even in the absence of impaired renal function.

FGF-23 and FGF receptors are both expressed in the myocardium. It is possible that FGF-23 has direct effects on the heart and participates in the physiopathology of cardiovascular diseases and HF. Experiments have shown that FGF-23 stimulates pathological hypertrophy of cultured rat cardiomyocytes in vitro by activating the calcineurin-NFAT pathway, and that intra-myocardial or intravenous injection of FGF-23 in wild-type mice results in left ventricular hypertrophy. As such, FGF-23 appears to be a potential stimulus of myocardial hypertrophy, and increased levels may contribute to the worsening of heart failure and long-term cardiovascular death.

Researchers have documented that HF patients have elevated FGF-23 circulating levels. They have also found a significant correlation between plasma levels of FGF-23 and B-type natriuretic peptide, a biomarker related to ventricular stretch and cardiac hypertrophy, in patients with left ventricular hypertrophy. As such, measuring FGF-23 levels might be a useful tool to predict long-term adverse cardiovascular events in HF patients.

Interestingly, researchers have documented a significant relationship between FGF-23 and PTH in both CKD and HF patients. As PTH stimulates FGF-23 expression, it could be that in HF patients, increased PTH levels increase the bone expression of FGF-23, which enhances its effects on the heart.

 

The Past, Present, and Future of Western Blotting in the Clinical Laboratory

Author: Curtis Balmer, PhD  // Date: OCT.1.2015  // Source: Clinical Laboratory News

https://www.aacc.org/publications/cln/articles/2015/october/the-past-present-and-future-of-western-blotting-in-the-clinical-laboratory

Much of the discussion about Western blotting centers around its performance as a biological research tool. This isn’t surprising. Since its introduction in the late 1970s, the Western blot has been adopted by biology labs of virtually every stripe, and become one of the most widely used techniques in the research armamentarium. However, Western blotting has also been employed in clinical laboratories to aid in the diagnosis of various diseases and disorders—an equally important and valuable application. Yet there has been relatively little discussion of its use in this context, or of how advances in Western blotting might affect its future clinical use.

Highlighting the clinical value of Western blotting, Stanley Naides, MD, medical director of Immunology at Quest Diagnostics observed that, “Western blotting has been a very powerful tool in the laboratory and for clinical diagnosis. It’s one of many various methods that the laboratorian brings to aid the clinician in the diagnosis of disease, and the selection and monitoring of therapy.” Indeed, Western blotting has been used at one time or the other to aid in the diagnosis of infectious diseases including hepatitis C (HCV), HIV, Lyme disease, and syphilis, as well as autoimmune disorders such as paraneoplastic disease and myositis conditions.

However, Naides was quick to point out that the choice of assays to use clinically is based on their demonstrated sensitivity and performance, and that the search for something better is never-ending. “We’re constantly looking for methods that improve detection of our target [protein],” Naides said. “There have been a number of instances where we’ve moved away from Western blotting because another method proves to be more sensitive.” But this search can also lead back to Western blotting. “We’ve gone away from other methods because there’s been a Western blot that’s been developed that’s more sensitive and specific. There’s that constant movement between methods as new tests are developed.”

In recent years, this quest has been leading clinical laboratories away from Western blotting toward more sensitive and specific diagnostic assays, at least for some diseases. Using confirmatory diagnosis of HCV infection as an example, Sai Patibandla, PhD, director of the immunoassay group at Siemens Healthcare Diagnostics, explained that movement away from Western blotting for confirmatory diagnosis of HCV infection began with a technical modification called Recombinant Immunoblotting Assay (RIBA). RIBA streamlines the conventional Western blot protocol by spotting recombinant antigen onto strips which are used to screen patient samples for antibodies against HCV. This approach eliminates the need to separate proteins and transfer them onto a membrane.

The RIBA HCV assay was initially manufactured by Chiron Corporation (acquired by Novartis Vaccines and Diagnostics in 2006). It received Food and Drug Administration (FDA) approval in 1999, and was marketed as the Chiron RIBA HCV 3.0 Strip Immunoblot Assay. Patibandla explained that, at the time, the Chiron assay “…was the only FDA-approved confirmatory testing for HCV.” In 2013 the assay was discontinued and withdrawn from the market due to reports that it was producing false-positive results.

Since then, clinical laboratories have continued to move away from Western blot-based assays for confirmation of HCV in favor of the more sensitive technique of nucleic acid testing (NAT). “The migration is toward NAT for confirmation of HCV [diagnosis]. We don’t use immunoblots anymore. We don’t even have a blot now to confirm HCV,” Patibandla said.

Confirming HIV infection has followed a similar path. Indeed, in 2014 the Centers for Disease Control and Prevention issued updated recommendations for HIV testing that, in part, replaced Western blotting with NAT. This change was in response to the recognition that the HIV-1 Western blot assay was producing false-negative or indeterminate results early in the course of HIV infection.

At this juncture it is difficult to predict if this trend away from Western blotting in clinical laboratories will continue. One thing that is certain, however, is that clinicians and laboratorians are infinitely pragmatic, and will eagerly replace current techniques with ones shown to be more sensitive, specific, and effective. This raises the question of whether any of the many efforts currently underway to improve Western blotting will produce an assay that exceeds the sensitivity of currently employed techniques such as NAT.

Some of the most exciting and groundbreaking work in this area is being done by Amy Herr, PhD, a professor of bioengineering at University of California, Berkeley. Herr’s group has taken on some of the most challenging limitations of Western blotting, and is developing techniques that could revolutionize the assay. For example, the Western blot is semi-quantitative at best. This weakness dramatically limits the types of answers it can provide about changes in protein concentrations under various conditions.

To make Western blotting more quantitative, Herr’s group is, among other things, identifying losses of protein sample mass during the assay protocol. About this, Herr explains that the conventional Western blot is an “open system” that involves lots of handling of assay materials, buffers, and reagents that makes it difficult to account for protein losses. Or, as Kevin Lowitz, a senior product manager at Thermo Fisher Scientific, described it, “Western blot is a [simple] technique, but a really laborious one, and there are just so many steps and so many opportunities to mess it up.”

Herr’s approach is to reduce the open aspects of Western blot. “We’ve been developing these more closed systems that allow us at each stage of the assay to account for [protein mass] losses. We can’t do this exactly for every target of interest, but it gives us a really good handle [on protein mass losses],” she said. One of the major mechanisms Herr’s lab is using to accomplish this is to secure proteins to the blot matrix with covalent bonding rather than with the much weaker hydrophobic interactions that typically keep the proteins in place on the membrane.

Herr’s group also has been developing microfluidic platforms that allow Western blotting to be done on single cells: “In our system we’re doing thousands of independent Westerns on single cells in four hours. And, hopefully, we’ll cut that down to one hour over the next couple years.”

Other exciting modifications that stand to dramatically increase the sensitivity, quantitation, and through-put of Western blotting also are being developed and explored. For example, the use of capillary electrophoresis—in which proteins are conveyed through a small electrolyte-filled tube and separated according to size and charge before being dropped onto a blotting membrane—dramatically reduces the amount of protein required for Western blot analysis, and thereby allows Westerns to be run on proteins from rare cells or for which quantities of sample are extremely limited.

Jillian Silva, PhD, an associate specialist at the University of California, San Francisco Helen Diller Family Comprehensive Cancer Center, explained that advances in detection are also extending the capabilities of Western blotting. “With the advent of fluorescence detection we have a way to quantitate Westerns, and it is now more quantitative than it’s ever been,” said Silva.

Whether or not these advances produce an assay that is adopted by clinical laboratories remains to be seen. The emphasis on Western blotting as a research rather than a clinical tool may bias advances in favor of the needs and priorities of researchers rather than clinicians, and as Patibandla pointed out, “In the research world Western blotting has a certain purpose. [Researchers] are always coming up with new things, and are trying to nail down new proteins, so you cannot take Western blotting away.” In contrast, she suggested that for now, clinical uses of Western blotting remain “limited.”

 

Adapting Next Generation Technologies to Clinical Molecular Oncology Service

Author: Ronald Carter, PhD, DVM  // Date: OCT.1.2015  // Source: Clinical Laboratory News

https://www.aacc.org/publications/cln/articles/2015/october/adapting-next-generation-technologies-to-clinical-molecular-oncology-service

Next generation technologies (NGT) deliver huge improvements in cost efficiency, accuracy, robustness, and in the amount of information they provide. Microarrays, high-throughput sequencing platforms, digital droplet PCR, and other technologies all offer unique combinations of desirable performance.

As stronger evidence of genetic testing’s clinical utility influences patterns of patient care, demand for NGT testing is increasing. This presents several challenges to clinical laboratories, including increased urgency, clinical importance, and breadth of application in molecular oncology, as well as more integration of genetic tests into synoptic reporting. Laboratories need to add NGT-based protocols while still providing old tests, and the pace of change is increasing. What follows is one viewpoint on the major challenges in adopting NGTs into diagnostic molecular oncology service.

Choosing a Platform

Instrument selection is a critical decision that has to align with intended test applications, sequencing chemistries, and analytical software. Although multiple platforms are available, a mainstream standard has not emerged. Depending on their goals, laboratories might set up NGTs for improved accuracy of mutation detection, massively higher sequencing capacity per test, massively more targets combined in one test (multiplexing), greater range in sequencing read length, much lower cost per base pair assessed, and economy of specimen volume.

When high-throughput instruments first made their appearance, laboratories paid more attention to the accuracy of base-reading: Less accurate sequencing meant more data cleaning and resequencing (1). Now, new instrument designs have narrowed the differences, and test chemistry can have a comparatively large impact on analytical accuracy (Figure 1). The robustness of technical performance can also vary significantly depending upon specimen type. For example, Life Technologies’ sequencing platforms appear to be comparatively more tolerant of low DNA quality and concentration, which is an important consideration for fixed and processed tissues.

https://www.aacc.org/~/media/images/cln/articles/2015/october/carter_fig1_cln_oct15_ed.jpg

Figure 1 Comparison of Sequencing Chemistries

Sequence pile-ups of the same target sequence (2 large genes), all performed on the same analytical instrument. Results from 4 different chemistries, as designed and supplied by reagent manufacturers prior to optimization in the laboratory. Red lines represent limits of exons. Height of blue columns proportional to depth of coverage. In this case, the intent of the test design was to provide high depth of coverage so that reflex Sanger sequencing would not be necessary. Courtesy B. Sadikovic, U. of Western Ontario.

 

In addition, batching, robotics, workload volume patterns, maintenance contracts, software licenses, and platform lifetime affect the cost per analyte and per specimen considerably. Royalties and reagent contracts also factor into the cost of operating NGT: In some applications, fees for intellectual property can represent more than 50% of the bench cost of performing a given test, and increase substantially without warning.

Laboratories must also deal with the problem of obsolescence. Investing in a new platform brings the angst of knowing that better machines and chemistries are just around the corner. Laboratories are buying bigger pieces of equipment with shorter service lives. Before NGTs, major instruments could confidently be expected to remain current for at least 6 to 8 years. Now, a major instrument is obsolete much sooner, often within 2 to 3 years. This means that keeping it in service might cost more than investing in a new platform. Lease-purchase arrangements help mitigate year-to-year fluctuations in capital equipment costs, and maximize the value of old equipment at resale.

One Size Still Does Not Fit All

Laboratories face numerous technical considerations to optimize sequencing protocols, but the test has to be matched to the performance criteria needed for the clinical indication (2). For example, measuring response to treatment depends first upon the diagnostic recognition of mutation(s) in the tumor clone; the marker(s) then have to be quantifiable and indicative of tumor volume throughout the course of disease (Table 1).

As a result, diagnostic tests need to cover many different potential mutations, yet accurately identify any clinically relevant mutations actually present. On the other hand, tests for residual disease need to provide standardized, sensitive, and accurate quantification of a selected marker mutation against the normal background. A diagnostic panel might need 1% to 3% sensitivity across many different mutations. But quantifying early response to induction—and later assessment of minimal residual disease—needs a test that is reliably accurate to the 10⁻⁴ or 10⁻⁵ range for a specific analyte.
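To give a sense of why the 10⁻⁴ to 10⁻⁵ range is so demanding, the back-of-the-envelope sketch below estimates the probability of observing a low-frequency variant at several raw sequencing depths. It assumes error-free reads, simple Poisson sampling of variant-supporting reads, and an arbitrary five-read calling threshold; real assays must also contend with background error rates, which is one reason enrichment strategies matter.

```python
# Hedged, illustrative calculation (not from the article): probability of seeing
# at least k reads supporting a variant at a given allele fraction and depth,
# assuming error-free reads and Poisson sampling. k = 5 is an arbitrary threshold.
from math import exp, factorial

def prob_at_least_k_variant_reads(depth, allele_fraction, k=5):
    lam = depth * allele_fraction                     # expected variant-supporting reads
    p_fewer = sum(exp(-lam) * lam**i / factorial(i) for i in range(k))
    return 1.0 - p_fewer

for af in (1e-4, 1e-5):
    for depth in (10_000, 100_000, 1_000_000):
        p = prob_at_least_k_variant_reads(depth, af)
        print(f"allele fraction {af:.0e}, depth {depth:>9,}: P(>=5 reads) = {p:.3f}")
```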

Covering all types of mutations in one diagnostic test is not yet possible. For example, subtyping of acute myeloid leukemia is both old school (karyotype, fluorescent in situ hybridization, and/or PCR-based or array-based testing for fusion rearrangements, deletions, and segmental gains) and new school (NGT-based panel testing for molecular mutations).

Chemistries that cover both structural variants and copy number variants are not yet in general use, but the advantages of NGTs compared to traditional methods are becoming clearer, such as in colorectal cancer (3). Researchers are also using cell-free DNA (cfDNA) to quantify residual disease and detect resistance mutations (4). Once a clinically significant clone is identified, enrichment techniques help enable extremely sensitive quantification of residual disease (5).

Validation and Quality Assurance

Beyond choosing a platform, two distinct challenges arise in bringing NGTs into the lab. The first is assembling the resources for validation and quality assurance. The second is keeping tests up-to-date as new analytes are needed. Even if a given test chemistry has the flexibility to add analytes without revalidating the entire panel, keeping up with clinical advances is a constant priority.

Due to their throughput and multiplexing capacities, NGT platforms typically require considerable upfront investment to adopt, and training staff to perform testing takes even more time. Proper validation is harder to document: Assembling positive controls, documenting test performance criteria, developing quality assurance protocols, and conducting proficiency testing are all demanding. Labs meet these challenges in different ways. Laboratory-developed tests (LDTs) allow self-determined choice in design, innovation, and control of the test protocol, but can be very expensive to set up.

Food and Drug Administration (FDA)-approved methods are attractive but not always an option. More FDA-approved methods will be marketed, but FDA approval itself brings other trade-offs. There is a cost premium compared to LDTs, and the test methodologies are locked down and not modifiable. This is particularly frustrating for NGTs, which have the specific attraction of extensive multiplexing capacity and accommodating new analytes.

IT and the Evolution of Molecular Oncology Reporting Standards

The options for information technology (IT) pipelines for NGTs are improving rapidly. At the same time, recent studies still show significant inconsistencies and lack of reproducibility when it comes to interpreting variants in array comparative genomic hybridization, panel testing, tumor expression profiling, and tumor genome sequencing. It can be difficult to duplicate published performances in clinical studies because of a lack of sufficient information about the protocol (chemistry) and software. Building bioinformatics capacity is a key requirement, yet skilled people are in short supply and the qualifications needed to work as a bioinformatician in a clinical service are not yet clearly defined.

Tumor biology brings another level of complexity. Bioinformatic analysis must distinguish tumor-specific variants from genomic variants. Sequencing of paired normal tissue is often performed as a control, but virtual normal controls may have intriguing advantages (6). One of the biggest challenges is to reproducibly interpret the clinical significance of interactions between different mutations, even with commonly known, well-defined mutations (7). For multiple analyte panels, such as predictive testing for breast cancer, only the performance of the whole panel in a population of patients can be compared; individual patients may be scored into different risk categories by different tests, all for the same test indication.

In large-scale sequencing of tumor genomes, which types of mutations are most informative in detecting, quantifying, and predicting the behavior of the tumor over time? The amount and complexity of mutation varies considerably across different tumor types, and while some mutations are more common, stable, and clinically informative than others, the utility of a given tumor marker varies in different clinical situations. And, for a given tumor, treatment effect and metastasis lead to retesting for changes in drug sensitivities.

These complexities mean that IT must be designed into the process from the beginning. Like robotics, IT represents a major ancillary decision. One approach many labs choose is licensed technologies with shared databases that are updated in real time. These are attractive, despite their cost and licensing fees. New tests that incorporate proprietary IT with NGT platforms link the genetic signatures of tumors to clinically significant considerations like tumor classification, recommended methodologies for monitoring response, predicted drug sensitivities, eligible clinical trials, and prognostic classifications. In-house development of such solutions will be difficult, so licensing platforms from commercial partners is more likely to be the norm.

The Commercial Value of Health Records and Test Data

The future of cancer management likely rests on large-scale databases that link hereditary and somatic tumor testing with clinical outcomes. Multiple centers have such large studies underway, and data extraction and analysis is providing increasingly refined interpretations of clinical significance.

Extracting health outcomes to correlate with molecular test results is commercially valuable, as the pharmaceutical, insurance, and healthcare sectors focus on companion diagnostics, precision medicine, and evidence-based health technology assessment. Laboratories that can develop tests based on large-scale integration of test results to clinical utility will have an advantage.

NGTs do offer opportunities for net reductions in the cost of healthcare. But the lag between availability of a test and peer-evaluated demonstration of clinical utility can be considerable. Technical developments arise faster than evidence of clinical utility. For example, immunohistochemistry, estrogen receptor/progesterone receptor status, HER2/neu, and histology are still the major pathological criteria for prognostic evaluation of breast cancer at diagnosis, even though multiple analyte tumor profiling has been described for more than 15 years. Healthcare systems need a more concerted assessment of clinical utility if they are to take advantage of the promises of NGTs in cancer care.

Disruptive Advances

Without a doubt, “disruptive” is an appropriate buzzword in molecular oncology, and new technical advances are about to change how, where, and for whom testing is performed.

• Predictive Testing

Besides cost per analyte, one of the drivers for taking up new technologies is that they enable multiplexing many more analytes with less biopsy material. Single-analyte sequential testing for epidermal growth factor receptor (EGFR), anaplastic lymphoma kinase, and other targets on small biopsies is not sustainable when many more analytes are needed, and even now, a significant proportion of test requests cannot be completed due to lack of suitable biopsy material. Large panels incorporating all the mutations needed to cover multiple tumor types are replacing individual tests in companion diagnostics.

• Cell-Free Tumor DNA

Challenges of cfDNA include standardizing the collection and processing methodologies, timing sampling to minimize the effect of therapeutic toxicity on analytical accuracy, and identifying the most informative sample (DNA, RNA, or protein). But for more and more tumor types, it will be possible to differentiate benign versus malignant lesions, perform molecular subtyping, predict response, monitor treatment, or screen for early detection—all without a surgical biopsy.

cfDNA technologies can also be integrated into core laboratory instrumentation. For example, blood-based EGFR analysis for lung cancer is being developed on the Roche cobas 4800 platform, which will be a significant change from the current standard of testing based upon single tests of DNA extracted from formalin-fixed, paraffin-embedded sections selected by a pathologist (8).

• Whole Genome and Whole Exome Sequencing

Whole genome and whole exome tumor sequencing approaches provide a wealth of biologically important information, and will replace individual or multiple gene test panels as the technical cost of sequencing declines and interpretive accuracy improves (9). Laboratories can apply informatics selectively or broadly to extract much more information at relatively little increase in cost, and the interpretation of individual analytes will be improved by the context of the whole sequence.

• Minimal Residual Disease Testing

Massive resequencing and enrichment techniques can be used to detect minimal residual disease, and will provide an alternative to flow cytometry as costs decline. The challenge is to develop robust analytical platforms that can reliably produce results in a high proportion of patients with a given tumor type, despite using post-treatment specimens with therapy-induced degradation, and a very low proportion of target (tumor) sequence to benign background sequence.

The tumor markers should remain informative for the burden of disease despite clonal evolution over the course of multiple samples taken during progression of the clinical course and treatment. Quantification needs to be accurate and sensitive down to the 10⁻⁵ range, and cost competitive with flow cytometry.

• Point-of-Care Test Methodologies

Small, rapid, cheap, and single use point-of-care (POC) sequencing devices are coming. Some can multiplex with analytical times as short as 20 minutes. Accurate and timely testing will be possible in places like pharmacies, oncology clinics, patient service centers, and outreach programs. Whether physicians will trust and act on POC results alone, or will require confirmation by traditional laboratory-based testing, remains to be seen. However, in the simplest type of application, such as a patient known to have a particular mutation, the advantages of POC-based testing to quantify residual tumor burden are clear.

Conclusion

Molecular oncology is moving rapidly from an esoteric niche of diagnostics to a mainstream, required component of integrated clinical laboratory services. While NGTs are markedly reducing the cost per analyte and per specimen, and will certainly broaden the scope and volume of testing performed, the resources required to choose, install, and validate these new technologies are daunting for smaller labs. More rapid obsolescence and increased regulatory scrutiny for LDTs also present significant challenges. Aligning test capacity with approved clinical indications will require careful and constant attention to ensure competitiveness.

References

1. Liu L, Li Y, Li S, et al. Comparison of next-generation sequencing systems. J Biomed Biotechnol 2012; doi:10.1155/2012/251364.

2. Brownstein CA, Beggs AH, Homer N, et al. An international effort towards developing standards for best practices in analysis, interpretation and reporting of clinical genome sequencing results in the CLARITY Challenge. Genome Biol 2014;15:R53.

3. Haley L, Tseng LH, Zheng G, et al. Performance characteristics of next-generation sequencing in clinical mutation detection of colorectal cancers. Mod Pathol 2015; doi:10.1038/modpathol.2015.86 [Epub ahead of print].

4. Butler TM, Johnson-Camacho K, Peto M, et al. Exome sequencing of cell-free DNA from metastatic cancer patients identifies clinically actionable mutations distinct from primary disease. PLoS One 2015;10:e0136407.

5. Castellanos-Rizaldos E, Milbury CA, Guha M, et al. COLD-PCR enriches low-level variant DNA sequences and increases the sensitivity of genetic testing. Methods Mol Biol 2014;1102:623–39.

6. Hiltemann S, Jenster G, Trapman J, et al. Discriminating somatic and germline mutations in tumor DNA samples without matching normals. Genome Res 2015;25:1382–90.

7. Lammers PE, Lovly CM, Horn L. A patient with metastatic lung adenocarcinoma harboring concurrent EGFR L858R, EGFR germline T790M, and PIK3CA mutations: The challenge of interpreting results of comprehensive mutational testing in lung cancer. J Natl Compr Canc Netw 2015;12:6–11.

8. Weber B, Meldgaard P, Hager H, et al. Detection of EGFR mutations in plasma and biopsies from non-small cell lung cancer patients by allele-specific PCR assays. BMC Cancer 2014;14:294.

9. Vogelstein B, Papadopoulos N, Velculescu VE, et al. Cancer genome landscapes. Science 2013;339:1546–58.

10. Heitzer E, Auer M, Gasch C, et al. Complex tumor genomes inferred from single circulating tumor cells by array-CGH and next-generation sequencing. Cancer Res 2013;73:2965–75.

11. Healy B. BRCA genes — Bookmaking, fortunetelling, and medical care. N Engl J Med 1997;336:1448–9.

 

 

 

Read Full Post »

What about Theranos?

Curator: Larry H. Bernstein, MD, FCAP

Is Theranos Situation False Crowdfunding Claims at Scale or ‘Outsider’ Naivety?

http://www.mdtmag.com/blog/2015/11/theranos-situation-false-crowdfunding-claims-scale-or-outsider-naivety

If you’ve been following the Theranos situation, which involves several damning articles from the Wall Street Journal on the company, you know that “something is rotten in the state of Denmark.” That is to say, regardless of whether you believe the WSJ articles 100%, believe Theranos 100%, or land somewhere in between, it’s hard not to see that something at the company is creating questions about its original claims. In fact, the company has apparently even tempered some of the language around its capabilities while “debating” the accuracy of the WSJ articles. It’s really a big mess for a company that was supposedly making significant changes in the way we’d conduct blood testing and the way patients controlled and accessed their own health data (although I think the idea behind that specific aspect is a very good one).

Due to FDA inspections and findings of concern with Theranos practices, the company is currently collecting blood for only one test using its revolutionary proprietary technology. While the company’s CEO, Elizabeth Holmes, continues to assure the public that the problems are tied to FDA-related procedures and not to an issue with the technology itself, stakeholders such as Walgreens have put any further interactions with the company on hold.

In the following video from Fortune’s Global Forum, you can see Ms. Holmes discussing the situation over the FDA inspections and the changes that are currently in place with regard to the testing that’s happening at the company.

https://youtu.be/A8qgmGtRMsY

So what’s the story behind this story? Is this a deliberate attempt to deceive on the part of Theranos or is it an example of what can happen when an “outsider” gets involved in the highly regulated medical device industry and faces off with the FDA without the proper experience in place to address potential areas of concern?

In a recent blog post, I looked at the crowdfunding of medical devices and what can happen when the claims made don’t live up to the reality of the product that’s actually developed. Once-enthusiastic investors can quickly (and loudly) turn on a company or project, venting their frustration directly on the crowdfunding page for all to see. Unfortunately, with the way these sites seem to be set up, the money is still provided to the company that produces a product, albeit one that does not live up to the initial concept.

Is that what Theranos ultimately is? Were the technology claims taken at face value by significant investment backers? It would seem very unlikely, but given some of the accusations of former Theranos employees in the WSJ articles, it wouldn’t be the only instance of Theranos trying to manipulate testing protocols for the sake of appearing more impressive. Theranos counters those claims by saying the former employees were actually unfamiliar with the actual testing the company performs. Whether or not you believe that is entirely up to you.

Another alternative to blatant deceit on the part of Theranos is the possibility that the company was simply playing in an industry it wasn’t truly experienced enough to handle. In other words, how many FDA-savvy employees work for Theranos? Did they seek consultants to help with the regulatory processes? Or were they simply naïve to the ways of the regulated industry they were entering?

Again, this scenario too seems unlikely, but it also brings in the debate over lab-developed tests and the FDA’s regulation of them. If Theranos’ testing protocols fall under the realm of LDTs, then they aren’t necessarily under the oversight of the FDA. Sure, the blood collection device is (and that’s why changes are currently occurring at the company), but does the FDA have the authority to inspect the company’s tests if they are LDTs?

Ultimately, I think everyone (with the exception of competitors to Theranos perhaps) wants the company to be successful. The ideas and hope embedded within the original claims the company made will only enhance the quality of care that we are able to achieve within our healthcare system. Further, empowering patients to make decisions and get involved with their own healthcare management would likely improve their overall health.

Unfortunately, before any of that will be possible, Theranos is going to have an uphill battle in defending itself, its technology, and its CEO in this very public debate over the realistic capabilities it can provide. Hopefully, it learns from this experience and if the technology truly functions the way they’ve claimed, they’ll bring on the necessary regulatory experts and better navigate the troubled waters in which they currently find themselves.

Single Blood Drop Diagnostics Key to Resolving Healthcare Challenges

At TEDMED 2014, President and CEO of Theranos, Elizabeth Holmes, talked about the importance of enabling early detection of disease through new diagnostic tools and empowering individuals to make educated decisions about their healthcare.

Read Full Post »

Sequence the Human Genome, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

Sequence the Human Genome

Curator: Larry H Bernstein, MD, FCAP

 

 

Geneticist Craig Venter helped sequence the human genome. Now he wants yours.

By CARL ZIMMER   NOVEMBER 5, 2015   http://www.statnews.com/2015/11/05/geneticist-craig-venter-helped-sequence-the-human-genome-now-he-wants-yours/

If you enter Health Nucleus, a new facility in San Diego cofounded by J. Craig Venter, one of the world’s best-known living scientists, you will get a telling glimpse into the state of medical science in 2015.

Your entire genome will be sequenced with extraordinary resolution and accuracy. Your body will be scanned in fine, three-dimensional detail. Thousands of compounds in your blood will be measured. Even the microbes that live inside you will be surveyed. You will get a custom-made iPad app to navigate data about yourself. Also, your wallet will be at least $25,000 lighter.

Venter, who came to the world’s attention in the 1990s when he led a campaign to produce the first draft of a human genome, launched Health Nucleus last month as part of his new company, Human Longevity. He has made clear that his aim is just as lofty as it was when he and his team sequenced the human genome or built a flu vaccine from a genetic sequence delivered to them over the Internet.

“We’re trying to show the value of actual scientific data that can change people’s lives,” Venter told STAT in some of his most extensive remarks yet about the project. “Our goal is to interpret everything in the genome that we can.”

Still, the initiative is drawing deep suspicion among some doctors who question whether Venter’s existing tests can tell patients anything meaningful at all. In interviews, they said they see Health Nucleus as the latest venture that could lead consumers to believe that more testing means improved health. That notion, they say, could drive customers to get procedures they don’t need, which might even be harmful.

“I think there is absolutely no evidence that any of those tests have any benefit for healthy people,” Dr. Rita Redberg, a cardiologist at the University of California, San Francisco, and the editor-in-chief of JAMA Internal Medicine, said when asked about Venter’s new project.

Venter has a black belt in media savvy — he can make the details of molecular biology alluring for viewers of 60 Minutes and TED talks alike — but off screen he has earned a reputation even from his critics for serious scientific achievements. His non-profit J. Craig Venter Institute, which he founded in 1992, now has a staff of 300. Scientists at the institute have explored everything from the ocean’s biodiversity to the Ebola virus.

Last year, at age 67, Venter cofounded Human Longevity, a company based in San Diego with branches in Mountain View, Calif., and Singapore that is building the largest human genome-sequencing operation on Earth, equipped with massive computing resources to analyze the data being generated. The firm’s database now contains highly accurate genome sequences from 20,000 people; another 3,000 genomes are being added each month.

Franz Och, the former head of Google Translate and an expert on machine learning, is leading a team that’s teaching computers to recognize patterns in the company’s databases that scientists themselves may not be able to see. To demonstrate the power of this approach, Human Longevity researchers are using machine learning to discover how genetic variations shape the human face.

“We can determine a good resemblance of your photograph straight from your genetic code,” said Venter.

Venter and his colleagues will be publishing the results of that study soon — most likely generating another round of headlines. But headlines don’t pay the bills, and at a company that’s got $70 million in funding from private investors, bills matter. The company is now exploring a number of avenues for generating income from its database. It has partnered with Discovery, an insurance company in England and South Africa, to read the DNA of their clients. For $250 apiece, it will sequence the protein-coding regions of the genome, known as exomes, and offer an interpretation of the data.

Health Nucleus could become yet another source of income for Human Longevity. The San Diego facility can handle eight to 12 people a day. There are plans to open more sites both in the United States and abroad. “You can do the math,” Venter said.

Read Full Post »

Laser Technology

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Laser Focus World   www.laserfocusworld.com

Ultrafast lasers simplify fabrication of 3D hydrogel tissue scaffolds

Multimode holographic waveguides tackle in vivo biological imaging

Mid-infrared Lasers CMOS silicon-on-sapphire process produces broad mid-IR supercontinuum

Looking Back/Looking Forward: Positioning equipment—the challenge of building a solid foundation for optics
Stability and precision have been crucial for optics since the 19th century.
Jeff Hecht

Monolithic DFB QCL array aims at handheld IR spectral analysis
Many QCLs combined on a single chip demonstrate fully electronic wavelength tuning for stand-off IR spectroscopy of explosives and other materials.
Mark F. Witinski, Romain Blanchard, Christian Pfluegl, Laurent Diehl, Biao Li, Benjamin Pancy, Daryoosh Vakhshoori, and Federico Capasso

Quantum dots and silicon photonics combine in broadband tunable laser
A new wavelength-tunable laser diode combines quantum-dot technology and silicon photonics with large optical gains around the 1310 nm telecom window.
Tomohiro Kita and Naokatsu Yamamoto

Computer modeling boosts laser device development
A full quantitative understanding of laser devices is boosted by computer modeling, which is not only essential for efficient development processes, but also for identifying the causes of unexpected behavior.
Rüdiger Paschotta

 

 

Monolithic DFB QCL array aims at handheld IR spectral analysis
MARK F. WITINSKI, ROMAIN BLANCHARD, CHRISTIAN PFLUEGL, LAURENT DIEHL, BIAO LI, BENJAMIN PANCY, DARYOOSH VAKHSHOORI, and FEDERICO CAPASSO

Advances in infrared (IR) laser sources, optics, and detectors promise major new advances in areas of chemical analysis such as trace-gas monitoring, IR microscopy, industrial safety, and security.

One key type of photonic device that has yet to reach its full potential is a truly portable, noncontact (standoff), chemically versatile analyzer for fast, Fourier-transform infrared (FTIR)-quality spectral examination of nearly any condensed-phase material. The unique challenges of standoff IR spectroscopy actually extend beyond advances in IR hardware, requiring the proper combination of several areas of expertise: cutting-edge optical design and laser fabrication, integrated laser electronics, thermally efficient hermetic packaging, statistical signal processing methods, and deep chemical knowledge.

At the core of the approach we have taken at Pendar Technologies is the monolithic distributed feedback (DFB) quantum-cascade laser (QCL) array. Invented in Federico Capasso’s group at Harvard University (Cambridge, MA) and licensed exclusively to Pendar, the continuously wavelength-tunable QCL array source is a highly stable broadband source that can be used for illumination in reflectance spectroscopy. Each element of the array is individually addressable and emits at a different wavelength by design.

The advantages of these QCL arrays over external-cavity (EC) QCLs stem from (1) the monolithic structure of QCL arrays and (2) their fully electronic wavelength tuning (that is, no moving gratings), allowing for much-higher-speed acquisition through improved amplitude and wavelength stability. When integrated into a system, the result is robust, stable, and field-deployable.

One of the key advances that has enabled this technology to be fielded is the high-yield fabrication of each laser ridge in the QCL array from a single wafer such that every channel simultaneously meets the specified wavelength, power, and single-mode suppression ratio. Each of these parameters is critical to both efficient beam combining and to obtaining high-quality molecular spectroscopy once integrated.

With these hurdles largely overcome, the payoff in terms of spectrometer performance lies largely in a demonstrated shot-to-shot amplitude stability in pulsed mode of <0.1%—a factor of 50 more stable than is typical for EC QCLs, even when used in the lab. Most importantly, the DFB QCL noise is random, and averages toward an Allan variance limit quickly such that detector-noise-limited, high-quality spectra can be obtained for trace levels (for example, 1–50 µg/cm2) of typical powders in just 100 ms.
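To illustrate why random pulse-to-pulse noise "averages toward an Allan variance limit quickly," here is a minimal sketch using synthetic data; the 0.1% rms figure is taken from the text, while the white-noise assumption, the pulse count, and the non-overlapping Allan estimator are illustrative choices rather than Pendar's actual analysis.

```python
import numpy as np

def allan_deviation(series, m):
    """Two-sample (Allan) deviation of a pulse-amplitude series after averaging
    non-overlapping blocks of m consecutive pulses."""
    n_blocks = len(series) // m
    block_means = series[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(block_means) ** 2))

# Synthetic pulse train with 0.1% rms shot-to-shot fluctuation (white-noise assumption)
rng = np.random.default_rng(0)
pulses = 1.0 + 1e-3 * rng.standard_normal(200_000)

print(f"shot-to-shot stability: {pulses.std() / pulses.mean():.2%}")
for m in (1, 10, 100, 1000):
    print(m, allan_deviation(pulses / pulses.mean(), m))
```

For white noise the Allan deviation falls roughly as 1/sqrt(m) with the number of averaged pulses, so the source noise drops below the detector noise floor after relatively little averaging, consistent with the 100 ms acquisition times quoted above.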

More DFB array advantages

While the stability advantage of DFBs vs. EC configurations has been well established, there are a few less-obvious aspects of DFB arrays that make them more suitable for real-world spectroscopy tools and, in particular, portable spectroscopy tools. For one, the laser array as a whole can maintain a 100% duty cycle while each laser in the array requires operation only over a 100/n (%) duty cycle, where n is the number of lasers in the array. Put another way, a laser array consisting only of pulsed QCLs can operate as a truly continuous-wave (CW) system, allowing for a high measurement duty cycle while possibly reducing the cost of fabrication.

In a related way, when the light is generated by an array with a ~100% aggregate duty cycle (by using, for instance, 32 lasers each at ~3% duty cycle), the thermal heat-sinking requirements of the source are dramatically reduced. Indeed, our packaged prototypes do not even require active cooling to keep the system cool enough to run. A thermoelectric cooler is built into the package only to stabilize the temperature, which therefore stabilizes the 32 wavelengths (see Fig. 1).
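As a quick numerical restatement of the duty-cycle argument (the 32-channel and ~3% figures come from the text; the snippet itself is just arithmetic):

```python
# Each pulsed DFB ridge fires in its own time slot, so the array behaves as a CW source.
n_lasers = 32
per_channel_duty = 1.0 / n_lasers              # 100/n %, roughly 3.1% per ridge
aggregate_duty = n_lasers * per_channel_duty   # ~1.0: some laser is on essentially all the time

# The time-averaged heat load of each ridge is only its own duty cycle's worth of
# what an always-on laser would dissipate in the same spot.
relative_heat_per_ridge = per_channel_duty
print(f"{per_channel_duty:.1%} per channel, {aggregate_duty:.0%} aggregate duty cycle")
```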
FIGURE 1. A 200 cm-1 prototype QCL array with 32 QCLs is shown prior to beam combining and packaging (a), and experimental spectra from 32 adjacent QCLs are seen (b). (Courtesy of Pendar Technologies)
Finally, the arbitrary programmability of the QCL array opens up many new possibilities for experimental optimization. Certain lasers can be skipped, multiple lasers can fire at once, repetition rates and pulse durations can be set for each element, and so on. These advantages are only truly realized when the QCL array is instrumented into a full system.

Looking holistically at how best to integrate this new capability into a full system, it is critical to draft the link equations that govern the use of electrons to produce photons, the collection of the photons scattered back, and finally the conversion from raw spectral information to chemical identification. In the case of mid-IR material identification, three aspects are particularly consequential: (1) how broad a wavelength range is needed for the tool to offer maximum specificity without producing redundant or useless chemical information (that is, how many laser channels should be used, how they should be spaced relative to one another, and over what total wavelength range); (2) the mechanical and electro-optical design of the instrument; and (3) how to achieve the highest-performance regressions against reference spectra while maintaining the high-speed identification that the QCL array enables.

With regard to the wavelength regions of interest (see Fig. 2), most of the spectral richness of an IR spectrum is centered in two bands, generally referred to as the functional group region (about 3.3–5.5 µm) and the fingerprint region (about 7–11 µm). The first is typically dominated by the stretch modes of certain common bond groups, while the latter includes bending modes of some functional groups as well as lower frequency modes that are characteristic of the macromolecule “backbone”—for instance, the torsional modes of a toluene ring found in many highly energetic materials. With support from the Department of Homeland Security (DHS)’s Widely Tunable Infrared Source (WTIRS) program and from the Army Research Lab, Pendar is developing a compact array module that fully covers 7–11 µm (900–1430 cm-1).

FIGURE 2. An assemblage of IR spectra of many common explosives shows that each has at least one unique absorption feature in the wavelength ranges selected. The blue shaded box indicates strong water interference in the troposphere. The figure intentionally spans beyond 1800 cm-1 so as to illustrate that no new information is gained for this chemical class by shifting the longwave-IR (LWIR) source further to the blue until the midwave-IR (MWIR) is reached.

 

System architecture drivers

To maximize signal-to-noise ratio (SNR) while minimizing the required acquisition time, the system architecture is driven by the following first-order considerations:

1. Increased laser power, enabled by relaxed thermal constraints as the heat load is distributed over several modules (arrays) and laser waveguides.

2. Maximization of the measurement duty cycle, enabled by the fast, purely electronic control of the array, allowing close to zero-delay switching between lasers (that is, a laser is on at any time). This is also enabled by the distributed heat load among the laser units.

3. Improved source stability, wavelength accuracy, pulse-to-pulse amplitude, and frequency repeatability, all of which are needed to ensure that the source noise is not the limiting form of noise (compared to detector or speckle noise).

Other researchers have studied the source-noise problem of commercial EC QCLs as well and concluded that the order-of-magnitude advantage in minimum detectable absorbance (MDA) offered by a DFB QCL carries through the full experiment.

Finally, once the spectra are digitized, the system must use complex chemometrics algorithms to ensure confident identification of threats in the presence of chemical clutter, deliberate interferents, and unknown backgrounds, without the intervention of an expert user. Our approach to real-time chemometrics is centered on the fact that for chemically cluttered situations, spectral libraries alone—no matter how large—cannot constitute the sole basis for chemometric analysis. Microphysics modeling and experimentation are also required, particularly in regard to crystal size distribution, clutter interactions, and chemical photolysis/reactions.

The key advance lies in the incorporation of chemical and physical understanding of the targets and their co-indicators. We are currently developing a four-tiered approach to the spectroscopic algorithms challenge:

1. Physics-based models. Reliable chemical detection from standoff measurements will involve transformation of the chemical signatures in the reference spectral library to reflect the physical and environmental conditions of the experiment. A physics-based model will thus be included in the detection algorithm to help us model the variability in a reference spectrum as a function of effects such as vapor pressure, deliquescence, photochemical lifetime, reactive lifetime, decomposition products, and so on to facilitate better comparison with the measured spectrum.

2. Situational effects. Effects of different substrates and their properties on the chemical signatures, and the angular dependence of spectra that are not clearly linked to equations of physics and chemistry, will be experimentally evaluated and included in the detection algorithm. In particular, experimentally measuring such variability will help us algorithmically model the variability of chemical signatures from some “gold standard” reference signature, which, in addition to the physical model, will enable better detection strategies.

3. Feature-based classification. Relevant feature vectors will be extracted from the reference library spectra and combined with knowledge of the chemistry to form a hierarchical decision tree that provides different levels of classification based on customer requirements. For instance, if a customer is only interested in finding out whether a given chemical is an explosive, then we might save on computational cost by avoiding a search through the leaves of the decision tree for the exact chemical (a toy sketch of this tiered approach appears after this list).

4. Real-time atmospheric measurements. Once validated, the model will be suitable for field implementation by the inclusion of an integrated sensor suite that simultaneously records atmospheric pressure, temperature, relative humidity, solar flux, wind magnitude, and water-vapor mixing ratio. With these design drivers considered, Pendar recently completed the build of a handheld demonstration system.
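The tiered classification idea referenced in item 3 can be sketched as follows; the band positions, threshold, and compound names are invented placeholders for illustration only and are not Pendar's actual features or algorithm.

```python
import numpy as np

def extract_features(spectrum, band_edges):
    """Feature vector = integrated absorbance over a few chemistry-motivated bands."""
    return np.array([spectrum[lo:hi].sum() for lo, hi in band_edges])

def classify(features, want_exact_id=False):
    """Coarse class first; descend to the leaf (exact compound) only when requested."""
    is_explosive = features[0] > 5.0          # placeholder threshold on a hypothetical marker band
    if not want_exact_id or not is_explosive:
        return "explosive" if is_explosive else "non-explosive"
    # Only here do we pay for the fine-grained, leaf-level comparison.
    return "compound A" if features[1] > features[2] else "compound B"

spectrum = np.abs(np.random.default_rng(2).standard_normal(200))   # stand-in for a measured spectrum
features = extract_features(spectrum, [(20, 40), (80, 100), (140, 160)])
print(classify(features), classify(features, want_exact_id=True))
```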

Figure 3 shows the experimentally obtained spectra for two nonhazardous chemical targets as a function of stand-off distance. The yellow line in each panel shows the library FTIR (“true”) spectrum for each. Agreements of r2 > 0.9 were typical. With the prototype system as an extrapolation point, continued, focused advances in the technology are now underway to open myriad frontiers in molecular spectroscopy.
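A minimal sketch of the two processing steps described here and in the Figure 3 caption, area normalization and an r² comparison against a reference spectrum, is shown below with synthetic data standing in for the measured QCL-array spectrum and the FTIR library entry:

```python
import numpy as np

def area_normalize(spectrum):
    """Scale a spectrum so the area under the curve is 1 (the only processing described for Fig. 3)."""
    return spectrum / np.trapz(spectrum)

def r_squared(measured, reference):
    """Coefficient of determination of the measured spectrum against the library reference."""
    ss_res = np.sum((measured - reference) ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy spectra: two Gaussian absorption bands plus measurement noise
wavenumbers = np.linspace(900, 1430, 200)
reference = np.exp(-((wavenumbers - 1100) / 30) ** 2) + 0.5 * np.exp(-((wavenumbers - 1300) / 20) ** 2)
measured = reference + 0.05 * np.random.default_rng(1).standard_normal(reference.size)

ref_n, meas_n = area_normalize(reference), area_normalize(measured)
print(f"r^2 = {r_squared(meas_n, ref_n):.3f}")   # values above 0.9, as reported for the prototype
```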

 

FIGURE 3. Standoff spectra of acetaminophen and ibuprofen for three target distances. The black line shows the FTIR spectrum of the same compound acquired with a diffuse reflectance accessory. The only data processing shown is the normalization of the curve areas to a common value.

 

ACKNOWLEDGEMENT Pendar Technologies was formed in August 2015 through a merger between Pendar Medical (Cambridge, MA), a portable spectroscopy company founded by Daryoosh Vakhshoori (who was previously at Ahura Scientific and CoreTek), and QCL sensing startup Eos Photonics (Cambridge, MA), a Harvard spinoff founded by professor Federico Capasso and his postdocs.

 

Quantum dots and silicon photonics combine in broadband tunable laser
TOMOHIRO KITA and NAOKATSU YAMAMOTO

A new wavelength-tunable laser diode combines quantum-dot (QD) technology and silicon photonics with large optical gains around the 1310 nm telecom window and is amenable to integration of other passive and active components towards a truly integrated photonic platform.

A new heterogeneous wavelength-tunable laser diode, configured using quantum dot (QD) and silicon photonics technology, leverages large optical gains in the 1000–1300 nm wavelength region using a scalable platform for highly integrated photonics devices. A cooperative research effort between Tohoku University (Sendai, Japan) and the National Institution of Information and Communication Technology (NICT; Tokyo, Japan) has resulted in the demonstration of broadband tuning of 44 nm around a 1230 nm center wavelength with an ultrasmall device footprint, with many more configurations with various performance metrics possible.

Recently developed high-capacity optical transmission systems use wavelength-division multiplexing (WDM) systems with dense frequency channels. Because the frequency channels in the conventional band (C-band) at 1530–1565 nm are overcrowded, the frequency utilization efficiency of such WDM systems becomes saturated. However, extensive and unexploited frequency resources are buried in the near-infrared (NIR) wavelength regions such as the thousand (T) and original (O) bands between 1000 and 1260 nm and 1260 and 1350 nm, respectively.

Quantum dot-based optical gain media have various attractive characteristics, including ultrabroad optical gain bandwidths, high-temperature device stability, and small linewidth enhancement factors, while silicon photonic wire waveguides based on silicon-on-insulator (SOI) structures are easily amenable to constructing highly integrated photonics devices [1-4].

The photonic devices used for short-range data transmission are required to have a small footprint and low power consumption. Compact, low-power wavelength-tunable laser diodes are therefore key devices for higher-capacity data transmission systems designed to use these undeveloped frequency bands, and our heterogeneous wavelength-tunable laser diode, consisting of a QD optical gain medium and a silicon photonics external cavity, is a promising candidate [5].

Quantum dot optical amplifier

Ultrabroadband optical gain media spanning the T- and O-bands are effectively fabricated by using QD growth techniques on large-diameter gallium arsenide (GaAs) substrates. Our sandwiched sub-nano-separator (SSNS) growth technique is a simple and efficient method for obtaining high-quality QDs (see Fig. 1).

 

FIGURE 1. A cross-section (a) shows a quantum dot (QD) device grown using the SSNS technique, resulting in a high-density, high-quality QD structure (b) that is used to create a typical SOA (c) using QD optical gain.

 

In the SSNS method, three monolayers (each around 0.85 nm thick) of GaAs thin film are grown in an indium gallium arsenide (InGaAs) quantum well (QW) under the QDs. We had previously observed many large, coalescent dots that could induce crystal defects in QD devices grown using a conventional technique without SSNS. Now, we can obtain high-density (8.2 × 10¹⁰ cm⁻²), high-quality QD structures, since the SSNS technique successfully suppresses the formation of coalescent dots.

A ridge-type semiconductor waveguide was fabricated for single-mode transmission. The semiconductor optical amplifier (SOA) has an anti-reflection (AR)-coated facet for low-reflection coupling to the silicon photonics chip, and a cleaved facet that is used as a reflecting mirror of the laser cavity.

To fabricate the SOA, the SSNS growth technique was combined with molecular beam epitaxy. Quantum dots comprised of indium arsenide (InAs) with 20–30 nm diameters were grown within an InGaAs QW. Seven of these QD layers are stacked to achieve broadband optical gain. Subsequently, this QD-SOA is used as an optical gain medium for the heterogeneous laser, which can be complemented by other communication technology devices such as a high-speed modulator, a two-mode laser, and a photoreceiver [6,7].

Silicon photonics ring resonator filter

With the QD-SOA fabricated, a wavelength filter is made next using silicon photonics techniques. It includes a spot-size converter with a silicon oxide (SiOx) core and a tapered Si waveguide, which connects the QD-SOA to the Si photonic wire waveguide while minimizing optical reflections and coupling losses (see Fig. 2).

 

FIGURE 2. A microscope image (a) shows a silicon-photonics-based wavelength-tunable filter. In a transmittance analysis (b), the red and blue dotted lines indicate the transmittance of a small ring resonator with free spectral range FSR1 and a large ring resonator with FSR2, respectively, and the solid line indicates the product of each transmittance. The tuning wavelength range is determined from the FSR difference of the two rings. A smaller difference in the FSR provides a wider wavelength tuning range, even when the transmittance difference between the main and side peaks is small.

 

The wavelength-tunable filter consists of two ring resonators of different size. The Vernier effect of these two ring resonators allows only light of a specific wavelength to be reflected back to the QD-SOA. Furthermore, tantalum micro-heaters formed above the resonators provide a means whereby the laser wavelength can be tuned through the thermo-optic effect.

Essentially, the wavelength tuning of the double-ring-resonator filter is achieved through the Vernier effect, wherein each ring resonator acts as a wavelength filter with a constant wavelength interval called the free spectral range (FSR), which is inversely proportional to the circumference of the ring. The tuning wavelength range is determined by the difference between the FSRs of the two rings, FSR1 and FSR2.

A smaller difference in the FSR provides a wider wavelength tuning range, even when the transmittance difference between the main and side peaks is small. On the other hand, a sufficiently large transmittance difference is required to achieve stable single-mode lasing and is obtained using large FSR ring resonators.

Silicon photonics allows us to fabricate an ultrasmall ring resonator with a large FSR because of the strong light confinement in the waveguide. The ring resonator consists of four circular quadrants and four straight sections, and the bend radius was chosen to be 10 µm to avoid bending losses. The FSRs of the ring resonators and the coupling efficiency between the bus waveguide and the ring resonator are optimized to obtain a wide wavelength tuning range and a sufficient transmittance difference.
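As a rough worked example of the FSR and Vernier relationships described above (the 1230 nm center wavelength and 10 µm bend radius come from the text, but the group index and ring round-trip lengths below are assumed values chosen only for illustration):

```python
# FSR of a ring resonator (in wavelength): FSR = lambda^2 / (n_g * L_roundtrip)
# Vernier effective tuning range of two rings: FSR1 * FSR2 / |FSR1 - FSR2|
wavelength = 1.23e-6          # m, centre of the targeted O/T-band range
n_group = 4.2                 # assumed group index of the Si wire waveguide

def fsr(roundtrip_length_m):
    return wavelength ** 2 / (n_group * roundtrip_length_m)

fsr1 = fsr(110e-6)            # smaller ring, assumed round-trip length
fsr2 = fsr(118e-6)            # larger ring, assumed round-trip length

vernier_range = fsr1 * fsr2 / abs(fsr1 - fsr2)
print(f"FSR1 = {fsr1 * 1e9:.2f} nm, FSR2 = {fsr2 * 1e9:.2f} nm, "
      f"Vernier tuning range ~ {vernier_range * 1e9:.0f} nm")
```

With these assumed numbers the individual FSRs are a few nanometers while the Vernier range comes out in the tens of nanometers, which is the qualitative point: two slightly mismatched small rings give a tuning range far wider than either ring alone.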

The FSRs and the coupling efficiencies of the double ring resonators are designed to obtain a 50 nm wavelength tuning range and 1 dB transmittance difference. We have since fabricated various wavelength-tunable laser diodes, including a broadband tunable laser diode, a narrow spectral-linewidth tunable laser diode, and a high-power integrated tunable laser diode by using a silicon photonics wavelength filter and a commercially available C-band SOA [8,9].

The tunable laser diode

Using stepper motor controllers, the QD-SOA (kept at approximately 25°C using a thermoelectric cooler) and the silicon photonics wavelength filter are butt-jointed (see Fig. 3). The lasing wavelength is controlled by the temperature of a micro-heater placed on the ring resonators. With physical footprints of 600 µm × 1 mm and 1 × 2 mm for the wavelength filter and the QD-SOA, respectively, the total device size of the tunable laser diode is just 1 × 3 mm.

 

FIGURE 3. A schematic shows how the heterogeneous wavelengthtunable laser diode is constructed.

 

Measured using a lensed fiber, the laser output from the cleaved facet of the QD-SOA shows single-mode lasing characteristics with a laser oscillation threshold current of 230 mA. Maximum fiber-coupled output power is 0.4 mW when the QD-SOA injection current is 500 mA. As the ring resonator temperature is increased by a heater with 2.1 mW/nm power consumption, the superimposed lasing spectra show a 44 nm wavelength tuning range with more than a 37 dB side-mode-suppression ratio between the ring resonator’s modes. The 44 nm wavelength tuning range of our heterogeneous QD/Si photonics wavelength-tunable laser is, to our knowledge, the broadest achieved to date. The 44 nm tuning range around 1230 nm corresponds to 8.8 THz in the frequency domain, which is far larger than the 4.4 THz frequency that is available within the C-band.
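The quoted wavelength-to-frequency conversion can be checked with the first-order relation Δν = cΔλ/λ²; a two-line sanity check (all numbers taken from the paragraph above):

```python
c = 2.998e8                                   # speed of light, m/s
delta_nu = c * 44e-9 / (1230e-9) ** 2         # delta_nu = c * delta_lambda / lambda^2
print(delta_nu / 1e12)                        # ~8.7 THz, matching the ~8.8 THz quoted above
```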

Our heterogeneous laser is suitable for use as a light source on a silicon photonics platform that includes other optical components such as high-speed modulators and germanium (Ge)-based detectors. In addition to application as a single-chip broadband optical transceiver for telecommunications, the laser could also be applied to biomedical imaging applications such as optical coherence tomography (OCT), considering the low absorption of NIR light at 1310 nm in the presence of water.

ACKNOWLEDGEMENTS This research was partially supported by the Strategic Information and Communications R&D Promotion Program (SCOPE), of Japan’s Ministry of Internal Affairs and Communications and a Grant-in-Aid for Scientific Research of the Japan Society for the Promotion of Science.

REFERENCES

1. Y. Arakawa and H. Sakaki, Appl. Phys. Lett., 40, 11, 939–941 (1982).

2. D. L. Huffaker et al., Appl. Phys. Lett., 73, 18, 2564–2566 (1998).

3. R. A. Soref, Proc. IEEE, 81, 12, 1687–1706 (1993).

4. B. Jalali and S. Fathpour, J. Lightwave Technol., 24, 12, 4600–4615 (2006).

5. T. Kita et al., Appl. Phys. Express, 8, 6, 062701 (2015).

6. N. Yamamoto et al., Jpn. J. Appl. Phys., 51, 2S, 02BG08 (2012).

7. N. Yamamoto et al., Proc. OFC, Los Angeles, CA, paper W2A.24 (Mar. 2015).

8. T. Kita et al., Appl. Phys. Lett., 106, 11, 111104 (2015).

9. N. Kobayashi et al., J. Lightwave Technol., 33, 6, 1241–1246 (2015).

 

Computer modeling boosts laser device development
RÜDIGER PASCHOTTA

A full quantitative understanding of laser devices is boosted by computer modeling, which is not only essential for efficient development processes, but also for identifying the causes of unexpected behavior.

Computer modeling can give valuable insight into the function of laser devices. It can even reveal internal details that could not be observed in experiments, and thus allows one to develop a comprehensive understanding from which laser development can enormously profit. For example, the performance potentials of certain technologies can be fully exploited and time-consuming and expensive iterations in the development process can be avoided. Some typical examples clarify the benefits of computer modeling for improved laser device development.

Example 1: Q-switched lasers

FIGURE 1. Evolution of the transverse beam profile (shown with a color scale) and the optical power (black circles, in arbitrary units) in an actively Q-switched laser is simulated with RP Fiber Power software using numerical beam propagation. The color scale is normalized for each round trip according to the time-dependent optical power so that the variation of the beam diameter can be seen.

Example 2: Mode-locked lasers

Example 3: Ultrashort-pulse fiber amplifiers

FIGURE 2. The evolution of pulse energy and forward ASE powers in a four-stage fiber amplifier system with various types of ASE suppression between the stages, calculated with a comprehensive computer model

FIGURE 3. Form-based software can be used to model laser devices such as a fiber amplifier. It is essential that such forms be made or modified by the user or by technical support, so that they can be tailored to specific applications.

…. more

Documentation and support

For any modeling task, documentation of methods and results is essential. The documentation must not only explain details of the user interface, but must also inform the user what kind of physical model was used, what simplifying assumptions were made, and what limitations need to be considered. Unfortunately, software documentation is often neglected. In case of doubt, competent technical support should be available, not only to help with the handling of the software, but also to offer detailed technical and scientific advice. For example, a beginner may find it difficult to decide which kind of model should be implemented for a certain purpose and which potentially disturbing effects need to be considered. Such support should come from a competent expert in the field rather than just a programmer.
Rüdiger Paschotta is founder and executive of RP Photonics Consulting, Bad Dürrheim, Germany; e-mail: paschotta@rp-photonics.com; www.rp-photonics.com

 

 

 

Read Full Post »

Dense Breast Mammogram

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

 

The Problem With Mammograms

http://forward.com/culture/324003/the-problem-with-mammograms/#ixzz3queBnx00

 

Hallie Leighton had dense breasts — a fact she discovered only in her late 30s, via a mammogram. She grew up in an Ashkenazi family in New York, pursued a career in writing, and worked with organizations promoting peace between Israelis and Arabs. By 2013 she was making a documentary about her father, Jan Leighton, an actor who set the record for appearing in the most roles (2,407, according to the 1985 Guinness Book of World Records). She was never able to complete it. She died that year, at the age of 42.

Every woman in Leighton’s family had breast cancer, so she began getting annual mammograms at 35 — five years earlier than the recommended age. In 2009 the results of Leighton’s mammogram came in as “negative” or “normal”; by 2013 she was bedridden, undergoing her final days of chemotherapy.

When Leighton was first diagnosed in 2010, her doctor told her, “You have breast cancer, and it was there in 2009.” The tumor in Leighton’s breast went undiscovered until it was palpable — and at that point, the cancer was already in stage 4.

“Happygram,” a documentary which exposes some of the shortcomings in mammography, chronicles Leighton’s struggle with cancer and the implications of having dense breasts.

“Most women simply aren’t informed that they have dense breast tissue,” said Leighton’s best friend Julie Marron. She wrote and directed the documentary, which is currently screening at film festivals around the country.

Breast density is defined by the relative amount of fat in relation to the amount of connective and epithelial tissue (tissue that lines blood vessels and cavities). When more than 50% of breast tissue is connective and epithelial tissue, instead of fatty tissue, the breasts are considered dense. Mammography is the only way to determine breast density.

“If you have dense breasts, what looks dense on a mammogram looks the same as a cancer would look. It tends to confuse or confound the physician, and reduces the sensitivity of the mammogram,” said Gerald Kolb, founder and president of The Breast Group, which counsels clients on different technologies in breast care. “Hallie Leighton’s breasts looked like snowballs; there was no chance they were going to find anything with the mammogram.”

Forty percent of women who are screened for breast cancer have dense breast tissue. These women also account for more than 70% of all invasive cancers. “Mammograms are not very effective screening tools for these women, as they miss between 50% and 75% of all invasive cancers in dense breast tissue,” Marron said. “This is obviously a very critical issue when you are dealing with a population that is more likely to develop cancer.”

Ashkenazi women are even more at risk. They are 1.6 times more likely than the general population to have dense breast tissue, according to Kolb. Moreover, one in 40 Ashkenazi women will test positive for one or both of the BRCA gene mutations associated with breast cancer. For the general population, that number is between one in 350 and one in 800. The BRCA 1 and 2 genes don’t cause cancer; they fight cancer, Kolb says. But if the gene is mutated, the body is not as well equipped to fight the cancer.

“A woman with a BRCA mutation has a lifetime risk of around 33% to 87%, depending on the gene and mutation,” Marron said. “Compare this to a lifetime risk of 12% for developing breast cancer for the overall population.” BRCA gene mutations can be inherited from either or both parents, and therefore they can be present in men as well as in women.

Breast density and BRCA gene mutations are not directly related, but both independently present an increased susceptibility to breast cancer.

“The biggest risk is that a doctor is not going to find the cancer when it’s really small,” Kolb said. When a tumor is detected at a centimeter or smaller, there’s a 95% cure rate. But if the cancer is the size of a golf ball by the time it’s detected, Kolb says, the woman has a 60% chance of living for five years, and then her mortality increases dramatically.

The good news is that mammography isn’t the only method of detecting breast cancer; the bad news is that very few people know this. “What we’re trying to do in the density movement is give women enough information so they can ask appropriate questions of a doctor,” Kolb said.

Kolb advises high-risk women to get a genetic risk analysis, which can be performed by a genetic counselor or a radiologist. He advises getting the risk analysis as early as age 25, but doing so is a personal decision. Not every woman is emotionally prepared to know the results.

“Mammography is a starting point,” said Dr. Dennis McDonald, a California-based women’s imager. Additionally, doctors recommend that women with dense breasts get an MRI, which McDonald says is reserved for high-risk women. It is an expensive, invasive, and time-consuming procedure that requires an injection of contrast agent in order to read the scan. As of yet, doctors do not know the side effects of getting an annual MRI.

“A doctor should have started [Leighton] on an MRI right away. She was high risk and they chose to just monitor with a mammogram,” Kolb said. “That’s insufficient.”

Breast ultrasound is another alternative for women with dense breast tissue. “Most of the time, breast density doesn’t present a problem [with ultrasounds],” McDonald said. Though the ultrasound is effective in detecting cancer, he says the downside is that radiologists are often not that comfortable with the technology, simply because they have little experience with it. There are also a lot of false positives, he adds, which result in unnecessary exams or biopsies.

As “Happygram” documents, informing women of their breast density and of alternatives to mammography is a highly charged political issue.

“The whole breast cancer industry has grown up around mammograms,” Marron said. “Physicians weren’t educated on [breast density], deliberately so to a certain extent, and refused to inform patients on this issue, which is really outrageous if you think about it.” Marron says that doctors are required by law and ethical guidelines to inform patients of “material” medical information. “There is no legitimate reason that women have not been informed of this information,” she noted.

After Leighton’s diagnosis, she wanted to ensure that other women didn’t suffer the same misfortune of all-too-late tumor discovery on account of dense breast tissue. She gave media interviews, lobbied in Albany and starred in “Happygram,” all the while undergoing chemotherapy. She died four months after the Breast Density Information Bill passed in New York.

The law requires that every mammography report given to a patient with dense breasts inform the patient in plain language that she has dense breast tissue and that she should talk to her physician about the possible benefits of additional screenings. In New York, the first state in the nation to pass this kind of law, at least 2,500 women with dense breasts and invasive breast cancer received “normal” or “negative” results on their mammograms.

Similar legislation has been passed in more than 20 states throughout the country, but not without objection. Many well-intentioned radiologists, poorly informed about alternative screening options, feared that telling women about the limitations of mammography would cause them to lose faith in it altogether and not get tested. Others argued that the information would make women anxious, and that it wouldn’t be fair to those who couldn’t afford additional testing. Still other arguments against informing women, Marron added, may have been influenced by financial considerations.

“Women aren’t getting the benefit of full notification across the board yet,” Marron said. “I think that has to change through education. That’s the primary reason we made this movie. There’s been so much resistance within the medical community to telling women. Change isn’t going to come from the medical community, it has to come from the patients.”

Ashkenazi women shouldn’t panic, Kolb says, but they need to carefully examine their breast density and alternative screening options: “Anytime you have a preventative tragedy like that, you have to do everything in your power to stop it from happening.”

Madison Margolin is a freelance writer based in New York. She writes frequently for the Village Voice.


Read Full Post »

Inadequacy of EHRs

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

EHRs need better workflows, less ‘chartjunk’

By Marla Durben Hirsch

Electronic health records currently handle data poorly and should be enhanced to better collect, display and use it to support clinical care, according to a new study published in JMIR Medical Informatics.

The authors, from Beth Israel Deaconess Medical Center and elsewhere, state that the next generation of EHRs need to improve workflow, clinical decision-making and clinical notes. They decry some of the problems with existing EHRs, including data that is not displayed well, under-networked, underutilized and wasted. The lack of available data causes errors, creates inefficiencies and increases costs. Data is also “thoughtlessly carried forward or copied and pasted into the current note” creating “chartjunk,” the researchers say.

They suggest ways that future EHRs can be improved, including:

  • Integrating bedside and telemetry monitoring systems with EHRs to provide data analytics that could support real time clinical assessments
  • Formulating notes in real-time using structured data and natural language processing on the free text being entered
  • Formulating treatment plans using information in the EHR plus a review of population databases to identify similar patients, their treatments and outcomes
  • Creating a more “intelligent” design that capitalizes on the note writing process as well as the contents of the note.

“We have begun to recognize the power of data in other domains and are beginning to apply it to the clinical space, applying digitization as a necessary but insufficient tool for this purpose,” the researchers say. “The vast amount of information and clinical choices demands that we provide better supports for making decisions and effectively documenting them.”

Many have pointed out the flaws in current EHR design that impede the optimum use of data and hinder workflow. Researchers have suggested that EHRs can be part of a learning health system to better capture and use data to improve clinical practice, create new evidence, educate, and support research efforts.

 

Disrupting Electronic Health Records Systems: The Next Generation

1Beth Israel Deaconess Medical Center, Division of Pulmonary, Critical Care, and Sleep Medicine, Boston, MA, US; 2Yale University, Yale-New Haven Hospital, Department of Pulmonary and Critical Care, New Haven, CT, US; 3Center for Urban Science and Progress, New York University, New York, NY, US; 4Center for Wireless Health, Departments of Anesthesiology and Neurological Surgery, University of Virginia, Charlottesville, VA, US

*these authors contributed equally

JMIR  23.10.15 Vol 3, No 4 (2015): Oct-Dec


The health care system suffers from both inefficient and ineffective use of data. Data are suboptimally displayed to users, undernetworked, underutilized, and wasted. Errors, inefficiencies, and increased costs occur on the basis of unavailable data in a system that does not coordinate the exchange of information, or adequately support its use. Clinicians’ schedules are stretched to the limit and yet the system in which they work exerts little effort to streamline and support carefully engineered care processes. Information for decision-making is difficult to access in the context of hurried real-time workflows. This paper explores and addresses these issues to formulate an improved design for clinical workflow, information exchange, and decision making based on the use of electronic health records. JMIR Med Inform 2015;3(4):e34. http://dx.doi.org/10.2196/medinform.4192


Celi LA, Marshall JD, Lai Y, Stone DJ. Disrupting Electronic Health Records Systems: The Next Generation. JMIR Med Inform 2015;3(4):e34  DOI: 10.2196/medinform.4192  PMID: 26500106

Weed introduced the “Subjective, Objective, Assessment, and Plan” (SOAP) note in the late 1960s [1]. This note entails a high-level structure that supports the thought process that goes into decision-making: subjective data followed by ostensibly more reliable objective data employed to formulate an assessment and subsequent plan. The flow of information has not fundamentally changed since that time, but the complexities of the information, possible assessments, and therapeutic options certainly have greatly expanded. Clinicians have not heretofore created anything like an optimal data system for medicine [2,3]. Such a system is essential to streamline workflow and support decision-making rather than adding to the time and frustration of documentation [4].

What this optimal data system offers is not a radical departure from the traditional thought processes that go into the production of a thoughtful and useful note. However, in the current early stage digitized medical system, it is still incumbent on the decision maker/note creator to capture the relevant priors, and to some extent, digitally scramble to collect all the necessary updates. The capture of these priors is a particular challenge in an era where care is more frequently turned over among different caregivers than ever before. Finally, based on a familiarity of the disease pathophysiology, the medical literature and evidence-based medicine (EBM) resources, the user is tasked with creating an optimal plan based on that assessment. In this so-called digital age, the amount of memorization, search, and assembly can be minimized and positively supported by a well-engineered system purposefully designed to assist clinicians in note creation and, in the process, decision-making.

Since 2006, use of electronic health records (EHRs) by US physicians increased by over 160% with 78% of office-based physicians and 59% of hospitals having adopted an EHR by 2013 [5,6]. With implementation of federal incentive programs, a majority of EHRs were required to have some form of built-in clinical decision support tools by the end of 2012 with further requirements mandated as the Affordable Care Act (ACA) rolls out [7]. These requirements recognize the growing importance of standardization and systematization of clinical decision-making in the context of the rapidly changing, growing, and advancing field of medical knowledge. There are already EHRs and other technologies that exist, and some that are being implemented, that integrate clinical decision support into their functionality, but a more intelligent and supportive system can be designed that capitalizes on the note writing process itself. We should strive to optimize the note creation process as well as the contents of the note in order to best facilitate communication and care coordination. The following sections characterize the elements and functions of this decision support system (Figure 1).

http://medinform.jmir.org/article/viewFile/4192/1/68490

Figure 1. Clinician documentation with fully integrated data systems support. Prior notes and data are input for the following note and decisions. Machine analyzes input and displays suggested diagnoses and problem list, and test and treatment recommendations based on various levels of evidence: CPG – clinical practice guidelines, UTD – Up to Date®, DCDM – Dynamic Clinical Data Mining.

Incorporating Data

Overwhelmingly, the most important characteristic of the electronic note is its potential for the creation and reception of what we term “bidirectional data streams” to inform both decision-making and research. By bidirectional data exchange, we mean that electronic notes have the potential to provide data streams to the entirety of the EHR database and vice versa. The data from the note can be recorded, stored, accessed, retrieved, and mined for a variety of real-time and future uses. This process should be an automatic and intrinsic property of clinical information systems. The incoming data stream is currently produced by the data that is slated for import into the note according to the software requirements of the application and the locally available interfaces [8]. The provision of information from the note to the system has both short- and long-term benefits: in the short term, this information provides essential elements for functions such as benchmarking and quality reporting; and in the long term, the information provides the afferent arm of the learning system that will identify individualized best practices that can be applied to individual patients in future formulations of plans.

Current patient data should include all the electronically interfaced elements that are available and pertinent. In addition to the usual elements that may be imported into notes (eg, laboratory results and current medications), the data should include the immediate prior diagnoses and treatment items, so far as available (especially an issue for the first note in a care sequence such as in the ICU), the active problem list, as well as other updates such as imaging, other kinds of testing, and consultant input. Patient input data should be included after verification (eg, updated reviews of systems, allergies, actual medications being taken, past medical history, family history, substance use, social/travel history, and medical diary that may include data from medical devices). These data priors provide a starting point that is particularly critical for those note writers who are not especially (or at all) familiar with the patient. They represent historical (and yet dynamic) evidence intended to inform decision-making rather than “text” to be thoughtlessly carried forward or copied and pasted into the current note.

Although the amount and types of data collected are extremely important, how the data are used and displayed is paramount. Many historical elements of note writing are inexcusably costly in terms of clinician time and effort when viewed across the entire health care system. Redundant items such as laboratory results and copy-and-pasted nursing flow sheet data introduce a variety of “chartjunk” that clutters documentation, makes the identification of truly important information more difficult, and potentially even introduces errors that are then propagated throughout the chart [9,10]. Electronic systems are poised to automatically capture the salient components of care so far as these values are interfaced into the system, and can even generate an active problem list for the providers. With significant amounts of free text and “unstructured data” being entered, EHRs will need to incorporate more sophisticated processes such as natural language processing and machine learning to accurately interpret text entered by a variety of different users, from different sources, and in different formats, and then translate it into structured data that can be analyzed by the system.
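As a toy stand-in for the natural-language-processing step just described (a real system would use trained clinical NLP models rather than hand-written patterns, and the note text, field names, and regular expressions below are invented purely for illustration):

```python
import re

NOTE = ("Pt with chronic CHF, afebrile overnight. Creatinine 2.1 mg/dL, "
        "up from 1.4. O2 sat 91% on 2L NC. Continue diuresis.")

# Hand-written patterns standing in for a real clinical NLP pipeline
PATTERNS = {
    "creatinine_mg_dl": r"[Cc]reatinine\s+(\d+(?:\.\d+)?)",
    "spo2_percent":     r"[Oo]2 sat\s+(\d+)%",
}

def extract_structured(note_text):
    """Return {field: numeric value} for every pattern found in the free-text note."""
    found = {}
    for field, pattern in PATTERNS.items():
        match = re.search(pattern, note_text)
        if match:
            found[field] = float(match.group(1))
    return found

print(extract_structured(NOTE))   # {'creatinine_mg_dl': 2.1, 'spo2_percent': 91.0}
```

The point is only to show the shape of the transformation, free text in and analyzable structured fields out; the hard part the authors describe is doing this reliably across different users, sources, and formats.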

Optimally, a fully functional EHR would be able to provide useful predictive data analytics including the identification of patterns that characterize a patient’s normal physiologic state (thereby enabling detection of significant change from that state), as well as mapping of the predicted clinical trajectory, such as prognosis of patients with sepsis under a number of different clinical scenarios, and with the ability to suggest potential interventions to improve morbidity or mortality [11]. Genomic and other “-omic” information will eventually be useful in categorizing certain findings on the basis of individual susceptibilities to various clinical problems such as sepsis, auto-immune disease, and cancer, and in individualizing diagnostic and treatment recommendations. In addition, an embedded data analytic function will be able to recognize a constellation of relatively subtle changes that are difficult or impossible to detect, especially in the presence of chronic co-morbidities (eg, changes consistent with pulmonary embolism, which can be a subtle and difficult diagnosis in the presence of long standing heart and/or lung disease) [12,13].

The data presentation section must be thoughtfully displayed so that the user is not overwhelmed, but is still aware of what elements are available, and directed to those aspects that are most important. The user then has the tools at hand to construct the truly cognitive sections of the note: the assessment and plan. Data should be displayed in a fashion that is maximally information-rich and minimally distracting. The fundamental principle should be a thoughtfully planned data display built on the ethos of “just enough and no more,” incorporating clinical elements such as severity, acuity, stability, and reversibility. In addition to the now classic teachings of Edward Tufte in this regard, a number of new data artists have entered the field [14]. There is room for much innovation and improvement in this area, as medicine transitions from paper to a digital format that provides enormous potential and capability for new types of displays.

Integrating the Monitors

Bedside and telemetry monitoring systems have become an element of the clinical information system but they do not yet interact with the EHR in a bidirectional fashion to provide decision support. In addition to the raw data elements, the monitors can provide data analytics that could support real-time clinical assessment as well as material for predictive purposes apart from the traditional noisy alarms [15,16]. It may be less apparent how the reverse stream (EHR to bedside monitor) would work, but the EHR can set the context for the interpretation of raw physiologic signals based on previously digitally captured vital signs, patient co-morbidities and current medications, as well as the acute clinical context.

In addition, the display could provide an indication of whether technically “out of normal range” vital signs (or labs in the emergency screen described below) are actually “abnormal” for this particular patient. For example, a particular type of laboratory value for a patient may have been chronically out of normal range and not represent a change requiring acute investigation and/or treatment. This might be accomplished by displaying these types of “normally abnormal” values in purple or green rather than red font, or via some other designating graphic. The purple font (or whatever display mode was utilized) would designate the value as technically abnormal, but perhaps not contextually abnormal. Such designations are particularly important for caregivers who are not familiar with the patient.
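A minimal sketch of this “technically abnormal but not contextually abnormal” logic might look like the following; the 10 percent tolerance around the patient’s chronic baseline and the display-class names are assumptions chosen for illustration only.

```python
def display_class(value, ref_low, ref_high, patient_baseline=None, tolerance=0.10):
    """Classify a result for display: 'normal', 'abnormal' (red), or
    'normally abnormal' (purple) when it is outside the reference range but
    close to this patient's own chronic baseline.

    The 10% tolerance and the colour assignments are illustrative choices.
    """
    if ref_low <= value <= ref_high:
        return "normal"
    if patient_baseline is not None and abs(value - patient_baseline) <= tolerance * abs(patient_baseline):
        return "normally abnormal"  # render in purple rather than red
    return "abnormal"

# A creatinine of 2.1 mg/dL is technically abnormal (reference 0.6-1.2),
# but not contextually abnormal for a patient whose chronic baseline is 2.0.
print(display_class(2.1, 0.6, 1.2, patient_baseline=2.0))  # normally abnormal
print(display_class(2.1, 0.6, 1.2, patient_baseline=1.0))  # abnormal
```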

It also might be desirable to use a combination of accumulated historical data from the monitor and the EHR to formulate personalized alarm limits for each patient. Such personalized alarm limits would provide a smarter range of acceptable values for each patient and perhaps also act to reduce the unacceptable number of false positive alarms that currently plague bedside caregivers (and patients) [17]. These alarm limits would be dynamically based on the input data and subject to reformulation as circumstances changed. We realize that any venture into alarm settings becomes a regulatory and potentially medico-legal issue, but these intimidating factors should not be allowed to grind potentially beneficial innovations to a halt. For example, “hard” limits could be built into the alarm machine so that the custom alarm limits could not fall outside certain designated values.
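One simple, purely illustrative way to derive such personalized limits is to compute them from the patient’s accumulated history and then clamp them to device-level hard limits. The HARD_LIMITS values and the three-standard-deviation band in the sketch below are placeholders, not recommended clinical settings.

```python
from statistics import mean, stdev

# Regulatory "hard" limits that the personalized limits may never exceed;
# the numbers here are placeholders, not recommended settings.
HARD_LIMITS = {"heart_rate": (30, 180)}

def personalized_alarm_limits(signal_name, history, k=3.0):
    """Derive patient-specific alarm limits from accumulated history,
    clamped so they never fall outside the device's hard limits."""
    mu, sigma = mean(history), stdev(history)
    low, high = mu - k * sigma, mu + k * sigma
    hard_low, hard_high = HARD_LIMITS[signal_name]
    return max(low, hard_low), min(high, hard_high)

hr_history = [88, 92, 95, 90, 93, 91, 89, 94]
print(personalized_alarm_limits("heart_rate", hr_history))
# approximately (84.2, 98.8) instead of a generic default such as (50, 120)
```

Recomputing these limits as new observations arrive would keep them dynamic, while the hard clamps address the regulatory concern that custom limits never drift outside designated values.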

Supporting the Formulation of the Assessment

Building on both prior and new, interfaced and manually entered data as described above, the next framework element would consist of the formulation of the note in real time. This would consist of structured data so far as available and feasible, but is more likely to require real-time natural language processing performed on the free text being entered. Different takes on this kind of templated structure have already been introduced into several electronic systems. These include note templates created for specific purposes such as end-of-life discussions, or documentation of cardiopulmonary arrest. The very nature of these note types provides a robust context for the content. We also recognize that these shorter and more directed types of notes are not likely to require the kind of extensive clinical decision support (CDS) from which an admission or daily progress note may benefit.

Until the developers of EHRs find a way to fit structured data selection seamlessly and transparently into workflow, we will have to do the best we can with the free text that we have available. While this is a bit clunky for data utilization purposes, perhaps it is not totally undesirable, as free text inserts a needed narrative element into the otherwise storyless EHR environment. Medical care can be described as an ongoing story, and free text conveys this story in a much more effective and interesting fashion than do selected structured data bits. Furthermore, stories tend to be more distinctive than lists of structured data entries, which sometimes seem to vary remarkably little from patient to patient. But to extract the necessary information, the computer still needs a processed interpretation of that text. More complex systems are being developed and actively researched to act more analogously to our own “human” form of clinical problem solving [18], but until these systems are integrated into existing EHRs, clinicians may be able to help by reducing completely unconstrained free-text entry and/or utilizing some degree of standardization in free-text terminologies and contextual modifiers.

Employing the prior data (eg, diagnoses X, Y, Z from the previous note) and new data inputs (eg, laboratory results, imaging reports, and consultants’ recommendations) in conjunction with the assessment being entered, the system would have the capability to check for inconsistencies and omissions based on analysis of both prior and new entries. For example, a patient in the ICU has increasing temperature and heart rate, and decreasing oxygen saturation. These continuous variables are referenced against other patient features and risk factors to suggest the possibility that the patient has developed a pulmonary embolism or an infectious ventilator-associated complication. The system then displays these possible diagnoses within the working assessment screen with hyperlinks to the patient’s flow sheets and other data supporting the suggested problems (Figure 2). The formulation of the assessment is clearly not as potentially evidence-based as that of the plan; however, there should still be dynamic, automatic and rapid searches performed for pertinent supporting material in the formulation of the assessment. These would include the medical literature, including textbooks, online databases, and applications such as WebMD. The relevant literature that the system has identified, supporting the associations listed in the assessment and plan, can then be screened by the user for accuracy and pertinence to the specific clinical context. Another potentially useful CDS tool for assessment formulation is a modality we have termed dynamic clinical data mining (DCDM) [19]. DCDM draws upon the power of large sets of population health data to provide differential diagnoses associated with groupings or constellations of symptoms and findings. Similar to the process just described, the clinician would then have the ability to review and incorporate these suggestions or not.
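The trend-and-risk-factor logic described above could be prototyped with a small rule base such as the one below. The thresholds, the choice of vitals, and the suggested problems are stand-ins for what would, in a real system, be a much richer statistical model drawing on the full record.

```python
def trend(values):
    """Crude trend over the observed values: 'rising', 'falling', or 'flat'."""
    delta = values[-1] - values[0]
    if abs(delta) < 0.5:
        return "flat"
    return "rising" if delta > 0 else "falling"

def suggest_problems(temps, heart_rates, spo2, on_ventilator=False):
    """Toy rule base relating vital-sign trends to candidate problems; a real
    system would weigh many more features, risk factors, and priors."""
    suggestions = []
    if trend(temps) == "rising" and trend(heart_rates) == "rising" and trend(spo2) == "falling":
        suggestions.append("Possible pulmonary embolism - review risk factors and flow sheet")
        if on_ventilator:
            suggestions.append("Possible infection-related ventilator-associated complication")
    return suggestions

print(suggest_problems(
    temps=[37.2, 37.8, 38.4],
    heart_rates=[88, 99, 112],
    spo2=[96, 93, 90],
    on_ventilator=True,
))
```

Each suggestion displayed in the working assessment screen would carry hyperlinks back to the flow-sheet data that triggered it, as described above.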

An optional active search function would also be provided throughout the note creation process for additional flexibility; clinicians are already using search engines, but sometimes in the absence of specific clinical search algorithms (eg, a generic search engine such as Google). This may produce search results that are not always of the highest possible quality [20,21]. The EHR-embedded search engine would have its algorithm modified to meet the task, as Google has done previously for its search engine [22]. The searchable TRIP database provides a search engine for high-quality clinical evidence, as do the search modalities within UpToDate, DynaMed, BMJ Clinical Evidence, and others [23,24].

http://medinform.jmir.org/article/viewFile/4192/1/68491

Figure 2. Mock visualization of symptoms, signs, laboratory results, and other data input and systems suggestion for differential diagnoses.

Supporting the Formulation of the Plan

With the assessment formulated, the system would then formulate a proposed plan using EBM inputs and DCDM refinements for issues lying outside EBM knowledge. Decision support for plan formulation would include items such as randomized control trials (RCTs), observational studies, clinical practice guidelines (CPGs), local guidelines, and other relevant elements (eg, Cochrane reviews). The system would provide these supporting modalities in a hierarchical fashion using evidence of the highest quality first before proceeding down the chain to lower quality evidence. Notably, RCT data are not available for the majority of specific clinical questions, or it is not applicable because the results cannot be generalized to the patient at hand due to the study’s inclusion and exclusion criteria [25]. Sufficiently reliable observational research data also may not be available, although we expect that the holes in the RCT literature will be increasingly filled by observational studies in the near future [16,26]. In the absence of pertinent evidence-based material, the system would include the functionality which we have termed DCDM, and our Stanford colleagues have termed the “green button” [19,27]. This still-theoretical process is described in detail in the references, but in brief, DCDM would utilize a search engine type of approach to examine a population database to identify similar patients on the basis of the information entered in the EHR. The prior treatments and outcomes of these historical patients would then be analyzed to present options for the care of the current patient that were, to a large degree, based on prior data. The efficacy of DCDM would depend on, among other factors, the availability of a sufficiently large population EHR database, or an open repository that would allow for the sharing of patient data between EHRs. This possibility is quickly becoming a reality with the advent of large, deidentified clinical databases such as that being created by the Patient Centered Outcomes Research Institute [26].
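As a rough sketch of how DCDM (“green button”) retrieval might work, the snippet below finds the most similar prior patients in a toy in-memory “population database” and summarizes outcomes by treatment. The features, records, treatments, and distance metric are invented for illustration; real data would require feature scaling, confounder adjustment, and far larger, properly de-identified cohorts.

```python
import math
from collections import defaultdict

# Toy de-identified population records: feature vector, treatment given,
# and a binary outcome (1 = good). Real DCDM would query a very large
# clinical database rather than an in-memory list.
POPULATION = [
    ({"age": 67, "lactate": 3.1, "creatinine": 1.8}, "early_fluids", 1),
    ({"age": 71, "lactate": 3.4, "creatinine": 2.0}, "early_fluids", 1),
    ({"age": 69, "lactate": 3.0, "creatinine": 1.9}, "delayed_fluids", 0),
    ({"age": 45, "lactate": 1.1, "creatinine": 0.9}, "observation", 1),
]

def distance(a, b):
    # Plain Euclidean distance; in practice features would first be normalized.
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def dcdm_options(current_patient, k=3):
    """Find the k most similar prior patients and summarize outcomes by treatment."""
    nearest = sorted(POPULATION, key=lambda rec: distance(current_patient, rec[0]))[:k]
    outcomes = defaultdict(list)
    for _, treatment, outcome in nearest:
        outcomes[treatment].append(outcome)
    return {t: sum(o) / len(o) for t, o in outcomes.items()}

print(dcdm_options({"age": 68, "lactate": 3.2, "creatinine": 1.9}))
# {'early_fluids': 1.0, 'delayed_fluids': 0.0}
```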

The tentative plan could then be modified by the user on the basis of her or his clinical “wetware” analysis. The electronic workflow could be designed in a number of ways that were modifiable per user choice/customization. For example, the user could first create the assessment and plan which would then be subject to comment and modification by the automatic system. This modification might include suggestions such as adding entirely new items, as well as the editing of entered items. In contrast, as described, the system could formulate an original assessment and plan that was subject to final editing by the user. In either case, the user would determine the final output, but the system would record both system and final user outputs for possible reporting purposes (eg, consistency with best practices). Another design approach might be to display the user entry in toto on the left half of a computer screen and a system-formulated assessment (Figure 3) and plan on the right side for comparison. Links would be provided throughout the system formulation so that the user could drill into EHR-provided suggestions for validation and further investigation and learning. In either type of workflow, the system would comparatively evaluate the final entered plan for consistency, completeness, and conformity with current best practices. The system could display the specific items that came under question and why. Users may proceed to adopt or not, with the option to justify their decision. Data reporting analytics could be formulated on the basis of compliance with EBM care. Such analytics should be done and interpreted with the knowledge that EBM itself is a moving target and many clinical situations do not lend themselves to resolution with the current tools supplied by EBM.

Since not all notes call for this kind of extensive decision support, the CDS material could be displayed in a separate columnar window adjacent to the main part of the screen where the note contents were displayed so that workflow is not affected. Another possibility would be an “opt-out” button by which the user would choose not to utilize these system resources. This would be analogous but functionally opposite to the “green button” opt-in option suggested by Longhurst et al, and perhaps be designated the “orange button” to clearly make this distinction [27]. Later, the system would make a determination as to whether this lack of EBM utilization was justified, and provide a reminder if the care was determined to be outside the bounds of current best practices. While the goal is to keep the user on the EBM track as much as feasible, the system has to “realize” that real care will still extend outside those bounds for some time, and that some notes and decisions simply do not require such machine support.

There are clearly still many details to be worked out regarding the creation and use of a fully integrated bidirectional EHR. There currently are smaller systems that use some components of what we propose. For example, a large Boston hospital uses a program called QPID which culls all previously collected patient data and uses a Google-like search to identify specific details of relevant prior medical history, which are then displayed in a user-friendly fashion to assist the clinician in making real-time decisions on admission [28]. Another organization, the American Society of Clinical Oncology, has developed a clinical Health IT tool called CancerLinQ which utilizes large clinical databases of cancer patients to trend current practices and compare the specific practices of individual providers with best practice guidelines [29]. Another hospital system is using many of the components discussed in a new, internally developed platform called Fluence that allows aggregation of patient information, and applies already known clinical practice guidelines to patients’ problem lists to assist practitioners in making evidence-based decisions [30]. All of these efforts reflect inadequacies in current EHRs and are important pieces in the process of selectively and wisely incorporating these technologies into EHRs, but doing so universally will be a much larger endeavor.

http://medinform.jmir.org/article/viewFile/4192/1/68492

Figure 3. Mock screenshot for the “Assessment and Plan” screen with background data analytics. Based on background analytics that the system runs at all times, a series of “problems” is identified and suggested by the system and displayed in the EMR in the box on the left. The clinician can then select suggested problems, or input new problems, which are then displayed in the box on the right of the EMR screen and become part of the ongoing analytics for future assessments.

Conclusions

Medicine has finally entered an era in which clinical digitization implementations and data analytic systems are converging. We have begun to recognize the power of data in other domains and are beginning to apply it to the clinical space, applying digitization as a necessary but insufficient tool for this purpose (personal communication from Peter Szolovits, The Unreasonable Effectiveness of Clinical Data. Challenges in Big Data for Data Mining, Machine Learning and Statistics Conference, March 2014). The vast amount of information and clinical choices demands that we provide better supports for making decisions and effectively documenting them. The Institute of Medicine demands a “learning health care system” where analysis of patient data is a key element in continuously improving clinical outcomes [31]. This is also an age of increasing medical complexity bound up in increasing financial and time constraints. The latter dictate that medical practice should become more standardized and evidence-based in order to optimize outcomes at the lowest cost. Current EHRs, mostly implemented over the past decade, are a first step in the digitization process, but do not support decision-making or streamline the workflow to the extent to which they are capable. In response, we propose a series of information system enhancements that we hope can be seized, improved upon, and incorporated into the next generation of EHRs.

There is already government support for these advances: The Office of the National Coordinator for Health IT recently outlined their 6-year and 10-year plans to improve EHR and health IT interoperability, so that large-scale realizations of this idea can and will exist. Within 10 years, they envision that we “should have an array of interoperable health IT products and services that allow the health care system to continuously learn and advance the goal of improved health care.” In that, they envision an integrated system across EHRs that will improve not just individual health and population health, but also act as a nationwide repository for searchable and researchable outcomes data [32]. The first step to achieving that vision is by successfully implementing the ideas and the system outlined above into a more fully functional EHR that better supports both workflow and clinical decision-making. Further, these suggested changes would also contribute to making the note writing process an educational one, thereby justifying the very significant time and effort expended, and would begin to establish a true learning system of health care based on actual workflow practices. Finally, the goal is to keep clinicians firmly in charge of the decision loop in a “human-centered” system in which technology plays an essential but secondary role. As expressed in a recent article on the issue of automating systems [33]:

In this model (human centered automation)…technology takes over routine functions that a human operator has already mastered, issues alerts when unexpected situations arise, provides fresh information that expands the operator’s perspective and counters the biases that often distort human thinking. The technology becomes the expert’s partner, not the expert’s replacement.

Key Concepts and Terminology

A number of concepts and terms were introduced throughout this paper, and some clarification and elaboration of these follows:

  • Affordable Care Act (ACA): Legislation passed in 2010 that constitutes two separate laws including the Patient Protection and Affordable Care Act and the Health Care and Education Reconciliation Act. These two pieces of legislation act together for the expressed goal of expanding health care coverage to low-income Americans through expansion of Medicaid and other federal assistance programs [34].
  • Clinical Decision Support (CDS) is defined by CMS as “a key functionality of health information technology” that encompasses a variety of tools including computerized alerts and reminders, clinical guidelines, condition-specific order sets, documentation templates, diagnostic support, and other tools that “when used effectively, increases quality of care, enhances health outcomes, helps to avoid errors and adverse events, improves efficiency, reduces costs, and boosts provider and patient satisfaction” [35].
  • Cognitive Computing is defined as “the simulation of human thought processes in a computerized model…involving self-learning systems that use data mining, pattern recognition and natural language processing to mimic the way the human brain works” [36]. Defined by IBM as computer systems that “are trained using artificial intelligence and machine learning algorithms to sense, predict, infer and, in some ways, think” [37].
  • Deep learning is a form of machine learning (a more specific subgroup of cognitive computing) that utilizes multiple levels of data to make hierarchical connections and recognize more complex patterns to be able to infer higher level concepts from lower levels of input and previously inferred concepts [38]. Figure 3 demonstrates how this concept relates to patients illustrating the system recognizing patterns of signs and symptoms experienced by a patient, and then inferring a diagnosis (higher level concept) from those lower level inputs. The next level concept would be recognizing response to treatment for proposed diagnosis, and offering either alternative diagnoses, or change in therapy, with the system adapting as the patient’s course progresses.
  • Dynamic clinical data mining (DCDM): First, data mining is defined as the “process of discovering patterns, automatically or semi-automatically, in large quantities of data” [39]. DCDM describes the process of mining and interpreting the data from large patient databases that contain prior and concurrent patient information including diagnoses, treatments, and outcomes so as to make real-time treatment decisions [19].
  • Natural Language Processing (NLP) is a process based on machine learning, or deep learning, that enables computers to analyze and interpret unstructured human language input to recognize and even act upon meaningful patterns [39,40].

References

  1. Weed LL. Medical records, patient care, and medical education. Ir J Med Sci 1964 Jun;462:271-282. [Medline]
  2. Celi L, Csete M, Stone D. Optimal data systems: the future of clinical predictions and decision support. Curr Opin Crit Care 2014 Oct;20(5):573-580. [CrossRef] [Medline]
  3. Cook DA, Sorensen KJ, Hersh W, Berger RA, Wilkinson JM. Features of effective medical knowledge resources to support point of care learning: a focus group study. PLoS One 2013 Nov;8(11):e80318 [FREE Full text] [CrossRef] [Medline]
  4. Cook DA, Sorensen KJ, Wilkinson JM, Berger RA. Barriers and decisions when answering clinical questions at the point of care: a grounded theory study. JAMA Intern Med 2013 Nov 25;173(21):1962-1969. [CrossRef] [Medline]

more ….

The Electronic Health Record: How far we have travelled, and where is journey’s end?

http://pharmaceuticalintelligence.com/2012/09/21/the-electronic-health-record-how-far-we-have-travelled-and-where-is-journeys-end/

A focus of the Affordable Care Act is improved delivery of quality, efficiency and effectiveness to the patients who receive healthcare in the US from providers in a coordinated system.  The largest confounders in all of this are the existence of silos that are not readily crossed, handovers, communication lapses, and a heavy paperwork burden.  We can add to that a large for-profit insurance overhead that is disinterested in the patient-physician encounter.  Finally, the knowledge base of medicine has grown sufficiently that physicians are challenged by the amount of data and its presentation in the Medical Record.

I present a review of the problems that have become more urgent to fix in the last decade.  The administration and paperwork necessitated by health insurers, HMOs and other parties today may account for 40% of a physician’s practice, and the formation of large physician practice groups and alliances of hospitals and hospital-staffed physicians (as well as hospital system alliances) has increased in response to the need to decrease the cost of non-patient-care overhead.  I discuss some of the points made by two innovators from the healthcare and communications sectors.

I also call attention to the New York Times front-page article reporting a sharp rise in inflation-adjusted Medicare payments for emergency-room services since 2006 due to upcoding at the highest level, partly related to physicians’ ability to overstate the claim for the service provided, which is correctible by improvements I discuss below (NY Times, 9/22/2012).  The solution still has another built-in step that requires quality control of both the input and the output, achievable today.  This also comes at a time when there is a nationwide implementation of ICD-10 to replace ICD-9 coding.

US medical groups’ adoption of EHR (2005) (Photo credit: Wikipedia)

The first contribution, by Robert S. Didner on “Decision Making in the Clinical Setting,” concludes that the gathering of information has large costs while reimbursements for the activities provided have decreased, which is detrimental to the outcomes that are measured.  He suggests that these data can be gathered and reformatted to improve their value in the clinical setting by leading to decisions with optimal outcomes.  He outlines how this can be done.

The second is a discussion by Emergency Medicine physicians Thomas A. Naegele and Harry P. Wetzler, who have developed a Foresighted Practice Guideline (FPG) model (“The Foresighted Practice Guideline Model: A Win-Win Solution”).  They focus on collecting data from similar patients, their interventions, and treatments to better understand the value of alternative courses of treatment.  Using the FPG model will enable physicians to elevate their practice to a higher level, and they will have hard information on what works.  These two views are more than 10 years old, and they are complementary.

Didner points out that there is no one sequence of tests and questions that can be optimal for all presenting clusters.  Even as data and test results are acquired, the optimal sequence of information gathering changes, depending on the gathered information.  Thus, the dilemma is created of how to collect clinical data.  Currently, the way information is requested and presented does not support the way decisions are made.  Decisions are made in a “path-dependent” way, which is influenced by the sequence in which the components are considered.  Ideally, it would require a separate form for each combination of presenting history and symptoms, prior to ordering tests, which is unmanageable.  The blank paper format is no better, as the data is not collected in the way it would be used, and it constitutes separate clusters (vital signs, lab work [also divided into CBC, chemistry panel, microbiology, immunology, blood bank, special tests]).  Improvements have been made in the graphical presentation of a series of tests.  Didner presents another means of gathering data in machine-manipulable form that improves the expected outcomes.  The basis for this model is that at any stage of testing and information gathering there is an expected outcome from the process, coupled with a metric, or hierarchy of values, to determine the relative desirability of the possible outcomes.

He creates a value hierarchy:

  1. Minimize the likelihood that a treatable, life-threatening disorder is not treated.
  2. Minimize the likelihood that a treatable, permanently-disabling or disfiguring disorder is not treated.
  3. Minimize the likelihood that a treatable, discomfort causing disorder is not treated.
  4. Minimize the likelihood that a risky procedure, (treatment or diagnostic procedure) is inappropriately administered.
  5. Minimize the likelihood that a discomfort causing procedure is inappropriately administered.
  6. Minimize the likelihood that a costly procedure is inappropriately administered.
  7. Minimize the time of diagnosing and treating the patient.
  8. Minimize the cost of diagnosing and treating the patient.

In reference to a way of minimizing the number, time, and cost of tests, he determines that the optimum sequence could be found using Claude Shannon’s information theory.  As to a hierarchy of outcome values, he refers to the QALY scale as a starting point.  At any point where a determination is made there is disparate information that has to be brought together, such as weight, blood pressure, cholesterol, etc.  He points out, in addition, that the way clinical information is organized is not optimal for displaying information in a way that enhances human cognitive performance in decision support.  Furthermore, he cites the limit of short-term memory as 10 chunks of information at any time, and he compares the recall of chess piece positions by a grand master when the pieces are arranged in an order commensurate with a “line of attack”.  The information has to be ordered in the way it is to be used!  By presenting information used for a particular decision component in a compact space, the load on short-term memory is reduced, and there is less strain in searching for the relevant information.
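Shannon’s framework can be made concrete by scoring each candidate test by its expected reduction in diagnostic entropy and asking the highest-scoring question first. In the sketch below, the priors and test likelihoods are invented numbers used only to show the calculation; they are not clinical estimates.

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of a probability distribution over diagnoses."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_information_gain(prior, likelihoods):
    """Expected reduction in diagnostic uncertainty from one binary test.

    `prior` maps diagnoses to probabilities; `likelihoods` maps each diagnosis
    to P(test positive | diagnosis). All numbers below are illustrative only.
    """
    p_pos = sum(prior[d] * likelihoods[d] for d in prior)
    p_neg = 1 - p_pos
    post_pos = [prior[d] * likelihoods[d] / p_pos for d in prior]
    post_neg = [prior[d] * (1 - likelihoods[d]) / p_neg for d in prior]
    return entropy(prior.values()) - (p_pos * entropy(post_pos) + p_neg * entropy(post_neg))

prior = {"PE": 0.2, "pneumonia": 0.5, "CHF": 0.3}
d_dimer = {"PE": 0.95, "pneumonia": 0.4, "CHF": 0.4}
cxr_infiltrate = {"PE": 0.1, "pneumonia": 0.85, "CHF": 0.3}
for name, test in [("d-dimer", d_dimer), ("chest x-ray infiltrate", cxr_infiltrate)]:
    print(name, round(expected_information_gain(prior, test), 3))
# The test with the larger expected gain would be requested first.
```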

He creates a Table to illustrate the point.

Correlation of weight with other cardiac risk factors

Chol        0.759384
HDL        -0.53908
LDL         0.177297
bp-syst     0.424728
bp-dia      0.516167
Triglyc     0.637817

The task of the information system designer is to provide or request the right information, in the best form, at each stage of the procedure.

The FPG concept as deployed by Naegele and Wetzler is a model for the design of a more effective health record that has already shown substantial proof of concept in the emergency room setting.  In principle, every clinical encounter is viewed as a learning experience that requires the collection of data, learning from similar patients, and comparing the value of alternative courses of treatment.  The framework for standard data collection is the FPG model.  The FPG is distinguished from hindsighted guidelines, which are utilized by utilization and peer review organizations.  Over time, the data forms patient clusters and enables the physician to function at a higher level.

Hypothesis construction is experiential, and hypothesis generation and testing is required to go from art to science in the complex practice of medicine.  In every encounter there are 3 components: patient, process, and outcome.  The key to the process is to collect data on patients, processes and outcomes in a standard way.  The main problem with a large portion of the chart is that the description is not uniform.  This is not fully resolved with good natural language encoding.  The standard words and phrases that may be used for a particular complaint or condition constitute a guideline.  This type of “guided documentation” is a step in moving toward a guided practice.  It enables physicians to gather data on patients, processes and outcomes of care in routine settings, and these data can be reviewed and updated.  This is a higher level of methodology than basing guidelines on “consensus and opinion”.
When Lee Goldman, et al., created the guideline for classifying chest pain in the emergency room, the characteristics of the chest pain were problematic.  In dealing with this, he determined that if the chest pain was “stabbing”, or if it radiated to the right foot, heart attack was excluded.

The IOM is intensely committed to practice guidelines for care.  The guidelines are the data bases of the science of medical decision making and disposition processing, and are related to process-flow.  However, the hindsighted  or retrospective approach is diagnosis or procedure oriented.  HPGs are the tool used in utilization review.  The FPG model focuses on the physician-patient encounter and is problem oriented.   We can go back further and remember the contribution by Lawrence Weed to the “structured medical record”.
Physicians today use an FPG framework in looking at a problem or pathology (especially in pathology, which extends the classification by use of biomarker staining).  The Standard Patient File Format (SPPF) was developed by Weed and includes: 1. Patient demographics; 2. Front of the chart; 3. Subjective; 4. Objective; 5. Assessment/diagnosis; 6. Plan; 7. Back of the chart.  The FPG retains the structure of the SPPF.  All of the words and phrases in the FPG are the data base for the problem or condition.  The current construct of the chart is uninviting: nurses notes, medications, lab results, radiology, imaging.

Realtime Clinical Expert Support and Validation System
Gil David and Larry Bernstein have developed, in consultation with Prof. Ronald Coifman, in the Yale University Applied Mathematics Program, a software system that is the equivalent of an intelligent Electronic Health Records Dashboard that provides empirical medical reference and suggests quantitative diagnostics options.

The introduction of a DASHBOARD has allowed a presentation of drug reactions, allergies, primary and secondary diagnoses, and critical information about any patient to the caregiver needing access to the record. The advantage of this innovation is obvious. The startup problem is what information is presented and how it is displayed, which is a source of variability and a key to its success. It is also expected that the extraction of data from disparate sources will, in the long run, further improve the diagnostic process; for instance, the finding of ST depression on EKG coincident with an increase of a cardiac biomarker (troponin). Through the application of geometric clustering analysis the data may be interpreted in a more sophisticated fashion in order to create a more reliable and valid knowledge-based opinion.  In the hemogram one can view data reflecting the characteristics of a broad spectrum of medical conditions.  Characteristics are expressed as measurements of size, density, and concentration, resulting in more than a dozen composite variables, including the mean corpuscular volume (MCV), mean corpuscular hemoglobin concentration (MCHC), mean corpuscular hemoglobin (MCH), total white cell count (WBC), total lymphocyte count, neutrophil count (mature granulocyte count and bands), monocytes, eosinophils, basophils, platelet count, and mean platelet volume (MPV), as well as blasts, reticulocytes, platelet clumps, and other features of classification.  This has been described in a previous post.

It is beyond comprehension that a better construct has not been created for common use.

W Ruts, S De Deyne, E Ameel, W Vanpaemel, T Verbeemen, and G Storms. Dutch norm data for 13 semantic categories and 338 exemplars. Behavior Research Methods, Instruments, & Computers 2004; 36(3): 506–515.
De Deyne, S Verheyen, E Ameel, W Vanpaemel, MJ Dry, W Voorspoels, and G Storms. Exemplar by feature applicability matrices and other Dutch normative data for semantic concepts. Behavior Research Methods 2008; 40(4): 1030–1048.

Landauer, T. K., Ross, B. H., & Didner, R. S. (1979). Processing visually presented single words: A reaction time analysis [Technical memorandum]. Murray Hill, NJ: Bell Laboratories.
Lewandowsky, S. (1991).

Weed L. Automation of the problem oriented medical record. NCHSR Research Digest Series DHEW. 1977;(HRA)77-3177.

Naegele TA. Letter to the Editor. Amer J Crit Care 1993;2(5):433.

The potential contribution of informatics to healthcare is more than currently estimated

http://pharmaceuticalintelligence.com/2013/02/18/the-potential-contribution-of-informatics-to-healthcare-is-more-than-currently-estimated/

The improvement in cost savings and diagnostic accuracy achievable in healthcare is estimated to be substantial.  I have written about the unused potential that we have not yet seen.  In short, there is justification for substantial investment of resources in this, as has been proposed as a critical goal.  Does this mean a reduction in staffing?  I wouldn’t look at it that way.  The two huge benefits that would accrue are:

  1. workflow efficiency, reducing stress and facilitating decision-making.
  2. scientifically, primary knowledge-based decision support by well-developed algorithms that have been at the heart of computational genomics.
 Can computers save health care? IU research shows lower costs, better outcomes

Cost per unit of outcome was $189, versus $497 for treatment as usual

 Last modified: Monday, February 11, 2013
BLOOMINGTON, Ind. — New research from Indiana University has found that machine learning — the same computer science discipline that helped create voice recognition systems, self-driving cars and credit card fraud detection systems — can drastically improve both the cost and quality of health care in the United States.
 Physicians using an artificial intelligence framework that predicts future outcomes would have better patient outcomes while significantly lowering health care costs.
Using an artificial intelligence framework combining Markov Decision Processes and Dynamic Decision Networks, IU School of Informatics and Computing researchers Casey Bennett and Kris Hauser show how simulation modeling that understands and predicts the outcomes of treatment could
  • reduce health care costs by over 50 percent while also
  • improving patient outcomes by nearly 50 percent.
The work by Hauser, an assistant professor of computer science, and Ph.D. student Bennett improves upon their earlier work that
  • showed how machine learning could determine the best treatment at a single point in time for an individual patient.
By using a new framework that employs sequential decision-making, the previous single-decision research
  • can be expanded into models that simulate numerous alternative treatment paths out into the future;
  • maintain beliefs about patient health status over time even when measurements are unavailable or uncertain; and
  • continually plan/re-plan as new information becomes available.

In other words, it can “think like a doctor.”  (Perhaps better because of the limitation in the amount of information a bright, competent physician can handle without error!)

“The Markov Decision Processes and Dynamic Decision Networks enable the system to deliberate about the future, considering all the different possible sequences of actions and effects in advance, even in cases where we are unsure of the effects,” Bennett said.  Moreover, the approach is non-disease-specific — it could work for any diagnosis or disorder, simply by plugging in the relevant information.  (This actually raises the question of what the information input is, and the cost of inputting.)
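To make the sequential-decision idea concrete, a bare-bones version is value iteration over a small Markov Decision Process. The states, actions, transition probabilities, and rewards in the sketch below are invented for illustration and bear no relation to the IU researchers’ actual model.

```python
# Minimal value-iteration sketch for sequential treatment planning, loosely in
# the spirit of the MDP framework described above. All numbers are invented.
STATES = ["stable", "deteriorating", "recovered"]
ACTIONS = ["treat_A", "treat_B"]

# P[(state, action)] -> list of (next_state, probability); R -> immediate reward
P = {
    ("stable", "treat_A"): [("recovered", 0.6), ("stable", 0.3), ("deteriorating", 0.1)],
    ("stable", "treat_B"): [("recovered", 0.4), ("stable", 0.5), ("deteriorating", 0.1)],
    ("deteriorating", "treat_A"): [("stable", 0.5), ("deteriorating", 0.5)],
    ("deteriorating", "treat_B"): [("stable", 0.7), ("deteriorating", 0.3)],
    ("recovered", "treat_A"): [("recovered", 1.0)],
    ("recovered", "treat_B"): [("recovered", 1.0)],
}
R = {"stable": 0.0, "deteriorating": -1.0, "recovered": 1.0}

def value_iteration(gamma=0.9, iterations=100):
    """Compute state values and a greedy treatment policy by value iteration."""
    V = {s: 0.0 for s in STATES}
    for _ in range(iterations):
        V = {s: R[s] + gamma * max(
                 sum(p * V[s2] for s2, p in P[(s, a)]) for a in ACTIONS)
             for s in STATES}
    policy = {s: max(ACTIONS, key=lambda a: sum(p * V[s2] for s2, p in P[(s, a)]))
              for s in STATES}
    return V, policy

values, policy = value_iteration()
print(policy)  # e.g. {'stable': 'treat_A', 'deteriorating': 'treat_B', 'recovered': 'treat_A'}
```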
The new work addresses three vexing issues related to health care in the U.S.:
  1. rising costs expected to reach 30 percent of the gross domestic product by 2050;
  2. a quality of care where patients receive correct diagnosis and treatment less than half the time on a first visit;
  3. and a lag time of 13 to 17 years between research and practice in clinical care.

 

Framework for Simulating Clinical Decision-Making

“We’re using modern computational approaches to learn from clinical data and develop complex plans through the simulation of numerous, alternative sequential decision paths,” Bennett said. “The framework here easily out-performs the current treatment-as-usual, case-rate/fee-for-service models of health care.”  (see the above)
Bennett is also a data architect and research fellow with Centerstone Research Institute, the research arm of Centerstone, the nation’s largest not-for-profit provider of community-based behavioral health care. The two researchers had access to clinical data, demographics and other information on over 6,700 patients who had major clinical depression diagnoses, of which about 65 to 70 percent had co-occurring chronic physical disorders like diabetes, hypertension and cardiovascular disease.  Using 500 randomly selected patients from that group for simulations, the two
  • compared actual doctor performance and patient outcomes against
  • sequential decision-making models

using real patient data.

They found great disparity in the cost per unit of outcome change when the artificial intelligence model’s
  1. cost of $189 was compared to the treatment-as-usual cost of $497.
  2. the AI approach obtained a 30 to 35 percent increase in patient outcomes
Bennett said that “tweaking certain model parameters could enhance the outcome advantage to about 50 percent more improvement at about half the cost.”
While most medical decisions are based on case-by-case, experience-based approaches, there is a growing body of evidence that complex treatment decisions might be effectively improved by AI modeling.  Hauser said “Modeling lets us see more possibilities out to a further point –  because they just don’t have all of that information available to them.”  (Even then, the other issue is the processing of the information presented.)
Using the growing availability of electronic health records, health information exchanges, large public biomedical databases and machine learning algorithms, the researchers believe the approach could serve as the basis for personalized treatment through integration of diverse, large-scale data passed along to clinicians at the time of decision-making for each patient. Centerstone alone, Bennett noted, has access to health information on over 1 million patients each year. “Even with the development of new AI techniques that can approximate or even surpass human decision-making performance, we believe that the most effective long-term path could be combining artificial intelligence with human clinicians,” Bennett said. “Let humans do what they do well, and let machines do what they do well. In the end, we may maximize the potential of both.”
“Artificial Intelligence Framework for Simulating Clinical Decision-Making: A Markov Decision Process Approach” was published recently in Artificial Intelligence in Medicine. The research was funded by the Ayers Foundation, the Joe C. Davis Foundation and Indiana University.
For more information or to speak with Hauser or Bennett, please contact Steve Chaplin, IU Communications, at 812-856-1896 or stjchap@iu.edu.
IBM Watson Finally Graduates Medical School
It’s been more than a year since IBM’s Watson computer appeared on Jeopardy and defeated several of the game show’s top champions. Since then the supercomputer has been furiously “studying” the healthcare literature in the hope that it can beat a far more hideous enemy: the 400-plus biomolecular puzzles we collectively refer to as cancer.
Anomaly Based Interpretation of Clinical and Laboratory Syndromic Classes

Larry H Bernstein, MD, Gil David, PhD, Ronald R Coifman, PhD.  Program in Applied Mathematics, Yale University, Triplex Medical Science.

Statement of Inferential Second Opinion
Realtime Clinical Expert Support and Validation System

Gil David and Larry Bernstein have developed, in consultation with Prof. Ronald Coifman, in the Yale University Applied Mathematics Program, a software system that is the equivalent of an intelligent Electronic Health Records Dashboard that provides
  • empirical medical reference and suggests quantitative diagnostics options.

Background

The current design of the Electronic Medical Record (EMR) is a linear presentation of portions of the record by
  • services, by
  • diagnostic method, and by
  • date, to cite examples.

This allows perusal through a graphical user interface (GUI) that partitions the information or necessary reports in a workstation entered by keying to icons.  This requires that the medical practitioner finds

  • the history,
  • medications,
  • laboratory reports,
  • cardiac imaging and EKGs, and
  • radiology
in different workspaces.  The introduction of a DASHBOARD has allowed a presentation of
  • drug reactions,
  • allergies,
  • primary and secondary diagnoses, and
  • critical information about any patient, to the caregiver needing access to the record.
 The advantage of this innovation is obvious.  The startup problem is what information is presented and how it is displayed, which is a source of variability and a key to its success.

Proposal

We are proposing an innovation that supersedes the main design elements of a DASHBOARD and
  • utilizes the conjoined syndromic features of the disparate data elements.
So the important determinant of the success of this endeavor is that it facilitates both
  1. the workflow and
  2. the decision-making process
  • with a reduction of medical error.
 This has become extremely important and urgent in the 10 years since the publication “To Err is Human”, and the newly published finding that reduction of error is as elusive as reduction in cost.  Whether they are counterproductive when approached in the wrong way may be subject to debate.
We initially confine our approach to laboratory data because it is collected on all patients, ambulatory and acutely ill, because the data is objective and quality controlled, and because
  • laboratory combinatorial patterns emerge with the development and course of disease.  Continuing work is in progress in extending the capabilities with model data-sets, and sufficient data.
It is true that the extraction of data from disparate sources will, in the long run, further improve this process.  For instance, the finding of ST depression on EKG coincident with an increase of a cardiac biomarker (troponin) above a level determined by receiver operating characteristic (ROC) analysis, particularly in the absence of substantially reduced renal function.
The conversion of hematology based data into useful clinical information requires the establishment of problem-solving constructs based on the measured data.  Traditionally this has been accomplished by an intuitive interpretation of the data by the individual clinician.  Through the application of geometric clustering analysis the data may be interpreted in a more sophisticated fashion in order to create a more reliable and valid knowledge-based opinion.
The most commonly ordered test used for managing patients worldwide is the hemogram that often incorporates the review of a peripheral smear.  While the hemogram has undergone progressive modification of the measured features over time the subsequent expansion of the panel of tests has provided a window into the cellular changes in the production, release or suppression of the formed elements from the blood-forming organ to the circulation.  In the hemogram one can view data reflecting the characteristics of a broad spectrum of medical conditions.
Progressive modification of the measured features of the hemogram has delineated characteristics expressed as measurements of
  • size,
  • density, and
  • concentration,
resulting in more than a dozen composite variables, including the
  1. mean corpuscular volume (MCV),
  2. mean corpuscular hemoglobin concentration (MCHC),
  3. mean corpuscular hemoglobin (MCH),
  4. total white cell count (WBC),
  5. total lymphocyte count,
  6. neutrophil count (mature granulocyte count and bands),
  7. monocytes,
  8. eosinophils,
  9. basophils,
  10. platelet count, and
  11. mean platelet volume (MPV),
  12. blasts,
  13. reticulocytes and
  14. platelet clumps,
  15. perhaps the percent immature neutrophils (not bands)
  16. as well as other features of classification.
The use of such variables combined with additional clinical information including serum chemistry analysis (such as the Comprehensive Metabolic Profile (CMP)) in conjunction with the clinical history and examination complete the traditional problem-solving construct. The intuitive approach applied by the individual clinician is limited, however,
  1. by experience,
  2. memory and
  3. cognition.
The application of rules-based, automated problem solving may provide a more reliable and valid approach to the classification and interpretation of the data used to determine a knowledge-based clinical opinion.
The classification of the available hematologic data in order to formulate a predictive model may be accomplished through mathematical models that offer a more reliable and valid approach than the intuitive knowledge-based opinion of the individual clinician.  The exponential growth of knowledge since the mapping of the human genome has been enabled by parallel advances in applied mathematics that have not been a part of traditional clinical problem solving.  In a univariate universe the individual has significant control in visualizing data because atypical (unlike) data may be identified by methods that rely on distributional assumptions.  As the complexity of statistical models has increased, involving the use of several predictors for different clinical classifications, the dependencies have become less clear to the individual.  The powerful statistical tools now available are not dependent on distributional assumptions, and allow classification and prediction in a way that cannot be achieved by the individual clinician intuitively.  Contemporary statistical modeling has a primary goal of finding an underlying structure in studied data sets.
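As a toy illustration of such multivariate classification free of strong distributional assumptions, the sketch below standardizes a few hemogram features and assigns a new sample to the nearest class centroid. The training values, feature choices, and class labels are fabricated for illustration and are not derived from real patient data.

```python
from statistics import mean, stdev

# Toy hemogram feature vectors (hemoglobin g/dL, MCV fL, WBC x10^9/L) grouped
# by a presumed interpretation; values and groupings are illustrative only.
TRAINING = {
    "iron deficiency anemia": [(9.5, 70, 7.0), (10.1, 72, 6.5), (9.0, 68, 8.0)],
    "normal":                 [(14.0, 90, 6.0), (13.5, 88, 7.5), (15.0, 92, 5.5)],
    "acute infection":        [(13.0, 89, 18.0), (12.5, 87, 22.0), (13.8, 91, 16.0)],
}

ALL_ROWS = [row for rows in TRAINING.values() for row in rows]
# Per-feature (mean, stdev) used to standardize all samples onto one scale.
STATS = [(mean(col), stdev(col) or 1.0) for col in zip(*ALL_ROWS)]

def standardize(row):
    return [(x - m) / s for x, (m, s) in zip(row, STATS)]

# One centroid per class, computed in standardized feature space.
CENTROIDS = {label: [mean(col) for col in zip(*(standardize(r) for r in rows))]
             for label, rows in TRAINING.items()}

def classify(hemogram):
    """Assign the class whose centroid is nearest in standardized space."""
    z = standardize(hemogram)
    return min(CENTROIDS, key=lambda lab: sum((a - b) ** 2 for a, b in zip(z, CENTROIDS[lab])))

print(classify((9.8, 71, 7.2)))    # iron deficiency anemia
print(classify((13.2, 90, 19.0)))  # acute infection
```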
In the diagnosis of anemia the variables MCV, MCHC, and MCH classify the disease process into microcytic, normocytic, and macrocytic categories.  Further consideration of
  • proliferation of marrow precursors,
  • the domination of a cell line, and
  • features of suppression of hematopoiesis

provide a two dimensional model.  Several other possible dimensions are created by consideration of

  • the maturity of the circulating cells.
The development of an evidence-based inference engine that can substantially interpret the data at hand and convert it in real time to a “knowledge-based opinion” may improve clinical problem solving by incorporating multiple complex clinical features as well as duration of onset into the model.
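The microcytic/normocytic/macrocytic partition described above is trivially expressible in code, as in the sketch below. The cut-offs of 80 and 100 fL are commonly quoted reference points; local laboratory reference ranges may differ slightly.

```python
def anemia_category(mcv_fl):
    """Classify an anemia by mean corpuscular volume (MCV), in femtoliters.

    The 80 fL and 100 fL cut-offs are commonly quoted reference points;
    institutional reference ranges may vary.
    """
    if mcv_fl < 80:
        return "microcytic"
    if mcv_fl > 100:
        return "macrocytic"
    return "normocytic"

print(anemia_category(72))   # microcytic (eg, iron deficiency, thalassemia)
print(anemia_category(88))   # normocytic
print(anemia_category(110))  # macrocytic (eg, B12/folate deficiency)
```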
An example of a difficult area for clinical problem solving is found in the diagnosis of SIRS and associated sepsis.  SIRS (and associated sepsis) is a costly diagnosis in hospitalized patients.   Failure to diagnose sepsis in a timely manner creates a potential financial and safety hazard.  The early diagnosis of SIRS/sepsis is made by the application of defined criteria (temperature, heart rate, respiratory rate and WBC count) by the clinician.   The application of those clinical criteria, however, defines the condition after it has developed and has not provided a reliable method for the early diagnosis of SIRS.  The early diagnosis of SIRS may possibly be enhanced by the measurement of proteomic biomarkers, including transthyretin, C-reactive protein and procalcitonin.  Immature granulocyte (IG) measurement has been proposed as a more readily available indicator of the presence of
  • granulocyte precursors (left shift).
The use of such markers, obtained by automated systems in conjunction with innovative statistical modeling, may provide a mechanism to enhance workflow and decision making.
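The defined SIRS criteria mentioned above (temperature, heart rate, respiratory rate, and WBC count) can be counted mechanically, as in the sketch below. The thresholds follow the commonly cited definition, with two or more criteria conventionally taken as meeting SIRS; the example values are invented.

```python
def sirs_criteria_met(temp_c, heart_rate, resp_rate, wbc_k, band_pct=0.0):
    """Count classic SIRS criteria; two or more is conventionally taken as SIRS.

    Thresholds follow the commonly cited definition (temperature, heart rate,
    respiratory rate, WBC); institutional definitions may differ slightly.
    """
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,
        heart_rate > 90,
        resp_rate > 20,
        wbc_k > 12.0 or wbc_k < 4.0 or band_pct > 10.0,
    ]
    return sum(criteria)

count = sirs_criteria_met(temp_c=38.6, heart_rate=112, resp_rate=24, wbc_k=15.2)
print(count, "criteria met ->", "meets SIRS" if count >= 2 else "does not meet SIRS")
```

As the text notes, such a rule only recognizes the syndrome once it has developed; combining these counts with biomarker trends and immature granulocyte measurements is what might move detection earlier.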
An accurate classification based on the multiplicity of available data can be provided by an innovative system that utilizes  the conjoined syndromic features of disparate data elements.  Such a system has the potential to facilitate both the workflow and the decision-making process with an anticipated reduction of medical error.

This study is only an extension of our approach to repairing a longstanding problem in the construction of the many-sided electronic medical record (EMR).  On the one hand, past history combined with the development of Diagnosis Related Groups (DRGs) in the 1980s have driven the technology development in the direction of “billing capture”, which has been a focus of epidemiological studies in health services research using data mining.

In a classic study carried out at Bell Laboratories, Didner found that information technologies reflect the view of the creators, not the users, and Front-to-Back Design (R Didner) is needed.  He expresses the view:

“Pre-printed forms are much more amenable to computer-based storage and processing, and would improve the efficiency with which the insurance carriers process this information.  However, pre-printed forms can have a rather severe downside. By providing pre-printed forms that a physician completes
to record the diagnostic questions asked,
  • as well as tests, and results,
  • the sequence of tests and questions,
might be altered from that which a physician would ordinarily follow.  This sequence change could improve outcomes in rare cases, but it is more likely to worsen outcomes. “
Decision Making in the Clinical Setting.   Robert S. Didner
A well-documented problem in the medical profession is the level of effort dedicated to administration and paperwork necessitated by health insurers, HMOs and other parties (ref).  This effort is currently estimated at 50% of a typical physician’s practice activity.  Obviously this contributes to the high cost of medical care.  A key element in the cost/effort composition is the retranscription of clinical data after the point at which it is collected.  Costs would be reduced, and accuracy improved, if the clinical data could be captured directly at the point it is generated, in a form suitable for transmission to insurers, or machine-transformable into other formats.  Such data capture could also be used to improve the form and structure of how this information is viewed by physicians, and form the basis of a more comprehensive database linking clinical protocols to outcomes, which could improve knowledge of this relationship and hence clinical outcomes.
  How we frame our expectations is so important that
  • it determines the data we collect to examine the process.
In the absence of data to support an assumed benefit, there is no proof of validity at whatever cost.   This has meaning for
  • hospital operations, for
  • nonhospital laboratory operations, for
  • companies in the diagnostic business, and
  • for planning of health systems.
In 1983, a vision for creating the EMR was introduced by Lawrence Weed and others.  This is expressed by McGowan and Winstead-Fry.
J J McGowan and P Winstead-Fry. Problem Knowledge Couplers: reengineering evidence-based medicine through interdisciplinary development, decision support, and research.
Bull Med Libr Assoc. 1999 October; 87(4): 462–470.   PMCID: PMC226622    Copyright notice

Example of Markov Decision Process (MDP) transition automaton (Photo credit: Wikipedia)

Control loop of a Markov Decision Process (Photo credit: Wikipedia)

IBM’s Watson computer, Yorktown Heights, NY (Photo credit: Wikipedia)

Increasing decision stakes and systems uncertainties entail new problem solving strategies. Image based on a diagram by Funtowicz, S. and Ravetz, J. (1993) “Science for the post-normal age” Futures 25:735–55 (http://dx.doi.org/10.1016/0016-3287(93)90022-L). (Photo credit: Wikipedia)

