
Archive for the ‘Biomarkers & Medical Diagnostics’ Category

What about Theranos?

Curator: Larry H. Bernstein, MD, FCAP

Is Theranos Situation False Crowdfunding Claims at Scale or ‘Outsider’ Naivety?

http://www.mdtmag.com/blog/2015/11/theranos-situation-false-crowdfunding-claims-scale-or-outsider-naivety

If you’ve been following the Theranos situation that involves several damning articles from the Wall Street Journal on the company (see sidebar below video), you know that “something is rotten in the state of Denmark.” That is to say, regardless of whether or not you believe the WSJ articles 100%, believe Theranos 100%, or land somewhere in between, it’s hard not to see that something at the company is definitely creating questions about their original claims. In fact, the company has apparently even tempered some language with regard to its capabilities while “debating” the accuracy of the WSJ articles. It’s really a big mess for a company that was supposedly making significant changes in the way we’d conduct blood testing and the way patients controlled and accessed their own health data (although, I think the idea behind that specific aspect is a very good one).

Due to FDA inspections and findings of concern with Theranos’ practices, the company is currently collecting blood for only one test using its revolutionary proprietary technology. While the company’s CEO Elizabeth Holmes continues to assure the public that the problems are tied to FDA-related procedures and not to the technology itself, stakeholders such as Walgreens have put any further interactions with the company on hold.

In the following video from Fortune’s Global Forum, you can see Ms. Holmes discussing the situation over the FDA inspections and the changes that are currently in place with regard to the testing that’s happening at the company.

https://youtu.be/A8qgmGtRMsY

So what’s the story behind this story? Is this a deliberate attempt to deceive on the part of Theranos or is it an example of what can happen when an “outsider” gets involved in the highly regulated medical device industry and faces off with the FDA without the proper experience in place to address potential areas of concern?

In a recent blog, I looked at the crowdfunding of medical devices and what can happen when claims made don’t live up to the reality of the product that’s actually developed. Once-enthusiastic investors can quickly (and loudly) turn on a company or project, venting their frustration even directly on the crowdfunding page for all to see. Unfortunately, with the way these sites seem to be set up, the money is still provided to the company that produces a product, albeit one that does not live up to the initial concept.

Is that what Theranos ultimately is? Were the technology claims taken at face value by significant investment backers? It would seem very unlikely, but given some of the accusations of former Theranos employees in the WSJ articles, it wouldn’t be the only instance of Theranos trying to manipulate testing protocols for the sake of appearing more impressive. Theranos counters those claims by saying the former employees were actually unfamiliar with the actual testing the company performs. Whether or not you believe that is entirely up to you.

Another alternative to blatant deceit on the part of Theranos is the possibility that the company was simply playing in an industry it wasn’t truly experienced enough to handle. In other words, how many FDA-savvy employees work for Theranos? Did they seek consultants to help with the regulatory processes? Or were they simply naïve to the ways of the regulated industry they were entering?

Again, this scenario too seems unlikely, but it also brings in the debate over lab-developed tests (LDTs) and the FDA’s regulation of them. If Theranos testing protocols fall under the realm of LDTs, then they aren’t necessarily under the oversight of the FDA. Sure, the blood collection device is (and that’s why changes are currently occurring at the company), but does the FDA have the authority to inspect the company’s tests if they are LDTs?

Ultimately, I think everyone (with the exception of competitors to Theranos perhaps) wants the company to be successful. The ideas and hope embedded within the original claims the company made will only enhance the quality of care that we are able to achieve within our healthcare system. Further, empowering patients to make decisions and get involved with their own healthcare management would likely improve their overall health.

Unfortunately, before any of that will be possible, Theranos is going to have an uphill battle in defending itself, its technology, and its CEO in this very public debate over the realistic capabilities it can provide. Hopefully, the company learns from this experience; if the technology truly functions the way it has claimed, it will bring on the necessary regulatory experts and better navigate the troubled waters in which it currently finds itself.

Single Blood Drop Diagnostics Key to Resolving Healthcare Challenges

At TEDMED 2014, President and CEO of Theranos, Elizabeth Holmes, talked about the importance of enabling early detection of disease through new diagnostic tools and empowering individuals to make educated decisions about their healthcare.


Cancer Companion Diagnostics

Curator: Larry H. Bernstein, MD, FCAP

 

Companion Diagnostics for Cancer: Will NGS Play a Role?

Patricia Fitzpatrick Dimond, Ph.D.

http://www.genengnews.com/insight-and-intelligence/companion-diagnostics-for-cancer/77900554/

Companion diagnostics (CDx), in vitro diagnostic devices or imaging tools that provide information essential to the safe and effective use of a corresponding therapeutic product, have become indispensable tools for oncologists. As a result, analysts expect the global CDx market to reach $8.73 billion by 2019, up from $3.14 billion in 2014.
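A quick back-of-envelope check on those figures (a minimal sketch in Python; the dollar amounts are the analyst estimates quoted above):

```python
# Implied compound annual growth rate (CAGR) for the market figures above:
# $3.14B in 2014 growing to $8.73B in 2019, i.e., over 5 years.
start, end, years = 3.14, 8.73, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # -> roughly 22.7% per year
```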

Use of CDx during a clinical trial to guide therapy can improve treatment responses and patient outcomes by identifying and predicting patient subpopulations most likely to respond to a given treatment.

These tests not only indicate the presence of a molecular target, but can also reveal the off-target effects of a therapeutic, predicting toxicities and adverse effects associated with a drug.

For pharma manufacturers, using CDx during drug development improves the success rate of drugs being tested in clinical trials. In a study estimating the risk of clinical trial failure in non-small cell lung cancer drug development between 1998 and 2012, investigators analyzed data from 676 clinical trials involving 199 unique drug compounds.

The data showed that Phase III trial failure proved the biggest obstacle to drug approval, with an overall success rate of only 28%. But in biomarker-guided trials, the success rate reached 62%. The investigators concluded from their data analysis that the use of a CDx assay during Phase III drug development substantially improves a drug’s chances of clinical success.

The Regulatory Perspective

According to Patricia Keegan, M.D., supervisory medical officer in the FDA’s Division of Oncology Products II, the agency requires a companion diagnostic test if a new drug works on a specific genetic or biological target that is present in some, but not all, patients with a certain cancer or disease. The test identifies individuals who would benefit from the treatment, and may identify patients who would not benefit or could even be harmed by use of a certain drug to treat their disease. The agency classifies companion diagnostics as Class III devices, the device class subject to the FDA’s most stringent approval requirement, the Premarket Approval Application (PMA).

On August 6, 2014, the FDA finalized its long-awaited “Guidance for Industry and FDA Staff: In Vitro Companion Diagnostic Devices,” originally issued in July 2011. The final guidance stipulates that FDA generally will not approve any therapeutic product that requires an IVD companion diagnostic device for its safe and effective use before the IVD companion diagnostic device is approved or cleared for that indication.

Close collaboration between drug developers and diagnostics companies has been a key driver in recent simultaneous pharmaceutical-CDx FDA approvals, and partnerships between pharma and in vitro diagnostics (IVD) companies have proliferated as a result. Major test developers include Roche Diagnostics, Abbott Laboratories, Agilent Technologies, QIAGEN, Thermo Fisher Scientific, and Myriad Genetics.

But an NGS-based test has yet to make it to market as a CDx for cancer. All approved tests rely on PCR, immunohistochemistry, or in situ hybridization technology. And despite the very recent decision by the FDA to grant marketing authorization for Illumina’s MiSeqDx instrument platform for screening and diagnosis of cystic fibrosis, “There still seems to be a number of challenges that must be overcome before we see NGS for targeted cancer drugs,” said Jan Trøst Jørgensen, a consultant to DAKO, commenting on presentations at the European Symposium of Biopathology in June 2013.

Illumina received premarket clearance from the FDA for its MiSeqDx system, two cystic fibrosis assays, and a library prep kit that enables laboratories to develop their own diagnostic test. The designation marked the first time a next-generation sequencing system received FDA premarket clearance. The FDA reviewed the Illumina MiSeqDx instrument platform through its de novo classification process, a regulatory pathway for some novel low-to-moderate risk medical devices that are not substantially equivalent to an already legally marketed device.

Dr. Jørgensen further noted that “We are slowly moving away from the ‘one biomarker: one drug’ scenario, which has characterized the first decades of targeted cancer drug development, toward a more integrated approach with multiple biomarkers and drugs. This ‘new paradigm’ will likely pave the way for the introduction of multiplexing strategies in the clinic using gene expression arrays and next-generation sequencing.”

The future of CDxs therefore may be heading in the same direction as cancer therapy, aimed at staying ahead of the tumor drug resistance curve, and acknowledging the reality of the shifting genomic landscape of individual tumors. In some cases, NGS will be applied to diseases for which a non-sequencing CDx has already been approved.

Illumina believes that NGS presents an ideal solution for transforming the tumor profiling paradigm from a series of single-gene tests to a multi-analyte approach to delivering precision oncology. Mya Thomae, Illumina’s vice president, regulatory affairs, said in a statement that Illumina has formed partnerships with several drug companies to develop a universal next-generation sequencing-based oncology test system. The collaborations with AstraZeneca, Janssen, Sanofi, and Merck-Serono, announced in 2014 and 2015, seek to “redefine companion diagnostics for oncology,” focusing on developing a system for use in targeted therapy clinical trials with the goal of developing and commercializing a multigene panel for therapeutic selection.

On January 16, 2014, Illumina and Amgen announced that they would collaborate on the development of a next-generation sequencing-based companion diagnostic for the colorectal cancer antibody Vectibix (panitumumab). Illumina will develop the companion test on its MiSeqDx instrument.

In 2012, the agency approved Qiagen’s Therascreen KRAS RGQ PCR Kit to identify best responders to Erbitux (cetuximab), another antibody drug in the same class as Vectibix. The label for Vectibix, an EGFR-inhibiting monoclonal antibody, restricts use of the drug in metastatic colorectal cancer patients who harbor KRAS mutations or whose KRAS status is unknown.

The U.S. FDA, Illumina said, hasn’t yet approved a companion diagnostic that gauges KRAS mutation status specifically in those considering treatment with Vectibix.  Illumina plans to gain regulatory approval in the U.S. and in Europe for an NGS-based companion test that can identify patients’ RAS mutation status. Illumina and Amgen will validate the test platform and Illumina will commercialize the test.

Treatment Options

Foundation Medicine says its approach to cancer genomic characterization will help physicians reveal the alterations driving the growth of a patient’s cancer and identify targeted treatment options that may not have been otherwise considered.

FoundationOne, the first clinical product from Foundation Medicine, interrogates the entire coding sequence of 315 cancer-related genes plus select introns from 28 genes often rearranged or altered in solid tumor cancers.  Based on current scientific and clinical literature, these genes are known to be somatically altered in solid cancers.

These genes, the company says, are sequenced at great depth to identify the relevant, actionable somatic alterations, including single-base-pair changes, insertions, deletions, copy number alterations, and selected fusions. The resulting fully informative genomic profile complements traditional cancer treatment decision tools and often expands treatment options by matching each patient with targeted therapies and clinical trials relevant to the molecular changes in their tumors.
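To make that triage step concrete, here is a minimal, hypothetical sketch of filtering a profiled tumor’s alterations down to the actionable ones; the field names and variants are invented for illustration and are not Foundation Medicine’s actual schema:

```python
# Hypothetical sketch of post-sequencing triage: keep only alterations with
# a matched therapy or trial. The schema and variants are invented for
# illustration; this is not Foundation Medicine's actual data model.
variants = [
    {"gene": "EGFR",  "kind": "snv",    "change": "L858R",         "actionable": True},
    {"gene": "ERBB2", "kind": "cna",    "change": "amplification", "actionable": True},
    {"gene": "TP53",  "kind": "indel",  "change": "frameshift",    "actionable": False},
    {"gene": "ALK",   "kind": "fusion", "change": "EML4-ALK",      "actionable": True},
]

for v in variants:
    if v["actionable"]:  # report only alterations that can guide therapy
        print(f"{v['gene']}: {v['kind']} ({v['change']})")
```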

As Foundation Medicine’s NGS analyses are increasingly applied, recent clinical reports describe instances in which comprehensive genomic profiling with the FoundationOne NGS-based assay results in diagnostic reclassification that can lead to targeted drug therapy with a dramatic clinical response. In several reported instances, NGS found, among the spectrum of aberrations that occur in tumors, changes unlikely to have been discovered by other means and clearly outside the range of a conventional CDx that matches one drug to a specific genetic change.

TRK Fusion Cancer

In July 2015, the University of Colorado Cancer Center and Loxo Oncology published a research brief in the online edition of Cancer Discovery describing the first patient with a tropomyosin receptor kinase (TRK) fusion cancer enrolled in a LOXO-101 Phase I trial. LOXO-101 is an orally administered inhibitor of TRK kinase that is highly selective for the TRK family of receptors.

While the authors say TRK fusions occur rarely, they arise in a diverse spectrum of tumor histologies. The research brief described a patient with advanced soft tissue sarcoma widely metastatic to the lungs. The patient’s physician submitted a tumor specimen to Foundation Medicine for comprehensive genomic profiling with FoundationOne Heme, which demonstrated that her cancer harbored a TRK gene fusion.

Following multiple unsuccessful courses of treatment, the patient was enrolled in the Phase I trial of LOXO-101 in March 2015. After four months of treatment, CT scans demonstrated almost complete disappearance of the largest tumors.

The FDA’s Elizabeth Mansfield, Ph.D., director, personalized medicine staff, Office of In Vitro Diagnostics and Radiological Health, said in a recent article, “FDA Perspective on Companion Diagnostics: An Evolving Paradigm,” that “even as it seems that many questions about co-development have been resolved, the rapid accumulation of new knowledge about tumor biology and the rapid evolution of diagnostic technology are challenging FDA to continually redefine its thinking on companion diagnostics. It seems almost inevitable that a consolidation of diagnostic testing should take place, to enable a single test or a few tests to garner all the necessary information for therapeutic decision making.”

Whether this means CDx testing will begin to incorporate NGS sequencing remains to be seen.


Sequence the Human Genome, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

Sequence the Human Genome

Curator: Larry H Bernstein, MD, FCAP

Geneticist Craig Venter helped sequence the human genome. Now he wants yours.

By CARL ZIMMER   NOVEMBER 5, 2015   http://www.statnews.com/2015/11/05/geneticist-craig-venter-helped-sequence-the-human-genome-now-he-wants-yours/

If you enter Health Nucleus, a new facility in San Diego cofounded by J. Craig Venter, one of the world’s best-known living scientists, you will get a telling glimpse into the state of medical science in 2015.

Your entire genome will be sequenced with extraordinary resolution and accuracy. Your body will be scanned in fine, three-dimensional detail. Thousands of compounds in your blood will be measured. Even the microbes that live inside you will be surveyed. You will get a custom-made iPad app to navigate data about yourself. Also, your wallet will be at least $25,000 lighter.

Venter, who came to the world’s attention in the 1990s when he led a campaign to produce the first draft of a human genome, launched Health Nucleus last month as part of his new company, Human Longevity. He has made clear that his aim is just as lofty as it was when he and his team sequenced the human genome or built a flu vaccine from a genetic sequence delivered to them over the Internet.

“We’re trying to show the value of actual scientific data that can change people’s lives,” Venter told STAT in some of his most extensive remarks yet about the project. “Our goal is to interpret everything in the genome that we can.”

Still, the initiative is drawing deep suspicion among some doctors who question whether Venter’s existing tests can tell patients anything meaningful at all. In interviews, they said they see Health Nucleus as the latest venture that could lead consumers to believe that more testing means improved health. That notion, they say, could drive customers to get procedures they don’t need, which might even be harmful.

“I think there is absolutely no evidence that any of those tests have any benefit for healthy people,” Dr. Rita Redberg, a cardiologist at the University of California, San Francisco, and the editor-in-chief of JAMA Internal Medicine, said when asked about Venter’s new project.

Venter has a black belt in media savvy — he can make the details of molecular biology alluring for viewers of 60 Minutes and TED talks alike — but off screen he has earned a reputation even from his critics for serious scientific achievements. His non-profit J. Craig Venter Institute, which he founded in 1992, now has a staff of 300. Scientists at the institute have explored everything from the ocean’s biodiversity to the Ebola virus.

Last year, at age 67, Venter cofounded Human Longevity, a company based in San Diego with branches in Mountain View, Calif., and Singapore that is building the largest human genome-sequencing operation on Earth, equipped with massive computing resources to analyze the data being generated. The firm’s database now contains highly accurate genome sequences from 20,000 people; another 3,000 genomes are being added each month.

Franz Och, the former head of Google Translate and an expert on machine learning, is leading a team that’s teaching computers to recognize patterns in the company’s databases that scientists themselves may not be able to see. To demonstrate the power of this approach, Human Longevity researchers are using machine learning to discover how genetic variations shape the human face.

“We can determine a good resemblance of your photograph straight from your genetic code,” said Venter.

Venter and his colleagues will be publishing the results of that study soon — most likely generating another round of headlines. But headlines don’t pay the bills, and at a company that’s got $70 million in funding from private investors, bills matter. The company is now exploring a number of avenues for generating income from its database. It has partnered with Discovery, an insurance company in England and South Africa, to read the DNA of their clients. For $250 apiece, it will sequence the protein-coding regions of the genome, known as exomes, and offer an interpretation of the data.

Health Nucleus could become yet another source of income for Human Longevity. The San Diego facility can handle eight to 12 people a day. There are plans to open more sites both in the United States and abroad. “You can do the math,” Venter said.
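Taking “do the math” literally, a minimal sketch of the implied gross revenue for a single site, using only the figures quoted above (the roughly 250 operating days per year is an assumption):

```python
# Health Nucleus throughput times the quoted price gives a rough
# gross-revenue ceiling for one site. The ~250 operating days per year
# is an assumption; price and daily capacity are quoted above.
price = 25_000               # dollars per client
for clients_per_day in (8, 12):
    daily = clients_per_day * price
    print(f"{clients_per_day}/day -> ${daily:,}/day, ~${daily * 250:,}/year")
# 8/day  -> $200,000/day, ~$50,000,000/year
# 12/day -> $300,000/day, ~$75,000,000/year
```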


Complex Cancer Genetics Testing, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

Complex Cancer Genetics Testing

Curator: Larry H. Bernstein, MD, FCAP

CellNetix Selects GeneInsight to Support Complex Cancer Testing

Software will support next-generation sequencing panels to identify more targeted treatments for tumor types

http://www.laboratorynetwork.com/doc/cellnetix-selects-geneinsight-to-support-complex-cancer-testing-0001

 

Sunquest Information Systems Inc. and Partners HealthCare today announced that CellNetix Pathology and Laboratories has selected GeneInsight to streamline and enhance its somatic testing program.

Designed to efficiently manage large volumes of genetic data, GeneInsight will enable CellNetix’s diagnostics team to more quickly identify targeted treatments for a patient’s specific tumor type. The scalable system infrastructure and web-based deployment will also allow CellNetix to build its own genetic knowledgebase for standardized interpretations across its network of laboratories.

“CellNetix has designed a custom assay, Symgene™, a 68-gene panel to effectively target tumor therapies. Our upcoming, expanded 200-gene panel will provide an even greater level of granularity, and we needed a solution that could scale to support the increased data set,” said Anna Berry, M.D., medical director of molecular diagnostics at CellNetix and scientific director, personalized medicine program, Swedish Cancer Institute.

 

GeneInsight To Improve Genetic Testing For Cancer

CAMBRIDGE, MASS.–(BUSINESS WIRE)–

GeneInsight Inc. today announced a relationship with Brigham and Women’s Hospital (BWH) to integrate high-quality cancer content with GeneInsight functionality to make complex genetic test interpretation easier and more accessible to laboratories everywhere.

Over the last several years, BWH and Dana-Farber Cancer Institute have developed a vast cancer mutation knowledgebase through rigorous research initiatives and clinical applications. Working collaboratively, BWH and Dana-Farber have carefully curated cancer variants and the associated classifications from thousands of clinical cases. Through this new relationship with GeneInsight, somatic testing labs can access this content within their own case interpretation workflow, making precision medicine and complex genetic testing more attainable for labs with a limited pool of geneticists and pathologists. At the same time, it will improve delivery of care for patients and decrease the cost of care through more efficient interpretation of critical cancer tests.

“The challenges faced by genetic testing laboratories are getting more complex,” said Neal Lindeman, M.D., Director of Molecular Diagnostics, BWH, and Associate Professor of Pathology, Harvard Medical School. “Labs need access to sophisticated IT systems and up-to-date, accurate information regarding the clinical significance of genetic variants identified in patients through testing. Further, they need to understand how test results will impact treatment decisions, which are increasingly informed by the genetic profiles of individual tumors. Moreover, genetic testing for inherited diseases and genetic testing for cancers are profoundly different; each has a unique set of IT needs and requires access to a different base of knowledge to support test interpretation and reporting.”

Limited access to highly annotated and transparently sourced content has hindered laboratories in their efforts to leverage next-generation sequencing. Clinical laboratories require access to high-quality content, preferably clinically validated and evidence-based information that is not widely available today. The data available through various public and commercial databases is often siloed, with few parameters to ensure quality. Additionally, a 2013 report published by the American College of Medical Genetics and Genomics warns that few, if any, of these databases are curated to a level necessary for clinical use.

“It is our belief that IT solutions must empower laboratory and clinical professionals to network, share and continuously improve knowledge about the genetic variants identified in patients,” said Sandy Aronson, Executive Director of IT for Partners HealthCare and an officer of GeneInsight. “To this end, GeneInsight is committed to addressing the content gap through participation in national data sharing efforts like ClinVar, by establishing its own data sharing network called VariantWire, and through meaningful relationships with leading academic institutions like BWH.”

BWH and GeneInsight have an ongoing relationship to further define enhancements to the software to meet the needs of the continuously evolving cancer field. Working closely with Sunquest Information Systems, the laboratory information systems provider for BWH and the larger Partners HealthCare system, including Massachusetts General Hospital, GeneInsight seeks to provide seamless genetic testing workflow capabilities to clinical geneticists and pathologists focused on oncology.

GeneInsight is additionally supported through a strategic alliance formed in 2014 between Partners HealthCare and Sunquest to accelerate genomic-based medicine.

View source version on businesswire.com:http://www.businesswire.com/news/home/20151029006037/en/

 

“Pathology labs of all sizes are starting cancer genetic testing programs across the country. This collaboration with CellNetix will, in turn, help many such labs scale and grow their own, in-house NGS testing programs,” said Matthew Hawkins, president of Sunquest Information Systems.

CellNetix also sought a solution that could integrate with their various information systems. As a user of Sunquest’s anatomic pathology (AP) information system since 2007, CellNetix will now have integrated AP and genetic data for more efficient and complete clinical reports. Furthermore, GeneInsight will integrate genetic cancer content from CellNetix’s third-party curated content providers.

“Ultimately, we chose GeneInsight because of the strength of the product and its promise to meet some of our current challenges,” said Pat Cooke, chief information officer at CellNetix. “We also chose GeneInsight because of our relationship with Sunquest. We are impressed by the Sunquest leadership team and direction of the company. We believe that Sunquest will be able to deliver innovative solutions to accommodate our future needs as we continue to evolve our diagnostic services.”

GeneInsight is a genetics information system developed at the Partners HealthCare Laboratory for Molecular Medicine. In clinical use since 2005, its ongoing development is supported through a strategic alliance between Partners HealthCare and Sunquest.

 

About Sunquest Information Systems

Sunquest Information Systems Inc. provides diagnostic and laboratory information systems to more than 1,700 laboratories. For the past 30 years, Sunquest has delivered solutions that optimize financial results, enhance efficiency and improve the quality of patient care. The company’s pathology-focused mission, outreach awareness and point of care solutions establish Sunquest as a leader in the healthcare technology industry. Headquartered in Tucson, AZ, Sunquest also has offices in the United Kingdom and India.

About GeneInsight

GeneInsight, Inc. is an IT platform company focused on streamlining the analysis, interpretation, and reporting of complex genetic results. GeneInsight Suite® was developed by Partners HealthCare in collaboration with leading geneticists, laboratory operations personnel, practicing physicians and IT professionals. GeneInsight has been in clinical use since 2005 and has supported the interpretation and reporting workflow for more than 40,000 clinical genetic tests across multiple diagnostic reference laboratories, including the Partners HealthCare Laboratory for Molecular Medicine. More information can be obtained at http://www.geneinsight.com.

About Partners HealthCare

Partners HealthCare is an integrated health system founded by Brigham and Women’s Hospital and Massachusetts General Hospital. In addition to its two academic medical centers, the Partners system includes community and specialty hospitals, a managed care organization, community health centers, a physician network, home health and long-term care services, and other health-related entities. Partners HealthCare is one of the nation’s leading biomedical research organizations and a principal teaching affiliate of Harvard Medical School. Partners HealthCare is a non-profit organization.

About CellNetix

CellNetix Pathology & Laboratories, LLC is a dynamic, rapidly growing private pathology company headquartered in Seattle, Washington, serving hospitals and clinics throughout Washington, Oregon, Idaho and Alaska. CellNetix employs over 50 pathologists and provides comprehensive subspecialized anatomic pathology services at its central state-of-the-art anatomic pathology laboratory in Seattle.

SOURCE: PRWeb

View original release here: http://www.prweb.com/releases/2015/11/prweb13058938.htm


Developments in Medical Spectroscopy

Larry H. Bernstein, MD, FCAP, Curator

LPBI

Using QCLs for MIR-Based Spectral Imaging — Applications in Tissue Pathology
A quantum cascade laser (QCL) microscope allows for fast data acquisition, real-time chemical imaging and the ability to collect only spectral frequencies of interest. Due to their high-quality, highly tunable illumination characteristics and excellent signal-to-noise performance, QCLs are paving the way for the next generation of mid-infrared (MIR) imaging methodologies.


http://www.photonics.com/images/Web/Articles/2015/9/8/Imaging_Prostate.png

Efficient Spectroscopic Imaging Demonstrated In Vivo
Although optical spectroscopy is routinely used to study molecules in cell samples, it is currently not practical to perform in vivo. Now, a converted Raman spectroscopy system has been used to reveal the chemical composition of living tissues in seconds.


http://www.photonics.com/images2/EmailBlasts%5CSpectroscopy/2015/11/Efficient_Spectroscopic_Imaging_Demonstrated_In_Vivo.jpg

Broadband Laser Aimed at Cancer Detection
Covering a wide swath of the mid-infrared region, a new laser system offers greater spectral sensitivity.


http://www.photonics.com/images/Web/Articles/2015/9/25/REAS_molecular_1.jpg

Using QCLs for MIR-Based Spectral Imaging — Applications in Tissue Pathology

A quantum cascade laser (QCL) microscope allows for fast data acquisition, real-time chemical imaging and the ability to collect only spectral frequencies of interest. Due to their high-quality, highly tunable illumination characteristics and excellent signal-to-noise performance, QCLs are paving the way for the next generation of mid-infrared (MIR) imaging methodologies.

MICHAEL WALSH, UNIVERSITY OF ILLINOIS AT CHICAGO; MATTHEW BARRE & BENJAMIN BIRD, DAYLIGHT SOLUTIONS

H. Sreedhar*1, V. Varma*2, A. Graham3, Z. Richards1, F. Gambacorata4, A. Bhatt1,
P. Nguyen1, K. Meinke1, L. Nonn1, G. Guzman1, E. Fotheringham5, M. Weida5,
D. Arnone5, B. Mohar5, J. Rowlette5
1 Department of Bioengineering, University of Illinois at Chicago
2 Department of Pathology, University of Illinois at Chicago
3 Department of Bioengineering, University of Illinois at Urbana-Champaign
4 Department of Chemical Engineering, University of Illinois at Chicago
5 Daylight Solutions, San Diego
*Contributed Equally

Real-time, MIR chemical imaging microscopes could soon become powerful frontline screening tools for practicing pathologists. The ability to see differences in the biochemical makeup across a tissue sample greatly enhances a practitioner’s ability to detect early stages of disease or disease variants. Today, this is accomplished much as it was 100 years ago — through the use of specially formulated stains and dyes in combination with white light microscopy. A new MIR, QCL-based microscope from Daylight Solutions enables real-time, nondestructive biochemical imaging of tissues without the need to perturb the sample with chemical or heat treatments, thus preserving the sample for follow-on fluorescence tagging, histochemical staining or other “omics” testing within the workflow.
MIR chemical imaging is a well-established absorbance spectroscopy technique; it senses the relative amount of light that molecules absorb due to their unique vibrational resonances falling within the MIR portion of the electromagnetic spectrum (i.e., wavelengths from approximately 2 to 15 µm). This absorption can be detected with a variety of MIR detector types and can provide detailed information about the sample’s chemical composition.
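For readers unfamiliar with absorbance spectroscopy, the quantity being imaged is the standard Beer-Lambert absorbance, computed from transmitted and reference intensities. A minimal sketch with synthetic data (the band position and width below are illustrative):

```python
import numpy as np

# Beer-Lambert absorbance from transmitted (I) and reference (I0) intensity,
# evaluated across the MIR fingerprint region. The band below is synthetic,
# placed at the Amide I position purely for illustration.
wavenumbers = np.linspace(900, 1800, 512)   # cm^-1
I0 = np.ones_like(wavenumbers)              # reference scan, no sample
I = I0 * (1 - 0.6 * np.exp(-((wavenumbers - 1652) / 20) ** 2))  # with sample

absorbance = -np.log10(I / I0)
peak = wavenumbers[absorbance.argmax()]
print(f"Peak absorbance {absorbance.max():.2f} at {peak:.0f} cm^-1")
```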

The most common instrument for this type of measurement is known as a Fourier transform infrared (FTIR) spectrometer. FTIR systems use a broadband MIR light source, known as a globar, to illuminate a sample; the absorption spectrum is generated by the use of interferometry. Throughout the past decade, FTIR systems have incorporated linear arrays and 2D focal plane arrays (FPAs) in a microscope configuration to enable a technique known as chemical imaging.

With this approach, the illumination beam is expanded across a sample area, and the data produced is transformed into a hyperspectral data cube — a 2D image of the sample with an absorption profile associated with every pixel. This is a very versatile technique that allows the detailed spatial distribution of chemical content to be analyzed across a sample. Recently, this technique has proved to be very useful within the biomedical imaging sector for label-free, biochemical analyses of cells, tissue and biofluids.
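A minimal sketch of what such a hyperspectral cube looks like programmatically; the dimensions and data here are placeholders, not instrument output:

```python
import numpy as np

# A hyperspectral cube: a 2D image with an absorption spectrum per pixel.
# Dimensions mirror a 480 x 480 detector; the data here is random filler.
ny, nx, nbands = 480, 480, 200
wavenumbers = np.linspace(900, 1800, nbands)        # cm^-1
cube = np.random.rand(ny, nx, nbands)               # stand-in for measurements

band = np.abs(wavenumbers - 1652).argmin()          # index nearest Amide I
image_1652 = cube[:, :, band]                       # chemical image at one band
pixel_spectrum = cube[240, 240, :]                  # full spectrum at one pixel
print(image_1652.shape, pixel_spectrum.shape)       # (480, 480) (200,)
```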

While FTIR microscopy is now established as a powerful technique for a wide variety of applications, the instruments used for this methodology are fundamentally limited by the brightness of the globar source. Users looking to maximize signal-to-noise ratios, and the associated resolution of the images produced, are forced to use synchrotron facilities, which replace the globar light source with a MIR beam generated by a particle accelerator. This approach can yield excellent results but clearly is not practical for benchtop applications; it is particularly unfit for biomedical imaging applications within clinical settings.

The recent advent of QCLs has provided an ideal light source for next-generation MIR microscopy. They are compact, semiconductor-based lasers that produce high-brightness light in the MIR region. The devices can be manufactured in an external cavity configuration to provide broadly tunable output with a narrow spectral bandwidth at each frequency. In this configuration, a QCL can be tuned across the MIR spectrum to sequentially capture an absorption profile for chemical identification.

Daylight Solutions’ IR microscope incorporates a broadly tunable and high-brightness QCL light source (it is an order of magnitude brighter than a synchrotron), a set of high numerical aperture (NA) diffraction-limited objectives, and an uncooled microbolometer FPA into a compact, benchtop instrument, as shown in Figure 1. The instrument provides rapid, high-resolution chemical images across very large fields of view and also provides a real-time chemical imaging mode. By overcoming the physical size, camera cooling and data collection time requirements of FTIR-based instruments, the microscope is positioned to bring MIR microscopy beyond research settings and into clinical use.

Figure 1. Schematic of a quantum cascade laser (QCL) microscope. Courtesy of Daylight Solutions.


Dr. Michael Walsh of the University of Illinois at Chicago (UIC) conducts research within the pathology department’s Spectral Pathology Lab, which has been using the IR microscope for the past several months. Walsh has been focused on developing chemical imaging techniques, with the ultimate goal of improving diagnoses within the field of tissue pathology.

Currently, the state-of-the-art methodology used for the diagnosis of most solid-organ diseases is to extract a tissue sample via a biopsy. Tissue inherently has very little contrast and needs to be stained with dyes or probes to visualize and identify cell types and tissue structures. The field of pathology is based on examining the stained tissues, typically using white light, to determine if the tissue morphology deviates from a normal pattern. If the tissue looks abnormal, the disease state may be further subclassified by grade or by predicted outcome. However, the field is limited by the information that can be derived from the stained tissues and by the subjective interpretation of the tissue by a highly trained pathologist.

Spero microscope. Courtesy of Daylight Solutions.


UIC’s Spectral Pathology Lab is focused on identifying areas in pathology where current techniques fail, or where there is a need for additional diagnostic or prognostic information that can help improve patient care. Potentially, MIR imaging is a very valuable adjunct to the current practice of pathology. Rather than using only stains, MIR imaging can interrogate the entire biochemistry of the tissue and render a diagnosis in an objective fashion. Traditionally, MIR imaging with an FTIR system has been limited by slow data acquisition speeds and the need to collect the entire spectral data cube. QCL imaging with the Spero microscope has the potential to speed up the data acquisition of images obtained from a tissue sample and to collect only the spectral frequencies of interest. The device also provides real-time imaging of samples at 30 fps, which could allow pathologists to very rapidly identify areas of interest on a tissue biopsy in a manner that is similar to their current clinical workflows. Some examples of the comparison of FTIR-derived and QCL-derived images from multiple organ tissues of interest are presented.

Figure 2. (a) H&E-stained image of a mouse brain section on IR reflective slide, with selected regions labeled: hypothalamus, thalamus, and dentate gyrus. (b) Transflectance QCL IR image of same region, prior to staining, at 1652 cm−1, in which the thalamus is clearly distinguished from surrounding regions. (c) Same region at 1548 cm−1. (d) Same region at 1500 cm−1. Courtesy of University of Illinois at Chicago (UIC)/Spectral Pathology Lab.


A tissue section from a mouse brain was scanned using the Spero microscope’s high-magnification objective (12.5×; 0.7 NA; 1.4 × 1.4-µm pixels) at various MIR frequencies in transflection mode, as shown in Figure 2. The tissue then was stained using hematoxylin and eosin (H&E), the most common stain in histopathology, and is displayed in Figure 2a. Using the H&E stain, regions were identified in the brain (thalamus, dentate gyrus and hypothalamus) that correlated with structures in the IR image. By illuminating the tissue at various wavelengths, discrete tissue features exhibit contrast due to the difference in absorption, as highlighted in the IR images taken at 1652, 1548 and 1500 cm−1 in Figure 2b-d, respectively. The microscope also makes it possible to visualize tissue at these individual wavelengths in real time. The identification of cell types and their biochemical changes is of particular interest in neuropathology.
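For orientation, the quoted wavenumbers convert to MIR wavelengths via λ(µm) = 10⁴/ν(cm−1):

```python
for nu in (1652, 1548, 1500):                 # cm^-1, as in Figure 2
    print(f"{nu} cm^-1 -> {1e4 / nu:.2f} um") # 6.05, 6.46, 6.67 um
```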

Figure 3. (a) Transmission FTIR image of a 4-µm thick section from a human liver tissue microarray on barium fluoride at 1650 cm-1. The image was taken with 64 coadditions of successive scans. (b) Transmission image from the Spero microscope of the same tissue at 1652 cm-1, both baseline corrected between 1796 cm-1 and 904 cm-1. In both images, the bright white stripe dividing the tissue core roughly in half is a region of fibrosis (red arrow), while the rest of the tissue on either side is composed primarily of hepatocytes (blue arrow). Courtesy of UIC/Spectral Pathology Lab.


A single biopsy core obtained from human liver tissue was scanned in transmission mode on a barium fluoride substrate by an Agilent Cary 600 Series FTIR microscope (Figure 3a). The FTIR image was acquired using a 36× Cassegrain collecting objective and a 15× Cassegrain condenser for a pixel size of 2.2 × 2.2 µm. Figure 3b shows the same liver core acquired using the Spero microscope with the high-magnification collecting objective (12.5×, 0.7 NA) and condenser objective for a pixel size of 1.4 × 1.4 µm. High-definition IR imaging enables clear contrast and identification of the band of fibrosis in the center of the core and the surrounding regions of liver cells, known as hepatocytes, as indicated in Figure 3a-b. Acquisition of IR imaging data at the diffraction limit enables chemical information to be recorded from tissue structures at the single-cell level, allowing accurate characterization of individual tissue components, different cell types, varied disease states or other aspects of a tissue section.

Figure 4. (a) Averaged spectra for regions of interest corresponding to the hepatocytes and the fibrotic area on the FTIR image in Figure 3a. Spectra have been truncated from 1800 to 900 cm-1, normalized to 1650 cm-1, and baseline corrected between 1796 and 904 cm-1. (b) Averaged spectra for regions of interest corresponding to the hepatocytes and the fibrotic area on the Spero microscope image in Figure 3b. Spectra have been normalized to 1652 cm-1 and baseline corrected between 1796 and 904 cm-1.


Figure 4 displays average spectra calculated from homogenous tissue regions that describe hepatocytes and fibrosis within the liver tissue core shown in Figure 3. The spectra acquired from both FTIR and QCL systems are very similar. Walsh is focused on developing spectral classifiers that can aid pathologists in making very difficult diagnoses in the precancerous stages of liver cancer.

Figure 5. H&E-stained section of human colon tissue, and FTIR (with 16 coadditions) and Spero microscope transmission images of a 4-µm thick serial section of the same sample on barium fluoride. FTIR image shown at 1650 cm-1, Spero microscope image shown at 1652 cm-1. The red circle indicates mucin, the green circle indicates malignant colon carcinoma epithelium, and the blue circle indicates fibroblastic stroma. The raw spectra (taken from single pixels in approximately the same location for each of the three tissue features) are shown below their respective IR images. The FTIR spectra were truncated to match the Spero microscope’s spectral range of 1800 to 900 cm-1. Courtesy of UIC/Spectral Pathology Lab.


Point spectra from individual pixels were obtained and compared from a human colon sample on barium fluoride scanned in transmission on the same FTIR and QCL systems, which is shown in Figure 5. A serial section was obtained and stained with H&E to identify the different tissue structures. Using the H&E image as a reference, spectra from mucin (red), malignant colon carcinoma epithelium (green) and fibroblastic stroma (blue) were collected from a single pixel at approximately the same location. The unprocessed QCL and FTIR spectra are shown directly beneath their respective images. The FTIR system has an FPA size of 128 × 128 detector elements, while the Spero system has a microbolometer of 480 × 480 detector elements. Therefore, the FTIR image was collected as a mosaic and then stitched together.
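A minimal sketch of that mosaic step: tiles from a small FPA are stitched into one large image (real stitching also handles tile overlap and illumination matching, which this ignores):

```python
import numpy as np

# Tiles from a 128 x 128 FPA stitched into one large field of view.
# Random arrays stand in for the measured frames.
tile, ty, tx = 128, 4, 4
frames = [[np.random.rand(tile, tile) for _ in range(tx)] for _ in range(ty)]
mosaic = np.block(frames)
print(mosaic.shape)  # (512, 512)
```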

Figure 6. (a) FTIR and Spero microscope spectra from a single pixel of mucin, from the tissue shown in Figure 5. (b) FTIR and Spero microscope spectra from a single pixel of malignant colon carcinoma epithelium, from the same tissue. (c) FTIR and Spero microscope spectra from a single pixel of fibroblastic stroma. All spectra have been normalized (FTIR to 1650 cm-1, Spero to 1652 cm-1) and baseline corrected between 1796 and 904 cm-1, with the FTIR spectra truncated to match the Spero microscope’s spectral range of 1800 to 900 cm-1. Note that pixels for each tissue feature were located in approximately the same region, and that the two images have different pixel sizes (2.2 × 2.2 µm for FTIR, 1.4 × 1.4 µm for Spero microscope). Courtesy of UIC/Spectral Pathology Lab.


The spectra obtained from the regions of interest depicted in Figure 5 were preprocessed, as shown in Figure 6. The data was peak height normalized to the Amide I band. The FTIR data and QCL data were processed using a simple, two-point linear baseline correction between 1796 and 904 cm−1. Figure 6a-c shows the processed data from single pixels looking at the biochemistry of mucin, malignant colon carcinoma epithelium and fibroblastic stroma, respectively. The spectra from the QCL and FTIR systems are very similar on an individual-pixel level.
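The preprocessing described here (two-point linear baseline correction followed by peak-height normalization to the Amide I band) is simple enough to sketch directly; the synthetic example reuses the band from the earlier absorbance sketch:

```python
import numpy as np

def preprocess(wn, spectrum, lo=904.0, hi=1796.0, amide_i=1652.0):
    """Two-point linear baseline correction between lo and hi (cm^-1),
    then peak-height normalization to the Amide I band."""
    i_lo = np.abs(wn - lo).argmin()
    i_hi = np.abs(wn - hi).argmin()
    baseline = np.interp(wn, [wn[i_lo], wn[i_hi]],
                         [spectrum[i_lo], spectrum[i_hi]])
    corrected = spectrum - baseline
    return corrected / corrected[np.abs(wn - amide_i).argmin()]

# Synthetic check: a band at 1652 cm^-1 on a sloped baseline normalizes to 1.
wn = np.linspace(900, 1800, 512)
spec = np.exp(-((wn - 1652) / 20) ** 2) + 0.1 + 1e-4 * wn
print(f"{preprocess(wn, spec)[np.abs(wn - 1652).argmin()]:.2f}")  # 1.00
```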

Finally, Figure 7 shows the scan of a frozen prostate tissue section captured with the microscope. Once sections are thawed, the system can quickly image them at a single frequency of interest. The real-time capabilities of the system combined with the capacity for scanning frozen samples could someday allow for the analysis of samples in a time-critical intraoperative setting.

Figure 7. Transflectance scan of a 5-µm frozen human prostate tissue section on Kevley low-emissivity substrate captured with the Spero microscope. Visualized with a false color map at 1640 cm-1. Data was baseline corrected between 1796 and 904 cm-1. Courtesy of UIC/Spectral Pathology Lab.

University of Illinois at Chicago — Spectral Pathology Lab members, from left to right: David Martinez, Francesca Gambacorta, Vishal Varma, Andrew Graham and Michael Walsh. Courtesy of Daylight Solutions.


While there has been significant interest in MIR imaging for pathology applications for a number of years [1-5], the technology has lacked the maturity to be ready for clinical implementation due to slow scanning speeds, low spatial resolutions and a lack of computational power to fully handle large multispectral datasets. The Spero microscope, coupled with modern computing power, overcomes these limitations. The information detailed above demonstrates that the quality of the images and spectra obtained from the instrument is similar to that offered by FTIR imaging methods, but with the additional benefits associated with the use of a QCL-based system. Recent advances in large multielement FPAs [6-8] and high-resolution imaging approaches [9-11] for tissue pathology have made this a much more attractive approach for fast and detailed image acquisition. QCLs represent the next step toward clinical implementation — they have demonstrated fast data acquisition, live-imaging capabilities and the ability to collect only spectral frequencies of diagnostic value.

Meet the authors

Michael Walsh holds a PhD in biological sciences and is an assistant professor at the University of Illinois at Chicago in Chicago; email: walshm@uic.edu. Matthew Barre is the business development manager at Daylight Solutions in San Diego; email: mbarre@daylightsolutions.com. Benjamin Bird is an applications scientist at Daylight Solutions in San Diego; email: bbird@daylightsolutions.com.

References

1. D.C. Fernandez et al. (2005). Infrared spectroscopic imaging for histopathologic recognition. Nat Biotechnol, Vol. 23, Issue 4, pp. 469-474.

2. C. Matthaus et al. (2008). Chapter 10: Infrared and Raman microscopy in cell biology. Methods Cell Biol, Vol. 89, pp. 275-308.

3. C. Kendall et al. (2009). Vibrational spectroscopy: a clinical tool for cancer diagnostics. Analyst, Vol. 134, Issue 6, pp. 1029-1045.

4. C. Krafft et al. (2009). Disease recognition by infrared and Raman spectroscopy. J Biophotonics, Vol. 2, Issue 1-2, pp. 13-28.

5. F.L. Martin et al. (2010). Distinguishing cell types or populations based on the computational analysis of their infrared spectra. Nat Protoc, Vol. 5, Issue 11, pp. 1748-1760.

Broadband Laser Aimed at Cancer Detection

Covering a wide swath of the mid-infrared, a new system offers greater spectral sensitivity

BY JAMES F. LOWE, WEB MANAGING EDITOR, JAMES.LOWE@PHOTONICS.COM

MUNICH, Sept. 25, 2015 — Mid-infrared (MIR) light is rich with molecular “fingerprint” information that can be used to detect substances from atmospheric pollutants to cancer cells.

While some lasers already operate in this region, enabling a variety of spectroscopy applications, their linewidth is relatively narrow, which limits the types of substances they can detect at any given moment.

Now a team of researchers from Germany and Spain has developed a laser system with phase-coherent emission from 6.8 to 16.4 μm and output power of 0.1 W. That is broad and powerful enough, they said, to detect subtle signs of cancer early in its development.

Molecules absorb portions of the MIR spectrum in ways that are unique to their atomic structures, and their absorption patterns provide a means of identifying the molecules with great specificity, even in low concentrations.

The emission spectrum of the laser and corresponding molecular fingerprint regions. Courtesy of the Institute of Photonic Sciences (ICFO).


“Cancer causes subtle modification in protein structure and content within a cell,” said professor Dr. Jens Biegert, a group leader at the Institute of Photonic Sciences (ICFO) in Barcelona. “Looking at only a few nanometer range, the probability of detection is extremely low. But comparing many of such intervals, one can have an extremely high confidence level.”

The new laser system generates MIR pulses via difference-frequency generation driven by the nonlinearly compressed pulses of a Kerr-lens mode-locked Yb:YAG thin-disc oscillator. It features a repetition rate of 100 MHz and pulse durations of 66 fs — so short that the electric field oscillates only twice per pulse.
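Those numbers can be checked with a line of arithmetic: at a representative MIR wavelength of roughly 10 µm (an assumption for illustration), one optical cycle lasts λ/c, about 33 fs, so a 66-fs pulse indeed spans about two cycles:

```python
c = 3e8                          # speed of light, m/s
wavelength = 10e-6               # representative MIR wavelength, m (assumed)
pulse = 66e-15                   # pulse duration from the paper, s
cycle = wavelength / c           # one optical cycle: ~33 fs
print(f"{pulse / cycle:.1f} optical cycles per pulse")        # ~2.0
print(f"pulse spacing at 100 MHz: {1e9 / 100e6:.0f} ns")      # 10 ns
```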

Staff scientist Dr. Ioachim Pupeza (left) and postdoctoral researcher Oleg Pronin helped develop a laser system that emits ultrashort pulses of mid-infrared light. These pulses can be used to detect trace molecules in gaseous and liquid media. Courtesy of Thorsten Naeser/Ludwig Maximilian University.


“Since we now possess a compact source of high-intensity and coherent infrared light, we have a tool that can serve as an extremely sensitive sensor for the detection of molecules, and is suitable for serial production,” said project leader Dr. Ioachim Pupeza, a staff scientist at Ludwig Maximilian University of Munich (LMU).

The LMU and ICFO researchers aim to use their MIR laser to identify and quantify disease markers in exhaled air. Many diseases, including some types of cancer, are thought to produce specific molecules that end up in the air expelled from the lungs.

“We assume that exhaled breath contains well over 1000 different molecular species,” said Dr. Alexander Apolonskiy, an LMU group leader.

However, the amount of molecular biomarkers present in exhaled breath is extraordinarily low, meaning a diagnostic tool would need to be capable of detecting concentrations of at least one part per billion. The next step will be to couple the new laser system with a novel amplifier that would increase its brightness and boost sensitivity to one part per trillion.

Detecting MIR signatures

The laser’s output spans more than one octave. Until now, the researchers said, such broadband emission has only been available from large-scale synchrotron sources.

Other more compact MIR sources, such as quantum cascade lasers (QCLs), have narrower linewidths. Tuning them to different sensing bands is time consuming, and combining multiple QCLs emitting in different parts of the MIR would be cost-prohibitive, Biegert said.

Meanwhile, the laser system’s 100-MHz pulse train is hundreds to thousands of times more powerful than state-of-the-art frequency combs that emit in the same range, the researchers said.

Detecting broadband MIR signals presents its own problems, however. Detectors for this region have poor signal-to-noise ratios unless cooled with liquid nitrogen, the researchers said.

In this case, electro-optical sampling proved to be a better option. Well-established for the terahertz range, the technique is less common in the fingerprint region.

“In the MIR range, there are not many groups who have implemented this already, because you need a broadband, phase-stable MIR pulse and an ultrashort sample pulse at the same time, which is quite challenging,” Pupeza said.

Having solved that problem with their broadband laser, the team now could use electro-optical sampling to extract the data they wanted.

In a nutshell, the process works like this: The electric field of an MIR pulse alters the birefringence of a crystal. This change can be measured by observing how the polarization of a slightly shorter near-infrared (NIR) pulse is changed while propagating through the same crystal at the same time. In the end, only the NIR pulse is measured directly.

“Therefore, one big advantage is low-noise detection in the NIR, even though one obtains information on spectral components in the MIR,” said Ioachim Pupeza. “You only need to perform a Fourier transform numerically to get the spectrum of the pulse once you have its electric field.”
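A minimal sketch of that last step, with a synthetic two-cycle field standing in for the electro-optically sampled measurement (the sampling step and carrier frequency are assumed for illustration):

```python
import numpy as np

# Given the sampled electric field E(t), a numerical Fourier transform
# yields the pulse spectrum. The field is synthetic: a Gaussian envelope
# on a ~30 THz carrier (roughly a 10 um, two-cycle MIR pulse).
dt = 1e-15                                   # 1 fs sampling step (assumed)
t = np.arange(-200, 200) * dt
E = np.exp(-(t / 66e-15) ** 2) * np.cos(2 * np.pi * 30e12 * t)

power = np.abs(np.fft.rfft(E)) ** 2          # power spectrum
freqs = np.fft.rfftfreq(len(E), dt)          # Hz
print(f"Peak at {freqs[power.argmax()] / 1e12:.0f} THz")  # ~30 THz
```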

http://www.photonics.com/Article.aspx?AID=57757
The research was published in Nature Photonics (doi: 10.1038/nphoton.2015.179).


Inadequacy of EHRs

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

EHRs need better workflows, less ‘chartjunk’

By Marla Durben Hirsch

Electronic health records currently handle data poorly and should be enhanced to better collect, display and use it to support clinical care, according to a new study published in JMIR Medical Informatics.

The authors, from Beth Israel Deaconess Medical Center and elsewhere, state that the next generation of EHRs needs to improve workflow, clinical decision-making and clinical notes. They decry some of the problems with existing EHRs, including data that is poorly displayed, under-networked, underutilized and wasted. The lack of available data causes errors, creates inefficiencies and increases costs. Data is also “thoughtlessly carried forward or copied and pasted into the current note,” creating “chartjunk,” the researchers say.

They suggest ways that future EHRs can be improved, including:

  • Integrating bedside and telemetry monitoring systems with EHRs to provide data analytics that could support real-time clinical assessments
  • Formulating notes in real time using structured data and natural language processing on the free text being entered (a toy sketch follows this list)
  • Formulating treatment plans using information in the EHR plus a review of population databases to identify similar patients, their treatments and outcomes
  • Creating a more “intelligent” design that capitalizes on the note-writing process as well as the contents of the note.
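As a toy illustration of the second item above, a minimal sketch in which keyword matching stands in for real clinical NLP; all field names and the keyword list are invented:

```python
# Draft a structured note from EHR data plus free text. Keyword matching
# stands in for clinical NLP; all names here are invented.
structured = {"temp_c": 38.4, "hr": 104, "wbc": 14.2}   # pulled from the EHR
free_text = "patient reports productive cough and right-sided chest pain"

KEYWORDS = ("cough", "chest pain", "dyspnea")
symptoms = [kw for kw in KEYWORDS if kw in free_text]

note = (f"S: {', '.join(symptoms) or 'none reported'}\n"
        f"O: T {structured['temp_c']} C, HR {structured['hr']}, "
        f"WBC {structured['wbc']}\n"
        f"A/P: to be completed by the clinician")
print(note)
```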

“We have begun to recognize the power of data in other domains and are beginning to apply it to the clinical space, applying digitization as a necessary but insufficient tool for this purpose,” the researchers say. “The vast amount of information and clinical choices demands that we provide better supports for making decisions and effectively documenting them.”

Many have pointed out the flaws in current EHR design that impede the optimum use of data and hinder workflow. Researchers have suggested that EHRs can be part of a learning health system to better capture and use data to improve clinical practice, create new evidence, educate, and support research efforts.


Disrupting Electronic Health Records Systems: The Next Generation

Affiliations: 1. Beth Israel Deaconess Medical Center, Division of Pulmonary, Critical Care, and Sleep Medicine, Boston, MA, US; 2. Yale University, Yale-New Haven Hospital, Department of Pulmonary and Critical Care, New Haven, CT, US; 3. Center for Urban Science and Progress, New York University, New York, NY, US; 4. Center for Wireless Health, Departments of Anesthesiology and Neurological Surgery, University of Virginia, Charlottesville, VA, US

*these authors contributed equally

JMIR Med Inform, 23.10.15, Vol 3, No 4 (2015): Oct-Dec


The health care system suffers from both inefficient and ineffective use of data. Data are suboptimally displayed to users, undernetworked, underutilized, and wasted. Errors, inefficiencies, and increased costs occur on the basis of unavailable data in a system that does not coordinate the exchange of information, or adequately support its use. Clinicians’ schedules are stretched to the limit and yet the system in which they work exerts little effort to streamline and support carefully engineered care processes. Information for decision-making is difficult to access in the context of hurried real-time workflows. This paper explores and addresses these issues to formulate an improved design for clinical workflow, information exchange, and decision making based on the use of electronic health records. JMIR Med Inform 2015;3(4):e34. http://dx.doi.org/10.2196/medinform.4192


Celi LA, Marshall JD, Lai Y, Stone DJ. Disrupting Electronic Health Records Systems: The Next Generation. JMIR Med Inform 2015;3(4):e34  DOI: 10.2196/medinform.4192  PMID: 26500106

Weed introduced the “Subjective, Objective, Assessment, and Plan” (SOAP) note in the late 1960s [1]. This note entails a high-level structure that supports the thought process that goes into decision-making: subjective data followed by ostensibly more reliable objective data employed to formulate an assessment and subsequent plan. The flow of information has not fundamentally changed since that time, but the complexities of the information, possible assessments, and therapeutic options certainly have greatly expanded. Clinicians have not heretofore created anything like an optimal data system for medicine [2,3]. Such a system is essential to streamline workflow and support decision-making rather than adding to the time and frustration of documentation [4].

What this optimal data system offers is not a radical departure from the traditional thought processes that go into the production of a thoughtful and useful note. However, in the current early-stage digitized medical system, it is still incumbent on the decision maker/note creator to capture the relevant priors and, to some extent, digitally scramble to collect all the necessary updates. The capture of these priors is a particular challenge in an era where care is turned over among different caregivers more frequently than ever before. Finally, based on familiarity with the disease pathophysiology, the medical literature, and evidence-based medicine (EBM) resources, the user is tasked with creating an optimal plan based on that assessment. In this so-called digital age, the amount of memorization, search, and assembly can be minimized and positively supported by a well-engineered system purposefully designed to assist clinicians in note creation and, in the process, decision-making.

Since 2006, use of electronic health records (EHRs) by US physicians increased by over 160% with 78% of office-based physicians and 59% of hospitals having adopted an EHR by 2013 [5,6]. With implementation of federal incentive programs, a majority of EHRs were required to have some form of built-in clinical decision support tools by the end of 2012 with further requirements mandated as the Affordable Care Act (ACA) rolls out [7]. These requirements recognize the growing importance of standardization and systematization of clinical decision-making in the context of the rapidly changing, growing, and advancing field of medical knowledge. There are already EHRs and other technologies that exist, and some that are being implemented, that integrate clinical decision support into their functionality, but a more intelligent and supportive system can be designed that capitalizes on the note writing process itself. We should strive to optimize the note creation process as well as the contents of the note in order to best facilitate communication and care coordination. The following sections characterize the elements and functions of this decision support system (Figure 1).

http://medinform.jmir.org/article/viewFile/4192/1/68490

Figure 1. Clinician documentation with fully integrated data systems support. Prior notes and data are input for the following note and decisions. Machine analyzes input and displays suggested diagnoses and problem list, and test and treatment recommendations based on various levels of evidence: CPG – clinical practice guidelines, UTD – Up to Date®, DCDM – Dynamic Clinical Data Mining.

Incorporating Data

Overwhelmingly, the most important characteristic of the electronic note is its potential for the creation and reception of what we term “bidirectional data streams” to inform both decision-making and research. By bidirectional data exchange, we mean that electronic notes have the potential to provide data streams to the entirety of the EHR database and vice versa. The data from the note can be recorded, stored, accessed, retrieved, and mined for a variety of real-time and future uses. This process should be an automatic and intrinsic property of clinical information systems. The incoming data stream is currently produced by the data that is slated for import into the note according to the software requirements of the application and the locally available interfaces [8]. The provision of information from the note to the system has both short- and long-term benefits: in the short term, this information provides essential elements for functions such as benchmarking and quality reporting; and in the long term, the information provides the afferent arm of the learning system that will identify individualized best practices that can be applied to individual patients in future formulations of plans.

Current patient data should include all the electronically interfaced elements that are available and pertinent. In addition to the usual elements that may be imported into notes (eg, laboratory results and current medications), the data should include the immediate prior diagnoses and treatment items, so far as available (especially an issue for the first note in a care sequence such as in the ICU), the active problem list, as well as other updates such as imaging, other kinds of testing, and consultant input. Patient input data should be included after verification (eg, updated reviews of systems, allergies, actual medications being taken, past medical history, family history, substance use, social/travel history, and medical diary that may include data from medical devices). These data priors provide a starting point that is particularly critical for those note writers who are not especially (or at all) familiar with the patient. They represent historical (and yet dynamic) evidence intended to inform decision-making rather than “text” to be thoughtlessly carried forward or copied and pasted into the current note.

Although the amount and types of data collected are extremely important, how it is used and displayed are paramount. Many historical elements of note writing are inexcusably costly in terms of clinician time and effort when viewed at a level throughout the entire health care system. Redundant items such as laboratory results and copy-and-pasted nursing flow sheet data introduce a variety of “chartjunk” that clutters documentation and makes the identification of truly important information more difficult and potentially even introduces errors that are then propagated throughout the chart [9,10]. Electronic systems are poised to automatically capture the salient components of care so far as these values are interfaced into the system and can even generate an active problem list for the providers. With significant amounts of free text and “unstructured data” being entered, EHRs will need to incorporate more sophisticated processes such as natural language processing and machine learning to provide accurate interpretation of text entered by a variety of different users, from different sources, and in different formats, and then translated into structured data that can be analyzed by the system.
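As a toy illustration of that last step, the sketch below pulls a few structured values out of free text with bare regular expressions. This falls far short of the natural language processing and machine learning the authors call for (no negation handling, no ontology mapping), and the patterns and field names are made-up assumptions for the example.

```python
import re

# Deliberately minimal illustration of turning free-text note fragments
# into structured, analyzable values. Real clinical NLP adds negation
# detection, context, and mapping to ontologies; these patterns and
# field names are assumptions invented for the example.
PATTERNS = {
    "temperature_c": re.compile(r"temp(?:erature)?\s*(?:of\s*)?(\d{2}(?:\.\d)?)", re.I),
    "heart_rate":    re.compile(r"(?:hr|heart rate)\s*(?:of\s*)?(\d{2,3})", re.I),
    "spo2_pct":      re.compile(r"(?:spo2|sats?)\s*(?:of\s*)?(\d{2,3})\s*%", re.I),
}

def extract_structured(note_text: str) -> dict:
    """Map a free-text fragment to a flat dict of structured values."""
    structured = {}
    for field, pattern in PATTERNS.items():
        match = pattern.search(note_text)
        if match:
            structured[field] = float(match.group(1))
    return structured

note = "Overnight temp 38.4, HR of 112, sats 91% on room air."
print(extract_structured(note))
# -> {'temperature_c': 38.4, 'heart_rate': 112.0, 'spo2_pct': 91.0}
```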

Optimally, a fully functional EHR would be able to provide useful predictive data analytics including the identification of patterns that characterize a patient’s normal physiologic state (thereby enabling detection of significant change from that state), as well as mapping of the predicted clinical trajectory, such as prognosis of patients with sepsis under a number of different clinical scenarios, and with the ability to suggest potential interventions to improve morbidity or mortality [11]. Genomic and other “-omic” information will eventually be useful in categorizing certain findings on the basis of individual susceptibilities to various clinical problems such as sepsis, auto-immune disease, and cancer, and in individualizing diagnostic and treatment recommendations. In addition, an embedded data analytic function will be able to recognize a constellation of relatively subtle changes that are difficult or impossible to detect, especially in the presence of chronic co-morbidities (eg, changes consistent with pulmonary embolism, which can be a subtle and difficult diagnosis in the presence of long standing heart and/or lung disease) [12,13].

The data presentation section must be thoughtfully displayed so that the user is not overwhelmed, but is still aware of what elements are available, and directed to those aspects that are most important. The user then has the tools at hand to construct the truly cognitive sections of the note: the assessment and plan. Data should be displayed in a fashion that efficiently and effectively provides a maximally informationally rich and minimally distracting graphic display. The fundamental principle should result in a thoughtfully planned data display created on the ethos of “just enough and no more,” as well as the incorporation of clinical elements such as severity, acuity, stability, and reversibility. In addition to the now classic teachings of Edward Tufte in this regard, a number of new data artists have entered the field [14]. There is room for much innovation and improvement in this area, as medicine transitions from paper to a digital format that provides enormous potential and capability for new types of displays.

Integrating the Monitors

Bedside and telemetry monitoring systems have become an element of the clinical information system but they do not yet interact with the EHR in a bidirectional fashion to provide decision support. In addition to the raw data elements, the monitors can provide data analytics that could support real-time clinical assessment as well as material for predictive purposes apart from the traditional noisy alarms [15,16]. It may be less apparent how the reverse stream (EHR to bedside monitor) would work, but the EHR can set the context for the interpretation of raw physiologic signals based on previously digitally captured vital signs, patient co-morbidities and current medications, as well as the acute clinical context.

In addition, the display could provide an indication of whether technically “out of normal range” vital signs (or labs, in the emergency screen described below) are actually “abnormal” for this particular patient. For example, a particular type of laboratory value for a patient may have been chronically out of normal range and not represent a change requiring acute investigation and/or treatment. This might be accomplished by displaying these types of “normally abnormal” values in purple or green rather than the red font used for abnormal values, or via some other designating graphic. The purple font (or whatever display mode was utilized) would designate the value as technically abnormal, but perhaps not contextually abnormal. Such designations are particularly important for caregivers who are not familiar with the patient.

It also might be desirable to use a combination of accumulated historical data from the monitor and the EHR to formulate personalized alarm limits for each patient. Such personalized alarm limits would provide a smarter range of acceptable values for each patient and perhaps also act to reduce the unacceptable number of false positive alarms that currently plague bedside caregivers (and patients) [17]. These alarm limits would be dynamically based on the input data and subject to reformulation as circumstances changed. We realize that any venture into alarm settings becomes a regulatory and potentially medico-legal issue, but these intimidating factors should not be allowed to grind potentially beneficial innovations to a halt. For example, “hard” limits could be built into the alarm machine so that the custom alarm limits could not fall outside certain designated values.
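A minimal sketch of such dynamically personalized alarm limits follows, assuming percentile-based limits derived from accumulated history and clamped to designated “hard” bounds; the statistic, percentile choices, and hard-limit values are all assumptions made for illustration.

```python
import numpy as np

# Illustrative hard limits beyond which a custom alarm range may never
# extend, per the paper's suggestion of built-in "hard" limits.
HARD_LIMITS = {"heart_rate": (30.0, 180.0)}

def personalized_alarm_limits(history, vital="heart_rate",
                              lo_pct=1.0, hi_pct=99.0):
    """Derive per-patient alarm limits from accumulated monitor/EHR
    history, clamped so they cannot fall outside the hard limits.
    The percentile statistic is an assumption for this sketch."""
    lo, hi = np.percentile(history, [lo_pct, hi_pct])
    hard_lo, hard_hi = HARD_LIMITS[vital]
    return max(lo, hard_lo), min(hi, hard_hi)

# A patient who chronically runs tachycardic: population-default limits
# would alarm constantly, while personalized limits track the baseline
# and would be recomputed as new data arrive.
rng = np.random.default_rng(0)
baseline_hr = rng.normal(105, 8, size=5000)      # synthetic history
print(personalized_alarm_limits(baseline_hr))    # roughly (86, 124)
```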

Supporting the Formulation of the Assessment

Building on both prior and new, interfaced and manually entered data as described above, the next framework element would consist of the formulation of the note in real time. This would consist of structured data so far as available and feasible, but is more likely to require real-time natural language processing performed on the free text being entered. Different takes on this kind of templated structure have already been introduced into several electronic systems. These include note templates created for specific purposes such as end-of-life discussions, or documentation of cardiopulmonary arrest. The very nature of these note types provides a robust context for the content. We also recognize that these shorter and more directed types of notes are not likely to require the kind of extensive clinical decision support (CDS) from which an admission or daily progress note may benefit.

Until the developers of EHRs find a way to fit structured data selection seamlessly and transparently into workflow, we will have to do the best we can with the free text that we have available. While this is a bit clunky for data utilization purposes, perhaps it is not totally undesirable, as free text inserts a needed narrative element into the otherwise storyless EHR environment. Medical care can be described as an ongoing story, and free text conveys this story in a much more effective and interesting fashion than do selected structured data bits. Furthermore, stories tend to be more distinctive than lists of structured data entries, which sometimes seem to vary remarkably little from patient to patient. But to extract the necessary information, the computer still needs a processed interpretation of that text. More complex systems are being developed and actively researched to act more analogously to our own “human” form of clinical problem solving [18], but until these systems are integrated into existing EHRs, clinicians may be able to help, through training, to minimize the potential confusion by reducing completely unconstrained free-text entries and/or utilizing some degree of standardization within the use of free-text terminologies and contextual modifiers.

Employing the prior data (eg, diagnoses X, Y, Z from the previous note) and new data inputs (eg, laboratory results, imaging reports, and consultants’ recommendations) in conjunction with the assessment being entered, the system would have the capability to check for inconsistencies and omissions based on analysis of both prior and new entries. For example, a patient in the ICU has increasing temperature and heart rate, and decreasing oxygen saturation. These continuous variables are referenced against other patient features and risk factors to suggest the possibility that the patient has developed a pulmonary embolism or an infectious ventilator-associated complication. The system then displays these possible diagnoses within the working assessment screen with hyperlinks to the patient’s flow sheets and other data supporting the suggested problems (Figure 2). The formulation of the assessment is clearly not as potentially evidence-based as that of the plan; however, there should still be dynamic, automatic and rapid searches performed for pertinent supporting material in the formulation of the assessment. These would include the medical literature, including textbooks, online databases, and applications such as WebMD. The relevant literature that the system has identified, supporting the associations listed in the assessment and plan, can then be screened by the user for accuracy and pertinence to the specific clinical context. Another potentially useful CDS tool for assessment formulation is a modality we have termed dynamic clinical data mining (DCDM) [19]. DCDM draws upon the power of large sets of population health data to provide differential diagnoses associated with groupings or constellations of symptoms and findings. Similar to the process just described, the clinician would then have the ability to review and incorporate these suggestions or not.
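The ICU example can be sketched as a small rule check over charted trends, as below. The thresholds and the mapping from trends to suggested problems are invented placeholders rather than clinical criteria, and a real implementation would reference the richer patient features and risk factors the authors describe.

```python
# Illustrative sketch of the ICU example: trend checks over charted
# vitals trigger suggested problems for the assessment screen. The
# thresholds and rules are placeholders, not clinical criteria.

def trend(series):
    """Crude trend: mean of the last third minus mean of the first third."""
    n = max(1, len(series) // 3)
    return sum(series[-n:]) / n - sum(series[:n]) / n

def suggest_problems(temp_c, heart_rate, spo2, ventilated=False):
    suggestions = []
    if trend(temp_c) > 0.5 and trend(heart_rate) > 10 and trend(spo2) < -3:
        suggestions.append("pulmonary embolism?")
        if ventilated:
            suggestions.append("infectious ventilator-associated complication?")
    return suggestions  # each entry would hyperlink to supporting flow sheets

print(suggest_problems(
    temp_c=[37.0, 37.2, 37.9, 38.4],
    heart_rate=[88, 92, 108, 118],
    spo2=[97, 96, 92, 90],
    ventilated=True,
))
# -> ['pulmonary embolism?', 'infectious ventilator-associated complication?']
```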

An optional active search function would also be provided throughout the note creation process for additional flexibility—clinicians are already using search engines, but sometimes in the absence of specific clinical search algorithms (eg, a generic search engine such as Google). This may produce search results that are not always of the highest possible quality [20,21]. The EHR-embedded search engine would have its algorithm modified to meet the task, as Google has done previously for its search engine [22]. The searchable TRIP database provides a search engine for high-quality clinical evidence, as do the search modalities within Up to Date, Dynamed, BMJ Clinical Evidence, and others [23,24].

http://medinform.jmir.org/article/viewFile/4192/1/68491

Figure 2. Mock visualization of symptoms, signs, laboratory results, and other data input and systems suggestion for differential diagnoses.

Supporting the Formulation of the Plan

With the assessment formulated, the system would then formulate a proposed plan using EBM inputs and DCDM refinements for issues lying outside EBM knowledge. Decision support for plan formulation would include items such as randomized control trials (RCTs), observational studies, clinical practice guidelines (CPGs), local guidelines, and other relevant elements (eg, Cochrane reviews). The system would provide these supporting modalities in a hierarchical fashion using evidence of the highest quality first before proceeding down the chain to lower quality evidence. Notably, RCT data are not available for the majority of specific clinical questions, or it is not applicable because the results cannot be generalized to the patient at hand due to the study’s inclusion and exclusion criteria [25]. Sufficiently reliable observational research data also may not be available, although we expect that the holes in the RCT literature will be increasingly filled by observational studies in the near future [16,26]. In the absence of pertinent evidence-based material, the system would include the functionality which we have termed DCDM, and our Stanford colleagues have termed the “green button” [19,27]. This still-theoretical process is described in detail in the references, but in brief, DCDM would utilize a search engine type of approach to examine a population database to identify similar patients on the basis of the information entered in the EHR. The prior treatments and outcomes of these historical patients would then be analyzed to present options for the care of the current patient that were, to a large degree, based on prior data. The efficacy of DCDM would depend on, among other factors, the availability of a sufficiently large population EHR database, or an open repository that would allow for the sharing of patient data between EHRs. This possibility is quickly becoming a reality with the advent of large, deidentified clinical databases such as that being created by the Patient Centered Outcomes Research Institute [26].
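To make the DCDM retrieval step concrete, here is a sketch using a nearest-neighbor search over a synthetic population database, then summarizing outcomes of prior treatments among the matched patients. The features, scaling, and outcome summary are assumptions for illustration; the actual modality is described in reference [19].

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Sketch of the DCDM retrieval step: index a population database, find
# the patients most similar to the current one, and summarize outcomes
# by prior treatment. Features, scaling, and data are synthetic.
rng = np.random.default_rng(42)
n = 10_000
population = np.column_stack([        # columns: age, temp, HR, WBC
    rng.uniform(20, 90, n),
    rng.normal(37.5, 1.0, n),
    rng.normal(95, 20, n),
    rng.normal(11, 4, n),
])
treatments = rng.choice(["A", "B"], n)   # treatment each patient received
outcomes = rng.integers(0, 2, n)         # 1 = improved (synthetic label)

scale = population.std(axis=0)           # crude per-feature scaling
index = NearestNeighbors(n_neighbors=100).fit(population / scale)

current = np.array([[67.0, 38.9, 121.0, 17.5]])   # the patient at hand
_, idx = index.kneighbors(current / scale)
for tx in ("A", "B"):
    mask = treatments[idx[0]] == tx
    if mask.any():
        print(f"treatment {tx}: {outcomes[idx[0]][mask].mean():.0%} improved "
              f"among {mask.sum()} similar prior patients")
```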

The tentative plan could then be modified by the user on the basis of her or his clinical “wetware” analysis. The electronic workflow could be designed in a number of ways that were modifiable per user choice/customization. For example, the user could first create the assessment and plan which would then be subject to comment and modification by the automatic system. This modification might include suggestions such as adding entirely new items, as well as the editing of entered items. In contrast, as described, the system could formulate an original assessment and plan that was subject to final editing by the user. In either case, the user would determine the final output, but the system would record both system and final user outputs for possible reporting purposes (eg, consistency with best practices). Another design approach might be to display the user entry in toto on the left half of a computer screen and a system-formulated assessment (Figure 3) and plan on the right side for comparison. Links would be provided throughout the system formulation so that the user could drill into EHR-provided suggestions for validation and further investigation and learning. In either type of workflow, the system would comparatively evaluate the final entered plan for consistency, completeness, and conformity with current best practices. The system could display the specific items that came under question and why. Users may proceed to adopt or not, with the option to justify their decision. Data reporting analytics could be formulated on the basis of compliance with EBM care. Such analytics should be done and interpreted with the knowledge that EBM itself is a moving target and many clinical situations do not lend themselves to resolution with the current tools supplied by EBM.

Since not all notes call for this kind of extensive decision support, the CDS material could be displayed in a separate columnar window adjacent to the main part of the screen where the note contents were displayed so that workflow is not affected. Another possibility would be an “opt-out” button by which the user would choose not to utilize these system resources. This would be analogous but functionally opposite to the “green button” opt-in option suggested by Longhurst et al, and perhaps be designated the “orange button” to clearly make this distinction [27]. Later, the system would make a determination as to whether this lack of EBM utilization was justified, and provide a reminder if the care was determined to be outside the bounds of current best practices. While the goal is to keep the user on the EBM track as much as feasible, the system has to “realize” that real care will still extend outside those bounds for some time, and that some notes and decisions simply do not require such machine support.

There are clearly still many details to be worked out regarding the creation and use of a fully integrated bidirectional EHR. There currently are smaller systems that use some components of what we propose. For example, a large Boston hospital uses a program called QPID which culls all previously collected patient data and uses a Google-like search to identify specific details of relevant prior medical history which is then displayed in a user-friendly fashion to assist the clinician in making real-time decisions on admission [28]. Another organization, the American Society of Clinical Oncology, has developed a clinical Health IT tool called CancerLinQ which utilizes large clinical databases of cancer patients to trend current practices and compare the specific practices of individual providers with best practice guidelines [29]. Another hospital system is using many of the components discussed in a new, internally developed platform called Fluence that allows aggregation of patient information, and applies already known clinical practice guidelines to patients’ problem lists to assist practitioners in making evidenced-based decisions [30]. All of these efforts reflect inadequacies in current EHRs and are important pieces in the process of selectively and wisely incorporating these technologies into EHRs, but doing so universally will be a much larger endeavor.

http://medinform.jmir.org/article/viewFile/4192/1/68492

Figure 3. Mock screenshot for the “Assessment and Plan” screen with background data analytics. Based on background analytics that the system runs at all times, a series of “problems” are identified and suggested by the system, which are then displayed in the EMR in the box on the left. The clinician can then select problems that are suggested, or input new problems that are then displayed in the box on the right of the EMR screen, and will now be a part of ongoing analytics for future assessment.

Conclusions

Medicine has finally entered an era in which clinical digitization implementations and data analytic systems are converging. We have begun to recognize the power of data in other domains and are beginning to apply it to the clinical space, applying digitization as a necessary but insufficient tool for this purpose (personal communication from Peter Szolovits, The Unreasonable Effectiveness of Clinical Data. Challenges in Big Data for Data Mining, Machine Learning and Statistics Conference, March 2014). The vast amount of information and clinical choices demands that we provide better supports for making decisions and effectively documenting them. The Institute of Medicine demands a “learning health care system” where analysis of patient data is a key element in continuously improving clinical outcomes [31]. This is also an age of increasing medical complexity bound up in increasing financial and time constraints. The latter dictate that medical practice should become more standardized and evidence-based in order to optimize outcomes at the lowest cost. Current EHRs, mostly implemented over the past decade, are a first step in the digitization process, but do not support decision-making or streamline the workflow to the extent to which they are capable. In response, we propose a series of information system enhancements that we hope can be seized, improved upon, and incorporated into the next generation of EHRs.

There is already government support for these advances: The Office of the National Coordinator for Health IT recently outlined their 6-year and 10-year plans to improve EHR and health IT interoperability, so that large-scale realizations of this idea can and will exist. Within 10 years, they envision that we “should have an array of interoperable health IT products and services that allow the health care system to continuously learn and advance the goal of improved health care.” In that, they envision an integrated system across EHRs that will improve not just individual health and population health, but also act as a nationwide repository for searchable and researchable outcomes data [32]. The first step to achieving that vision is by successfully implementing the ideas and the system outlined above into a more fully functional EHR that better supports both workflow and clinical decision-making. Further, these suggested changes would also contribute to making the note writing process an educational one, thereby justifying the very significant time and effort expended, and would begin to establish a true learning system of health care based on actual workflow practices. Finally, the goal is to keep clinicians firmly in charge of the decision loop in a “human-centered” system in which technology plays an essential but secondary role. As expressed in a recent article on the issue of automating systems [33]:

In this model (human centered automation)…technology takes over routine functions that a human operator has already mastered, issues alerts when unexpected situations arise, provides fresh information that expands the operator’s perspective and counters the biases that often distort human thinking. The technology becomes the expert’s partner, not the expert’s replacement.

Key Concepts and Terminology

A number of concepts and terms were introduced throughout this paper, and some clarification and elaboration of these follows:

  • Affordable Care Act (ACA): Legislation passed in 2010 that constitutes two separate laws including the Patient Protection and Affordable Care Act and the Health Care and Education Reconciliation Act. These two pieces of legislation act together for the expressed goal of expanding health care coverage to low-income Americans through expansion of Medicaid and other federal assistance programs [34].
  • Clinical Decision Support (CDS) is defined by CMS as “a key functionality of health information technology” that encompasses a variety of tools including computerized alerts and reminders, clinical guidelines, condition-specific order sets, documentation templates, diagnostic support, and other tools that “when used effectively, increases quality of care, enhances health outcomes, helps to avoid errors and adverse events, improves efficiency, reduces costs, and boosts provider and patient satisfaction” [35].
  • Cognitive Computing is defined as “the simulation of human thought processes in a computerized model…involving self-learning systems that use data mining, pattern recognition and natural language processing to mimic the way the human brain works” [36]. Defined by IBM as computer systems that “are trained using artificial intelligence and machine learning algorithms to sense, predict, infer and, in some ways, think” [37].
  • Deep learning is a form of machine learning (a more specific subgroup of cognitive computing) that utilizes multiple levels of data to make hierarchical connections and recognize more complex patterns, so as to infer higher-level concepts from lower levels of input and previously inferred concepts [38]. Figure 3 demonstrates how this concept relates to patients: the system recognizes patterns of signs and symptoms experienced by a patient, and then infers a diagnosis (a higher-level concept) from those lower-level inputs. The next-level concept would be recognizing response to treatment for the proposed diagnosis, and offering either alternative diagnoses or a change in therapy, with the system adapting as the patient’s course progresses.
  • Dynamic clinical data mining (DCDM): First, data mining is defined as the “process of discovering patterns, automatically or semi-automatically, in large quantities of data” [39]. DCDM describes the process of mining and interpreting the data from large patient databases that contain prior and concurrent patient information including diagnoses, treatments, and outcomes so as to make real-time treatment decisions [19].
  • Natural Language Processing (NLP) is a process based on machine learning, or deep learning, that enables computers to analyze and interpret unstructured human language input to recognize and even act upon meaningful patterns [39,40].

References

  1. Weed LL. Medical records, patient care, and medical education. Ir J Med Sci 1964 Jun;462:271-282. [Medline]
  2. Celi L, Csete M, Stone D. Optimal data systems: the future of clinical predictions and decision support. Curr Opin Crit Care 2014 Oct;20(5):573-580. [CrossRef] [Medline]
  3. Cook DA, Sorensen KJ, Hersh W, Berger RA, Wilkinson JM. Features of effective medical knowledge resources to support point of care learning: a focus group study. PLoS One 2013 Nov;8(11):e80318 [FREE Full text] [CrossRef] [Medline]
  4. Cook DA, Sorensen KJ, Wilkinson JM, Berger RA. Barriers and decisions when answering clinical questions at the point of care: a grounded theory study. JAMA Intern Med 2013 Nov 25;173(21):1962-1969. [CrossRef] [Medline]

more ….

The Electronic Health Record: How far we have travelled, and where is journey’s end?

http://pharmaceuticalintelligence.com/2012/09/21/the-electronic-health-record-how-far-we-have-travelled-and-where-is-journeys-end/

A focus of the Affordable Care Act is improved delivery of quality, efficiency, and effectiveness to the patients who receive healthcare in the US from providers in a coordinated system. The largest confounders in all of this are the existence of silos that are not readily crossed, handovers, communication lapses, and a heavy paperwork burden. We can add to that a large for-profit insurance overhead that is disinterested in the patient-physician encounter. Finally, the knowledge base of medicine has grown sufficiently that physicians are challenged by the amount of data and its presentation in the medical record.

I present a review of the problems that have become more urgent to fix in the last decade. The administration and paperwork necessitated by health insurers, HMOs, and other parties may account for 40% of a physician’s practice today, and the formation of large physician practice groups and alliances of hospitals and hospital-staffed physicians (as well as hospital system alliances) has increased in response to the need to decrease the cost of non-patient-care overhead. I discuss some of the points made by two innovators, one from the healthcare sector and one from the communications sector.

I also call attention to the New York Times front-page article reporting a sharp rise in inflation-adjusted Medicare payments for emergency-room services since 2006, due to upcoding at the highest level, partly related to physicians’ ability to overstate the claim for service provided, which is correctible by improvements I discuss below (NY Times, 9/22/2012). The solution still has another built-in step that requires quality control of both the input and the output, achievable today. This also comes at a time of nationwide implementation of ICD-10 to replace ICD-9 coding.

US medical groups’ adoption of EHR (2005) (Photo credit: Wikipedia)

The first contribution, by Robert S. Didner ("Decision Making in the Clinical Setting"), concludes that gathering information carries large costs while reimbursements for the activities provided have decreased, to the detriment of measured outcomes. He suggests that this data can be gathered and reformatted to improve its value in the clinical setting by leading to decisions with optimal outcomes, and he outlines how this can be done.

The second is a discussion by Emergency Medicine physicians Thomas A. Naegele and Harry P. Wetzler, who have developed a Foresighted Practice Guideline (FPG) ("The Foresighted Practice Guideline Model: A Win-Win Solution"). They focus on collecting data from similar patients, their interventions, and treatments to better understand the value of alternative courses of treatment. Using the FPG model will enable physicians to elevate their practice to a higher level, and they will have hard information on what works. These two views are more than 10 years old, and they are complementary.

Didner points out that no one sequence of tests and questions can be optimal for all presenting clusters. Even as data and test results are acquired, the optimal sequence of information gathering changes, depending on the gathered information. This creates the dilemma of how to collect clinical data. Currently, the way information is requested and presented does not support the way decisions are made. Decisions are made in a “path-dependent” way, influenced by the sequence in which the components are considered. Ideally, one would need a separate form for each combination of presenting history and symptoms prior to ordering tests, which is unmanageable. The blank-paper format is no better, as the data is not collected in the way it would be used, and it falls into separate clusters (vital signs; lab work, itself divided into CBC, chemistry panel, microbiology, immunology, blood bank, and special tests). Improvements have been made in the graphical presentation of a series of tests. Didner presents another means of gathering data in machine-manipulable form that improves the expected outcomes. The basis for this model is that at any stage of testing and information gathering there is an expected outcome from the process, coupled with a metric, or hierarchy of values, to determine the relative desirability of the possible outcomes.

He creates a value hierarchy:

  1. Minimize the likelihood that a treatable, life-threatening disorder is not treated.
  2. Minimize the likelihood that a treatable, permanently-disabling or disfiguring disorder is not treated.
  3. Minimize the likelihood that a treatable, discomfort causing disorder is not treated.
  4. Minimize the likelihood that a risky procedure (treatment or diagnostic) is inappropriately administered.
  5. Minimize the likelihood that a discomfort causing procedure is inappropriately administered.
  6. Minimize the likelihood that a costly procedure is inappropriately administered.
  7. Minimize the time of diagnosing and treating the patient.
  8. Minimize the cost of diagnosing and treating the patient.

In reference to a way of minimizing the number, time, and cost of tests, he determines that the optimum sequence could be found using Claude Shannon’s information theory (a sketch of this idea follows below). As to a hierarchy of outcome values, he refers to the QALY scale as a starting point. At any point where a determination is made, disparate information has to be brought together, such as weight, blood pressure, and cholesterol. He points out, in addition, that the way clinical information is organized is not optimal for displaying information to enhance human cognitive performance in decision support. Furthermore, he takes the limit of short-term memory as roughly 10 chunks of information at any time, and compares recall of the positions of chess pieces on the board with the performance of a grand master when the pieces are in an order commensurate with a “line of attack”. The information has to be ordered in the way it is to be used! By presenting the information used for a particular decision component in a compact space, the load on short-term memory is reduced, and there is less strain in searching for the relevant information.
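Didner’s information-theoretic suggestion can be sketched directly: choose the next test to maximize the expected reduction in Shannon entropy over the current differential diagnosis. The priors and test characteristics below are invented for illustration.

```python
import math

# Sketch of test selection by expected Shannon information gain: pick
# the test expected to shrink the differential diagnosis the most.
# Priors and test characteristics are invented for illustration.

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

def expected_information_gain(prior, p_pos_given_dx):
    """prior[i] = P(diagnosis i); p_pos_given_dx[i] = P(test+ | dx i)."""
    p_pos = sum(p * q for p, q in zip(prior, p_pos_given_dx))
    post_pos = [p * q / p_pos for p, q in zip(prior, p_pos_given_dx)]
    post_neg = [p * (1 - q) / (1 - p_pos) for p, q in zip(prior, p_pos_given_dx)]
    expected_post = p_pos * entropy(post_pos) + (1 - p_pos) * entropy(post_neg)
    return entropy(prior) - expected_post

prior = [0.5, 0.3, 0.2]                  # three competing diagnoses
tests = {
    "troponin": [0.90, 0.10, 0.05],      # discriminates well here
    "d-dimer":  [0.60, 0.70, 0.50],      # positive in nearly everyone
}
best = max(tests, key=lambda t: expected_information_gain(prior, tests[t]))
print(best)                              # -> troponin, with these numbers
```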

He creates a Table to illustrate the point.

Correlation of weight with other cardiac risk factors

Chol        0.759384
HDL        -0.53908
LDL         0.177297
BP-syst     0.424728
BP-dia      0.516167
Triglyc     0.637817

The task of the information system designer is to provide or request the right information, in the best form, at each stage of the procedure.
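A table like the one above is cheap to generate once the record’s risk-factor data sit in one machine-readable frame; a sketch with synthetic data follows (column names mirror the table, and every number is made up).

```python
import pandas as pd

# Synthetic risk-factor data; one correlation pass reproduces the kind
# of compact display shown in the table above.
df = pd.DataFrame({
    "weight":  [70, 95, 82, 110, 65, 88],
    "chol":    [180, 240, 210, 260, 170, 230],
    "hdl":     [60, 38, 50, 35, 65, 42],
    "bp_syst": [118, 145, 130, 150, 115, 138],
})
print(df.corr()["weight"].drop("weight").round(3))
```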

The FPG concept as deployed by Naegele and Wetzler is a model for the design of a more effective health record that has already shown substantial proof of concept in the emergency room setting. In principle, every clinical encounter is viewed as a learning experience that requires the collection of data, learning from similar patients, and comparing the value of alternative courses of treatment. The framework for standard data collection is the FPG model. The FPG is distinguished from hindsighted guidelines, which are utilized by utilization and peer review organizations. Over time, the data form patient clusters and enable the physician to function at a higher level.

Hypothesis construction is experiential, and hypothesis generation and testing are required to go from art to science in the complex practice of medicine. In every encounter there are three components: patient, process, and outcome. The key to the process is to collect data on patients, processes, and outcomes in a standard way. The main problem with a large portion of the chart is that the description is not uniform, and this is not fully resolved even with good natural language processing. The standard words and phrases that may be used for a particular complaint or condition constitute a guideline. This type of “guided documentation” is a step toward a guided practice. It enables physicians to gather data on patients, processes, and outcomes of care in routine settings, where they can be reviewed and updated. This is a higher level of methodology than basing guidelines on “consensus and opinion”.
When Lee Goldman et al. created the guideline for classifying chest pain in the emergency room, characterizing the chest pain was problematic. In dealing with this, he determined that if the chest pain was “stabbing”, or if it radiated to the right foot, heart attack was excluded.

The IOM is intensely committed to practice guidelines for care. The guidelines are the databases of the science of medical decision making and disposition processing, and they are related to process flow. However, the hindsighted, or retrospective, approach is diagnosis or procedure oriented. Hindsighted practice guidelines (HPGs) are the tool used in utilization review. The FPG model focuses on the physician-patient encounter and is problem oriented. We can go back further and remember the contribution by Lawrence Weed to the “structured medical record”.
Physicians today use an FPG framework in looking at a problem or pathology (especially in pathology, which extends classification by use of biomarker staining). The Standard Patient File Format (SPPF), developed by Weed, includes: 1. patient demographics; 2. front of the chart; 3. subjective; 4. objective; 5. assessment/diagnosis; 6. plan; 7. back of the chart. The FPG retains the structure of the SPPF. All of the words and phrases in the FPG are the database for the problem or condition. The current construct of the chart, by contrast, is uninviting: nurses notes, medications, lab results, radiology, imaging.

Realtime Clinical Expert Support and Validation System
Gil David and Larry Bernstein have developed, in consultation with Prof. Ronald Coifman, in the Yale University Applied Mathematics Program, a software system that is the equivalent of an intelligent Electronic Health Records Dashboard that provides empirical medical reference and suggests quantitative diagnostics options.

The introduction of a DASHBOARD has allowed a presentation of drug reactions, allergies, primary and secondary diagnoses, and critical information about any patient, for the caregiver needing access to the record. The advantage of this innovation is obvious. The startup problem is what information is presented and how it is displayed, which is a source of variability and a key to its success. It is also imperative that the extraction of data from disparate sources will, in the long run, further improve the diagnostic process: for instance, the finding of ST depression on EKG coincident with an increase of a cardiac biomarker (troponin). Through the application of geometric clustering analysis, the data may be interpreted in a more sophisticated fashion in order to create a more reliable and valid knowledge-based opinion. In the hemogram one can view data reflecting the characteristics of a broad spectrum of medical conditions: characteristics expressed as measurements of size, density, and concentration, resulting in more than a dozen composite variables, including the mean corpuscular volume (MCV), mean corpuscular hemoglobin concentration (MCHC), mean corpuscular hemoglobin (MCH), total white cell count (WBC), total lymphocyte count, neutrophil count (mature granulocyte count and bands), monocytes, eosinophils, basophils, platelet count, mean platelet volume (MPV), blasts, reticulocytes, and platelet clumps, as well as other features of classification. This has been described in a previous post.

It is beyond comprehension that a better construct has not been created for common use.

Ruts W, De Deyne S, Ameel E, Vanpaemel W, Verbeemen T, and Storms G. Dutch norm data for 13 semantic categories and 338 exemplars. Behavior Research Methods, Instruments, & Computers 2004; 36(3): 506–515.
De Deyne S, Verheyen S, Ameel E, Vanpaemel W, Dry MJ, Voorspoels W, and Storms G. Exemplar by feature applicability matrices and other Dutch normative data for semantic concepts. Behavior Research Methods 2008; 40(4): 1030–1048.

Landauer TK, Ross BH, & Didner RS (1979). Processing visually presented single words: A reaction time analysis [Technical memorandum]. Murray Hill, NJ: Bell Laboratories. Lewandowsky S (1991).

Weed L. Automation of the problem oriented medical record. NCHSR Research Digest Series DHEW. 1977;(HRA)77-3177.

Naegele TA. Letter to the Editor. Amer J Crit Care 1993;2(5):433.

The potential contribution of informatics to healthcare is more than currently estimated

http://pharmaceuticalintelligence.com/2013/02/18/the-potential-contribution-of-informatics-to-healthcare-is-more-than-currently-estimated/

The estimated improvement in cost savings and diagnostic accuracy from healthcare informatics is substantial. I have written about the unused potential that we have not yet seen. In short, there is justification for substantial investment of resources in this, as has been proposed as a critical goal. Does this mean a reduction in staffing? I wouldn’t look at it that way. The two huge benefits that would accrue are:

  1. workflow efficiency, reducing stress and facilitating decision-making.
  2. scientifically, primary knowledge-based decision support by well-developed algorithms that have been at the heart of computational genomics.
 Can computers save health care? IU research shows lower costs, better outcomes

Cost per unit of outcome was $189, versus $497 for treatment as usual

BLOOMINGTON, Ind. — New research from Indiana University has found that machine learning — the same computer science discipline that helped create voice recognition systems, self-driving cars and credit card fraud detection systems — can drastically improve both the cost and quality of health care in the United States.
 Physicians using an artificial intelligence framework that predicts future outcomes would have better patient outcomes while significantly lowering health care costs.
Using an artificial intelligence framework combining Markov Decision Processes and Dynamic Decision Networks, IU School of Informatics and Computing researchers Casey Bennett and Kris Hauser show how simulation modeling that understands and predicts the outcomes of treatment could
  • reduce health care costs by over 50 percent while also
  • improving patient outcomes by nearly 50 percent.
The work by Hauser, an assistant professor of computer science, and Ph.D. student Bennett improves upon their earlier work that
  • showed how machine learning could determine the best treatment at a single point in time for an individual patient.
By using a new framework that employs sequential decision-making, the previous single-decision research
  • can be expanded into models that simulate numerous alternative treatment paths out into the future;
  • maintain beliefs about patient health status over time even when measurements are unavailable or uncertain; and
  • continually plan/re-plan as new information becomes available.

In other words, it can “think like a doctor.”  (Perhaps better because of the limitation in the amount of information a bright, competent physician can handle without error!)

“The Markov Decision Processes and Dynamic Decision Networks enable the system to deliberate about the future, considering all the different possible sequences of actions and effects in advance, even in cases where we are unsure of the effects,” Bennett said.  Moreover, the approach is non-disease-specific — it could work for any diagnosis or disorder, simply by plugging in the relevant information.  (This actually raises the question of what the information input is, and the cost of inputting.)
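For readers unfamiliar with the formalism, the sketch below runs value iteration over a deliberately tiny two-state, two-treatment MDP, to show what “considering all the different possible sequences of actions and effects in advance” means. Every state, probability, cost, and penalty here is invented; the IU system combined Markov Decision Processes with Dynamic Decision Networks over real clinical data, not this toy model.

```python
# Toy value iteration over a two-state treatment MDP. All numbers are
# invented for illustration and carry no clinical meaning.
STATES = ("sick", "well")
ACTIONS = ("treat_A", "treat_B")
P = {  # P[state][action] = [(next_state, probability), ...]
    "sick": {"treat_A": [("well", 0.7), ("sick", 0.3)],
             "treat_B": [("well", 0.3), ("sick", 0.7)]},
    "well": {"treat_A": [("well", 0.9), ("sick", 0.1)],
             "treat_B": [("well", 0.95), ("sick", 0.05)]},
}
COST = {"treat_A": 120.0, "treat_B": 40.0}   # cost per treatment step
REWARD = {"sick": -200.0, "well": 0.0}       # penalty for staying sick
GAMMA = 0.95                                 # discount factor

def q(s, a, V):
    """Expected discounted value of taking action a in state s."""
    return REWARD[s] - COST[a] + GAMMA * sum(p * V[s2] for s2, p in P[s][a])

V = {s: 0.0 for s in STATES}
for _ in range(500):                         # value iteration to convergence
    V = {s: max(q(s, a, V) for a in ACTIONS) for s in STATES}

policy = {s: max(ACTIONS, key=lambda a: q(s, a, V)) for s in STATES}
print(policy)  # -> {'sick': 'treat_A', 'well': 'treat_B'} with these numbers
```

With these invented numbers, the plan prefers the expensive but more effective treatment while the patient is sick and the cheap maintenance option once well, which is the flavor of "simulating numerous alternative treatment paths out into the future."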
The new work addresses three vexing issues related to health care in the U.S.:
  1. rising costs expected to reach 30 percent of the gross domestic product by 2050;
  2. a quality of care where patients receive correct diagnosis and treatment less than half the time on a first visit;
  3. and a lag time of 13 to 17 years between research and practice in clinical care.


Framework for Simulating Clinical Decision-Making

“We’re using modern computational approaches to learn from clinical data and develop complex plans through the simulation of numerous, alternative sequential decision paths,” Bennett said. “The framework here easily out-performs the current treatment-as-usual, case-rate/fee-for-service models of health care.”  (see the above)
Bennett is also a data architect and research fellow with Centerstone Research Institute, the research arm of Centerstone, the nation’s largest not-for-profit provider of community-based behavioral health care. The two researchers had access to clinical data, demographics and other information on over 6,700 patients who had major clinical depression diagnoses, of which about 65 to 70 percent had co-occurring chronic physical disorders like diabetes, hypertension and cardiovascular disease.  Using 500 randomly selected patients from that group for simulations, the two
  • compared actual doctor performance and patient outcomes against
  • sequential decision-making models

using real patient data.

They found a great disparity in the cost per unit of outcome change:
  1. the artificial intelligence model’s cost of $189, compared to the treatment-as-usual cost of $497; and
  2. a 30 to 35 percent increase in patient outcomes with the AI approach.
Bennett said that “tweaking certain model parameters could enhance the outcome advantage to about 50 percent more improvement at about half the cost.”
While most medical decisions are based on case-by-case, experience-based approaches, there is a growing body of evidence that complex treatment decisions might be effectively improved by AI modeling. Hauser said, “Modeling lets us see more possibilities out to a further point – because they just don’t have all of that information available to them.” (Even then, the other issue is the processing of the information presented.)
Using the growing availability of electronic health records, health information exchanges, large public biomedical databases and machine learning algorithms, the researchers believe the approach could serve as the basis for personalized treatment through integration of diverse, large-scale data passed along to clinicians at the time of decision-making for each patient. Centerstone alone, Bennett noted, has access to health information on over 1 million patients each year. “Even with the development of new AI techniques that can approximate or even surpass human decision-making performance, we believe that the most effective long-term path could be combining artificial intelligence with human clinicians,” Bennett said. “Let humans do what they do well, and let machines do what they do well. In the end, we may maximize the potential of both.”
“Artificial Intelligence Framework for Simulating Clinical Decision-Making: A Markov Decision Process Approach” was published recently in Artificial Intelligence in Medicine. The research was funded by the Ayers Foundation, the Joe C. Davis Foundation and Indiana University.
IBM Watson Finally Graduates Medical School
It’s been more than a year since IBM’s Watson computer appeared on Jeopardy and defeated several of the game show’s top champions. Since then the supercomputer has been furiously “studying” the healthcare literature in the hope that it can beat a far more hideous enemy: the 400-plus biomolecular puzzles we collectively refer to as cancer.
Anomaly Based Interpretation of Clinical and Laboratory Syndromic Classes

Larry H Bernstein, MD, Gil David, PhD, Ronald R Coifman, PhD.  Program in Applied Mathematics, Yale University, Triplex Medical Science.

Statement of Inferential Second Opinion
Realtime Clinical Expert Support and Validation System

Gil David and Larry Bernstein have developed, in consultation with Prof. Ronald Coifman, in the Yale University Applied Mathematics Program, a software system that is the equivalent of an intelligent Electronic Health Records Dashboard that provides
  • empirical medical reference and suggests quantitative diagnostics options.

Background

The current design of the Electronic Medical Record (EMR) is a linear presentation of portions of the record by
  • services, by
  • diagnostic method, and by
  • date, to cite examples.

This allows perusal through a graphical user interface (GUI) that partitions the information or necessary reports in a workstation entered by keying to icons.  This requires that the medical practitioner finds

  • the history,
  • medications,
  • laboratory reports,
  • cardiac imaging and EKGs, and
  • radiology
in different workspaces.  The introduction of a DASHBOARD has allowed a presentation of
  • drug reactions,
  • allergies,
  • primary and secondary diagnoses, and
  • critical information about any patient, for the caregiver needing access to the record.
 The advantage of this innovation is obvious.  The startup problem is what information is presented and how it is displayed, which is a source of variability and a key to its success.

Proposal

We are proposing an innovation that supersedes the main design elements of a DASHBOARD and
  • utilizes the conjoined syndromic features of the disparate data elements.
So the important determinant of the success of this endeavor is that it facilitates both
  1. the workflow and
  2. the decision-making process
  • with a reduction of medical error.
This has become extremely important and urgent in the 10 years since the publication of “To Err is Human”, and in light of the newly published finding that reduction of error is as elusive as reduction in cost. Whether such efforts are counterproductive when approached in the wrong way may be subject to debate.
We initially confine our approach to laboratory data because it is collected on all patients, ambulatory and acutely ill, because the data is objective and quality controlled, and because
  • laboratory combinatorial patterns emerge with the development and course of disease. Work continues on extending these capabilities with model datasets and sufficient data.
It is true that the extraction of data from disparate sources will, in the long run, further improve this process. Consider, for instance, the finding of ST depression on EKG coincident with an increase of a cardiac biomarker (troponin) above a level determined by receiver operating characteristic (ROC) analysis, particularly in the absence of substantially reduced renal function.
The conversion of hematology based data into useful clinical information requires the establishment of problem-solving constructs based on the measured data. Traditionally this has been accomplished by an intuitive interpretation of the data by the individual clinician. Through the application of geometric clustering analysis, the data may be interpreted in a more sophisticated fashion in order to create a more reliable and valid knowledge-based opinion.
The most commonly ordered test used for managing patients worldwide is the hemogram that often incorporates the review of a peripheral smear.  While the hemogram has undergone progressive modification of the measured features over time the subsequent expansion of the panel of tests has provided a window into the cellular changes in the production, release or suppression of the formed elements from the blood-forming organ to the circulation.  In the hemogram one can view data reflecting the characteristics of a broad spectrum of medical conditions.
Progressive modification of the measured features of the hemogram has delineated characteristics expressed as measurements of
  • size,
  • density, and
  • concentration,
resulting in more than a dozen composite variables, including the
  1. mean corpuscular volume (MCV),
  2. mean corpuscular hemoglobin concentration (MCHC),
  3. mean corpuscular hemoglobin (MCH),
  4. total white cell count (WBC),
  5. total lymphocyte count,
  6. neutrophil count (mature granulocyte count and bands),
  7. monocytes,
  8. eosinophils,
  9. basophils,
  10. platelet count, and
  11. mean platelet volume (MPV),
  12. blasts,
  13. reticulocytes and
  14. platelet clumps,
  15. perhaps the percent immature neutrophils (not bands)
  16. as well as other features of classification.
The use of such variables combined with additional clinical information including serum chemistry analysis (such as the Comprehensive Metabolic Profile (CMP)) in conjunction with the clinical history and examination completes the traditional problem-solving construct. The intuitive approach applied by the individual clinician is limited, however,
  1. by experience,
  2. memory and
  3. cognition.
The application of rules-based, automated problem solving may provide a more reliable and valid approach to the classification and interpretation of the data used to determine a knowledge-based clinical opinion.
The classification of the available hematologic data in order to formulate a predictive model may be accomplished through mathematical models that offer a more reliable and valid approach than the intuitive knowledge-based opinion of the individual clinician.  The exponential growth of knowledge since the mapping of the human genome has been enabled by parallel advances in applied mathematics that have not been a part of traditional clinical problem solving.  In a univariate universe the individual has significant control in visualizing data, because unlike data can be identified by methods that rely on distributional assumptions.  As the complexity of statistical models has increased, involving the use of several predictors for different clinical classifications, the dependencies have become less clear to the individual.  The powerful statistical tools now available are not dependent on distributional assumptions, and allow classification and prediction in a way that cannot be achieved by the individual clinician intuitively. Contemporary statistical modeling has a primary goal of finding an underlying structure in studied data sets.
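To make the contrast with distribution-dependent methods concrete, the sketch below trains a decision tree, a classifier that makes no distributional assumptions, on the same kind of multivariate red cell indices; the data and labels are invented for illustration.

```python
# A minimal sketch of distribution-free classification, assuming
# scikit-learn; features and labels are synthetic and illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Features: MCV, MCHC, MCH; classes: 0 microcytic, 1 normocytic, 2 macrocytic
X = np.vstack([
    rng.normal([70, 30, 22], 3, (100, 3)),
    rng.normal([90, 34, 30], 3, (100, 3)),
    rng.normal([110, 33, 36], 3, (100, 3)),
])
y = np.repeat([0, 1, 2], 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)
print(f"held-out accuracy: {tree.score(X_te, y_te):.2f}")
```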
In the diagnosis of anemia the variables MCV, MCHC, and MCH classify the disease process into microcytic, normocytic, and macrocytic categories.  Further consideration of
  • the proliferation of marrow precursors,
  • the domination of a cell line, and
  • features of suppression of hematopoiesis

provide a two-dimensional model.  Several other possible dimensions are created by consideration of

  • the maturity of the circulating cells.
The development of an evidence-based inference engine that can substantially interpret the data at hand and convert it in real time to a “knowledge-based opinion” may improve clinical problem solving by incorporating multiple complex clinical features as well as duration of onset into the model.
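A minimal rules-based fragment of such an inference engine might begin with the MCV alone. The cutoffs below (about 80 and 100 fL) are the commonly cited reference points, used here purely for illustration; a deployed system would use locally validated thresholds.

```python
# A minimal rules-based classifier over the red cell indices; the
# ~80 and ~100 fL cutoffs are illustrative defaults, not validated
# decision points from this proposal.
def classify_anemia(mcv_fl: float, low: float = 80.0, high: float = 100.0) -> str:
    """Map mean corpuscular volume to a morphologic anemia category."""
    if mcv_fl < low:
        return "microcytic"
    if mcv_fl > high:
        return "macrocytic"
    return "normocytic"

for mcv in (68.0, 88.0, 112.0):
    print(mcv, "->", classify_anemia(mcv))
```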
An example of a difficult area for clinical problem solving is found in the diagnosis of SIRS and associated sepsis.  SIRS (and associated sepsis) is a costly diagnosis in hospitalized patients.  Failure to diagnose sepsis in a timely manner creates a potential financial and safety hazard.  The early diagnosis of SIRS/sepsis is made by the application of defined criteria (temperature, heart rate, respiratory rate and WBC count) by the clinician.  The application of those clinical criteria, however, defines the condition after it has developed and has not provided a reliable method for the early diagnosis of SIRS.  The early diagnosis of SIRS may be enhanced by the measurement of proteomic biomarkers, including transthyretin, C-reactive protein and procalcitonin.  Immature granulocyte (IG) measurement has been proposed as a more readily available indicator of the presence of
  • granulocyte precursors (left shift).
The use of such markers, obtained by automated systems in conjunction with innovative statistical modeling, may provide a mechanism to enhance workflow and decision making.
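For illustration, the defined SIRS criteria mentioned above translate directly into a rule. The cutoffs below follow the widely published consensus definition (two or more criteria met); any production system would still need clinical validation.

```python
# A sketch of the consensus SIRS screen over temperature, heart rate,
# respiratory rate, and WBC count; cutoffs follow the widely published
# definition, with >= 2 criteria conventionally declaring SIRS.
def sirs_criteria_met(temp_c, hr_bpm, rr_per_min, wbc_k_per_ul, band_pct=0.0):
    """Return (count, flag): number of SIRS criteria met, and whether >= 2."""
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,   # fever or hypothermia
        hr_bpm > 90,                      # tachycardia
        rr_per_min > 20,                  # tachypnea
        wbc_k_per_ul > 12.0 or wbc_k_per_ul < 4.0 or band_pct > 10.0,  # left shift
    ]
    count = sum(criteria)
    return count, count >= 2

print(sirs_criteria_met(38.6, 104, 24, 13.5))  # -> (4, True)
```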
An accurate classification based on the multiplicity of available data can be provided by an innovative system that utilizes the conjoined syndromic features of disparate data elements.  Such a system has the potential to facilitate both the workflow and the decision-making process with an anticipated reduction of medical error.

This study is only an extension of our approach to repairing a longstanding problem in the construction of the many-sided electronic medical record (EMR).  On the one hand, past history combined with the development of Diagnosis Related Groups (DRGs) in the 1980s have driven the technology development in the direction of “billing capture”, which has been a focus of epidemiological studies in health services research using data mining.

In a classic study carried out at Bell Laboratories, Didner found that information technologies reflect the view of the creators, not the users, and that Front-to-Back Design (R Didner) is needed.  He expresses the view:

“Pre-printed forms are much more amenable to computer-based storage and processing, and would improve the efficiency with which the insurance carriers process this information.  However, pre-printed forms can have a rather severe downside.  By providing pre-printed forms that a physician completes to record the diagnostic questions asked, as well as tests and results, the sequence of tests and questions might be altered from that which a physician would ordinarily follow.  This sequence change could improve outcomes in rare cases, but it is more likely to worsen outcomes.”
Decision Making in the Clinical Setting.   Robert S. Didner
 A well-documented problem in the medical profession is the level of effort dedicated to administration and paperwork necessitated by health insurers, HMOs and other parties (ref).  This effort is currently estimated at 50% of a typical physician’s practice activity.  Obviously this contributes to the high cost of medical care.  A key element in the cost/effort composition is the retranscription of clinical data after the point at which it is collected.  Costs would be reduced, and accuracy improved, if the clinical data could be captured directly at the point it is generated, in a form suitable for transmission to insurers, or machine transformable into other formats.  Such data capture could also be used to improve the form and structure of how this information is viewed by physicians, and to form the basis of a more comprehensive database linking clinical protocols to outcomes, which could improve knowledge of this relationship and hence clinical outcomes.
  How we frame our expectations is so important that
  • it determines the data we collect to examine the process.
In the absence of data to support an assumed benefit, there is no proof of validity, whatever the cost.   This has meaning for
  • hospital operations, for
  • nonhospital laboratory operations, for
  • companies in the diagnostic business, and
  • for planning of health systems.
In 1983, a vision for creating the EMR was introduced by Lawrence Weed and others.  This is expressed by McGowan and Winstead-Fry.
J J McGowan and P Winstead-Fry. Problem Knowledge Couplers: reengineering evidence-based medicine through interdisciplinary development, decision support, and research.
Bull Med Libr Assoc. 1999 October; 87(4): 462–470.   PMCID: PMC226622

[Image: Example of Markov Decision Process (MDP) transition automaton (Photo credit: Wikipedia)]

[Image: Control loop of a Markov Decision Process (Photo credit: Wikipedia)]

[Image: IBM’s Watson computer, Yorktown Heights, NY (Photo credit: Wikipedia)]

[Image: Increasing decision stakes and systems uncertainties entail new problem solving strategies. Image based on a diagram by Funtowicz, S. and Ravetz, J. (1993) “Science for the post-normal age” Futures 25:735–55 (http://dx.doi.org/10.1016/0016-3287(93)90022-L). (Photo credit: Wikipedia)]

Read Full Post »

MYBPC3 gene and the heart

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

MYBPC3 myosin binding protein C, cardiac [ Homo sapiens (human) ]

http://www.ncbi.nlm.nih.gov/gene/4607

Official symbol – MYBPC3 (provided by HGNC)
Official full name – myosin binding protein C, cardiac (provided by HGNC)
Primary source – HGNC:HGNC:7551
See related – Ensembl:ENSG00000134571; HPRD:02980; MIM:600958; Vega:OTTHUMG00000166986
Gene type – protein coding
RefSeq status – REVIEWED
Organism – Homo sapiens
Lineage – Eukaryota; Metazoa; Chordata; Craniata; Vertebrata; Euteleostomi; Mammalia; Eutheria; Euarchontoglires; Primates; Haplorrhini; Catarrhini; Hominidae; Homo
Also known as – FHC; CMH4; CMD1MM; LVNC10; MYBP-C

Summary – MYBPC3 encodes the cardiac isoform of myosin-binding protein C. Myosin-binding protein C is a myosin-associated protein found in the cross-bridge-bearing zone (C region) of A bands in striated muscle. MYBPC3, the cardiac isoform, is expressed exclusively in heart muscle. Regulatory phosphorylation of the cardiac isoform in vivo by cAMP-dependent protein kinase (PKA) upon adrenergic stimulation may be linked to modulation of cardiac contraction. Mutations in MYBPC3 are one cause of familial hypertrophic cardiomyopathy. [provided by RefSeq, Jul 2008]

What is the official name of the MYBPC3 gene?

The official name of this gene is “myosin binding protein C, cardiac.”

MYBPC3 is the gene’s official symbol. The MYBPC3 gene is also known by other names, listed below.

Read more about gene names and symbols on the About page.

 

What is the normal function of the MYBPC3 gene?

The MYBPC3 gene provides instructions for making the cardiac myosin binding protein C (cardiac MyBP-C), which is found in heart (cardiac) muscle cells. In these cells, cardiac MyBP-C is associated with a structure called the sarcomere, which is the basic unit of muscle contraction. Sarcomeres are made up of thick and thin filaments. The overlapping thick and thin filaments attach to each other and release, which allows the filaments to move relative to one another so that muscles can contract. Regular contractions of cardiac muscle pump blood to the rest of the body.

In cardiac muscle sarcomeres, cardiac MyBP-C attaches to thick filaments and keeps them from being broken down. Cardiac MyBP-C has chemical groups called phosphate groups attached to it; when the phosphate groups are removed, cardiac MyBP-C is broken down, followed by the breakdown of the proteins of the thick filament. Cardiac MyBP-C also regulates the rate of muscle contraction, although the mechanism is not fully understood.

 

Does the MYBPC3 gene share characteristics with other genes?

The MYBPC3 gene belongs to a family of genes called fibronectin type III domain containing.  It also belongs to a family of genes called immunoglobulin superfamily, I-set domain containing, and to a family of genes called MYBP (myosin binding proteins).

A gene family is a group of genes that share important characteristics. Classifying individual genes into families helps researchers describe how genes are related to each other. For more information, see What are gene families? in the Handbook.

http://ghr.nlm.nih.gov/gene/MYBPC3

 

Aliases for MYBPC3 Gene

http://www.genecards.org/cgi-bin/carddisp.pl

  • Myosin Binding Protein C, Cardiac
  • C-Protein, Cardiac Muscle Isoform
  • CMD1MM
  • LVNC10
  • CMH4
  • Myosin-Binding Protein C, Cardiac-Type
  • Myosin-Binding Protein C, Cardiac
  • Cardiac MyBP-C
  • MYBP-C
  • FHC

 

GO – Molecular function

http://www.uniprot.org/uniprot/Q14896

GO – Biological process

Keywords – Molecular function

Muscle protein

Keywords – Biological process

Cell adhesion

Keywords – Ligand

Actin-binding, Metal-binding, Zinc

Enzyme and pathway databases

 

Organization and Sequence of Human Cardiac Myosin Binding Protein C Gene (MYBPC3) and Identification of Mutations Predicted to Produce Truncated Proteins in Familial Hypertrophic Cardiomyopathy

Lucie Carrier, Gisele Bonne, Ellen Bahrend, Bing Yu, Pascale Richard, Florence Niel, Bernard Hainque, et al.

Circulation Research. 1997; 80: 427-434   http://dx.doi.org/10.1161/01.res.0000435859.24609.b3

Cardiac myosin binding protein C (MyBP-C) is a sarcomeric protein belonging to the intracellular immunoglobulin superfamily. Its function is uncertain, but for a decade evidence has existed for both structural and regulatory roles. The gene encoding cardiac MyBP-C (MYBPC3) in humans is located on chromosome 11p11.2, and mutations have been identified in this gene in unrelated families with familial hypertrophic cardiomyopathy (FHC). Detailed characterization of the MYBPC3 gene is essential for studies on gene regulation, analysis of the role of MyBP-C in cardiac contraction through the use of recombinant DNA technology, and mutational analyses of FHC. The organization of human MYBPC3 and screening for mutations in a panel of French families with FHC were established using polymerase chain reaction, single-strand conformation polymorphism, and sequencing. The MYBPC3 gene comprises >21 000 base pairs and contains 35 exons. Two exons are unusually small in size, 3 bp each. We found six new mutations associated with FHC in seven unrelated French families. Four of these mutations are predicted to produce truncated cardiac MyBP-C polypeptides. The two others should each produce two aberrant proteins, one truncated and one mutated. The present study provides the first organization and sequence for an MyBP-C gene. The mutations reported here and previously in MYBPC3 result in aberrant transcripts that are predicted to encode significantly truncated cardiac MyBP-C polypeptides. This spectrum of mutations differs from the ones previously observed in other disease genes causing FHC. Our data strengthen the functional importance of MyBP-C in the regulation of cardiac work and provide the basis for further studies.

Cardiac MyBP-C is a member of a family comprising isoforms specific for slow-skeletal, fast-skeletal, and cardiac muscles. The skeletal isoforms were initially described in 1971 [1] and later came to be recognized as proteins with specific myosin- and titin-binding properties located in the A bands of the thick filaments of all vertebrate cross-striated muscle and forming a series of seven to nine transverse stripes, 43 nm apart, in the crossbridge-bearing region. [2-5] Subsequent cloning of the three isoforms showed them to belong to the intracellular immunoglobulin superfamily and to share a conserved domain pattern consisting of IgI set domains and fn-3 domains. [6-11]

Comparison of the cardiac and the skeletal MyBP-C isoform sequences reveals three distinct regions that are specific to the cardiac isoform: the N-terminal domain C0 IgI containing 101 residues, the MyBP-C motif (a 105-residue stretch linking the C1 and C2 IgI domains), and a 28-residue loop inserted in the C5 IgI domain. [7,12,13] The MyBP-C motif is not specific to the cardiac isoform, but the alignment of skeletal and cardiac sequences revealed the addition of a nine-residue loop in the cardiac variant, which is the key substrate site for phosphorylation by both protein kinase A and a calmodulin-dependent protein kinase associated with the native protein. [7] As for the 28-residue loop, it is strictly cardiac specific. [14,15] The major myosin-binding site of MyBP-C resides in the C-terminal C10 IgI domain and is mainly restricted to the last 102 amino acids. [16-18] The titin-binding site is also located in the C-terminal region, spanning the C8 to C10 IgI domains of the molecule. [6,13]

The function of MyBP-C is uncertain, but for a decade evidence has existed to indicate both structural and regulatory roles. It should be stressed, however, that most studies were performed on skeletal muscles and that very little functional data exist for cardiac muscle. Several investigators have shown that MyBP-C modulates in vitro the shape and the length of sarcomeric thick filaments [19-21] and that depending on ionic strength and the molar ratio of actin and myosin in solution, the addition of MyBP-C can modulate the actin-activated ATPase activity of skeletal and cardiac myosins. [3,22,23] Partial extraction of the MyBP-C from rat skinned cardiac myocytes and rabbit skeletal muscle fibers alters Ca2+-sensitive tension, supporting the view that contractile function is affected by MyBP-C. [24] This view was very recently strengthened by the elegant studies of Weisberg and Winegrad. [25] These authors showed that phosphorylation of cardiac MyBP-C alters myosin crossbridges in native thick filaments isolated from rat ventricles and suggested that MyBP-C can modify force production in activated cardiac muscles.

The gene encoding the cardiac isoform in humans (MYBPC3) was assigned to the chromosomal location 11p11.2 [7] in a region where we had identified the CMH4 disease locus in FHC. [26] Recently, three mutations in MYBPC3 have been identified in unrelated families with FHC by our group [27] and others. [28] FHC is a genetically and phenotypically heterogeneous disease, transmitted as an autosomal-dominant trait. None of the previous hypotheses of the pathophysiological mechanisms would have predicted that defects in sarcomeric protein genes could be a possible molecular basis for the disease. The results of molecular genetic studies have nevertheless shown that many forms of the disease involve mutations in genes encoding sarcomeric proteins (for reviews, see [29-31]), and the findings that MYBPC3 is one of these disease genes are consistent with the view that cardiac MyBP-C may play a more important role in the regulation of cardiac contraction than was previously thought.

Detailed characterization of the MYBPC3 gene is essential for studies of gene regulation, analysis of the role of cardiac MyBP-C in the sarcomere structure and function through the use of recombinant DNA technology, and, finally, mutational analyses and further studies in FHC. In the present work, we have determined the organization and sequence of the human MYBPC3 gene and shown it to exceed 21 000 bp in size and to contain 35 exons, out of which 34 are coding. We also report that six new mutations in the MYBPC3 gene are associated with FHC in seven unrelated French families. Four of these mutations are predicted to produce truncated cardiac MyBP-C polypeptides in these families. The two others should each produce two aberrant proteins, one truncated and the other mutated or deleted.

 

Screening the Human MYBPC3 Gene for Mutations

The primers were constructed on the basis of flanking intron sequences and were used to amplify each exon (see Table 1). The touchdown PCR was performed (as described above according to the conditions reported in Table 1) on genomic DNA from unrelated FHC patients. For SSCP, PCR products were denatured for 5 minutes at 96 degrees C in a standard denaturing buffer, kept on ice for 5 minutes, loaded onto 6% to 10% polyacrylamide gels, and then run at 6 mA and at 7 degrees C or 20 degrees C in a Hoeffer apparatus. The bands were visualized after silver staining of the gels (Bio-Rad). Sequencing was performed as described above.

Table 1.

Oligonucleotide Primers and PCR Conditions for Detection of Mutations in Human MYBPC3 Gene

RNA Isolation, cDNA Synthesis, and MYBPC3 cDNA Amplifications

Total cellular RNA was isolated from human lymphoblastoid cell lines using RNA Plus (Bioprobe Systems), and the cDNA synthesis was performed as previously described. [27] The cDNA products were amplified in a 50-micro L PCR reaction using two outer primers (see Table 2). A second round of PCR was performed with a final dilution of 1:100 of the first round products, using nested primers (see Table 2). The primers were determined according to the cDNA sequence (EMBL accession number X84075), and cDNA fragments were amplified using a touchdown PCR protocol between 70 degrees C and 60 degrees C. Sizes of normal and mutated cDNA-PCR fragments were assessed, followed by size-fractionation on agarose gels. After extraction and purification of the normal and the putative mutated cDNAs, they were cloned using pGEM-T System II (Promega) and then sequenced as described above.
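As a side note on the touchdown protocol, the annealing temperature is stepped down cycle by cycle from the starting temperature to the final one. The sketch below generates such a 70→60 °C schedule; the 1 °C step and the 20 plateau cycles are illustrative assumptions, not the authors' exact program.

```python
# A minimal sketch of a touchdown PCR annealing schedule like the
# 70 -> 60 degrees C protocol described above; step size and plateau
# cycle count are illustrative assumptions.
def touchdown_schedule(start_c=70.0, end_c=60.0, step_c=1.0, plateau_cycles=20):
    """Return the annealing temperature for each PCR cycle."""
    temps = []
    t = start_c
    while t > end_c:                         # descending phase, one step per cycle
        temps.append(t)
        t -= step_c
    temps.extend([end_c] * plateau_cycles)   # remaining cycles at the final temp
    return temps

sched = touchdown_schedule()
print(len(sched), sched[:3], sched[-1])  # 30 cycles, [70.0, 69.0, 68.0], 60.0
```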

Table 2.

Oligonucleotide Primers for MYBPC3 cDNA Amplifications

Genomic Organization and Sequence of Human MYBPC3

The size of introns was first estimated by PCR amplification of DNA segments between exons from control genomic DNA, followed by size-fractionation of the PCR products on agarose gels. The exon/intron boundaries and the entire intronic sequences were then determined by sequencing. The sequences have been deposited with EMBL (accession number Y10129). The schematic organization of the human MYBPC3 gene and the alignment of exons with structural domains in the protein are shown in Figure 1. The gene comprises >21 000 bp and contains 35 exons, out of which 34 are coding. A (GT) repeat was found in intron 20 (data not shown). The 101-residue N-terminal extra IgI domain is encoded by exons 1 to 3; the proline-rich domain (51 residues), by exons 3 and 4; the C1 IgI domain (104 residues), by exons 4 to 6; the MyBP-C motif (105 residues), by exons 6 to 12; the C2 IgI domain (91 residues), by exons 12 to 16; the C3 IgI domain (91 residues), by exons 16 to 18; the C4 IgI domain (90 residues), by exons 18 to 20; the linker (11 residues), by exons 20 and 21; the C5 IgI domain (127 residues), by exons 21 to 24; the C6 fn-3 domain (98 residues), by exons 24 to 26; the C7 fn-3 domain (101 residues), by exons 26 to 28; the C8 IgI domain (95 residues), by exons 28 to 30; the C9 fn-3 domain (115 residues), by exons 30 to 32; and the C-terminal C10 IgI domain (94 residues), by exons 32 to 34.
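Because the exon/domain correspondence above is essentially a lookup table, it can be captured directly in code. The data below are taken from the text; the dictionary representation is simply one convenient choice.

```python
# The exon/domain correspondence spelled out above, as a lookup table;
# adjacent domains share boundary exons (e.g., exon 3 closes C0 and
# opens the proline-rich region), so a query may return two domains.
DOMAIN_EXONS = {
    "C0 IgI (N-terminal)":  (1, 3),
    "proline-rich":         (3, 4),
    "C1 IgI":               (4, 6),
    "MyBP-C motif":         (6, 12),
    "C2 IgI":               (12, 16),
    "C3 IgI":               (16, 18),
    "C4 IgI":               (18, 20),
    "linker":               (20, 21),
    "C5 IgI":               (21, 24),
    "C6 fn-3":              (24, 26),
    "C7 fn-3":              (26, 28),
    "C8 IgI":               (28, 30),
    "C9 fn-3":              (30, 32),
    "C10 IgI (C-terminal)": (32, 34),
}

def domains_for_exon(exon: int):
    """Return the structural domains whose coding span includes this exon."""
    return [d for d, (first, last) in DOMAIN_EXONS.items() if first <= exon <= last]

print(domains_for_exon(17))  # ['C3 IgI']
```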

Figure 1.

Schematic organization of the human MYBPC3 gene and alignment of exons with structural domains of the protein. Top, The structural domains of cardiac MyBP-C. The high-affinity myosin heavy chain domain (confined to the C10 IgI repeat), the titin binding site (C8 to C10), and the phosphorylation sites are indicated. Middle, The mRNA with the limits of exons. Bottom, the schematic organization of the gene with locations of exons shown by boxes and introns shown by horizontal lines. The exons are numbered from the 5′ end of the gene, with exon 1 containing the first codon ATG. The exons coding for structural domains are indicated by interrupted lines.

The sizes of exons and introns are summarized in Table 3. The exon sizes, excluding the 5′ and 3′ untranslated regions, vary between 3 and 267 bp. Two of the exons, ie, exons 10 and 14, are unusually small and contain three nucleotides each. The remaining 32 exons vary in size between 18 and 267 bp. Twenty-seven exons finish with a split codon (see Table 3). The intron sizes vary between 85 and ≈2000 bp. The major consensus donor splice site is GTGAG in 53% of the cases, and the major consensus acceptor splice site is CAG in 91% of the cases. Twenty-seven of the 34 introns contain putative branch point sequences located -14 to -51 upstream from each splice acceptor site. Introns 1, 4, 11, 14, 16, 24, and 31 do not contain any known consensus branch point sequence.
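For illustration, the consensus motifs just described can be checked mechanically. The sketch below scans an invented intron for the GTGAG donor, the CAG acceptor, and URAY-like branch-point candidates (the consensus invoked later for mutation M4) in the −14 to −51 window; the sequence and the regular expression are illustrative assumptions.

```python
# A minimal splice-signal check based on the consensus figures above
# (GTGAG donor in 53% of introns, CAG acceptor in 91%); the example
# intron is invented, and URAY is rendered as T/purine/A/pyrimidine.
import re

def check_splice_sites(intron: str):
    """Report consensus donor/acceptor matches and branch-point candidates."""
    donor_ok = intron.startswith("GTGAG")    # major donor consensus
    acceptor_ok = intron.endswith("CAG")     # major acceptor consensus
    window = intron[-51:-14]                 # branch points sit -14 to -51 upstream
    branch_points = [m.start() for m in re.finditer(r"T[AG]A[CT]", window)]
    return donor_ok, acceptor_ok, branch_points

intron = "GTGAG" + "ATCG" * 20 + "TGAT" + "TTTT" * 6 + "CAG"
print(check_splice_sites(intron))  # (True, True, [...])
```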

Table 3.

Exon-Intron Boundaries in the Human MYBPC3 Gene

Identification of Mutations in MYBPC3 Gene Associated With FHC

Because the families were not large enough to assess linkage on the basis of a statistically significant LOD score, we used haplotype analysis to define the disease locus responsible for FHC in each family.  Linkage was established on the basis of the transmission of a common haplotype in affected individuals and exclusion on the basis of affected recombinant individuals.  Families 717 and 740 presented linkage only to CMH4, and the other five families (families 702, 716, 731, 750, and 754) were less informative but at least potentially linked to CMH4 (data not shown).

All the exon-intron boundaries were analyzed by PCR-SSCP according to the conditions described in Table 2.  A total of six new mutations were identified in MYBPC3 associated with FHC in seven unrelated French families (Figure 2 and Figure 3, Table 4).

Table 4.

Consequences at mRNA Level of MYBPC3 Mutations

Figure 2.

Pedigrees of families with MYBPC3 gene mutations. Clinical affection status is indicated: darkened, affected; clear, unaffected; and clear with a cross, indeterminate. Genetically affected status is indicated by an asterisk. The mutations (M) are as follows: M1, GTGAG→GTGAA splice donor site mutation in intron 7; M2, GAA→CAA mutation in exon 17; M3, GT→AT splice donor site mutation in intron 23; M4, TGAT→TGGT transversion in the branch point consensus sequence of intron 23; M5, [-GCGTC] deletion in exon 25; and M6, duplication [+TTCAAGAATGGC]/deletion [-ACCT] in exon 33.

Figure 3.

Normal and mutated cardiac MyBP-C polypeptides. N indicates the normal structure of human cardiac MyBP-C; M1 to M6 correspond to the predicted products of the aberrant MyBP-C cDNAs resulting from the different mutations.

M1 is a GTGAG→GTGAA transition in the 3′ splice donor site of intron 7 in family 717. The G residue at position +5 in the intron is a highly conserved nucleotide in the splice donor consensus sequence. [33] The G→A mutation inactivates this donor site. Amplification of MYBPC3 cDNA from patients’ lymphocytes identified the skipping of the 49-bp exon 7 that produces a frameshift. No alternative splice donor site was found in intron 7. The aberrant cDNA encodes 258 normal cardiac MyBP-C residues, followed by 25 new amino acids, and a premature termination of translation. This should produce a large truncated protein (-80%) lacking the MyBP-C motif containing the phosphorylation sites and the titin and myosin binding sites.

M2 is a G→C transversion at position 1656 in exon 17 in families 702 and 750 that produces a mutated polypeptide in the C3 domain at position 542 (Glu→Gln). Otherwise, this mutation affects the last nucleotide of the exon, which is part of the consensus splicing site. [34] A common feature in human exon-intron boundaries is that 80% of exons finish with a guanine (85% in MYBPC3). This mutation also results in an aberrant transcript in lymphocytes (with the skipping of exon 17) that directly introduces a stop codon. The aberrant cDNA encodes 486 normal cardiac MyBP-C residues, leading to a truncated protein (-62%) that lacks the titin and myosin binding sites.

M3 is a GT→AT transition in the 3′ splice donor site of intron 23 in family 716 that inactivates this splicing site. This mutation produces the skipping of the 160-bp exon 23. No alternative splice donor site was found in lymphocytes. The mutated cDNA identified in lymphocytes encodes 717 normal residues and then 51 novel amino acids, followed by premature termination of the translation in the C5 domain, leading to a potential truncated protein (-44%) that loses the titin and myosin binding domains.

M4 is a TGAT→TGGT transition in intron 23 in family 740. This A→G mutation inactivates a potential branch point consensus sequence (URAY). Although three potential branch points exist upstream from the mutation, they do not seem to be used, since analysis of the transcripts in lymphocytes indicates the existence of two aberrant cDNAs. One corresponds to the skipping of the 105-bp exon 24 without frameshift and encodes a polypeptide depleted of 35 amino acids in the C6 domain (-50% of C6). The other still contains the 724-bp intron 23. This mutant cDNA is associated with a frameshift: it encodes 770 normal cardiac MyBP-C residues and then 100 novel amino acids, followed by a stop codon, and the corresponding truncated protein (-40%) should not interact with either titin or myosin.

M5 is a 5-bp deletion (-GCGTC) in exon 25 in family 731. This deletion also produces a frameshift: the aberrant cDNA identified in the lymphocytes encodes 845 normal MyBP-C residues and then 35 novel amino acids, followed by a premature stop codon in the C6 domain that should produce a truncated protein (-34%), losing the C-terminal region containing both the titin- and myosin-binding sites.

M6 is a 12-bp duplication (+TTCAAGAATGGC)/4-bp deletion (-ACCT) in exon 33 in family 754. This modification introduces a frameshift at position 3691 that leads to 1220 normal MyBP-C residues and then 19 novel amino acids, followed by a premature stop codon in the last third of the C10 domain. The predicted truncated protein (-4%) should also lose part of its myosin binding site.
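As a consistency check, the truncation percentages quoted for M1–M6 follow from the retained residue counts if full-length cardiac MyBP-C is taken as 1274 residues; that figure is an assumption introduced here (it reproduces every percentage above), not a number stated in the excerpt.

```python
# Recomputing the quoted truncation percentages for M1-M6, assuming a
# full-length cardiac MyBP-C of 1274 residues (an assumption chosen
# because it reproduces every figure in the text).
FULL_LENGTH = 1274

mutants = {   # mutant: normal residues retained before the aberrant tail
    "M1": 258, "M2": 486, "M3": 717, "M4": 770, "M5": 845, "M6": 1220,
}

for name, retained in mutants.items():
    lost_pct = round(100 * (1 - retained / FULL_LENGTH))
    print(f"{name}: ~-{lost_pct}% of the protein")  # M1 -> -80% ... M6 -> -4%
```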

All six of these mutations were absent in 200 samples from control unrelated subjects without FHC and also in 42 unrelated probands with FHC (of which 8 have mutations in MYBPC3, 8 have mutations in the beta-myosin heavy chain gene [MYH7], 1 has a mutation in the cardiac troponin T gene, and 25 have presently undefined mutations).

Discussion

The present work describes the first genomic organization for an MyBP-C. The gene is over 21 000 bp and contains 35 exons. An interesting feature of the organization of this gene is that there is a striking correspondence between the limits of the exons and those of structural domains (Figure 1). The IgI and fn-3 domains are encoded by two or three exons. The linker region between the IgI C4 and IgI C5 domains corresponds to exon 20. Twenty-six of the 28 cardiac-specific amino acids of the IgI C5 domain correspond to exon 22. Finally, the MyBP-C motif is encoded by the most complex exon structure: the nine cardiac-specific amino acids correspond to exon 8, and the four phosphorylation sites described by Gautel et al [7] are encoded by six exons and are located at the end or at the junction of two exons (phosphorylation sites: A, junction of exons 7 and 8; B, end of exon 8; C, end of exon 9, exon 10, and beginning of exon 11; and D, end of exon 12). The correlation between exonic organization and protein structure has also recently been described for titin, [35] suggesting a common feature for the intracellular immunoglobulin superfamily.

We suggest that the new mutations described here cause FHC because they segregate with the disease, are not present in controls, and result in aberrant transcripts that are predicted to encode significantly altered cardiac MyBP-C polypeptide structure and/or function. They are all transcribed into mRNAs in lymphocytes. However, because most, if not all, genes in humans are thought to be transcribed at very low levels in lymphocytes (“illegitimate transcription”), [36] these results do not address the hypothesis that these mutations are expressed in the diseased myocardium. Since cardiac MyBP-C is specifically expressed in heart, ventricular tissue is needed to address this issue, and we had no access to any myocardial specimens. One study documented the expression of a missense mutation in the mRNA for the beta-myosin heavy chain in myocardial tissue from an affected patient with FHC. [37] Because the beta-myosin heavy chain is normally expressed in slow-twitch skeletal fibers, skeletal muscle biopsies can also be used to show that the mutated myosin is produced in the muscle and that the mutation alters the function of the beta-myosin and the contractile properties of the muscle fibers. [38,39] One might thus reasonably assume that the MYBPC3 gene mutations are expressed in the myocardium and that they exert their effect by altering the multimeric complex assembly of the cardiac sarcomere via at least one of these mechanisms: (1) They can act as “poison polypeptides” through a dominant-negative effect. The altered proteins would be incorporated in the sarcomere and would alter the assembly of the sarcomeric filaments, since most truncated MyBP-Cs are unable to cross-link the titin and/or myosin molecules. (2) They can act as “null alleles,” potentially leading to haploinsufficiency; the production of insufficient quantities of normal cardiac MyBP-C would produce an imbalance in stoichiometry of the thick-filament components that would be sufficient to alter the sarcomeric structure and function. (3) Since myosin, titin, and MyBP-C might be translated and assembled cotranslationally, one can also assume that the misfolded, mutated MYBPC3 mRNAs may disturb the translation of the other sarcomeric components that would interfere with the proper assembly of sarcomeric structures.

The full spectrum of mutations of the FHC disease genes is far from known, but it is intriguing to note that most mutations found so far in MYH7 are missense ones, whereas most of those in MYBPC3 disrupt the reading frame and produce premature stop codons. Both genes are large ones, composed of ≈40 exons, and there are no reasons for different types of mutations in the two genes. Thus, one might hypothesize that mutations leading to truncated proteins exist also for MYH7 in humans but have no deleterious effect. In support of this are the reports of two deletions in the C-terminal part of the beta-myosin heavy chain molecule with almost no phenotype. One is a 2.4-kbp deletion including part of intron 39 and exon 40 containing the 3′ untranslated region and the polyadenylation signal, which was reported in a small pedigree. [40] Only the proband had developed clinically diagnosed hypertrophic cardiomyopathy at a very late onset (age, 59 years), and the other genotypically affected family members had not developed the disease at 10, 32, and 33 years. The other one is a large deletion leaving only a short variant of the beta-myosin heavy chain constituting only the first 53 residues of the molecule (out of 1935). This deletion was found by chance in an unaffected individual. [41] For MYBPC3, in contrast, the majority of the mutations described so far produce the C-terminal truncation of the cardiac MyBP-C polypeptides and are associated with an FHC phenotype. However, no definitive conclusion can be drawn at this stage concerning the pathogenic mechanisms of mutations in these two genes. The present work provides the molecular basis for the production of transgenic animals for cardiac MyBP-C that will help to resolve some of these issues.

Footnotes
  • Received December 2, 1996; accepted January 10, 1997.

  • This manuscript was sent to Laurence Kedes, Consulting Editor, for review by expert referees, editorial decision, and final disposition.

  • Selected Abbreviations and Acronyms
    EMBL – European Molecular Biology Laboratory
    FHC – familial hypertrophic cardiomyopathy
    fn-3 – fibronectin III
    MyBP-C – myosin binding protein C
    PCR – polymerase chain reaction
    SSCP – single-strand conformation polymorphism analysis

 

MYBPC3 – Hypertrophic Cardiomyopathy Testing

http://www.cincinnatichildrens.org/workarea/downloadasset.aspx?id=90111
Hypertrophic Cardiomyopathy (HCM) is relatively common, with a prevalence of 1 in 500 adults (1). HCM is a primary disorder of heart muscle characterized by left ventricular hypertrophy. The most classic finding in HCM is asymmetric septal hypertrophy, with or without left ventricular outflow tract obstruction. The disease demonstrates extensive clinical variability with regard to age of onset, severity and progression of disease. HCM can affect infants and children although it is more typically identified in adolescence or adulthood (2,3).

The MYBPC3 gene codes for cardiac myosin binding protein C, an important component of the sarcomere whose phosphorylation modulates contraction (4). The MYBPC3 gene contains 35 exons and is located at chromosome 11p11.2. Up to 40% of individuals with a clinical diagnosis of HCM have MYBPC3 mutations (2). MYBPC3 mutations are inherited in an autosomal dominant manner. The majority of individuals inherit the MYBPC3 mutation from a parent, although de novo mutations do occur. Mutations in the MYBPC3 and MYH7 genes are the most common causes of HCM. However, the disease is genetically heterogeneous, and sequencing additional genes should be considered if familial HCM is suspected or the underlying etiology remains unknown. Approximately 50-65% of individuals with a known or suspected diagnosis of familial HCM have a mutation in one of a number of genes encoding components of the sarcomere and cytoskeleton (3). Compound heterozygous mutations have been reported in MYBPC3 and other genes associated with HCM (5). Mutations in the MYBPC3 gene have been primarily associated with HCM, but can also be associated with other types of heart muscle disease including dilated cardiomyopathy, restrictive cardiomyopathy and left-ventricular non-compaction (6).
Indication:
MYBPC3 testing is utilized to confirm a diagnosis of HCM in patients with clinically evident disease. Genetic testing also allows for early identification and diagnosis of individuals at greatest risk prior to the expression of typical clinical manifestations. If a mutation is identified in an asymptomatic individual, regular and routine outpatient follow up is indicated. If clinically unaffected members of a family with an identified mutation for HCM are found not to carry that mutation, they can be definitively diagnosed as unaffected and reassured that neither they nor their children will be at higher risk compared to the general population to develop symptoms related to HCM. A negative test result in an individual with a known familial mutation also eliminates the need for routine follow up.
Methodology:
All 35 exons of the MYBPC3 gene, as well as the exon/intron boundaries and a portion of untranslated regions of the gene are amplified by PCR. Genomic DNA sequences from both forward and reverse directions are obtained by automatic fluorescent detection using an ABI PRISM® 3730 DNA Analyzer. Sequence variants different from National Center for Biotechnology Information GenBank references are further evaluated for genetic significance. If a mutation is identified, a known familial mutation analysis will be available for additional family members.
Sensitivity & Accuracy:
Greater than 98.5% of the mutations in exons 1-35 of MYBPC3 are detectable by sequence-based methods. Sequencing does not detect deletions or duplications. Mutations in MYBPC3 account for up to 40% of cases of idiopathic hypertrophic cardiomyopathy.
References:
1. Maron BJ, Gardin JM, Flack JM, Gidding SS, Kurosaki TT, Bild DE. Prevalence of hypertrophic cardiomyopathy in a general population of young adults: echocardiographic analysis of 4111 subjects in the CARDIA study (Coronary Artery Risk Development in [Young] Adults). Circulation. 1995;92:785-789.
2. Kaski JP, Syrris P, Esteban MT, Jenkins S, Pantazis A, Deanfield JE, McKenna WJ, Elliott PM. Prevalence of sarcomere protein gene mutations in preadolescent children with hypertrophic cardiomyopathy. Circulation: Cardiovascular Genetics. 2009;2:436-441.
3. Morita H, Rehm HL, Menesses A, McDonough B, Roberts AE, Kucherlapati R, Towbin JA, Seidman JG, Seidman CE. Shared genetic causes of cardiac hypertrophy in children and adults. The New England Journal of Medicine. 2008;358:1899-1908.
4. van Dijk SJ, Dooijes D, dos Remedios C, Michels M, Lamers JM, Winegrad S, Schlossarek S, Carrier L, ten Cate FJ, Stienen GJ, van der Velden J. Cardiac myosin-binding protein c mutations and hypertrophic cardiomyopathy: Haploinsufficiency, deranged phosphorylation, and cardiomyocyte dysfunction. Circulation. 2009;119:1473-1483.
5. Van Driest SL, Vasile VC, Ommen SR, Will ML, Tajik AJ, Gersh BJ, Ackerman MJ. Myosin binding protein c mutations and compound heterozygosity in hypertrophic cardiomyopathy. Journal of the American College of Cardiology. 2004;44:1903-1910.
6. Hershberger RE, Norton N, Morales A, Li DX, Siegfried JD, Gonzalez-Quintana J. Coding sequence rare variants identified in MYBPC3, MYH6, TPM1, TNNC1, and TNNI3 from 312 patients with familial or idiopathic dilated cardiomyopathy. Circulation: Cardiovascular Genetics. 2010;3:155-161.

Read Full Post »

Hand Held DNA Sequencer

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Point-of-Care DNA Sequencer Inching Closer to Widespread Use as Beta-Testers Praise Oxford Nanopore Technologies’ Pocketsize, Portable Nanopore Device

November 4, 2015

MinION could help achieve NIH’s goal of the $1,000 human genome and, in remote clinics and outbreak zones, shift testing away from medical laboratories

Point-of-care DNA sequencing technology is edging ever closer to widespread commercial use as the Oxford Nanopore MinION sequencer draws praise and registers successes in pre-release testing.

A pocketsize gene-sequencing machine such as the MinION could transform the marketplace by shifting DNA testing to remote clinics and outbreak zones while eliminating the need to return samples to clinical laboratories for analysis. Such devices also are expected to increase the need for trained genetic pathologists and medical technologists.

After Much Anticipation, MinION Delivers on Promises

The MinION, produced by United Kingdom-based Oxford Nanopore Technologies, is a miniaturized instrument about the size of a USB memory stick that plugs directly into a PC or laptop computer’s USB port. Unlike bench-top sequencers, the MinION uses nanopore “strand sequencing” technology to deliver ultra-long-read-length single-molecule sequence data.

“The USB-powered sequencer contains thousands of wells, each containing nanopores—narrow protein channels that are only wide enough for a single strand of DNA. When DNA enters the channels, each base gives off a unique electronic signature that can be detected by the system, providing a readout of the DNA sequence,” reported one news account.

After several years of unfulfilled promises, Oxford began delivering the MinION in the spring of 2014 to researchers participating in its early access program called MAP. For a $1,000 access fee, participants receive a starter kit and may purchase consumable supplies. The current price for additional flow cells ranges from $900 for one to $500 per piece when purchased in 48-unit quantities.

 

Nick Loman, an Independent Research Fellow in the Institute for Microbiology and Infection at the University of Birmingham, UK, had questioned if MinION’s promise would ever be realized. But the USB-size sequencer won him over after he used it to detect Salmonella within 15 minutes in samples sent from a local hospital.

 

Loman received the MinION in May 2014 as part of the MAP program and quickly tested its usefulness. After using the device to sequence a strain of Pseudomonas aeruginosa, a common hospital-acquired infection (HAI), he next helped solve the riddle of an outbreak of Salmonella infection in a Birmingham hospital that had affected 30 patients and staff.

“The hospital wanted to understand quickly what was happening,” Loman stated. “But routine genome sequencing is quite slow. It usually takes weeks or even months to get information back.”

Using MinION, Loman detected Salmonella in some of the samples sent from the hospital in less than 15 minutes. Ultimately, the main source of the outbreak was traced to a German egg supplier.

“The MinION just blew me away,” Loman stated in Wired. “The idea that you could do sequencing on a sort of USB stick that you can chuck around does stretch credulity.”

Portable Sequencing Opens Up Intriguing Possibilities for Pathologists

In May 2015, Oxford released a second version of the device, the MinION MkI. According to the company website, the updated MinION is a “full production device featuring improvements of performance and ease of use,” such as improved temperature control and an updated mechanism to engage the device with the consumable flow cells.

“The bench-top sequencers opened up the market to a certain degree,” Loman says. “You started seeing [them] in intensive research groups and in the clinic. But what if anyone could have this hanging off their key ring and go do sequencing? That’s an insane idea, and we don’t really know what it’s going to mean in terms of the potential applications. We’re very much at the start of thinking about what we might be able to do, if anyone can just sequence anything, anywhere they are.”

 

Joshua Quick, a PhD candidate at the University of Birmingham, UK, believes Oxford Nanopore Technologies’ portable and inexpensive device will change the gene sequencing landscape.

 

Accuracy One Trade-off for Portability

Beta-testers have shown that the miniature device can read out relatively long stretches of genetic sequence with increasing accuracy, but according to the report in the journal Nature, the MinION MkI will need to correct several shortcomings found in the original sequencer:

• It is not practical to sequence large genomes with the device, with some experts estimating it would take a year for the original version to sequence the equivalent of a human genome.

• The machine has a high error rate compared with those of existing full-sized sequencers, misidentifying DNA sequence 5%–30% of the time (a per-read estimate can be derived from quality scores, as sketched after this list).

• It also has difficulties reading sections of genome that contain long stretches of a single DNA base.
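One way to see error rates like these in practice is to convert Phred quality scores into expected per-base error probabilities (error = 10^(−Q/10)). The sketch below assumes Biopython is installed and that a file named reads.fastq exists; both are assumptions for illustration, not part of the MinION workflow described above.

```python
# A minimal sketch of per-read expected error rates from FASTQ quality
# scores, assuming Biopython and a hypothetical reads.fastq file.
# Phred Q encodes the per-base error probability as 10^(-Q/10).
from Bio import SeqIO

def mean_expected_error(record):
    """Average per-base error probability implied by a read's Phred scores."""
    quals = record.letter_annotations["phred_quality"]
    return sum(10 ** (-q / 10) for q in quals) / len(quals)

for rec in SeqIO.parse("reads.fastq", "fastq"):
    err = mean_expected_error(rec)
    print(f"{rec.id}\tlength={len(rec)}\terror~{100 * err:.1f}%")
```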

Yet researchers who have used the device remain enthusiastic about the future of this fourth-generation sequencing technique, which may have the potential to achieve the $1,000-per-human-genome goal set by the National Institutes of Health (NIH).

“This is the democratization of sequencing,” Joshua Quick, a PhD candidate at the University of Birmingham, told Nature. “You don’t have to rely on expensive infrastructure and costly equipment.”

News accounts did not provide information about Oxford Nanopore’s plans to obtain a CE mark in the EU for its MinION device. That will be the next step in demonstrating that the device is ready for widespread clinical use. At the same time, clinical laboratory managers and pathologists should take note of the capabilities of the MinION MkI as described above. Researchers are already finding it useful to identify infectious diseases in clinical settings where other diagnostic methods have not yet identified the agent causing the infection.

Read Full Post »

Gene Editing by creation of a complement without transcription error

Larry H. Bernstein, MD, FCAP, Curator

LPBI


2.2.19   Gene Editing by Creation of a Complement without Transcription Error, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 2: CRISPR for Gene Editing and DNA Repair

Nanoparticle-Based Artificial Transcription Factor  

NanoScript: A Nanoparticle-Based Artificial Transcription Factor for Effective Gene Regulation


Transcription factor (TF) proteins are master regulators of transcriptional activity and gene expression. TF-based gene regulation is a promising approach for many biological applications; however, several limitations hinder the full potential of TFs. Herein, we developed an artificial, nanoparticle-based transcription factor, termed NanoScript, which is designed to mimic the structure and function of TFs. NanoScript was constructed by tethering functional peptides and small molecules called synthetic transcription factors, which mimic the individual TF domains, onto gold nanoparticles. We demonstrate that NanoScript localizes within the nucleus and initiates transcription of a reporter plasmid by over 15-fold. Moreover, NanoScript can effectively transcribe targeted genes on endogenous DNA in a nonviral manner. Because NanoScript is a functional replica of TF proteins and a tunable gene-regulating platform, it has great potential for various stem cell applications.

[Image: NanoScript emulates TF structure and function]

http://www.energyigert.rutgers.edu/sites/default/files/faculty/kibumlee/NanoScript_emulates_TF_Structure_and_Function_large.jpg

HIGHLIGHTS

  • Transcription Factors (TF) are proteins that regulate transcription and gene expression
  • NanoScript is a versatile, nanoparticle-based platform that mimics TF structure and biological function
  • NanoScript is stable in physiological environments and localizes within the nucleus
  • NanoScript initiates targeted gene expression by 15- to 30-fold, which would be critical for stem cell differentiation and cellular reprogramming (fold changes of this kind are computed as sketched after this list)
  • NanoScript transcribes endogenous genes on native DNA in a non-viral manner
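For endogenous genes, fold-changes of the kind quoted above are commonly measured by qPCR and summarized with the 2^(−ΔΔCt) method. The sketch below is illustrative only; the Ct values are invented and are not the paper's data.

```python
# The Livak 2^(-ddCt) calculation commonly used to report qPCR
# fold-changes; the Ct values below are invented for illustration.
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression of a target gene, normalized to a reference gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Target amplifies ~4 cycles earlier after treatment -> ~16-fold induction
print(fold_change_ddct(22.0, 18.0, 26.0, 18.0))  # 16.0
```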

Transcription factor (TF) proteins are master regulators of transcriptional activity and gene expression. TF-based gene regulation is an essential approach for many biological applications such as stem cell differentiation and cellular reprogramming; however, several limitations hinder the full potential of TFs.

To address this challenge, researchers in Prof. KiBum Lee’s group (Sahishnu Patel and Perry Yin) developed an artificial, nanoparticle-based transcription factor, termed NanoScript, which is designed to mimic the structure and function of TFs. NanoScript was constructed by tethering functional peptides and small molecules called synthetic transcription factors, which mimic the individual TF domains, onto gold nanoparticles. They demonstrated that NanoScript localizes within the nucleus and initiates transcription of a targeted gene with high efficiency. Moreover, NanoScript can effectively transcribe targeted genes on endogenous DNA in a non-viral manner.

NanoScript is a functional replica of TF proteins and a tunable gene-regulating platform. NanoScript has two attractive features that make this the perfect platform for stem cell-based application. First, because gene regulation by NanoScript is non-viral, it serves as an attractive alternative to current differentiation methods that use viral vectors. Second, by simply rearranging the sequence of one molecule on NanoScript, NanoScript can target any differentiation-specific genes and induce differentiation, and thus has excellent prospect for applications in stem cell biology and cellular reprogramming.

Perry To-tien Yin, PhD Candidate, Rutgers University — selected publications:

  • PT Yin, TH Kim, JW Choi, KB Lee. Prospects for graphene–nanoparticle-based hybrid sensors. Physical Chemistry Chemical Physics. 2013;15(31):12785-12799.
  • A Solanki, STD Chueng, PT Yin, R Kappera, M Chhowalla, KB Lee. Axonal Alignment and Enhanced Neuronal Differentiation of Neural Stem Cells on Graphene-Nanoparticle Hybrid Structures. Advanced Materials. 2013;25(38):5477-5482.
  • S Myung, PT Yin, C Kim, J Park, A Solanki, PI Reyes, Y Lu, KS Kim, et al. Label-Free Polypeptide-Based Enzyme Detection Using a Graphene-Nanoparticle Hybrid Sensor. Advanced Materials. 2012;24(45):6081-6087.
  • S Shah, PT Yin, TM Uehara, STD Chueng, L Yang, KB Lee. Guiding Stem Cell Differentiation into Oligodendrocytes Using Graphene-Nanofiber Hybrid Scaffolds. Advanced Materials. 2014;26(22):3673-3680.
  • PT Yin, S Shah, M Chhowalla, KB Lee. Design, Synthesis, and Characterization of Graphene–Nanoparticle Hybrid Materials for Bioapplications. Chemical Reviews. 2015;115(7):2483-2531.
  • B Shah, PT Yin, S Ghoshal, KB Lee. Multimodal Magnetic Core–Shell Nanoparticles for Effective Stem-Cell Differentiation and Imaging. Angewandte Chemie. 2013;125(24):6310-6315.
  • A Solanki, S Shah, PT Yin, KB Lee. Nanotopography-mediated reverse uptake for siRNA delivery into neural stem cells to enhance neuronal differentiation. Scientific Reports. 2013;3.
  • PT Yin, BP Shah, KB Lee. Combined Magnetic Nanoparticle-based MicroRNA and Hyperthermia Therapy to Enhance Apoptosis in Brain Cancer Cells. Small. 2014;10(20):4106-4112.

A highly robust, efficient nanoparticle-based platform to advance stem cell therapeutics

(Nanowerk News) Associate Professor Ki-Bum Lee has developed patent-pending technology that may overcome one of the critical barriers to harnessing the full therapeutic potential of stem cells.
One of the major challenges facing researchers interested in regenerating cells and growing new tissue to treat debilitating injuries and diseases such as Parkinson’s disease, heart disease, and spinal cord trauma, is creating an easy, effective, and non-toxic methodology to control differentiation into specific cell lineages. Lee and colleagues at Rutgers and Kyoto University in Japan have invented a platform they call NanoScript, an important breakthrough for researchers in the area of gene expression. Gene expression is the way information encoded in a gene is used to direct the assembly of a protein molecule, which is integral to the process of tissue development through stem cell therapeutics.
Stem cells hold great promise for a wide range of medical therapeutics as they have the ability to grow tissue throughout the body. In many tissues, stem cells have an almost limitless ability to divide and replenish other cells, serving as an internal repair system.
Nanoscript

Schematic representation of NanoScript’s design and function. (a) By assembling individual STF molecules, including the DBD (DNA-binding domain), AD (activation domain), and NLS (nuclear localization signal), onto a single 10 nm gold nanoparticle, we have developed the NanoScript platform to replicate the structure and function of TFs. This NanoScript penetrates the cell membrane and enters the nucleus through the nuclear receptor with the help of the NLS peptide. Once in the nucleus, NanoScript interacts with DNA to initiate transcriptional activity and induce gene expression. (b) When comparing the structure of NanoScript to representative TF proteins, the three essential domains are effectively replicated. The linker domain (LD) fuses the multidomain protein together and is replicated by the gold nanoparticle (AuNP). (c) The DBD binds to complementary DNA sequences, while the AD recruits transcriptional machinery components such as RNA polymerase II (RNA Pol II), mediator complex, and general transcription factors (GTFs). The synergistic function of the DBD and AD moieties on NanoScript initiates transcriptional activity and expression of targeted genes. (d) The AuNPs are monodisperse and uniform. The NanoScript constructs are shown to effectively localize within the nucleus, which is important because transcriptional activity occurs only in the nucleus. (Reprinted with permission by the American Chemical Society)

Read more: Using nanotechnology to regulate gene expression at the transcriptional level

Transcription factor (TF) proteins are master regulators of gene expression. TF proteins play a pivotal role in regulating stem cell differentiation. Although some have tried to make synthetic molecules that perform the functions of natural transcription factors, NanoScript is the first nanomaterial TF protein that can interact with endogenous DNA.
ACS Nano, a publication of the American Chemical Society (ACS), has published Lee’s research on NanoScript (“NanoScript: A Nanoparticle-Based Artificial Transcription Factor for Effective Gene Regulation”). The research is supported by a grant from the National Institutes of Health (NIH).
“Our motivation was to develop a highly robust, efficient nanoparticle-based platform that can regulate gene expression and eventually stem cell differentiation,” said Lee, who leads a Rutgers research group primarily focused on developing and integrating nanotechnology with chemical biology to modulate signaling pathways in cancer and stem cells. “Because NanoScript is a functional replica of TF proteins and a tunable gene-regulating platform, it has great potential to do exactly that. The field of stem cell biology now has another platform to regulate differentiation while the field of nanotechnology has demonstrated for the first time that we can regulate gene expression at the transcriptional level.”
NanoScript was constructed by tethering functional peptides and small molecules called synthetic transcription factors, which mimic the individual TF domains, onto gold nanoparticles.
“NanoScript localizes within the nucleus and initiates transcription of a reporter plasmid by up to 30-fold,” said Sahishnu Patel, Rutgers Chemistry graduate student and co-author of the ACS Nano publication. “NanoScript can effectively transcribe targeted genes on endogenous DNA in a nonviral manner.”
Lee said the next step for his research is to study what happens to the gold nanoparticles after NanoScript is utilized, to ensure no toxic effects arise, and to ensure the effectiveness of NanoScript over long periods of time.
“Due to the unique tunable properties of NanoScript, we are highly confident this platform not only will serve as a desirable alternative to conventional gene-regulating methods,” Lee said, “but also has direct employment for applications involving gene manipulation such as stem cell differentiation, cancer therapy, and cellular reprogramming. Our research will continue to evaluate the long-term implications for the technology.”
Lee, originally from South Korea, joined the Rutgers faculty in 2008 and has earned many honors, including the NIH Director’s New Innovator Award. Lee received his Ph.D. in Chemistry from Northwestern University, where he studied with Professor Chad A. Mirkin, a pioneer in the coupling of nanotechnology and biomolecules. Lee completed his postdoctoral training at The Scripps Research Institute with Professor Peter G. Schultz, and he has served as a Visiting Scholar at both Princeton University and UCLA Medical School.
The primary interest of Lee’s group is to develop and integrate nanotechnologies with chemical functional genomics to modulate signaling pathways in mammalian cells, steering them toward specific cell lineages or behaviors. He has published more than 50 articles and filed 17 corresponding patents.
Source: Rutgers University

Read more: A highly robust, efficient nanoparticle-based platform to advance stem cell therapeutics

Nanoparticle-based transcription factor mimics

http://nanomedicine.ucsd.edu/blog/article/nanoparticle-based-transcription-factor-mimics

Biologists have been enhancing expression of specific genes with plasmids and viruses for decades, which has been essential to uncovering the function of numerous genes and the relationships among the proteins they encode. However, tools that allow enhancement of expression of endogenous genes at the transcriptional level could be a powerful complement to these strategies. Many chemical biologists have made enormous progress developing molecular tools for this purpose; recent work by a group at Rutgers suggests how nanotechnology might allow application of this strategy in living organisms, and perhaps one day in patients.

In a paper published in ACS Nano, researchers led by KiBum Lee synthesized gold nanoparticles bearing synthetic or shortened versions of the three essential components of transcription factors (TFs), the proteins that “turn on” expression of specific genes in cells. Specifically, polyamides previously designed to bind to a specific promoter sequence, transactivation peptides, and nuclear localization peptides were conjugated to the nanoparticle surface. These nanoparticles enhanced expression of both a reporter plasmid (by ~15-fold) and several endogenous genes (by up to 65%). This enhancement is much greater than that possible using previous constructs lacking nuclear localization sequences; the team incorporated a high proportion of those peptides to ensure efficient delivery to the nucleus.
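For the endogenous-gene numbers, changes like “up to 65%” are conventionally derived from qPCR data using the standard 2^-ΔΔCt method. The sketch below shows that calculation; the Ct values are hypothetical, and the paper’s exact analysis pipeline is not specified here.

```python
# Minimal sketch of the standard delta-delta-Ct (2^-ddCt) calculation
# used to express qPCR expression changes such as "up to 65%".
# The Ct values below are hypothetical.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """2^-ddCt: dCt = Ct(target) - Ct(reference housekeeping gene);
    ddCt = dCt(treated) - dCt(control)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2 ** -(d_ct_treated - d_ct_control)

fc = fold_change(ct_target_treated=24.28, ct_ref_treated=18.0,
                 ct_target_control=25.0, ct_ref_control=18.0)
print(f"Fold-change: {fc:.2f}  (= {100 * (fc - 1):.0f}% enhancement)")
```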

NanoScript, a synthetic transcription factor
Diagram of the synthetic TF mimic (termed NanoScript). Decorated particles are ~35 nm in diameter. Letters are amino acid sequences; Py-Im, N-methylpyrrole-N-methylimidazole.

These nanoparticles offer an alternative to delivering protein TFs, which remains extremely challenging despite considerable effort towards the development of delivery systems that transport cargo into cells. Among other barriers to the use of native TFs, incorporating them into polymeric or lipid-based carriers often alters their shape, which would likely reduce their function.

While the group suggests future generations of these nanoparticles might one day be used to treat diseases caused by defects in TF genes, many questions remain. First, the duration of gene expression enhancement is not known; the study only assesses effects at 48 h post-administration. Further, whether gold is the best material for the core remains unclear, as its non-biodegradability means the particles would likely accumulate in the liver over time; synthetic TFs with biodegradable cores might also be considered.

Patel S et al., NanoScript: a nanoparticle-based artificial transcription factor for effective gene regulation. ACS Nano 2014; published online Sep 3.

http://www.wtec.org/bem/docs/BEM-FinalReport-Web.pdf

Biocompatibility and Toxicity of Nanobiomaterials

“Biocompatibility and Toxicity of Nanobiomaterials” is an annual special issue published in the Journal of Nanomaterials.

http://www.hindawi.com/journals/jnm/toxicity.nanobiomaterials/

Porous Ti6Al4V Scaffold Directly Fabricated by Sintering: Preparation and In Vivo Experiment
Xuesong Zhang, Guoquan Zheng, Jiaqi Wang, Yonggang Zhang, Guoqiang Zhang, Zhongli Li, and Yan Wang
Department of Orthopaedics, Chinese People’s Liberation Army General Hospital, Beijing 100853, China. Academic Editor: Xiaoming Li
The interface between the implant and host bone plays a key role in maintaining the primary and long-term stability of implants. Surface modification of an implant can enhance bone ingrowth and increase bone formation, creating firm osseointegration between the implant and host bone and reducing the risk of implant loosening. This paper focuses on the fabrication of three-dimensional interconnected porous titanium by sintering Ti6Al4V powders; the porous layer can be applied to the surface of the implant shaft and integrated with bone morphogenetic proteins (BMPs). The structure and mechanical properties of the porous Ti6Al4V were characterized. An implant shaft with a porous titanium surface, combined with BMPs, was implanted into the femoral medullary cavity of dogs. The results showed that the structure and elastic modulus of the 3D interconnected porous titanium were similar to those of cancellous bone; porous titanium combined with BMP contained a large amount of fibrous tissue with fibroblastic cells; and bone formation was significantly greater at 6 weeks postoperatively than at 3 weeks. Porous titanium fabricated by powder sintering and combined with BMPs can induce tissue formation and increase bone formation, creating firm osseointegration between the implant and host bone.
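To see why a porous titanium scaffold can approach the stiffness of cancellous bone, a common first-order estimate is the Gibson–Ashby scaling law for open-cell foams. The sketch below applies it with assumed porosities and a typical modulus for dense Ti6Al4V; none of these numbers come from the study itself.

```python
# Back-of-envelope sketch: stiffness of porous Ti6Al4V vs. cancellous bone.
# Uses the Gibson-Ashby scaling for open-cell foams,
#   E_porous ~ C * E_solid * (relative density)^2, with C ~ 1.
# Porosity values and constants are illustrative assumptions,
# not measurements from this paper.

E_SOLID_TI6AL4V_GPA = 110.0   # typical elastic modulus of dense Ti6Al4V
C = 1.0                       # Gibson-Ashby prefactor for open-cell foams

def porous_modulus(porosity):
    """Estimate the elastic modulus (GPa) of the porous scaffold."""
    relative_density = 1.0 - porosity
    return C * E_SOLID_TI6AL4V_GPA * relative_density ** 2

for porosity in (0.6, 0.7, 0.8):
    print(f"porosity {porosity:.0%}: E ~ {porous_modulus(porosity):.1f} GPa")

# Cancellous bone moduli are commonly quoted in the range of roughly
# 0.1-4.5 GPa, so porosities around 70-90% bring sintered titanium
# into the same stiffness range as the surrounding bone.
```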

Journal of Materials Chemistry B   Issue 39, 2013

Materials for biology and medicine
Synthesis of nanoparticles, their biocompatibility, and toxicity behavior for biomedical applications
J. Mater. Chem. B, 2013, 1, 5186-5200. DOI: http://dx.doi.org/10.1039/C3TB20738B

Nanomaterials research has focused in part on biomedical applications for several decades. In recent years, however, the field has advanced considerably through careful control of nanoparticle size, shape, and surface modification. This review covers two classes of nanoparticles, iron oxide and NaLnF4, including their synthesis methods, characterization techniques, biocompatibility, toxicity behavior, and applications as contrast agents in magnetic resonance imaging; their optical properties are only briefly mentioned. Because iron oxide nanoparticles show saturation of magnetization at low field, the focus is on paramagnetic NaLnF4 (Ln = Dy3+, Ho3+, or Gd3+) nanoparticles as alternative contrast agents that sustain their magnetization at high field; more potent contrast agents are needed at magnetic fields above 7 T, where most animal MRI is now performed. The extent of cytotoxicity is not yet fully understood, in part because it depends on particle size, capping materials, dose, and surface chemistry, so this multidimensional problem requires optimization and careful further investigation before clinical application.
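The phrase “more potent contrast agents” has a standard quantitative meaning: an agent’s relaxivity, the slope relating its concentration to the observed relaxation rate, 1/T1_obs = 1/T1_0 + r1·[CA]. The sketch below shows how an assumed relaxivity shortens tissue T1; the numbers are illustrative, not values from this review.

```python
# Minimal sketch of how contrast-agent potency is quantified: the
# observed relaxation rate grows linearly with agent concentration,
#   1/T1_obs = 1/T1_0 + r1 * [CA].
# The relaxivity and tissue T1 below are assumptions for illustration.

def t1_observed(t1_tissue_s, r1_per_mM_s, conc_mM):
    """Return the shortened T1 (seconds) in the presence of the agent."""
    rate = 1.0 / t1_tissue_s + r1_per_mM_s * conc_mM
    return 1.0 / rate

T1_TISSUE = 1.5   # s, plausible tissue T1 at high field
R1 = 4.0          # s^-1 mM^-1, illustrative relaxivity

for c in (0.0, 0.1, 0.5, 1.0):   # agent concentration in mM
    print(f"[CA] = {c:.1f} mM -> T1 ~ {t1_observed(T1_TISSUE, R1, c) * 1000:.0f} ms")
```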

Graphical abstract: Synthesis of nanoparticles, their biocompatibility, and toxicity behavior for biomedical applications


Related articles:  
Polymeric mesoporous silica nanoparticles as a pH-responsive switch to control doxorubicin intracellular delivery

Ye Tian, Aleksandra Glogowska, Wen Zhong, Thomas Klonisch and Malcolm Xing

J. Mater. Chem. B, 2013,1, 5264-5272

Tao Cai, Min Li, Bin Zhang, Koon-Gee Neoh and En-Tang Kang

J. Mater. Chem. B, 2014,2, 814-825
From themed collection Nanoparticles in Biology

pH-responsive physical gels from poly(meth)acrylic acid-containing crosslinked particles: the relationship between structure and mechanical properties

Silvia S. Halacheva, Tony J. Freemont and Brian R. Saunders

J. Mater. Chem. B, 2013,1, 4065-4078


HAMLET interacts with lipid membranes and perturbs their structure and integrity


Authors: Ann-Kristin Mossberg, Maja Puchades, Øyvind Halskau, Anne Baumann, Ingela Lanekoff, Yinxia Chao, Aurora Martinez, Catharina Svanborg, & Roger Karlsson

www.regenerativemedicine.net/NewsletterArchives.asp?qEmpID…

Summary: 

Background – Cell membrane interactions rely on lipid bilayer constituents and molecules inserted within the membrane, including specific receptors. HAMLET (human α-lactalbumin made lethal to tumor cells) is a tumoricidal complex of partially unfolded α-lactalbumin (HLA) and oleic acid that is internalized by tumor cells, suggesting that interactions with the phospholipid bilayer and/or specific receptors may be essential for the tumoricidal effect. This study examined whether HAMLET interacts with artificial membranes and alters membrane structure.

Methodology/Principal Findings – We show by surface plasmon resonance that HAMLET binds with high affinity to surface adherent, unilamellar vesicles of lipids with varying acyl chain composition and net charge. Fluorescence imaging revealed that HAMLET accumulates in membranes of vesicles and perturbs their structure, resulting in increased membrane fluidity. Furthermore, HAMLET disrupted membrane integrity at neutral pH and physiological conditions, as shown by fluorophore leakage experiments. These effects did not occur with either native HLA or a constitutively unfolded Cys-Ala HLA mutant (rHLAall-Ala). HAMLET also bound to plasma membrane vesicles formed from intact tumor cells, with accumulation in certain membrane areas, but the complex was not internalized by these vesicles or by the synthetic membrane vesicles.
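The fluorophore-leakage readout mentioned above is conventionally normalized between an intact-vesicle baseline and full detergent lysis. Here is a minimal sketch of that standard calculation with hypothetical fluorescence readings; the specific values are not from the HAMLET study.

```python
# Minimal sketch of the standard fluorophore-leakage normalization used
# to quantify membrane disruption: 0% is the intact-vesicle baseline,
# 100% is full lysis with detergent (e.g. Triton X-100).
# All readings below are hypothetical.

def percent_leakage(f_sample, f_baseline, f_total_lysis):
    """Leakage (%) = (F - F0) / (F_max - F0) * 100."""
    return 100.0 * (f_sample - f_baseline) / (f_total_lysis - f_baseline)

F0 = 50.0        # baseline fluorescence, intact vesicles (a.u.)
F_MAX = 950.0    # fluorescence after detergent lysis (a.u.)

for label, f in [("buffer only", 55.0), ("native HLA", 80.0), ("HAMLET", 700.0)]:
    print(f"{label:>12}: {percent_leakage(f, F0, F_MAX):5.1f}% leakage")
```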

Conclusions/Significance – The results illustrate the difference in membrane affinity between the fatty acid bound and fatty acid free forms of partially unfolded HLA and suggest that HAMLET engages membranes by a mechanism requiring both the protein and the fatty acid. Furthermore, HAMLET binding alters the morphology of the membrane and compromises its integrity, suggesting that membrane perturbation could be an initial step in inducing cell death.

Source: PLoS ONE 5(2), February 23, 2010

Read Full Post »

Aging Protein Signature

Larry H Bernstein, MD, FCAP, Curator

LPBI

 

Anti-Aging Protein GDF11: Does it Work?

Read Full Post »
