

Live Conference Coverage @Medcitynews Converge 2018 Philadelphia: The Davids vs. the Cancer Goliath Part 2

8:40 – 9:25 AM The Davids vs. the Cancer Goliath Part 2

Startups from diagnostics, biopharma, medtech, digital health and emerging tech will have 8 minutes to articulate their visions on how they aim to tame the beast.

Start Time End Time Company
8:40 8:48 3Derm
8:49 8:57 CNS Pharmaceuticals
8:58 9:06 Cubismi
9:07 9:15 CytoSavvy
9:16 9:24 PotentiaMetrics

Speakers:
Liz Asai, CEO & Co-Founder, 3Derm Systems, Inc. @liz_asai
John M. Climaco, CEO, CNS Pharmaceuticals @cns_pharma 

John Freyhof, CEO, CytoSavvy
Robert Palmer, President & CEO, PotentiaMetrics @robertdpalmer 
Moira Schieke M.D., Founder, Cubismi, Adjunct Assistant Prof UW Madison @cubismi_inc

 

3Derm Systems

3Derm Systems is an image-analysis firm focused on dermatologic malignancies.  Its telemedicine platform accurately triages out benign lesions observed by the primary care physician, expedites urgent pathology cases to the dermatologist, and enables rapid consultation over a home or portable device (HIPAA compliant).  The suite also includes a digital dermatology teaching resource with digital training for students and documentation services.

 

CNS Pharmaceuticals

CNS Pharmaceuticals is developing drugs against CNS malignancies and was spun out of research at MD Anderson.  The company is focusing on glioblastoma and Berubicin, an anthracycline antibiotic (topoisomerase II inhibitor) that can cross the blood-brain barrier.  Berubicin has shown good activity in a number of animal models.  Phase I results were very positive, and Phase II is scheduled for later in the year.  The company hopes that Berubicin's cardiotoxicity profile will be less severe than that of other anthracyclines.  The market opportunity will be in temozolomide-resistant glioblastoma.

Cubismi

Cubismi is using machine learning and biomarker-based imaging to visualize tumor heterogeneity. Echoing the maxim “data is the new oil” (attributed to Intel’s CEO), they argue that we need prediction machines, so they developed a “my body one file” system: a cloud-based, data-rich file holding a 3D map of the human body.

Cubismi is on a mission to help deliver the future promise of precision medicine to cure disease and assure your optimal health. We are building a patient-doctor health data exchange platform that will leverage revolutionary medical imaging technology and put the power of health data into the hands of you and your doctors.

 

CytoSavvy

CytoSavvy is a digital pathology company.  They feel AI has a fatal flaw: there is no way to tell how a decision was made.  Instead, they use a Shape-Based Model Segmentation algorithm, which applies automated image analysis to provide objective, personalized pathology data.  They are partnering with three academic centers (OSU, UM, UPMC) to pool data and automate the rule base for image analysis.

CytoSavvy’s patented diagnostic dashboards are intuitive, easy–to-use and HIPAA compliant. Our patented Shape-Based Modeling Segmentation (SBMS) algorithms combine shape and color analysis capabilities to increase reliability, save time, and improve decisions. Specifications and capabilities for our web-based delivery system follow.

link to their white paper: https://www.cytosavvy.com/resources/healthcare-ai-value-proposition.pdf

PotentiaMetrics

They were developing diagnostic software for cardiology epidemiology, measuring outcomes; however, when a family member received a cancer diagnosis, they felt there was a need for outcomes-based models for cancer treatment and care.  They deliver real-world outcomes for personalized patient care, helping patients make decisions about their care by integrating socioeconomic modeling with real-time clinical data.

Featured in the Wall Street Journal, the informed treatment decisions they have generated achieve a 20% cost savings on average.  Their research was spun out of Washington University in St. Louis.

They have concentrated on urban markets; however, the CEO mentioned his desire to move into more rural areas of the country, as their models work well for patients in rural settings too.

Please follow on Twitter using the following hashtags and @pharma_BI 

#MCConverge

#cancertreatment

#healthIT

#innovation

#precisionmedicine

#healthcaremodels

#personalizedmedicine

#healthcaredata

And at the following handles:

@pharma_BI

@medcitynews



Digital PCR

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

GEN Roundup: Digital PCR Advances Partition by Partition  

By Partitioning Samples Digital PCR Is Lowering Detection Limits and Enabling New Applications

GEN  Mar 1, 2016 (Vol. 36, No. 5)       http://www.genengnews.com/gen-articles/gen-roundup-digital-pcr-advances-partition-by-partition/5697

 

  • Digital PCR (dPCR) has generated intense interest because it is showing potential as a clinical diagnostics tool. It has already proven to be a useful technique for any application where extreme sensitivity or precise quantification is essential, such as identifying mutations or copy number variations in tumor cells, or examining gene expression at the single-cell level.

    GEN interviewed several dPCR experts to find out specifically why the technique is increasing in popularity. GEN also asked the experts to envision dPCR’s future capabilities.

  • GEN: What makes dPCR technology such a superior tool for discovery and diagnostic applications?

    Dr. Shelton The high levels of sensitivity, precision, and reproducibility in DNA detection and quantification are the major strengths of dPCR. The technology is robust: because quantification comes from an end-point amplification reaction, differences in primer efficiency or the presence of sample-specific PCR inhibitors have a trivial effect on the final result.

    This provides value to discovery as a trusted tool for validating potential biomarkers and hypotheses generated by broad profiling techniques such as microarrays or next-generation sequencing (NGS). In diagnostics applications, the reproducibility and rapid results of dPCR are critical for labs around the world to quickly compare and share data, especially for ultra-low detection of DNA where variability is high.

    Dr. Garner Digital PCR provides a precise direct counting approach for single molecule detection, thereby providing a straightforward process for the absolute quantification of nucleic acids in samples. One of the biggest advantages of using a system such as ours is its ability to do real-time reads on digital samples. When samples go through PCR, their results are recorded after each cycle.

    These results build a curve, and customers can analyze the data if something went wrong. If it isn’t a clean read—from either a contamination issue, primer-dimer issue, or off-target issue—the curve isn’t the classic PCR curve.

    Dr. Menezes Digital PCR allows absolute quantification of target concentration in samples without the need for standard curves. Obtaining consistent, precise, and absolute quantification with regular qPCR is dependent on standard curve generation and amplification efficiency calculations, which can introduce errors.
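Menezes’ point about standard curves can be made concrete: in qPCR, absolute quantification requires fitting Ct values from a dilution series, and the fitted slope doubles as a check on amplification efficiency. A minimal illustrative sketch (function name and numbers are ours, not from the article):

```python
def qpcr_standard_curve(log10_quantities, cts):
    """Least-squares fit of Ct = slope * log10(quantity) + intercept.

    Amplification efficiency = 10**(-1/slope) - 1; a perfect assay
    gives a slope of about -3.32 (i.e., ~100% efficiency).
    """
    n = len(cts)
    mx = sum(log10_quantities) / n
    my = sum(cts) / n
    sxx = sum((x - mx) ** 2 for x in log10_quantities)
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_quantities, cts))
    slope = sxy / sxx
    intercept = my - slope * mx
    efficiency = 10 ** (-1 / slope) - 1
    return slope, intercept, efficiency

# Ideal 10-fold dilution series: Ct rises by ~3.32 per dilution step
slope, icpt, eff = qpcr_standard_curve([5, 4, 3, 2],
                                       [15.0, 18.32, 21.64, 24.96])
print(f"slope={slope:.2f}, efficiency={eff:.0%}")
```

Any pipetting or efficiency error in that curve propagates into the reported quantity, which is exactly the error source dPCR sidesteps.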

    Ms. Hibbs At MilliporeSigma Cell Design Studio, the implementation of dPCR has improved and accelerated the custom cell engineering workflow. After the application of zinc finger nuclease or CRISPR/Cas to create precise genetic modifications in mammalian cell lines, dPCR is used to characterize the expected frequency of homologous recombination and develop a screening strategy based on this expected frequency.

    In some cell lines, homologous recombination occurs at a low frequency. In such cases, dPCR is used to screen cell pools and subsequently identify rare clones having the desired mutation. Digital PCR is also used to accurately and expeditiously measure target gene copy number. It is used this way, for example, in polyploid cell lines.

    Dr. Price The ability to partition genomic samples to a level that enables robust detection of single target molecules is what sets dPCR apart as an innovative tool. Each partition (droplet in the case of the RainDrop System) operates as an individual PCR reaction, allowing for sensitive, reproducible, and precise quantification of nucleic acid molecules without the need for reference standards or endogenous controls.

    Partitioning also provides greater tolerance to PCR inhibitors compared to quantitative PCR (qPCR). In doing so, dPCR can remedy many shortcomings of qPCR by transforming the analog, exponential nature of PCR into a digital signal.

    Mr. Wakida Digital PCR is an ideal technology for detecting rare targets at concentrations of 0.1% or lower. By partitioning samples prior to PCR, exceptionally rare targets can be isolated into individual partitions and amplified.

    Digital PCR produces absolute quantitative results, so in some respects, it is easier than qPCR because it doesn’t require a standard curve, with the added advantages of being highly tolerant of inhibitors and being able to detect more minute fold changes. Absolute quantification is useful for generating reference standards, detecting viral load, and preparing NGS libraries.
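The absolute quantification the panelists describe rests on Poisson statistics: a positive partition may contain more than one template molecule, so the positive fraction p is corrected to a mean occupancy λ = −ln(1 − p). A small illustrative sketch (the 0.85 nL droplet volume is an assumption, roughly typical of droplet systems):

```python
import math

def dpcr_quantify(positive, total, partition_nl=0.85):
    """Absolute quantification from digital PCR partition counts.

    p is the fraction of positive partitions; the Poisson correction
    lambda = -ln(1 - p) gives mean copies per partition, accounting
    for partitions that received more than one template.
    Returns (copies per partition, copies per microliter).
    """
    p = positive / total
    lam = -math.log(1.0 - p)
    copies_per_ul = lam / (partition_nl * 1e-3)  # 1 nL = 1e-3 uL
    return lam, copies_per_ul

# Example: 4,000 positive droplets out of 20,000
lam, conc = dpcr_quantify(4000, 20000)
print(f"{lam:.3f} copies/partition, {conc:.0f} copies/uL")  # ~0.223 copies/partition
```

No standard curve appears anywhere in the calculation, which is the practical meaning of “absolute quantification.”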

  • GEN: In what field do you think dPCR will have the greatest impact in the future?

    Dr. Shelton dPCR will have a great impact on precision medicine, especially in liquid biopsy analysis. Cell-free DNA from bodily fluids such as urine or blood plasma can be analyzed quickly and cost-effectively using dPCR. For example, a rapid dPCR test can be performed to determine mutations present in a patient’s tumor and help drive treatment decisions.

    Iterative monitoring of disease states can also be achieved due to the relatively low cost of dPCR, providing faster response times when medications are failing. Gene editing will also be greatly impacted by dPCR. Digital PCR enables refinement and optimization of gene-editing tools and conditions. Digital PCR also serves as quality control of therapeutically modified cells and viral transfer vectors used in gene-therapy efforts.

    Dr. Garner The BioMark™ HD system combines dPCR with simultaneous real-time data for counting and validation. This capability is important for applications such as rare mutation detection, GMO quantitation, and aneuploidy detection—where false positives are intolerable and precision is paramount.

    Any field that requires precision and the ability to detect false positives is a likely target for Fluidigm’s dPCR. Suitable applications include detecting and quantifying cancer-causing genes in patients’ cells, viral RNA that infects bacteria, or fetal DNA in an expectant mother’s plasma.

    Dr. Menezes This technology is particularly useful for samples with low frequency sequences as, for example, those containing rare alleles, low levels of pathogen, or low levels of target gene expression. Teasing out fine differences in copy number variants is another area where this technology delivers more precise data.

    Ms. Hibbs Digital PCR overcomes limitations associated with low-abundance template material and quantification of rare mutations in a high background of wild-type DNA sequence. For this reason, dPCR is poised to have significant impacts in diverse clinical applications such as detection and quantification of rare mutations in liquid biopsies, detection of viral pathogens, and detection of copy number variation and mosaicism.

    Dr. Price Due to its high sensitivity, precision, and absolute quantification, the RainDrop dPCR has the potential to extend the range of nucleic acid analysis beyond the reach of other methods in a number of applications that could lend themselves to diagnostic, prognostic, and predictive applications. The precision of dPCR can be extremely useful in applications that require finer measures of fold change and rare variant detection.

    Digital PCR is suitable for addressing varied research and clinical challenges. These include the early detection of cancer, pathogen/viral detection and quantitation, copy number variation, rare mutation detection, fetal genetic screening, and predicting transplant rejection. Additional applications include gene expression analysis, microRNA analysis, and NGS library quantification.

    Mr. Wakida Digital PCR will have an impact on applications for detecting rare targets by enabling investigators to complement and extend their capabilities beyond traditionally employed methods. One such application is using dPCR to monitor rare targets in peripheral blood, as in liquid biopsies.

    The monitoring of peripheral blood by means of dPCR has been described in several peer-reviewed articles. In one such article, investigators considered the clinical value of Thermo’s QuantStudio™ 3D Digital PCR system for the detection of circulating DNA in metastatic colorectal cancer (Dig Liver Dis. 2015 Oct; 47(10): 884–90).

  • GEN: Is there a new technology on the horizon that will increase the speed and/or efficiency of dPCR?

    Dr. Shelton High-throughput sample analysis can be an issue with some dPCR systems. However, Bio-Rad’s Automated Droplet Generator allows labs to process 96 samples simultaneously, a capability that eliminates user-to-user variability and minimizes hands-on time.

    We also want users to get the most information from one sample. Therefore, we are focused on expanding the multiplexing capabilities of our system. In development at Bio-Rad are new technologies that increase the multiplexing capabilities without loss of specificity or accuracy in the downstream workflow.

    Dr. Garner Much of the industry direction seems to be in offering ever-higher resolution, or the ability to run more samples at the same resolution. Thus far, however, customers haven’t found commercial uses for these tools. Also, with increasing resolution and the search for even rarer mutations, the challenge of detecting false positives becomes an even bigger issue.

    Dr. Menezes Use of ZEN™ Double-Quenched Probes by IDT in digital PCR provides increased sensitivity and a lower limit of detection. Due to the second quencher, ZEN probes provide even lower background than traditional single-quenched probes. And this lower background enables increased sensitivity when analyzing samples with low copy number targets, where every droplet matters.

    Ms. Hibbs Quantification relies upon counting the number of positive partitions at the end point of the reaction. Accordingly, precision and resolution can be increased by increasing the number of partitions. We are now capable of analyzing on the order of millions of partitions per run, further extending the lower limit of detection. Additionally, the workflow is amenable to the integration of automation in order to increase throughput and standardize reaction set up.
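Hibbs’ point that more partitions extend the lower limit of detection and tighten precision can be sketched by propagating the binomial error of the positive-partition fraction into the Poisson estimate (illustrative code, not from any vendor):

```python
import math

def lambda_relative_error(lam, partitions):
    """Approximate relative standard error of the Poisson estimate
    lambda = -ln(1 - p), by simple error propagation.

    p-hat is binomial, so SE(p) = sqrt(p(1-p)/N); since
    d(lambda)/dp = 1/(1-p), SE(lambda) = SE(p)/(1-p).
    """
    p = 1.0 - math.exp(-lam)                    # expected positive fraction
    se_p = math.sqrt(p * (1 - p) / partitions)  # binomial SE of p-hat
    se_lam = se_p / (1 - p)
    return se_lam / lam

# A rare target (0.001 copies/partition): error shrinks ~ 1/sqrt(N)
for n in (20_000, 1_000_000):
    print(n, f"{lambda_relative_error(0.001, n):.2%}")
```

Going from tens of thousands of droplets to millions of partitions cuts the relative error by roughly a factor of seven, which is why partition count drives the detection limit.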

    Dr. Price Although dPCR is still an emerging technology, there is tremendous interest in its potential clinical diagnostics applications. Enabling adoption of dPCR in the clinical lab requires addressing current gaps in workflow, cost, throughput, and turnaround time.

    Digital PCR technology has the potential for being improved significantly in two dimensions. First, one can address the problem of serially detecting positive versus negative partitions by leveraging lower-cost imaging detection technologies. Alternatively, one may capitalize on the small partition volumes to dramatically reduce the time to perform PCR. Ideally, the future will bring both capabilities to bear.

    Mr. Wakida Compared to qPCR, dPCR currently requires more hands-on time to set up experiments. We are investigating methods to address this.

 

PCR Shows Off Its Clinical Chops   

Thanks to Advances in Genomics, PCR Is Becoming More Common in Clinical Applications

  • Last May, Roche Molecular Systems announced that its cobas Liat Strep A assay received a CLIA waiver. This clinic-ready assay can detect Streptococcus pyogenes (group A β-hemolytic streptococcus) DNA in throat swabs by targeting a segment of the S. pyogenes genome.

    Since its invention by Kary B. Mullis in 1985, the polymerase chain reaction (PCR) has become well established, even routine, in research laboratories. And now PCR is becoming more common in clinical applications, thanks to advances in genomics and the evolution of more sensitive quantitative PCR methodologies.

    Examples of clinical applications of PCR include point-of-care (POC) molecular tests for bacterial and viral detection, as well as mutation detection in liquid or tumor biopsies for patient stratification and treatment monitoring.
    Industry leaders recently participated in a CHI conference that was held in San Francisco. This conference—PCR for Molecular Medicine—encompassed research and clinical perspectives and emphasized advanced techniques and tools for effective disease diagnosis.
    To kick off the event, speakers shared their views on POC molecular tests. These tests, the speakers insisted, can provide significant value to healthcare only if they support timely decision making.
    Clinic-ready PCR platforms need to combine speed, ease of use, and accuracy. One such platform, the cobas Liat (“laboratory in a tube”), is manufactured by Roche Molecular Systems. The system employs nucleic acid purification and state-of-art PCR-based assay chemistry to enable POC sites to rapidly provide lab-quality results.
    The cobas Liat Strep A Assay detects Streptococcus pyogenes (group A β-hemolytic streptococcus) DNA by targeting a segment of the S. pyogenes genome. The operator transfers an aliquot of a throat swab sample in Amies medium into a cobas Liat Strep A Assay tube, scans the relevant tube and sample identification barcodes, and then inserts the tube into the analyzer for automated processing and result interpretation. No other operator intervention or interpretation is required. Results are ready in approximately 15 minutes.

    According to Shuqi Chen, Ph.D., vp of Point-of-Care R&D at Roche Molecular Systems, clinical studies of the cobas Liat Strep A Assay demonstrated 97.7% sensitivity when the test was used at CLIA-waived, intended-use sites, such as physicians’ offices. In comparison, rapid antigen tests and diagnostic culture have sensitivities of 70% and 81%, respectively (according to a 2009 study by Tanz et al. in Pediatrics).

    The cobas Liat assay preserved the same ease of use and rapid turnaround as the rapid antigen tests. In addition, it provided significantly faster turnaround than the lab-based culture test, which can take 24–48 hours.

    A CLIA waiver was announced for the cobas Liat Strep A assay in May 2015. CLIA waiver applications have been submitted for cobas Liat flu assays, and Roche intends to extend the assay menu.

    POC tests are also moving into field applications. Coyote Bioscience has developed a novel method for one-step gene testing without nucleic acid extraction that can be as fast as 10 minutes from blood sample to result. Their portable devices for molecular diagnostics can be used as genetic biosensors to bring complex clinical testing directly to the patient.

    “Instead of sequential steps, reactions happen in parallel, significantly reducing analysis time. Buffer, enzyme, and temperature profiles are optimized to maximize sensitivity,” explained Sabrina Li, CEO, Coyote Bioscience. “Both RNA and DNA can be analyzed simultaneously from a drop of blood in the same reaction.”

    The first-generation Mini-8 system was used for Ebola detection in Africa where close to 600 samples were tested with 98.8% sensitivity. Recently in China, the Mini-8 system was applied in hospitals and small community clinics for hepatitis B and C and Bunia virus detection. The second-generation InstantGene system is currently being tested internally with clinical samples.

  • Digital PCR

    Conventional real-time PCR technology, while suited to the analysis of high-quality clinical samples, may effectively conceal amplification efficiency changes when sample quality is inconsistent. A more effective alternative, Bio-Rad suggests, is its droplet-digital PCR (ddPCR) technology, which can provide absolute quantification of target DNA or RNA, a critical advantage when samples are limited, degraded, or contain PCR inhibitors. The company says that of the half-dozen clinical trials that are using digital PCR, half rely on the Bio-Rad QX200 ddPCR system.

    Personalized cancer care requires ultra-sensitive detection and monitoring of actionable mutations from patient samples. The high sensitivity and precision of droplet-digital PCR (ddPCR) from Bio-Rad Laboratories offers critical advantages when clinical samples are limited, degraded, or contain PCR inhibitors.

    Typically, formalin-fixed and paraffin-embedded (FFPE) tissue samples are processed. FFPE samples work well for immunohistochemistry and protein analysis; however, the formalin fixation can damage nucleic acids and inhibit the PCR reaction. A sample may yield 100 ng of purified nucleic acid, but in most cases the actual amplifiable material is less than 1%, or about 1 ng.

    “Current qPCR technology depends on real-time fluorescence accumulation as the PCR is occurring, which can be an effective means of detecting and quantifying DNA targets in nondegraded samples,” commented Dawne Shelton, Ph.D., staff scientist, Digital Biology Center, Applications Development Group, Bio-Rad Laboratories. “Amplification efficiency is critical; if that amplification efficiency changes because of sample quality it is hidden in the qPCR methodology.”

    “In ddPCR, that is a big red flag,” Dr. Shelton continued. “It changes the format of how the data look immediately so you know the amount of inhibition and which samples are too inhibited to use.”

    Tissue types vary and contain different degrees of fat or other content that can also act as PCR inhibitors. In blood monitoring, the small circulating fragments of DNA are extremely degraded; in addition, food, supplements, or other compounds ingested by the patient may have an inhibitory effect.

    Clinical labs test for these variabilities and clean the blood, but remnant PCR inhibitors can remain. In ddPCR, a single template is partitioned into a droplet. If the droplet contains a good template, it produces a signal; otherwise, it does not—a simple yes or no answer.

    “Even if there is no PCR inhibition, most clinical samples yield very small amounts of nucleic acid,” Dr. Shelton added. “To make a secure decision using qPCR is difficult because you are in a gray zone at the very end of its linear range. ddPCR operates best with small sample amounts and provides good statistics for confidence in your results.”

    Currently, at least a half dozen clinical trials worldwide are using digital PCR; half of them use the Bio-Rad QX200 Droplet Digital PCR system. Examples of studies include examining BCR-ABL monitoring in patients with chronic myelogenous leukemia (CML); identifying activating mutations in epidermal growth factor receptor (EGFR) for first-line therapy of new drugs in patients with lung cancer; and the monitoring of resistance mutations such as EGFR T790M in patients with non-small cell lung cancer (NSCLC).

    Clovis Oncology used a technology called BEAMing (Beads, Emulsions, Amplification, and Magnetics), a type of digital PCR for blood-based molecular testing, to perform EGFR testing on almost 250 patients in clinical trials. In BEAMing, individual EGFR gene copies from plasma are separated into individual water droplets in a water-in-oil emulsion. The gene copies are then amplified by PCR on magnetic beads.

    The beads are counted by flow cytometry using fluorescently labeled probes to distinguish mutant beads from wild-type. Because each bead can be traced to an individual EGFR molecule in the patient’s plasma, the method is highly quantitative.

    “BEAMing is particularly well-suited for the detection of known mutations in circulating tumor DNA. In this circumstance, the mutation of interest often occurs at low levels, perhaps only 1–2 copies per milliliter or even less, and in a high background of wild-type DNA that comes from normal tissue. BEAMing can detect one mutant molecule in a background of 5,000 wild-type molecules in clinical samples,” stated Andrew Allen, MRCP, Ph.D., chief medical officer, Clovis Oncology.
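Because each bead traces back to a single template molecule, the BEAMing readout reduces to a direct count. A toy illustration of the mutant-fraction arithmetic at the stated sensitivity limit:

```python
def mutant_fraction(mutant_beads, wildtype_beads):
    """Mutant allele fraction from BEAMing-style bead counts.

    Each bead derives from one template molecule in plasma, so the
    ratio of fluorescently distinguished bead populations counted by
    flow cytometry is itself the allele fraction.
    """
    total = mutant_beads + wildtype_beads
    return mutant_beads / total

# Detection near the stated limit: 1 mutant bead among 5,000 wild-type
print(f"{mutant_fraction(1, 5000):.4%}")  # ~0.02%
```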

    In the studies, the EGFR-resistance mutation T790M could be identified in plasma 81% of the time that it was seen in the matched patient tumor biopsy. Additionally, about 10% of patients in the study had a T790M mutation in plasma that was not identified in tissue, presumably because of tumor heterogeneity. Another 5–10% of the patients did not provide an EGFR result, usually because the tissue biopsy had no tumor cells.

    In aggregate, these results suggest that plasma EGFR testing can be a valuable complement to tumor testing in the clinical management of NSCLC patients, and can provide an alternative when a biopsy is not available. Tumor biopsies may provide only limited tissue, if in fact any tissue is available, for molecular analysis. Also, mutations may be missed due to tumor heterogeneity. These mutations may be captured by sampling the blood, which acts as a reservoir for mutations from all parts of a patient’s tumor burden.

    In the last few years, a panoply of clinically actionable driver mutations have been identified for NSCLC, including mutations in EGFR, BRAF, and HER2, as well as ALK, ROS, and RET rearrangements. These driver mutations will migrate NSCLC molecular diagnostic testing in the next few years toward panel testing of relevant cancer genes using various digital technologies, including next-generation sequencing.

     

PCR Has a History of Amplifying Its Game

A GEN 35th Anniversary Retrospective

PCR Has a History of Amplifying Its Game

PCR is a fast and inexpensive technique used to amplify segments of DNA that continues to adapt and evolve for the demanding needs of molecular biology researchers. This diagram shows the basic principles of PCR amplification. [NHGRI]

  • The influence that the polymerase chain reaction (PCR) has had on modern molecular biology is nothing short of remarkable. This technique, which is akin to molecular photocopying, has been the centerpiece of everything from the OJ Simpson Trial to the completion of the Human Genome Project. Clinical laboratories use this DNA amplification method for infectious disease testing and tissue typing in organ transplantation. Most recently, with the explosion of the molecular diagnostics field and meteoric rise in the use of next-generation sequencing platforms, PCR has enhanced its standing as an essential pillar of genomic science.

    Let’s open the door to the past and take a look back around 35 years ago when GEN started reporting on the relatively new disciplines of genetic engineering and molecular biology. At that time, GEN was among the first to hear the buzz surrounding a new method to synthesize and amplify DNA in the laboratory. In reviewing the fascinating history of PCR, we will see how the molecular diagnostics field took shape and where it could be headed in the future.

  • Some Like It Hot

    The biological sciences rarely advance within a vacuum—rather, they rely on previous discoveries that directly or indirectly deepen our understanding. The contributions by molecular biologists that supplied the functional pieces of PCR were numerous and spread out over more than two decades.

    It began with H. Gobind Khorana’s advances in understanding the genetic code, which led to the use of synthetic DNA oligonucleotides; continued through Kjell Kleppe’s 1971 vision of a two-primer system for replicating DNA segments; and culminated in Frederick Sanger’s method of DNA sequencing—a process that would win him the Nobel Prize in 1980—which utilized DNA oligo primers, nucleotide precursors, and a DNA synthesis enzyme.

    All of these discoveries were essential to PCR’s birth, yet it would be an egregious mistake to begin a retrospective on PCR without discussing the enzyme upon which the entire reaction hinges—DNA polymerase. In 1956, Nobel laureate Arthur Kornberg and his colleagues discovered DNA polymerase I (Pol I) in Escherichia coli. Moreover, the researchers described the fundamental process by which the polymerase enzyme copies the base sequence of a DNA template strand. However, it would take biologists another 20 years to discover a version of DNA polymerase stable enough for meaningful laboratory use.

    That discovery came in 1976, when a team of researchers from the University of Cincinnati described the activity of a DNA polymerase (Taq) they had isolated from the extreme thermophile Thermus aquaticus, a bacterium that lives in hot springs and hydrothermal vents. The fact that this enzyme could withstand typical protein-denaturing temperatures and function optimally around 75–80°C fortuitously set the stage for the development of PCR.

    By 1983, all of the ingredients to bake the molecular cake were sitting in the biological cupboard, waiting to be assembled in the proper order. At that time, future Nobel laureate Kary Mullis was working as a scientist for the Cetus Corporation, trying to perfect oligonucleotide synthesis. Mullis stumbled upon the idea of amplifying segments of DNA using multiple rounds of replication and the two-primer system—essentially modifying and expanding upon Sanger’s sequencing reaction. Mullis found that the temperature for each step in the reaction (melting, annealing, and extension) needed to be painstakingly controlled by hand. In addition, he realized that since the reactions used a non-thermostable DNA polymerase, fresh enzyme had to be “spiked in” after each successive cycle.
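Each melt/anneal/extend cycle (ideally) doubles the flanked segment, which is why even a few dozen laboriously hand-run cycles were worthwhile. A toy calculation of that exponential growth (illustrative numbers):

```python
def amplify(initial_copies, cycles, efficiency=1.0):
    """Copies of the target segment after n PCR cycles.

    efficiency=1.0 models perfect doubling at each
    melt/anneal/extend cycle; real reactions fall somewhat short.
    """
    return initial_copies * (1 + efficiency) ** cycles

print(amplify(10, 30))  # 10 * 2**30, i.e. ~1.07e10 copies from 10 templates
```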

    Mullis’ hard work and persistence paid off: the reaction successfully amplified a particular segment of DNA flanked by two opposing nucleotide primer molecules. Two years later, the Cetus team presented their work at the annual meeting of the American Society for Human Genetics, and the first mention of the method was published in Science that same year. That article, however, did not go into detail about the specifics of the newly developed PCR method; the dedicated methods paper would be rejected by roughly 15 journals and would not be published until 1987.

    Although scientists were a bit slow on the uptake for the new method, the researchers at Cetus kept developing ways to improve upon the original assay. In 1986, the scientists substituted the temperature-resistant Taq polymerase for the original heat-labile DNA polymerase, removing the need to spike in enzyme and dramatically reducing errors while increasing sensitivity. A year later, PerkinElmer launched its thermal cycler, allowing scientists to regulate the heating and cooling steps of the PCR reaction with greater efficiency.

    Soon after the introduction of Taq and the launch of the thermal cycler, use of PCR exploded among research laboratories. It not only vaulted molecular biology to the pinnacle of research interest, but also launched a molecular diagnostics revolution that continues today and shows no signs of slowing down.

  • Molecular Workhorse

    In the years since PCR first burst onto the scene, a number of significant advancements have improved the overall method. For example, in 1991, a new DNA polymerase from the hyperthermophilic archaeon Pyrococcus furiosus, or Pfu, was introduced as a high-fidelity alternative to Taq. Unlike Taq polymerase, Pfu has built-in 3′ to 5′ exonuclease proofreading activity, which allows the enzyme to correct nucleotide incorporation errors on the fly—dramatically increasing base specificity, albeit at a reduced rate of amplification versus Taq.

    In 1995, two advancements were introduced to PCR users. The first, called antibody "hot-start" PCR, utilized an immunoglobulin directed against the DNA polymerase, inhibiting its activity until the first 95°C melt stage denatures the antibody and allows the polymerase to become active. Although this process was effective in increasing the specificity of the PCR reaction, many researchers found the technique time consuming, and it often caused cross-contamination of samples.

    The second innovation introduced that year began another revolution for molecular biology and the PCR method. Real-time PCR, or quantitative PCR (qPCR), uses incorporated fluorescent reporter dyes to monitor product accumulation during amplification; combined with the reverse-transcriptase enzyme, it also allows DNA templates to be generated from RNA transcripts and quantified. The technique is still widely used by researchers to monitor gene expression with great accuracy, and over the past 20 years many companies have invested heavily in R&D to create more accurate, higher-throughput, and simpler qPCR machines to meet researcher demand.
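    Relative quantification from qPCR data is commonly summarized with the 2^−ΔΔCt method. A minimal sketch of the arithmetic, with illustrative Ct values (the numbers and the assumption of ~100% amplification efficiency are mine, not from the article):

    ```python
    def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
        """Relative expression by the 2^-ddCt method.

        Assumes near-100% amplification efficiency, i.e. the template
        doubles every cycle; Ct values here are illustrative only."""
        d_ct_sample = ct_target_sample - ct_ref_sample      # normalize to a reference gene
        d_ct_control = ct_target_control - ct_ref_control
        dd_ct = d_ct_sample - d_ct_control
        return 2 ** (-dd_ct)

    # Target crosses the fluorescence threshold 2 cycles earlier in the
    # treated sample than in the control, reference gene unchanged:
    print(fold_change(24.0, 18.0, 26.0, 18.0))  # -> 4.0 (four-fold up-regulation)
    ```

    Each cycle saved corresponds to one doubling, which is why a ΔΔCt of −2 maps to a four-fold difference.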

    With the advent of next-generation sequencing techniques, which started commanding the attention of more and more researchers, PCR machines and methods needed to evolve and modernize to keep pace. PCR remained the linchpin of almost all the next-generation sequencing workflows that came along, but the traditional technique wasn't nearly as precise as required.

    Digital PCR (dPCR) was introduced as a refinement of the conventional method, with the first real commercial system emerging around 2006. dPCR can be used to directly quantify and clonally amplify DNA or RNA.

    Rather than carrying out a single bulk reaction, the apparatus separates the sample into a large number of partitions and performs the reaction in each partition individually, allowing a more reliable measurement of nucleic acid content. Researchers often use this method for studying gene-sequence variations, such as copy number variants (CNVs), point mutations, rare-sequence detection, and microRNA analysis, as well as for routine amplification of next-generation sequencing samples.
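    Quantification in dPCR rests on counting the partitions that light up. Because a partition can capture more than one target molecule, the mean copy number per partition is usually recovered with a Poisson correction; a minimal sketch of that standard statistic (not tied to any particular instrument):

    ```python
    import math

    def copies_per_partition(n_positive, n_total):
        """Poisson-corrected mean target copies per partition.

        A partition reads negative only if it received zero copies:
        P(negative) = exp(-lam)  =>  lam = -ln(1 - fraction_positive)."""
        p = n_positive / n_total
        return -math.log(1.0 - p)

    # 5,000 of 20,000 partitions fluoresce. A naive count says 5,000 copies;
    # the correction accounts for multiply occupied partitions.
    lam = copies_per_partition(5000, 20000)
    print(round(lam * 20000))  # -> 5754 total copies loaded
    ```

    Dividing the corrected copy count by the loaded volume then gives an absolute concentration with no standard curve, which is the practical appeal of the digital format.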

  • Future of PCR: Better, Faster, Stronger!

    It is almost impossible to envision a future laboratory setting that wouldn't utilize PCR in some fashion, especially given how heavily next-generation sequencing techniques rely on accurate PCR amplification; at the very least, the method will persist as a simple amplification tool for creating DNA fragments of interest.

    Yet there is at least one new next-generation sequencing technique that can read native DNA sequences without an amplification step: nanopore sequencing. Although this technique has performed well in many preliminary trials, it is still in its relative infancy and will probably require several more years of development before approaching large-scale adoption by researchers. Even then, PCR has become so ingrained in daily laboratory life that trying to phase out the technique would be like asking molecular biologists to give up their pipettes or restriction enzymes.

    Most PCR equipment manufacturers continue to seek ways to improve the speed and sensitivity of their thermal cyclers, while biologists continue to look toward ways to genetically engineer better DNA polymerase molecules with even greater fidelity than their naturally occurring cousins. Whatever the new advancements are, and wherever they lead the life sciences field, you can count on us at GEN to continue to provide our readers with detailed information for another 35 years … at least!

     



Imaging of Non-tumorous and Tumorous Human Brain Tissues

Reporter and Curator: Dror Nir, PhD

The point of interest in the article I feature below is that it represents a potential building block of a future system that would use full-field optical coherence tomography during brain surgery to improve the accuracy of cancer lesion resection. The article reports promising results for differentiating tumor from normal brain tissue in large samples (on the order of 1–3 cm2), offering images with spatial resolution comparable to histological analysis, sufficient to distinguish microstructures of the human brain parenchyma. Easy to say, hard to do… :) The goal: an intraoperative apparatus to guide the surgeon in real time during resection of brain tumors.

 

Imaging of non-tumorous and tumorous human brain tissues with full-field optical coherence tomography 

Open Access Article

Osnath Assayag a, Kate Grieve a, Bertrand Devaux b,c, Fabrice Harms a, Johan Pallud b,c, Fabrice Chretien b,c, Claude Boccara a, Pascale Varlet b,c

a Inserm U979 “Wave Physics For Medicine”, ESPCI-ParisTech, Institut Langevin, 1 rue Jussieu, 75005 Paris, France

b Centre Hospitalier Sainte-Anne, 1 rue Cabanis, 75014 Paris, France

c University Paris Descartes, France.

Abstract

A prospective study was performed on neurosurgical samples from 18 patients to evaluate the use of full-field optical coherence tomography (FF-OCT) in brain tumor diagnosis.

FF-OCT captures en face slices of tissue samples at 1 μm resolution in 3D to a penetration depth of around 200 μm. A 1 cm2 specimen is scanned at a single depth and processed in about 5 min. This rapid imaging process is non-invasive and requires neither contrast agent injection nor tissue preparation, which makes it particularly well suited to medical imaging applications.

Temporal chronic epileptic parenchyma and brain tumors such as meningiomas, low-grade and high-grade gliomas, and choroid plexus papilloma were imaged. A subpopulation of neurons, myelin fibers and CNS vasculature were clearly identified. Cortex could be discriminated from white matter, but individual glial cells such as astrocytes (normal or reactive) or oligodendrocytes were not observable.

This study reports for the first time on the feasibility of using FF-OCT in a real-time manner as a label-free non-invasive imaging technique in an intraoperative neurosurgical clinical setting to assess tumorous glial and epileptic margins.

Abbreviations

  • FF-OCT, full field optical coherence tomography;
  • OCT, optical coherence tomography

Keywords

Optical imaging; Digital pathology; Brain imaging; Brain tumor; Glioma

1. Introduction

1.1. Primary CNS tumors

Primary central nervous system (CNS) tumors represent a heterogeneous group of tumors with benign, malignant and slow-growing evolution. In France, 5000 new cases of primary CNS tumors are detected annually (Rigau et al., 2011). Despite considerable progress in diagnosis and treatment, the survival rate following a malignant brain tumor remains low and 3000 deaths are reported annually from CNS tumors in France (INCa, 2011). Overall survival from brain tumors depends on the complete resection of the tumor mass, as identified through postoperative imaging, associated with updated adjuvant radiation therapy and chemotherapy regimen for malignant tumors (Soffietti et al., 2010). Therefore, there is a need to evaluate the completeness of the tumor resection at the end of the surgical procedure, as well as to identify the different components of the tumor intraoperatively, i.e. tumor tissue, necrosis, infiltrated parenchyma (Kelly et al., 1987). In particular, the persistence of non-visible tumorous tissue or isolated tumor cells infiltrating brain parenchyma may lead to additional resection.

For low-grade tumors located close to eloquent brain areas, a maximally safe resection that spares functional tissue warrants the current use of intraoperative techniques that guide a more complete tumor resection. During awake surgery, speech or fine motor skills are monitored, while cortical and subcortical stimulations are performed to identify functional areas (Sanai et al., 2008). Intraoperative MRI provides images of the surgical site as well as tomographic images of the whole brain that are sufficient for an approximate evaluation of the abnormal excised tissue, but offers low resolution (typically 1 to 1.5 mm) and produces artifacts at the air-tissue boundary of the surgical site.

Histological and immunohistochemical analyses of neurosurgical samples remain the current gold standard method used to analyze tumorous tissue due to advantages of sub-cellular level resolution and high contrast. However, these methods require lengthy (12 to 72 h), complex, multi-step processing and the use of carcinogenic chemical products, which would not be technically possible intra-operatively. In addition, the number of histological slides that can be reviewed and analyzed by a pathologist is limited, which constrains the number and size of sampled locations on the tumor or the surrounding tissue.

To obtain histology-like information in a short time period, intraoperative cytological smear tests are performed. However, tissue architecture information is thereby lost, and the analysis covers only a limited area of the sample (1 mm × 1 mm).

Intraoperative optical imaging techniques are recently developed high-resolution imaging modalities that may help the surgeon to identify the persistence of tumor tissue at the resection boundaries. Using a conventional operating microscope with Xenon lamp illumination gives an overall view of the surgical site, but performance is limited by the poor discriminative capacity of the white light illumination at the surgical site interface. Better discrimination between normal and tumorous tissues has been obtained using fluorescence properties of tumor cells labeled with preoperatively administered 5-ALA. Tumor tissue shows a strong ALA-induced PPIX fluorescence at 635 nm and 704 nm when the operative field is illuminated with a 440 nm-filtered lamp. More complete resections of high-grade gliomas have been demonstrated using 5-ALA fluorescence guidance (Stummer et al., 2000); however, brain parenchyma infiltrated by isolated tumor cells is not fluorescent, limiting the value of this technique when resecting low-grade gliomas.

Refinement of this induced fluorescence technique has been achieved using a confocal microscope and intraoperative injection of sodium fluorescein. A 488 nm laser illuminates the operative field and tissue contact analysis is performed using a handheld surgical probe (field of view less than 0.5 × 0.5 mm) which scans the fluorescence of the surgical interface at the 505–585 nm band. Fluorescent isolated tumor cells are clearly identified at depths from 0 to 500 μm from the resection border (Sanai et al., 2011), demonstrating the potential of this technique in low-grade glioma resection.

Reviewing the state-of-the-art, a need is identified for a quick and reliable method of providing the neurosurgeon with architectural and cellular information without the need for injection or oral intake of exogenous markers in order to guide the neurosurgeon and optimize surgical resections.

1.2. Full-field optical coherence tomography

Introduced in the early 1990s (Huang et al., 1991), optical coherence tomography (OCT) uses interference to precisely locate light deep inside tissue. The photons coming from the small volume of interest are distinguished from light scattered by the other parts of the sample by the use of an interferometer and a light source with short coherence length. Only the portion of light with the same path length as the reference arm of the interferometer, to within the coherence length of the source (typically a few μm), will produce interference. A two-dimensional B-scan image is captured by scanning. Recently, the technique has been improved, mainly in terms of speed and sensitivity, through spectral encoding (De Boer et al., 2003, Leitgeb et al., 2003 and Wojtkowski et al., 2002).
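The coherence length of the source is what sets the axial (depth) resolution. For a Gaussian source spectrum the standard textbook expression can be evaluated directly; the sketch below plugs in the 700 nm center wavelength and 125 nm bandwidth quoted later for the FF-OCT system, and the tissue refractive index of 1.4 is my assumption, not a figure from the article:

```python
import math

def axial_resolution_um(center_wavelength_nm, bandwidth_nm, n_tissue=1.0):
    """OCT axial resolution for a Gaussian source spectrum:
    dz = (2 ln 2 / pi) * lambda0^2 / (n * d_lambda)."""
    dz_nm = (2 * math.log(2) / math.pi) * center_wavelength_nm ** 2 / bandwidth_nm
    return dz_nm / n_tissue / 1000.0  # nm -> um

# Halogen source described for the FF-OCT system: 700 nm center, 125 nm width
print(round(axial_resolution_um(700, 125, n_tissue=1.4), 2))  # -> 1.24 (um)
```

The result is close to the ~1 μm axial figure reported for the instrument, and the formula makes clear why a broadband thermal source (large Δλ) yields much finer optical sectioning than a narrowband laser.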

A recent OCT technique called full-field optical coherence tomography (FF-OCT) enables both a large field of view and high resolution over the full field of observation (Dubois et al., 2002 and Dubois et al., 2004). This allows navigation across the wide field image to follow the morphology at different scales and different positions. FF-OCT uses a simple halogen or light-emitting diode (LED) light source for full field illumination, rather than lasers and point-by-point scanning components required for conventional OCT. The illumination level is low enough to maintain the sample integrity: the power incident on the sample is less than 1 mW/mm2 using deep red and near infrared light. FF-OCT provides the highest OCT 3D resolution of 1.5 × 1.5 × 1 μm3 (X × Y × Z) on unprepared label-free tissue samples down to depths of approximately 200 μm–300 μm (tissue-dependent) over a wide field of view that allows digital zooming down to the cellular level. Interestingly, it produces en face images in the native field of view (rather than the cross-sectional images of conventional OCT), which mimic the histology process, thereby facilitating the reading of images by pathologists. Moreover, as for conventional OCT, it does not require tissue slicing or modification of any kind (i.e. no tissue fixation, coloration, freezing or paraffin embedding). FF-OCT image acquisition and processing time is less than 5 min for a typical 1 cm2 sample (Assayag et al., in press) and the imaging performance has been shown to be equivalent in fresh or fixed tissue (Assayag et al., in press and Dalimier and Salomon, 2012). In addition, FF-OCT intrinsically provides digital images suitable for telemedicine.

Numerous studies have been published over the past two decades demonstrating the suitability of OCT for in vivo or ex vivo diagnosis. OCT imaging has been previously applied in a variety of tissues such as the eye (Grieve et al., 2004 and Swanson et al., 1993), upper aerodigestive tract (Betz et al., 2008, Chen et al., 2007 and Ozawa et al., 2009), gastrointestinal tract (Tearney et al., 1998), and breast tissue and lymph nodes (Adie and Boppart, 2009, Boppart et al., 2004, Hsiung et al., 2007, Luo et al., 2005, Nguyen et al., 2009, Zhou et al., 2010 and Zysk and Boppart, 2006).

In the CNS, published studies that evaluate OCT (Bizheva et al., 2005, Böhringer et al., 2006, Böhringer et al., 2009, Boppart, 2003 and Boppart et al., 1998) using time-domain (TD) or spectral domain (SD) OCT systems had insufficient resolution (10 to 15 μm axial) for visualization of fine morphological details. A study of 9 patients with gliomas carried out using a TD-OCT system led to classification of the samples as malignant versus benign (Böhringer et al., 2009). However, the differentiation of tissues was achieved by considering the relative attenuation of the signal returning from the tumorous zones in relation to that returning from healthy zones. The classification was not possible by real recognition of CNS microscopic structures. Another study showed images of brain microstructures obtained with an OCT system equipped with an ultra-fast laser that offered axial and lateral resolution of 1.3 μm and 3 μm respectively (Bizheva et al., 2005). In this way, it was possible to differentiate malignant from healthy tissue by the presence of blood vessels, microcalcifications and cysts in the tumorous tissue. However the images obtained were small (2 mm × 1 mm), captured on fixed tissue only and required use of an expensive large laser thereby limiting the possibility for clinical implementation.

Other studies have focused on animal brain. In rat brain in vivo, it has been shown that optical coherence microscopy (OCM) can reveal neuronal cell bodies and myelin fibers (Srinivasan et al., 2012), while FF-OCT can also reveal myelin fibers (Ben Arous et al., 2011), and movement of red blood cells in vessels (Binding et al., 2011).

En face images captured with confocal reflectance microscopy can closely resemble FF-OCT images. For example, a prototype system used by Wirth et al. (2012) achieves lateral and axial resolution of 0.9 μm and 3 μm respectively. However small field size prevents viewing of wide-field architecture and slow acquisition speed prohibits the implementation of mosaicking. In addition, the poorer axial resolution and lower penetration depth of confocal imaging in comparison to FF-OCT limit the ability to reconstruct cross-sections from the confocal image stack.

This study is the first to analyze non-tumorous and tumorous human brain tissue samples using FF-OCT.

2. Materials and methods

2.1. Instrument

The experimental arrangement of FF-OCT (Fig. 1A) is based on a configuration that is referred to as a Linnik interferometer (Dubois et al., 2002). A halogen lamp is used as a spatially incoherent source to illuminate the full field of an immersion microscope objective at a central wavelength of 700 nm, with spectral width of 125 nm. The signal is extracted from the background of incoherent backscattered light using a phase-shifting method implemented in custom-designed software. This study was performed on a commercial FF-OCT system (LightCT, LLTech, France).
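The phase-shifting extraction mentioned above is, in FF-OCT, typically a four-step scheme: the reference mirror is stepped in quarter-wavelength increments and the tomographic amplitude is recovered from frame differences, so the incoherent background cancels. The exact algorithm of the commercial system is not described in the article; this is a generic sketch of the standard four-frame demodulation:

```python
import numpy as np

def tomographic_amplitude(i1, i2, i3, i4):
    """Four-step phase-shifting demodulation. The reference arm is shifted
    by pi/2 between frames, so the incoherent background B cancels in the
    differences and only the interference fringe amplitude A survives:
        i_k = B + A * cos(phi + k*pi/2)  =>  A = 0.5*sqrt((i1-i3)^2 + (i2-i4)^2)"""
    return 0.5 * np.sqrt((i1 - i3) ** 2 + (i2 - i4) ** 2)

# Synthetic pixel: 1000-count background, fringe amplitude 40, arbitrary phase
phase = 0.7
frames = [1000 + 40 * np.cos(phase + k * np.pi / 2) for k in range(4)]
print(round(float(tomographic_amplitude(*frames)), 6))  # -> 40.0
```

Applied pixel-wise to full camera frames, the same arithmetic yields the en face coherence-gated image while rejecting the (much stronger) incoherently backscattered light.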

 

Fig 1

Capturing “en face” images allows easy comparison with histological sections. The resolution, pixel number and sampling requirements result in a native field of view that is limited to about 1 mm2. The sample is moved on a high precision mechanical platform and a number of fields are stitched together (Beck et al., 2000) to display a significant field of view. The FF-OCT microscope is housed in a compact setup (Fig. 1B) that is about the size of a standard optical microscope (310 × 310 × 800 mm L × W × H).
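The stitching of ~1 mm2 native fields into a wide-field mosaic can be sketched as grid placement with overlap blending. Real systems also register adjacent tiles (e.g. by cross-correlation, as in the Beck et al. approach cited above) rather than trusting stage coordinates alone, so this is a simplified illustration with made-up tile sizes:

```python
import numpy as np

def stitch_grid(tiles, overlap_px):
    """Naively stitch a 2D grid of equally sized fields captured on a
    precision stage, averaging intensities in the overlapping strips.
    tiles: nested list tiles[row][col] of 2D arrays of identical shape."""
    th, tw = tiles[0][0].shape
    step_y, step_x = th - overlap_px, tw - overlap_px
    rows, cols = len(tiles), len(tiles[0])
    acc = np.zeros((step_y * (rows - 1) + th, step_x * (cols - 1) + tw))
    weight = np.zeros_like(acc)
    for r in range(rows):
        for c in range(cols):
            y, x = r * step_y, c * step_x
            acc[y:y + th, x:x + tw] += tiles[r][c]      # accumulate intensities
            weight[y:y + th, x:x + tw] += 1.0           # count contributions
    return acc / weight                                  # average overlaps

# Four 100x100-pixel fields with a 10-pixel overlap -> one 190x190 mosaic
tiles = [[np.ones((100, 100)) for _ in range(2)] for _ in range(2)]
print(stitch_grid(tiles, 10).shape)  # -> (190, 190)
```

Averaging the overlap strips suppresses seam artifacts; weighted (feathered) blending or registration-based offsets would be the natural next refinements.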

2.2. Imaging protocol

All images presented in this study were captured on fresh brain tissue samples from patients operated on at the Neurosurgery Department of Sainte-Anne Hospital, Paris. Informed and written consent was obtained in all cases following the standard procedure at Sainte-Anne Hospital from patients who were undergoing surgical intervention. Fresh samples were collected from the operating theater immediately after resection and sent to the pathology department. A pathologist dissected each sample to obtain a 1–2 cm2 piece and made a macroscopic observation to orientate the specimen in order to decide which side to image. The sample was immersed in physiological serum, placed in a cassette, numbered, and brought to the FF-OCT imaging facility in a nearby laboratory (15 min distant) where the FF-OCT images were captured. The sample was placed in a custom holder with a coverslip on top (Fig. 1C, D). The sample was raised on a piston to rest gently against the coverslip in order to flatten the surface and so optimize the image capture. The sample is automatically scanned under a 10 × 0.3 numerical aperture (NA) immersion microscope objective. The immersion medium is a silicone oil of refractive index close to that of water, chosen to optimize index matching and slow evaporation. The entire area of each sample was imaged at a depth of 20 μm beneath the sample surface. This depth has been reported to be optimal for comparison of FF-OCT images to histology images in a previous study on breast tissue (Assayag et al., in press). There are several reasons for the choice of imaging depth: firstly, histology was also performed at approximately 20 μm from the edge of the block, i.e. the depth at which typically the whole tissue surface begins to be revealed. Secondly, FF-OCT signal is attenuated with depth due to multiple scattering in the tissue, and resolution is degraded with depth due to aberrations. 
The best FF-OCT images are therefore captured close to the surface, and the best matching is achieved by attempting to image at a similar depth as the slice in the paraffin block. It was also possible to capture image stacks down to several hundred μm in depth (where penetration depth is dependent on tissue type), for the purpose of reconstructing a 3D volume and imaging layers of neurons and myelin fibers. An example of such a stack in the cerebellum is shown as a video (Video 2) in supplementary material. Once FF-OCT imaging was done, each sample was immediately fixed in formaldehyde and returned to the pathology department where it underwent standard processing in order to compare the FF-OCT images to histology slides.

2.3. Matching FF-OCT to histology

The intention in all cases was to match as closely as possible to histology. FF-OCT images were captured 20 μm below the surface. Histology slices were captured 20 μm from the edge of the block. However the angle of the inclusion is hard to control and so some difference in the angle of the plane always exists when attempting matching. Various other factors that can cause differences stem from the histology process — fixing, dehydrating, paraffin inclusion etc. all alter the tissue and so precise correspondence can be challenging. Such difficulties are common in attempting to match histology to other imaging modalities (e.g. FF-OCT Assayag et al., in press; OCT Bizheva et al., 2005; confocal microscopy Wirth et al., 2012).

An additional parameter in the matching process is the slice thickness. Histology slides were 4 μm in thickness while FF-OCT optical slices have a 1 μm thickness. The finer slice of the FF-OCT image meant that lower cell densities were perceived on the FF-OCT images (in those cases where individual cells were seen, e.g. neurons in the cortex). This difference in slice thickness affects the accuracy of the FF-OCT to histology match. In order to improve matching, it would have been possible to capture four FF-OCT slices in 1 μm steps and sum the images to mimic the histology thickness. However, this would effectively degrade the resolution, which was deemed undesirable in evaluating the capacities of the FF-OCT method.

3. Results

18 samples from 18 adult patients (4 males, 14 females) of age range 19–81 years have been included in the study: 1 mesial temporal lobe epilepsy and 1 cerebellum adjacent to a pulmonary adenocarcinoma metastasis (serving as the non-tumor brain samples), 7 diffuse supratentorial gliomas (4 WHO grade II, 3 WHO grade III), 5 meningiomas, 1 hemangiopericytoma, and 1 choroid plexus papilloma. Patient characteristics are detailed in Table 1.

 

Table 1

3.1. FF-OCT imaging identifies myelinated axon fibers, neuronal cell bodies and vasculature in the human epileptic brain and cerebellum

The cortex and the white matter are clearly distinguished from one another (Fig. 2). Indeed, a subpopulation of neuronal cell bodies (Fig. 2B, C) as well as myelinated axon bundles leading to the white matter could be recognized (Fig. 2D, E). Neuronal cell bodies appear as dark triangles (Fig. 2C) in relation to the bright surrounding myelinated environment. The FF-OCT signal is produced by backscattered photons from tissues of differing refractive indices. The number of photons backscattered from the nuclei in neurons appears to be too few to produce a signal that allows their differentiation from the cytoplasm, and therefore the whole of the cell body (nucleus plus cytoplasm) appears dark.

Fig 2

 

Myelinated axons are numerous, well discernible as small fascicles and appear as bright white lines (Fig. 2E). As the cortex does not contain many myelinated axons, it appears dark gray. Brain vasculature is visible (Fig. 2F and G), and small vessels are distinguished by a thin collagen membrane that appears light gray. Video 1 in supplementary material shows a movie composed of a series of en face 1 μm thick optical slices captured over 100 μm into the depth of the cortex tissue. The myelin fibers and neuronal cell bodies are seen in successive layers.

The different regions of the human hippocampal formation are easily recognizable (Fig. 3). Indeed, CA1 field and its stratum radiatum, CA4 field, the hippocampal fissure, the dentate gyrus, and the alveus are easily distinguishable. Other structures become visible by zooming in digitally on the FF-OCT image. The large pyramidal neurons of the CA4 field (Fig. 3B) and the granule cells that constitute the stratum granulosum of the dentate gyrus are visible, as black triangles and as small round dots, respectively (Fig. 3D).

 

Fig 3

In the normal cerebellum, the lamellar or foliar pattern of alternating cortex and central white matter is easily observed (Fig. 4A). By digital zooming, Purkinje and granular neurons also appear as black triangles or dots, respectively (Fig. 4C), and myelinated axons are visible as bright white lines (Fig. 4E). Video 2 in supplementary material shows a fly-through movie in the reconstructed axial slice orientation of a cortex region in cerebellum. The Purkinje and granular neurons are visible down to depths of 200 μm in the tissue.

 

Fig 4

3.2. FF-OCT images distinguish meningiomas from hemangiopericytoma in meningeal tumors

The classic morphological features of a meningioma are visible on the FF-OCT image: large lobules of tumorous cells appear in light gray (Fig. 5A), demarcated by collagen-rich bundles (Fig. 5B) which are highly scattering and appear a brilliant white in the FF-OCT images. The classic concentric tumorous cell clusters (whorls) are very clearly distinguished on the FF-OCT image (Fig. 5D). In addition the presence of numerous cell whorls with central calcifications (psammoma bodies) is revealed (Fig. 5F). Collagen balls appear bright white on the FF-OCT image (Fig. 5H). As the collagen balls progressively calcify, they are consumed by the black of the calcified area, generating a target-like image (Fig. 5H). Calcifications appear black in FF-OCT as they are crystalline and so allow no penetration of photons to their interior.

Fig 5

Mesenchymal non-meningothelial tumors such as hemangiopericytomas represent a classic differential diagnosis of meningiomas. In FF-OCT, the hemangiopericytoma is more monotonous in appearance than the meningiomas, with a highly vascular branching component with staghorn-type vessels (Fig. 6A, C).

Fig 6

3.3. FF-OCT images identify choroid plexus papilloma

The choroid plexus papilloma appears as an irregular coalescence of multiple papillae composed of elongated fibrovascular axes covered by a single layer of choroid glial cells (Fig. 7). By zooming in on an edematous papilla, the axis appears as a black structure covered by a regular light gray line (Fig. 7B). If the papilla central axis is hemorrhagic, the fine regular single layer is not distinguishable (Fig. 7C). Additional digital zooming in on the image reveals cellular level information, and some nuclei of plexus choroid cells can be recognized. However, cellular atypia and mitosis are not visible. These represent key diagnosis criteria used to differentiate choroid plexus papilloma (grade I) from atypical plexus papilloma (grade II).

Fig 7

3.4. FF-OCT images detect the brain tissue architecture modifications generated by diffusely infiltrative gliomas

Contrary to the choroid plexus papillomas, which have a very distinctive architecture in histology (cauliflower-like aspect) that is very easily recognized in the FF-OCT images (Fig. 7A to G), diffusely infiltrating gliomas do not present a specific tumor architecture (Fig. 8), as they diffusely permeate the normal brain architecture. Hence, the tumorous glial cells are largely dispersed through a nearly normal brain parenchyma (Fig. 8E). The presence of infiltrating tumorous glial cells, attested by high-magnification histological observation (irregular atypical cell nuclei compared to normal oligodendrocytes), is not detectable with the current generation of FF-OCT devices, as FF-OCT cannot reliably distinguish the individual cell nuclei due to lack of contrast (as opposed to lack of resolution). In our experience, diffuse low-grade gliomas (less than 20% tumor cell density) are mistaken for normal brain tissue on FF-OCT images. However, in high-grade gliomas (Fig. 8G–K), the infiltration of the tumor has occurred to such an extent that the normal parenchyma architecture is lost. This architectural change is easily observed in FF-OCT and is successfully identified as high-grade glioma, even though the individual glial cell nuclei are not distinguished.

Fig 8

4. Discussion

We present here the first large size images (i.e. on the order of 1–3 cm2) acquired using an OCT system that offer spatial resolution comparable to histological analysis, sufficient to distinguish microstructures of the human brain parenchyma.

Firstly, the FF-OCT technique and the images presented here combine several practical advantages. The imaging system is compact, it can be placed in the operating room, the tissue sample does not require preparation and image acquisition is rapid. This technique thus appears promising as an intraoperative tool to help neurosurgeons and pathologists.

Secondly, resolution is sufficient (on the order of 1 μm axial and lateral) to distinguish brain tissue microstructures. Indeed, it was possible to distinguish neuron cell bodies in the cortex and axon bundles going towards white matter. Individual myelin fibers of 1 μm in diameter are visible on the FF-OCT images. Thus FF-OCT may serve as a real-time anatomical locator.

Histological architectural characteristics of meningothelial, fibrous, transitional and psammomatous meningiomas were easily recognizable on the FF-OCT images (lobules and whorl formation, collagenous-septae, calcified psammoma bodies, thick vessels). Psammomatous and transitional meningiomas presented distinct architectural characteristics in FF-OCT images in comparison to those observed in hemangiopericytoma. Thus, FF-OCT may serve as an intraoperative tool, in addition to extemporaneous examination, to refine differential diagnosis between pathological entities with different prognoses and surgical managements.

Diffuse glioma was essentially recognized by the loss of normal parenchyma architecture. However, glioma could be detected on FF-OCT images only if the glial cell density is greater than around 20% (i.e. the point at which the effect on the architecture becomes noticeable). The FF-OCT technique is therefore not currently suitable for the evaluation of low tumorous infiltration or tumorous margins. Evaluation at the individual tumor cell level is only possible by IDH1R132 immunostaining in IDH1 mutated gliomas in adults (Preusser et al., 2011). One of the current limitations of the FF-OCT technique for use in diagnosis is the difficulty in estimating the nuclear/cytoplasmic boundaries and the size and form of nuclei as well as the nuclear-cytoplasmic ratio of cells. This prevents precise classification into tumor subtypes and grades.

To increase the accuracy of diagnosis of tumors where cell density measurement is necessary for grading, perspectives for the technique include development of a multimodal system (Harms et al., 2012) to allow simultaneous co-localized acquisition of FF-OCT and fluorescence images. The fluorescence channel images in this multimodal system show cell nuclei, which increase the possibility of diagnosis and tumor grading direct from optical images. However, the use of contrast agents for the fluorescence channel means that the multimodal imaging technique is no longer non-invasive, and this may be undesirable if the tissue is to progress to histology following optical imaging. This is a similar concern in confocal microscopy where use of dyes is necessary for fluorescence detection (Wirth et al., 2012).

In its current form therefore, FF-OCT is not intended to serve as a diagnostic tool, but should rather be considered as an additional intraoperative aid in order to determine in a short time whether or not there is suspicious tissue present in a sample. It does not aim to replace histological analyses but rather to complement them, by offering a tool at the intermediary stage of intraoperative tissue selection. In a few minutes, an image is produced that allows the surgeon or the pathologist to assess the content of the tissue sample. The selected tissue, once imaged with FF-OCT, may then proceed to conventional histology processing in order to obtain the full diagnosis (Assayag et al., in press and Dalimier and Salomon, 2012).

Development of FF-OCT to allow in vivo imaging is underway, and first steps include increasing camera acquisition speed. First results of in vivo rat brain imaging have been achieved with an FF-OCT prototype setup, and show real-time visualization of myelin fibers (Ben Arous et al., 2011) and movement of red blood cells in vessels (Binding et al., 2011). To respond more precisely to surgical needs, it would be preferable to integrate the FF-OCT system into a surgical probe. Work in this direction is currently underway and preliminary images of skin and breast tissue have been captured with a rigid probe FF-OCT prototype (Latrive and Boccara, 2011).

In conclusion, we have demonstrated the capacity of FF-OCT for imaging of human brain samples. This technique has potential as an intraoperative tool for determining tissue architecture and content within a few minutes. The 1 μm³ resolution and views ranging from wide-field down to cellular level allowed identification of features of non-tumorous and tumorous tissues such as myelin fibers, neurons, microcalcifications, tumor cells, microcysts, and blood vessels. Correspondence with histological slides was good, indicating the technique's suitability in clinical practice for tissue selection, for example for biobanking. Future work to extend the technique to in vivo imaging by rigid probe endoscopy is underway.

The following are the supplementary data related to this article.

Video 1. A movie composed of a series of en face 1 μm-thick optical slices captured over 100 μm of depth in cortical tissue. Myelin fibers and neuronal cell bodies are seen in successive layers. Field size is 800 μm × 800 μm.

Video 2. A fly-through movie in the reconstructed cross-sectional orientation, showing 1 μm steps through a 3D stack down to 200 μm depth in cerebellar cortical tissue. Purkinje and granular neurons are visible as dark spaces. Field size is 800 μm × 200 μm.
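The reconstructed cross-sectional views of Video 2 can be produced by reslicing a stack of en face slices along the depth axis. A minimal NumPy sketch (array shapes and the random stand-in data are assumptions for illustration, not the instrument's actual output format):

```python
import numpy as np

# Hypothetical stack of en face slices: 200 slices at 1 um steps,
# each 800 x 800 um sampled at 1 um/pixel -> shape (depth, y, x)
stack = np.random.rand(200, 800, 800).astype(np.float32)

def cross_section(stack, y_row):
    """Return a (depth, x) cross-sectional image at a fixed y row."""
    return stack[:, y_row, :]

xz = cross_section(stack, y_row=400)
print(xz.shape)  # (200, 800): 200 um deep by 800 um wide
```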

Acknowledgments

The authors wish to thank LLTech SAS for use of the LightCT Scanner.

References

 

Adie and Boppart (2009). Optical Coherence Tomography for Cancer Detection. SpringerLink, pp. 209–250.

Assayag et al. (in press). Large field, high resolution full field optical coherence tomography: a pre-clinical study of human breast tissue and cancer assessment. Technology in Cancer Research & Treatment TCRT Express, 1(1), e600254. http://dx.doi.org/10.7785/tcrtexpress.2013.600254

Beck et al. (2000). Computer-assisted visualizations of neural networks: expanding the field of view using seamless confocal montaging. Journal of Neuroscience Methods, 98(2), 155–163.

Ben Arous et al. (2011). Single myelin fiber imaging in living rodents without labeling by deep optical coherence microscopy. Journal of Biomedical Optics, 16(11), 116012.

Betz, C.S., et al. (2008). A set of optical techniques for improving the diagnosis of early upper aerodigestive tract cancer. Medical Laser Application, 23, 175–185.

Binding et al. (2011). Brain refractive index measured in vivo with high-NA defocus-corrected full-field OCT and consequences for two-photon microscopy. Optics Express, 19(6), 4833–4847.

Bizheva et al. (2005). Imaging ex vivo healthy and pathological human brain tissue with ultra-high-resolution optical coherence tomography. Journal of Biomedical Optics, 10, 011006. http://dx.doi.org/10.1117/1.1851513

Böhringer et al. (2006). Time domain and spectral domain optical coherence tomography in the analysis of brain tumor tissue. Lasers in Surgery and Medicine, 38, 588–597. http://dx.doi.org/10.1002/lsm.20353

Böhringer et al. (2009). Imaging of human brain tumor tissue by near-infrared laser coherence tomography. Acta Neurochirurgica, 151, 507–517. http://dx.doi.org/10.1007/s00701-009-0248-y

Boppart (2003). Optical coherence tomography: technology and applications for neuroimaging. Psychophysiology, 40, 529–541. http://dx.doi.org/10.1111/1469-8986.00055

Boppart et al. (1998). Optical coherence tomography for neurosurgical imaging of human intracortical melanoma. Neurosurgery, 43, 834–841. http://dx.doi.org/10.1097/00006123-199810000-00068

Boppart et al. (2004). Optical coherence tomography: feasibility for basic research and image-guided surgery of breast cancer. Breast Cancer Research and Treatment, 84, 85–97.

Chen et al. (2007). Ultrahigh resolution optical coherence tomography of Barrett's esophagus: preliminary descriptive clinical study correlating images with histology. Endoscopy, 39, 599–605.

Dalimier and Salomon (2012). Full-field optical coherence tomography: a new technology for 3D high-resolution skin imaging. Dermatology, 224, 84–92. http://dx.doi.org/10.1159/000337423

De Boer et al. (2003). Improved signal-to-noise ratio in spectral-domain compared with time-domain optical coherence tomography. Optics Letters, 28, 2067–2069.

Dubois et al. (2002). High-resolution full-field optical coherence tomography with a Linnik microscope. Applied Optics, 41(4), 805.

Dubois et al. (2004). Ultrahigh-resolution full-field optical coherence tomography. Applied Optics, 43(14), 2874.

Grieve et al. (2004). Ocular tissue imaging using ultrahigh-resolution, full-field optical coherence tomography. Investigative Ophthalmology & Visual Science, 45, 4126–4131.

Harms et al. (2012). Multimodal full-field optical coherence tomography on biological tissue: toward all optical digital pathology. Proc. SPIE, Multimodal Biomedical Imaging VII, 8216.

Hsiung et al. (2007). Benign and malignant lesions in the human breast depicted with ultrahigh resolution and three-dimensional optical coherence tomography. Radiology, 244, 865–874.

Read Full Post »


Reporter: Larry H Bernstein, MD, FCAP

Pathologists May Be Healthcare’s Rock Stars of Big Data in Genomic Medicine’s ’Third Wave’

Published: December 17 2012

Pathologists are positioned to be the primary interpreters of big data as genomic medicine further evolves

Pathologists and clinical laboratory managers may be surprised to learn that at least one data scientist has proclaimed pathologists the real big data rock stars of healthcare. The reason has to do with the shift in focus of genomic medicine from therapeutics and presymptomatic disease assessment to big data analytics.

In a recent posting published at Forbes.com, data scientist Jim Golden heralded the pronouncement of Harvard pathologist Mark S. Boguski, M.D., Ph.D., FACMI, who declared that “The time of the $1,000 genome meme is over!”

DNA Sequencing Systems and the $1,000 Genome

Golden has designed, built, and programmed DNA sequencing devices. He apprenticed under the Human Genome program and spent 15 years working towards the $1,000 genome. “I’m a believer,” he blogged. “[That’s] why I was so intrigued [by Boguski’s remarks].”

Boguski is Associate Professor of Pathology at the Center for Biomedical Informatics at Harvard Medical School and the Department of Pathology at Beth Israel Deaconess Medical Center. It was in a presentation at a healthcare conference in Boston that Boguski pronounced that it is time for the $1,000 genome to go.

“Big data analytics” will be required for translational medicine to succeed in the Third Wave of Genetic Medicine. That’s the opinion of Mark S. Boguski, M.D., Ph.D., who is a pathologist-informatist at Harvard Medical School and Beth Israel Deaconess Medical Center. Boguski predicts that pathologists are positioned to become the “rock stars” of big data analytics. For pathologists and clinical laboratory administrators, that means that big computer power will become increasingly important for all medical laboratories. (Photo by Medicine20Congress.com.)

Both Golden and Boguski acknowledged the benefits generated by the race to the $1,000 genome. Competition to be first to achieve this milestone motivated scientists and engineers to swiftly drive down the cost of decoding DNA. The result was a series of advances in instrumentation, chemistry, and biology.

Pathologists and Big Data Analytics

“Our notions about how genome science and technology would improve health and healthcare have changed,” Boguski wrote in an editorial published at Future Medicine. He then noted that the focus has shifted to big data analytics.

In the editorial, Boguski described the phases of development of genomic medicine as “waves.” The first wave occurred during the mid- to late-1990s. It focused on single-nucleotide polymorphisms (SNPs) and therapeutics.

Medical Laboratories Have Opportunity to Perform Presymptomatic Testing

The second wave focused on presymptomatic testing for disease risk assessment and Genome Wide Association Studies (GWAS). Researchers expected this data to help manage common diseases.

The first two waves of medical genomics were conducted largely by the pharmaceutical industry, together with the primary care and public health communities, according to Boguski. Considerable optimism accompanied each wave of medical genomics.

“Despite the earlier optimism, progress in improving human health has been modest and incremental, rather than paradigm-shifting,” noted Boguski, who wrote that, to date, only a handful of genome-derived drugs have reached the market. He further observed that products such as direct-to-consumer genomic testing have proved more educational and recreational than medical.

“Third Wave” of Genomic Medicine

Rapid declines in the cost of next-generation DNA sequencing technologies have now triggered the third wave of genomic medicine. Its focus is postsymptomatic genotyping for individualized and optimized disease management.

“This is where genomics is likely to bring the most direct and sustained impact on healthcare for several reasons,” stated Boguski. “Genomics technologies enable disease diagnosis of sufficient precision to drive both cost-effective [patient] management and better patient outcomes. Thus, they are an essential part of the prescription for disruptive healthcare reform.”

Boguski reiterated the case for the value of laboratory medicine. He stated the following critical—but often overlooked—points, each of which is familiar to pathologists and clinical laboratory managers:

1. Pathologist-directed, licensed clinical laboratory testing has a major effect on clinical decision-making.

2. Medical laboratory testing services account for only about 2% of healthcare expenditures in the United States.

3. Medical laboratory services strongly influence the remaining 98% of costs through the information they provide on the prevention, diagnosis, treatment, and management of disease.

Molecular Diagnostics Reaching Maturity for Clinical Laboratory Testing

“Genome analytics are just another technology in the evolution of molecular diagnostics,” Boguski declared in his editorial.

Read more: Pathologists May Be Healthcare’s Rock Stars of Big Data in Genomic Medicine’s ’Third Wave’ | Dark Daily http://www.darkdaily.com/pathologists-may-be-healthcare%e2%80%99s-rock-stars-of-big-data-in-genomic-medicines-third-wave-1217

English: Created by Abizar Lakdawalla. (Photo credit: Wikipedia)

English: Workflow for DNA nanoball sequencing (Photo credit: Wikipedia)

DNA sequence (Photo credit: Wikipedia)

Big Data: water wordscape (Photo credit: Marius B)

Comment & Response

Right now the cost of the testing and the turnaround times are not favorable. It is going to take a decade or more for clinical labs to catch up. For some time, testing will be sent out to Quest, LabCorp, and state or university lab consortia.

The power of the research technology is pushing this along, but for personalized medicine the testing should be coincident with the patient visit, and the best list of probable issues should be accessible on the report screen. The EHR industry is dominated by two companies that, as I see it, have no interest in meeting the needs of physicians. The payback has to be on efficient workflow, accurate assessment of the record, and timely information. The focus for 25 years has been on billing structure, but even the revised billing codes (ICD-10) cannot be less than five years out of date, because of improvements in the knowledge base and in applied math algorithms.

The medical record still may have information buried in a word heap, and the laboratory work is scattered across report sheets with perhaps 15 variables on a page, with chemistry and hematology, immunology, blood bank, and microbiology on separate pages. The physician’s ability to fully digest the information with “errorless” discrimination is tested, and the stress imposed by the time allotted for each patient compromises performance. Work is underway to move proteomics to a high-throughput system for improved commercial viability, as reported by Leigh Anderson a few years ago. Genomics is more difficult, but it is partly moving to rapid micropanel tools.

In summary, there are 3 factors:

1. Automation and interpretation
2. Integration into the EHR in real time, in a form usable by a physician.
3. The sorting out of the highest feature “predictors” and classifying them into clinically meaningful sets and subsets.

When this is done, then the next generation of recoding will be in demand.

The Automated Malnutrition Assessment
Gil David1, Larry Bernstein2, Ronald R. Coifman1

1Department of Mathematics, Program in Applied Mathematics,
Yale University, New Haven, CT 06510, USA,
2Triplex Consulting, Trumbull, CT 06611

Abstract

Introduction: We propose an automated nutritional assessment (ANA) algorithm that provides a method for malnutrition risk prediction with high accuracy and reliability.

Materials and Methods: The database used for this study is a file of 432 patients, each described by 4 laboratory parameters and 11 clinical parameters. A malnutrition risk assessment of low (1), moderate (2) or high (3) was assigned by a dietitian for each patient. An algorithm for data organization and classification via characteristic metrics is proposed. For each patient, the algorithm characterizes a unique profile and builds a characteristic metric to identify similar patients, who are mapped into a classification.

Results: The algorithm assigned a malnutrition risk level for each patient based on different training sizes that were taken out of the data.

Our method resulted in an average error (distance between the automated score and the real score) of 0.386, 0.3507, 0.3454, 0.34 and 0.2907 for 10%, 30%, 50%, 70% and 90% training sizes, respectively.

Our method outperformed the compared method even when it used a smaller training set than the compared method. In addition, we show that the laboratory parameters alone are sufficient for automated risk prediction, and they organize the patients into clusters that correspond to low, low-moderate, moderate, moderate-high and high risk areas.
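The scheme the abstract describes, profiling each patient and assigning the risk of the most similar training patients, can be sketched as a nearest-neighbour classifier swept over increasing training-set sizes. This is a hedged illustration: the synthetic data, Euclidean distance, and k=5 are assumptions standing in for the authors' actual characteristic-metric construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the study data: 432 patients,
# 15 features (4 laboratory + 11 clinical), risk score 1-3.
X = rng.normal(size=(432, 15))
y = np.clip(np.round(X[:, 0] + rng.normal(scale=0.5, size=432) + 2), 1, 3)

def knn_predict(X_train, y_train, X_test, k=5):
    """Assign each test patient the mean risk of its k nearest training patients."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)  # crude stand-in for a "characteristic metric"
        preds.append(y_train[np.argsort(d)[:k]].mean())
    return np.array(preds)

# Sweep training-set sizes as in the reported results (10% ... 90%)
for frac in (0.1, 0.3, 0.5, 0.7, 0.9):
    n = int(len(X) * frac)
    idx = rng.permutation(len(X))
    tr, te = idx[:n], idx[n:]
    err = np.abs(knn_predict(X[tr], y[tr], X[te]) - y[te]).mean()
    print(f"training {frac:.0%}: mean |error| = {err:.3f}")
```

The average error reported in the abstract is this same quantity: the mean absolute distance between the automated score and the dietitian's score on held-out patients.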

Discussion: The problem of rapidly identifying risk and severity of malnutrition is crucial for minimizing medical and surgical complications. These are not easily performed or adequately expedited. We characterize for each patient a unique profile and map similar patients into a classification. We also find that the laboratory parameters themselves are sufficient for the automated risk prediction.

Keywords: Network Algorithm, unsupervised classification, malnutrition screening, protein energy malnutrition (PEM), malnutrition risk, characteristic metric, characteristic profile, data characterization, non-linear differential diagnosis.

 

Read Full Post »