
Imaging of Cancer Cells

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Microscope uses nanosecond-speed laser and deep learning to detect cancer cells more efficiently

April 13, 2016

Scientists at the California NanoSystems Institute at UCLA have developed a new technique for identifying cancer cells in blood samples faster and more accurately than the current standard methods.

In one common approach to testing for cancer, doctors add biochemicals to blood samples. Those biochemicals attach biological “labels” to the cancer cells, and those labels enable instruments to detect and identify them. However, the biochemicals can damage the cells and render the samples unusable for future analyses. There are other current techniques that don’t use labeling but can be inaccurate because they identify cancer cells based only on one physical characteristic.

Time-stretch quantitative phase imaging (TS-QPI) and analytics system

The new technique images cells without destroying them and can identify 16 physical characteristics — including size, granularity and biomass — instead of just one.

The new technique combines two components that were invented at UCLA:

A “photonic time stretch” microscope, which is capable of quickly imaging cells in blood samples. Invented by Bahram Jalali, professor and Northrop-Grumman Optoelectronics Chair in electrical engineering, it works by taking pictures of flowing blood cells using laser bursts (similar to how a camera uses a flash). Each flash lasts only nanoseconds (billionths of a second) to avoid damaging the cells, but that normally means the images are both too weak to be detected and too fast to be digitized by normal instrumentation. The new microscope overcomes those challenges by using specially designed optics that amplify and boost the clarity of the images, and simultaneously slow them down enough to be detected and digitized at a rate of 36 million images per second.

A deep learning computer program, which identifies cancer cells with more than 95 percent accuracy. Deep learning is a form of artificial intelligence that uses complex algorithms to extract patterns and knowledge from rich multidimensional datasets, with the goal of achieving accurate decision making.

The study was published in the open-access journal Scientific Reports. The researchers write in the paper that the system could lead to data-driven diagnoses based on cells’ physical characteristics, which could allow quicker and earlier diagnoses of cancer, for example, and a better understanding of tumor-specific gene expression in cells, which could facilitate new treatments for disease.

The research was supported by NantWorks, LLC.

 

Abstract of Deep Learning in Label-free Cell Classification

Label-free cell analysis is essential to personalized genomics, cancer diagnostics, and drug development as it avoids adverse effects of staining reagents on cellular viability and cell signaling. However, currently available label-free cell assays mostly rely only on a single feature and lack sufficient differentiation. Also, the sample size analyzed by these assays is limited due to their low throughput. Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification. Our system captures quantitative optical phase and intensity images and extracts multiple biophysical features of individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is performed for cell classification. We compare various learning algorithms including artificial neural network, support vector machine, logistic regression, and a novel deep learning pipeline, which adopts global optimization of receiver operating characteristics. As a validation of the enhanced sensitivity and specificity of our system, we show classification of white blood T-cells against colon cancer cells, as well as lipid accumulating algal strains for biofuel production. This system opens up a new path to data-driven phenotypic diagnosis and better understanding of the heterogeneous gene expressions in cells.
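As a rough illustration of the classifier comparison described in this abstract (not the authors' code), the sketch below trains logistic regression, an SVM, and a small neural network on a synthetic table of per-cell biophysical features and compares their test AUCs. The feature matrix and labels are made up, standing in for the 16 TS-QPI-derived features.

```python
# Minimal sketch (assumptions: synthetic features/labels) of comparing
# classifiers on a table of per-cell biophysical features, as described above.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_cells, n_features = 2000, 16              # 16 features, e.g. morphology/phase/loss
X = rng.normal(size=(n_cells, n_features))  # synthetic feature matrix
y = (X[:, :4].sum(axis=1) + 0.5 * rng.normal(size=n_cells) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM (RBF)": SVC(probability=True),
    "neural network": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000),
}
for name, model in models.items():
    clf = make_pipeline(StandardScaler(), model)   # scale features, then classify
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name:20s} test AUC = {auc:.3f}")
```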

References:

Claire Lifan Chen, Ata Mahjoubfar, Li-Chia Tai, Ian K. Blaby, Allen Huang, Kayvan Reza Niazi & Bahram Jalali. Deep Learning in Label-free Cell Classification. Scientific Reports 6, Article number: 21471 (2016); doi:10.1038/srep21471 (open access)

Supplementary Information

 

Deep Learning in Label-free Cell Classification

Claire Lifan Chen, Ata Mahjoubfar, Li-Chia Tai, Ian K. Blaby, Allen Huang, Kayvan Reza Niazi & Bahram Jalali

Scientific Reports 6, Article number: 21471 (2016); http://dx.doi.org/10.1038/srep21471

Deep learning extracts patterns and knowledge from rich multidimensional datasets. While it is extensively used for image recognition and speech processing, its application to label-free classification of cells has not been exploited. Flow cytometry is a powerful tool for large-scale cell analysis due to its ability to measure anisotropic elastic light scattering of millions of individual cells as well as emission of fluorescent labels conjugated to cells1,2. However, each cell is represented by a single value per detection channel (forward scatter, side scatter, and emission bands) and often requires labeling with specific biomarkers for acceptable classification accuracy1,3. Imaging flow cytometry4,5, on the other hand, captures images of cells, revealing significantly more information about them. For example, it can distinguish clusters and debris that would otherwise result in false positive identification in a conventional flow cytometer based on light scattering6.

In addition to classification accuracy, throughput is another critical specification of a flow cytometer. Indeed, high throughput, typically 100,000 cells per second, is needed to screen a large enough cell population to find rare abnormal cells that are indicative of early-stage diseases. However, there is a fundamental trade-off between throughput and accuracy in any measurement system7,8. For example, imaging flow cytometers face a throughput limit imposed by the speed of the CCD or CMOS cameras, a number that is approximately 2,000 cells/s for present systems9. Higher flow rates lead to blurred cell images due to the finite camera shutter speed. Many applications of flow analyzers, such as cancer diagnostics, drug discovery, biofuel development, and emulsion characterization, require classification of large sample sizes with a high degree of statistical accuracy10. This has fueled research into alternative optical diagnostic techniques for characterization of cells and particles in flow.

Recently, our group has developed a label-free imaging flow-cytometry technique based on coherent optical implementation of the photonic time stretch concept11. This instrument overcomes the trade-off between sensitivity and speed by using Amplified Time-stretch Dispersive Fourier Transform12,13,14,15. In time stretched imaging16, the object’s spatial information is encoded in the spectrum of laser pulses within a pulse duration of sub-nanoseconds (Fig. 1). Each pulse representing one frame of the camera is then stretched in time so that it can be digitized in real-time by an electronic analog-to-digital converter (ADC). The ultra-fast pulse illumination freezes the motion of high-speed cells or particles in flow to achieve blur-free imaging. Detection sensitivity is challenged by the low number of photons collected during the ultra-short shutter time (optical pulse width) and the drop in the peak optical power resulting from the time stretch. These issues are solved in time stretch imaging by implementing a low noise-figure Raman amplifier within the dispersive device that performs time stretching8,11,16. Moreover, warped stretch transform17,18 can be used in time stretch imaging to achieve optical image compression and nonuniform spatial resolution over the field-of-view19. In the coherent version of the instrument, time stretch imaging is combined with spectral interferometry to measure quantitative phase and intensity images in real-time and at high throughput20. Integrated with a microfluidic channel, the coherent time stretch imaging system in this work measures both the quantitative optical phase shift and loss of individual cells as a high-speed imaging flow cytometer, capturing 36 million images per second at flow rates as high as 10 meters per second, reaching up to 100,000 cells per second throughput.
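A back-of-envelope sketch of how the quoted numbers fit together (rounded values from the text, not the exact instrument parameters): the line-scan rate sets the spatial pitch of line scans along the flow, and an assumed cell-to-cell spacing gives the order-of-magnitude cell throughput.

```python
# Rough arithmetic only; the 100 um cell pitch is an assumption for illustration.
line_rate_hz = 36e6          # line scans (laser pulses) per second, as quoted
flow_speed_m_s = 10.0        # cell flow speed in the microfluidic channel

# Spacing between successive line scans along the flow direction:
line_pitch_um = flow_speed_m_s / line_rate_hz * 1e6
print(f"line-scan pitch along flow ~ {line_pitch_um:.2f} um")   # ~0.28 um

# If each cell plus the gap to its neighbour occupies ~100 um of the stream:
cell_pitch_um = 100.0        # assumed spacing, for illustration only
cells_per_second = flow_speed_m_s * 1e6 / cell_pitch_um
print(f"throughput ~ {cells_per_second:,.0f} cells/s")          # ~100,000 cells/s
```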

Figure 1: Time stretch quantitative phase imaging (TS-QPI) and analytics system. A mode-locked laser followed by a nonlinear fiber, an erbium-doped fiber amplifier (EDFA), and a wavelength-division multiplexing (WDM) filter generate and shape a train of broadband optical pulses. http://www.nature.com/article-assets/npg/srep/2016/160315/srep21471/images_hires/m685/srep21471-f1.jpg

 

Box 1: The pulse train is spatially dispersed into a train of rainbow flashes illuminating the target as line scans. The spatial features of the target are encoded into the spectrum of the broadband optical pulses, each representing a one-dimensional frame. The ultra-short optical pulse illumination freezes the motion of cells during high speed flow to achieve blur-free imaging with a throughput of 100,000 cells/s. The phase shift and intensity loss at each location within the field of view are embedded into the spectral interference patterns using a Michelson interferometer. Box 2: The interferogram pulses are then stretched in time so that spatial information can be mapped into time through time-stretch dispersive Fourier transform (TS-DFT), and then captured by a single-pixel photodetector and an analog-to-digital converter (ADC). The loss of sensitivity at high shutter speed is compensated by stimulated Raman amplification during time stretch. Box 3: (a) Pulse synchronization; the time-domain signal carrying serially captured rainbow pulses is transformed into a series of one-dimensional spatial maps, which are used for forming line images. (b) The biomass density of a cell leads to a spatially varying optical phase shift. When a rainbow flash passes through the cells, the changes in refractive index at different locations will cause phase walk-off at interrogation wavelengths. Hilbert transformation and phase unwrapping are used to extract the spatial phase shift. (c) Decoding the phase shift in each pulse at each wavelength and remapping it into a pixel reveals the protein concentration distribution within cells. The optical loss induced by the cells, embedded in the pulse intensity variations, is obtained from the amplitude of the slowly varying envelope of the spectral interferograms. Thus, quantitative optical phase shift and intensity loss images are captured simultaneously. Both images are calibrated based on the regions where the cells are absent. Cell features describing morphology, granularity, biomass, etc. are extracted from the images. (d) These biophysical features are used in a machine learning algorithm for high-accuracy label-free classification of the cells.
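A minimal sketch of the phase-retrieval step named in Box 3(b), run on a toy interferogram: the Hilbert transform gives the analytic signal, its angle is unwrapped, and the carrier is removed to recover a cell-induced phase bump. Real TS-QPI data are digitized pulse waveforms, not this simulated fringe signal, and the carrier frequency and phase profile here are invented.

```python
# Toy demonstration of Hilbert-transform phase retrieval with phase unwrapping.
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0.0, 1.0, 4096, endpoint=False)
carrier_hz = 200.0                                    # fringe (carrier) frequency
true_phase = 1.5 * np.exp(-((t - 0.5) / 0.08) ** 2)   # phase bump mimicking a cell

interferogram = np.cos(2 * np.pi * carrier_hz * t + true_phase)

analytic = hilbert(interferogram)                     # analytic signal of the fringes
unwrapped = np.unwrap(np.angle(analytic))             # instantaneous phase, unwrapped
phase_shift = unwrapped - 2 * np.pi * carrier_hz * t  # remove the linear carrier ramp
phase_shift -= np.median(phase_shift[:1000])          # re-zero on a cell-free region

err = np.max(np.abs(phase_shift[200:-200] - true_phase[200:-200]))
print(f"max reconstruction error away from the edges: {err:.3f} rad")
```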

On another note, surface markers used to label cells, such as EpCAM21, are unavailable in some applications; for example, melanoma or pancreatic circulating tumor cells (CTCs) as well as some cancer stem cells are EpCAM-negative and will escape EpCAM-based detection platforms22. Furthermore, large-population cell sorting opens the door to downstream operations, where the negative impacts of labels on cellular behavior and viability are often unacceptable23. Cell labels may cause activating/inhibitory signal transduction, altering the behavior of the desired cellular subtypes and potentially leading to errors in downstream analysis, such as DNA sequencing and subpopulation regrowth. Thus, quantitative phase imaging (QPI) methods24,25,26,27 that categorize unlabeled living cells with high accuracy are needed. Coherent time stretch imaging is a method that enables quantitative phase imaging at ultrahigh throughput for non-invasive, label-free screening of large numbers of cells.

In this work, the information from quantitative optical loss and phase images is fused into expert-designed features, leading to record label-free classification accuracy when combined with deep learning. Image mining techniques are applied, for the first time, to time stretch quantitative phase imaging to measure biophysical attributes including protein concentration, optical loss, and morphological features of single cells at an ultrahigh flow rate and in a label-free fashion. These attributes differ widely28,29,30,31 among cells, and their variations reflect important information about genotypes and physiological stimuli32. The multiplexed biophysical features thus lead to an information-rich, hyperdimensional representation of the cells for label-free classification with high statistical precision.

We further improved the accuracy, repeatability, and the balance between sensitivity and specificity of our label-free cell classification by a novel machine learning pipeline, which harnesses the advantages of multivariate supervised learning, as well as unique training by evolutionary global optimization of receiver operating characteristics (ROC). To demonstrate the sensitivity, specificity, and accuracy of multi-feature label-free flow cytometry using our technique, we classified (1) OT-II hybridoma T-lymphocytes and SW-480 colon cancer epithelial cells, and (2) Chlamydomonas reinhardtii algal cells (herein referred to as Chlamydomonas) based on their lipid content, which is related to the yield in biofuel production. Our preliminary results show that, compared to classification by individual biophysical parameters, our label-free hyperdimensional technique improves the detection accuracy from 77.8% to 95.5%, or in other words, reduces the classification inaccuracy by about five times.     ……..

 

Feature Extraction

The decomposed components of sequential line scans form pairs of spatial maps, namely, optical phase and loss images as shown in Fig. 2 (see Section Methods: Image Reconstruction). These images are used to obtain biophysical fingerprints of the cells8,36. With domain expertise, raw images are fused and transformed into a suitable set of biophysical features, listed in Table 1, which the deep learning model further converts into learned features for improved classification.
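As an illustration of the kind of morphological features listed in Table 1 (this is not the authors' pipeline), the sketch below measures area, perimeter, circularity, major axis length, and orientation from a synthetic binary cell mask with scikit-image.

```python
# Illustrative feature extraction from a synthetic elliptical "cell" mask.
import numpy as np
from skimage.draw import ellipse
from skimage.measure import label, regionprops

mask = np.zeros((200, 200), dtype=bool)
rr, cc = ellipse(100, 100, 35, 20, rotation=np.deg2rad(30))
mask[rr, cc] = True

for region in regionprops(label(mask)):
    area = region.area
    perimeter = region.perimeter
    circularity = 4 * np.pi * area / perimeter**2    # 1.0 for a perfect circle
    print("area (px):", area)
    print("perimeter (px):", round(perimeter, 1))
    print("major axis length (px):", round(region.major_axis_length, 1))
    print("orientation (rad):", round(region.orientation, 2))
    print("circularity:", round(circularity, 3))
```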


 

http://www.nature.com/article-assets/npg/srep/2016/160315/srep21471/images_hires/m685/srep21471-f2.jpg

The optical loss images of the cells are affected by the attenuation of multiplexed wavelength components passing through the cells. The attenuation itself is governed by the absorption of the light in cells as well as the scattering from the surface of the cells and from the internal cell organelles. The optical loss image is derived from the low frequency component of the pulse interferograms. The optical phase image is extracted from the analytic form of the high frequency component of the pulse interferograms using Hilbert Transformation, followed by a phase unwrapping algorithm. Details of these derivations can be found in Section Methods. Also, supplementary Videos 1 and 2 show measurements of cell-induced optical path length difference by TS-QPI at four different points along the rainbow for OT-II and SW-480, respectively.

Table 1: List of extracted features (columns: Feature Name, Description, Category).

 

Figure 3: Biophysical features formed by image fusion.

(a) Pairwise correlation matrix visualized as a heat map. The map depicts the correlation between all 16 major features extracted from the quantitative images. Diagonal elements of the matrix represent the correlation of each parameter with itself, i.e., the autocorrelation. The subsets in box 1, box 2, and box 3 show high correlation because they are mainly related to the morphological, optical phase, and optical loss feature categories, respectively. (b) Ranking of biophysical features based on their AUCs in single-feature classification. Blue bars show performance of the morphological parameters, which include diameter along the interrogation rainbow, diameter along the flow direction, tight cell area, loose cell area, perimeter, circularity, major axis length, orientation, and median radius. As expected, morphology contains the most information, but other biophysical features can contribute to improved performance of label-free cell classification. Orange bars show optical phase shift features, i.e., optical path length differences and refractive index difference. Green bars show optical loss features representing scattering and absorption by the cell. The best-performing feature in each of these three categories is marked in red.
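A small sketch of the two analyses shown in Figure 3, run on made-up data: (a) the pairwise correlation matrix of the feature columns and (b) a ranking of features by single-feature AUC. The feature names and the label model are hypothetical.

```python
# Synthetic stand-in for the Fig. 3 analyses: correlation heat-map matrix
# and single-feature AUC ranking.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000
features = {
    "diameter_flow": rng.normal(10, 2, n),
    "cell_area": rng.normal(80, 15, n),
    "phase_shift": rng.normal(1.0, 0.3, n),
    "optical_loss": rng.normal(0.2, 0.05, n),
}
X = np.column_stack(list(features.values()))
# Toy labels driven mostly by diameter and phase shift:
y = (0.6 * X[:, 0] / 2 + 0.4 * X[:, 2] / 0.3 + rng.normal(0, 1, n) > 4.5).astype(int)

# (a) pairwise correlation matrix (the heat map in Fig. 3a plots such a matrix)
corr = np.corrcoef(X, rowvar=False)
print("correlation matrix:\n", np.round(corr, 2))

# (b) single-feature AUCs (the bar ranking in Fig. 3b)
for name, col in zip(features, X.T):
    auc = roc_auc_score(y, col)
    print(f"{name:15s} single-feature AUC = {max(auc, 1 - auc):.3f}")
```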

Figure 4: Machine learning pipeline. Information from the quantitative optical phase and loss images is fused to extract multivariate biophysical features of each cell, which are fed into a fully-connected neural network.

The neural network maps input features by a chain of weighted sum and nonlinear activation functions into learned feature space, convenient for classification. This deep neural network is globally trained via area under the curve (AUC) of the receiver operating characteristics (ROC). Each ROC curve corresponds to a set of weights for connections to an output node, generated by scanning the weight of the bias node. The training process maximizes AUC, pushing the ROC curve toward the upper left corner, which means improved sensitivity and specificity in classification.
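The caption's point that scanning the bias weight of an output node traces an ROC curve can be seen numerically: sweeping a bias is equivalent to sweeping the decision threshold over the node's activations. The sketch below does that on synthetic activation scores and integrates the swept curve to get the AUC that the training maximizes; the score distributions are invented.

```python
# Sweep a bias/threshold over synthetic output-node activations to trace an ROC
# curve, then compute its area (AUC).
import numpy as np
from sklearn.metrics import auc

rng = np.random.default_rng(3)
scores_neg = rng.normal(0.0, 1.0, 5000)   # output-node activations, class 0 cells
scores_pos = rng.normal(1.5, 1.0, 5000)   # output-node activations, class 1 cells

biases = np.linspace(-5.0, 7.0, 200)      # candidate bias-node weights (thresholds)
tpr = np.array([(scores_pos > b).mean() for b in biases])  # sensitivity per bias
fpr = np.array([(scores_neg > b).mean() for b in biases])  # 1 - specificity per bias

# Area under the swept ROC curve (sorted by ascending false-positive rate):
roc_auc = auc(fpr[::-1], tpr[::-1])
print(f"AUC of the swept ROC curve ~ {roc_auc:.3f}")   # ~0.86 for this separation
```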

….   How to cite this article: Chen, C. L. et al. Deep Learning in Label-free Cell Classification.

Sci. Rep. 6, 21471; http://dx.doi.org/10.1038/srep21471

 

Computer Algorithm Helps Characterize Cancerous Genomic Variations

http://www.genengnews.com/gen-news-highlights/computer-algorithm-helps-characterize-cancerous-genomic-variations/81252626/

To better characterize the functional context of genomic variations in cancer, researchers developed a new computer algorithm called REVEALER. [UC San Diego Health]

Scientists at the University of California San Diego School of Medicine and the Broad Institute say they have developed a new computer algorithm—REVEALER—to better characterize the functional context of genomic variations in cancer. The tool, described in a paper (“Characterizing Genomic Alterations in Cancer by Complementary Functional Associations”) published in Nature Biotechnology, is designed to help researchers identify groups of genetic variations that together associate with a particular way cancer cells get activated, or how they respond to certain treatments.

REVEALER is available for free to the global scientific community via the bioinformatics software portal GenePattern.org.

“This computational analysis method effectively uncovers the functional context of genomic alterations, such as gene mutations, amplifications, or deletions, that drive tumor formation,” said senior author Pablo Tamayo, Ph.D., professor and co-director of the UC San Diego Moores Cancer Center Genomics and Computational Biology Shared Resource.

Dr. Tamayo and team tested REVEALER using The Cancer Genome Atlas (TCGA), the NIH’s database of genomic information from more than 500 human tumors representing many cancer types. REVEALER revealed gene alterations associated with the activation of several cellular processes known to play a role in tumor development and response to certain drugs. Some of these gene mutations were already known, but others were new.

For example, the researchers discovered new activating genomic abnormalities for beta-catenin, a cancer-promoting protein, and for the oxidative stress response that some cancers hijack to increase their viability.

REVEALER requires as input high-quality genomic data and a significant number of cancer samples, which can be a challenge, according to Dr. Tamayo. But REVEALER is more sensitive at detecting similarities between different types of genomic features and less dependent on simplifying statistical assumptions, compared to other methods, he adds.

“This study demonstrates the potential of combining functional profiling of cells with the characterizations of cancer genomes via next-generation sequencing,” said co-senior author Jill P. Mesirov, Ph.D., professor and associate vice chancellor for computational health sciences at UC San Diego School of Medicine.

 

Characterizing genomic alterations in cancer by complementary functional associations

Jong Wook Kim, Olga B Botvinnik, Omar Abudayyeh, Chet Birger, et al.

Nature Biotechnology (2016); http://dx.doi.org/10.1038/nbt.3527

Systematic efforts to sequence the cancer genome have identified large numbers of mutations and copy number alterations in human cancers. However, elucidating the functional consequences of these variants, and their interactions to drive or maintain oncogenic states, remains a challenge in cancer research. We developed REVEALER, a computational method that identifies combinations of mutually exclusive genomic alterations correlated with functional phenotypes, such as the activation or gene dependency of oncogenic pathways or sensitivity to a drug treatment. We used REVEALER to uncover complementary genomic alterations associated with the transcriptional activation of β-catenin and NRF2, MEK-inhibitor sensitivity, and KRAS dependency. REVEALER successfully identified both known and new associations, demonstrating the power of combining functional profiles with extensive characterization of genomic alterations in cancer genomes.
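For intuition only, here is a heavily simplified stand-in for the REVEALER idea, not the published algorithm: greedily pick binary alterations whose union best tracks a continuous functional target, using plain Pearson correlation in place of the paper's information-coefficient statistics. All data here are synthetic.

```python
# Greedy search for complementary binary alterations that track a target score.
import numpy as np

rng = np.random.default_rng(4)
n_samples, n_alterations = 200, 50
alterations = rng.random((n_alterations, n_samples)) < 0.15   # mutation matrix
# Target (e.g. a pathway-activation score) driven by alterations 3 and 17:
target = alterations[3] * 1.0 + alterations[17] * 1.0 + rng.normal(0, 0.4, n_samples)

def corr(a, b):
    return np.corrcoef(a.astype(float), b)[0, 1]

selected, explained = [], np.zeros(n_samples, dtype=bool)
for _ in range(3):                                   # pick up to 3 complementary hits
    best_i, best_score = None, -np.inf
    for i in range(n_alterations):
        if i in selected:
            continue
        combined = explained | alterations[i]        # union of chosen alterations
        score = corr(combined, target)
        if score > best_score:
            best_i, best_score = i, score
    selected.append(best_i)
    explained |= alterations[best_i]
    print(f"picked alteration {best_i}, correlation with target = {best_score:.2f}")
```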

 

Figure 2: REVEALER results for transcriptional activation of β-catenin in cancer.

(a) This heatmap illustrates the use of the REVEALER approach to find complementary genomic alterations that match the transcriptional activation of β-catenin in cancer. The target profile is a TCF4 reporter that provides an estimate of…

 

An imaging-based platform for high-content, quantitative evaluation of therapeutic response in 3D tumour models

Jonathan P. Celli, Imran Rizvi, Adam R. Blanden, Iqbal Massodi, Michael D. Glidden, Brian W. Pogue & Tayyaba Hasan

Scientific Reports 4, 3751 (2014); http://dx.doi.org/10.1038/srep03751

While it is increasingly recognized that three-dimensional (3D) cell culture models recapitulate drug responses of human cancers with more fidelity than monolayer cultures, a lack of quantitative analysis methods limits their implementation for reliable and routine assessment of emerging therapies. Here, we introduce an approach based on computational analysis of fluorescence image data to provide high-content readouts of dose-dependent cytotoxicity, growth inhibition, treatment-induced architectural changes and size-dependent response in 3D tumour models. We demonstrate this approach in adherent 3D ovarian and pancreatic multiwell extracellular matrix tumour overlays subjected to a panel of clinically relevant cytotoxic modalities and appropriately designed controls for reliable quantification of fluorescence signal. This streamlined methodology reads out the high density of information embedded in 3D culture systems, while maintaining a level of speed and efficiency traditionally achieved with global colorimetric reporters in order to facilitate broader implementation of 3D tumour models in therapeutic screening.

The attrition rates for preclinical development of oncology therapeutics are particularly dismal due to a complex set of factors which includes 1) the failure of pre-clinical models to recapitulate determinants of in vivo treatment response, and 2) the limited ability of available assays to extract treatment-specific data integral to the complexities of therapeutic responses1,2,3. Three-dimensional (3D) tumour models have been shown to restore crucial stromal interactions which are missing in the more commonly used 2D cell culture and that influence tumour organization and architecture4,5,6,7,8, as well as therapeutic response9,10, multicellular resistance (MCR)11,12, drug penetration13,14, hypoxia15,16, and anti-apoptotic signaling17. However, such sophisticated models can only have an impact on therapeutic guidance if they are accompanied by robust quantitative assays, not only for cell viability but also for providing mechanistic insights related to the outcomes. While numerous assays for drug discovery exist18, they are generally not developed for use in 3D systems and are often inherently unsuitable. For example, colorimetric conversion products have been noted to bind to extracellular matrix (ECM)19 and traditional colorimetric cytotoxicity assays reduce treatment response to a single number reflecting a biochemical event that has been equated to cell viability (e.g. tetrazolium salt conversion20). Such approaches fail to provide insight into the spatial patterns of response within colonies, morphological or structural effects of drug response, or how overall culture viability may be obscuring the status of sub-populations that are resistant or partially responsive. Hence, the full benefit of implementing 3D tumour models in therapeutic development has yet to be realized for lack of analytical methods that describe the very aspects of treatment outcome that these systems restore.

Motivated by these factors, we introduce a new platform for quantitative in situ treatment assessment (qVISTA) in 3D tumour models based on computational analysis of information-dense biological image datasets (bioimage-informatics)21,22. This methodology provides software end-users with multiple levels of complexity in output content, from rapidly-interpreted dose response relationships to higher content quantitative insights into treatment-dependent architectural changes, spatial patterns of cytotoxicity within fields of multicellular structures, and statistical analysis of nodule-by-nodule size-dependent viability. The approach introduced here is cognizant of tradeoffs between optical resolution, data sampling (statistics), depth of field, and widespread usability (instrumentation requirement). Specifically, it is optimized for interpretation of fluorescent signals for disease-specific 3D tumour micronodules that are sufficiently small that thousands can be imaged simultaneously with little or no optical bias from widefield integration of signal along the optical axis of each object. At the core of our methodology is the premise that the copious numerical readouts gleaned from segmentation and interpretation of fluorescence signals in these image datasets can be converted into usable information to classify treatment effects comprehensively, without sacrificing the throughput of traditional screening approaches. It is hoped that this comprehensive treatment-assessment methodology will have significant impact in facilitating more sophisticated implementation of 3D cell culture models in preclinical screening by providing a level of content and biological relevance impossible with existing assays in monolayer cell culture in order to focus therapeutic targets and strategies before costly and tedious testing in animal models.

Using two different cell lines and as depicted in Figure 1, we adopt an ECM overlay method pioneered originally for 3D breast cancer models23, and developed in previous studies by us to model micrometastatic ovarian cancer19,24. This system leads to the formation of adherent multicellular 3D acini in approximately the same focal plane atop a laminin-rich ECM bed, implemented here in glass-bottom multiwell imaging plates for automated microscopy. The 3D nodules resulting from restoration of ECM signaling5,8 are heterogeneous in size24, in contrast to other 3D spheroid methods, such as rotary or hanging drop cultures10, in which cells are driven to aggregate into uniformly sized spheroids due to lack of an appropriate substrate to adhere to. Although the latter processes are also biologically relevant, it is the adherent tumour populations characteristic of advanced metastatic disease that are more likely to be managed with medical oncology, and these are the focus of therapeutic evaluation herein. The heterogeneity in 3D structures formed via ECM overlay is validated here by endoscopic imaging of in vivo tumours in orthotopic xenografts derived from the same cells (OVCAR-5).

 

Figure 1: A simplified schematic flow chart of imaging-based quantitative in situ treatment assessment (qVISTA) in 3D cell culture.

(This figure was prepared in Adobe Illustrator® software by MD Glidden, JP Celli and I Rizvi). A detailed breakdown of the image processing (Step 4) is provided in Supplemental Figure 1.

A critical component of the imaging-based strategy introduced here is the rational tradeoff of image-acquisition parameters for field of view, depth of field and optical resolution, and the development of image processing routines for appropriate removal of background, scaling of fluorescence signals from more than one channel and reliable segmentation of nodules. Obtaining depth-resolved 3D structures for each nodule at sub-micron lateral resolution using a laser-scanning confocal system would require ~40 hours (approximately 100 fields for each well with a 20× objective, times 1 minute/field for a coarse z-stack, times 24 wells) to image a single plate with the same coverage achieved in this study. Even if the resources were available to devote to such time-intensive image acquisition, not to mention the processing, the optical properties of the fluorophores would change during the required time frame for image acquisition, even with environmental controls to maintain culture viability during such extended imaging. The approach developed here, with a mind toward adaptation into high-throughput screening, provides a rational balance of speed, requiring less than 30 minutes/plate, and statistical rigour, providing images of thousands of nodules in this time, as required for the high-content analysis developed in this study. These parameters can be further optimized for specific scenarios. For example, we obtain the same number of images in a 96-well plate as for a 24-well plate by acquiring only a single field from each well, rather than 4 stitched fields. This quadruples the number of conditions assayed in a single run, at the expense of the number of nodules per condition, and therefore the ability to obtain statistical data sets for size-dependent response, Dfrac and other segmentation-dependent numerical readouts.
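The timing comparison in the paragraph above, written out with the quoted values:

```python
# Back-of-envelope check of the confocal vs. widefield timing quoted above.
fields_per_well, minutes_per_field, wells = 100, 1, 24
confocal_minutes = fields_per_well * minutes_per_field * wells
print(f"confocal z-stack coverage: {confocal_minutes} min = {confocal_minutes / 60:.0f} h")  # ~40 h
print("widefield qVISTA approach: < 30 min per plate (as reported)")
```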

 

We envision that the system for high-content interrogation of therapeutic response in 3D cell culture could have widespread impact in multiple arenas from basic research to large scale drug development campaigns. As such, the treatment assessment methodology presented here does not require extraordinary optical instrumentation or computational resources, making it widely accessible to any research laboratory with an inverted fluorescence microscope and modestly equipped personal computer. And although we have focused here on cancer models, the methodology is broadly applicable to quantitative evaluation of other tissue models in regenerative medicine and tissue engineering. While this analysis toolbox could have impact in facilitating the implementation of in vitro 3D models in preclinical treatment evaluation in smaller academic laboratories, it could also be adopted as part of the screening pipeline in large pharma settings. With the implementation of appropriate temperature controls to handle basement membranes in current robotic liquid handling systems, our analyses could be used in ultra high-throughput screening. In addition to removing non-efficacious potential candidate drugs earlier in the pipeline, this approach could also yield the additional economic advantage of minimizing the use of costly time-intensive animal models through better estimates of dose range, sequence and schedule for combination regimens.

 

Microscope Uses AI to Find Cancer Cells More Efficiently

Thu, 04/14/2016 – by Shaun Mason

http://www.mdtmag.com/news/2016/04/microscope-uses-ai-find-cancer-cells-more-efficiently

Scientists at the California NanoSystems Institute at UCLA have developed a new technique for identifying cancer cells in blood samples faster and more accurately than the current standard methods.

In one common approach to testing for cancer, doctors add biochemicals to blood samples. Those biochemicals attach biological “labels” to the cancer cells, and those labels enable instruments to detect and identify them. However, the biochemicals can damage the cells and render the samples unusable for future analyses.

There are other current techniques that don’t use labeling but can be inaccurate because they identify cancer cells based only on one physical characteristic.

The new technique images cells without destroying them and can identify 16 physical characteristics — including size, granularity and biomass — instead of just one. It combines two components that were invented at UCLA: a photonic time stretch microscope, which is capable of quickly imaging cells in blood samples, and a deep learning computer program that identifies cancer cells with over 95 percent accuracy.

Deep learning is a form of artificial intelligence that uses complex algorithms to extract meaning from data with the goal of achieving accurate decision making.

The study, which was published in the journal Scientific Reports, was led by Bahram Jalali, professor and Northrop-Grumman Optoelectronics Chair in electrical engineering; Claire Lifan Chen, a UCLA doctoral student; and Ata Mahjoubfar, a UCLA postdoctoral fellow.

Photonic time stretch was invented by Jalali, and he holds a patent for the technology. The new microscope is just one of many possible applications; it works by taking pictures of flowing blood cells using laser bursts in the way that a camera uses a flash. This process happens so quickly — in nanoseconds, or billionths of a second — that the images would be too weak to be detected and too fast to be digitized by normal instrumentation.

The new microscope overcomes those challenges using specially designed optics that boost the clarity of the images and simultaneously slow them enough to be detected and digitized at a rate of 36 million images per second. It then uses deep learning to distinguish cancer cells from healthy white blood cells.

“Each frame is slowed down in time and optically amplified so it can be digitized,” Mahjoubfar said. “This lets us perform fast cell imaging that the artificial intelligence component can distinguish.”

Normally, taking pictures in such minuscule periods of time would require intense illumination, which could destroy live cells. The UCLA approach also eliminates that problem.

“The photonic time stretch technique allows us to identify rogue cells in a short time with low-level illumination,” Chen said.

The researchers write in the paper that the system could lead to data-driven diagnoses by cells’ physical characteristics, which could allow quicker and earlier diagnoses of cancer, for example, and better understanding of the tumor-specific gene expression in cells, which could facilitate new treatments for disease.   …..  see also http://www.nature.com/article-assets/npg/srep/2016/160315/srep21471/images_hires/m685/srep21471-f1.jpg

Chen, C. L. et al. Deep Learning in Label-free Cell Classification. Sci. Rep. 6, 21471; http://dx.doi.org/10.1038/srep21471

 

 



The importance of spatially-localized and quantified image interpretation in cancer management

Writer & reporter: Dror Nir, PhD

I became involved in the development of quantified imaging-based tissue characterization more than a decade ago. From the start, it was clear to me that what clinicians need will not be answered by just identifying whether a certain organ harbors cancer. If imaging devices are to play a significant role in future medicine, as a complementary source of information to biomarkers and gene sequencing, the minimum value expected of them is accurate directing of biopsy needles and treatment tools to the malignant locations in the organ. Therefore, in designing the first Prostate-HistoScanning (“PHS”) version, I went to the trouble of characterizing localized volumes of tissue at the level of approximately 0.1cc (1x1x1 mm). Thanks to that, the imaging-interpretation overlay of PHS localizes suspicious lesions with an accuracy of 5 mm within the prostate gland; Detection, localisation and characterisation of prostate cancer by prostate HistoScanning™.

I then began more ambitious research aiming to explore the feasibility of identifying sub-structures within the cancer lesion itself. The preliminary results of this exploration were so promising that they surprised not only the clinicians I was working with but also myself. It seems that, using quality ultrasound, one can find imaging biomarkers that allow differentiation of the internal structures of a cancerous lesion. Unfortunately for everyone involved in this work, including me, this scientific effort was interrupted by financial constraints before reaching maturity.

My short introduction was made to explain why I find the publication below important enough to post and bring to your attention.

I hope for your agreement on the matter.

Quantitative Imaging in Cancer Evolution and Ecology

Robert A. Gatenby, MD, Olya Grove, PhD and Robert J. Gillies, PhD

From the Departments of Radiology and Cancer Imaging and Metabolism, Moffitt Cancer Center, 12902 Magnolia Dr, Tampa, FL 33612. Address correspondence to  R.A.G. (e-mail: Robert.Gatenby@Moffitt.org).

Abstract

Cancer therapy, even when highly targeted, typically fails because of the remarkable capacity of malignant cells to evolve effective adaptations. These evolutionary dynamics are both a cause and a consequence of cancer system heterogeneity at many scales, ranging from genetic properties of individual cells to large-scale imaging features. Tumors of the same organ and cell type can have remarkably diverse appearances in different patients. Furthermore, even within a single tumor, marked variations in imaging features, such as necrosis or contrast enhancement, are common. Similar spatial variations recently have been reported in genetic profiles. Radiologic heterogeneity within tumors is usually governed by variations in blood flow, whereas genetic heterogeneity is typically ascribed to random mutations. However, evolution within tumors, as in all living systems, is subject to Darwinian principles; thus, it is governed by predictable and reproducible interactions between environmental selection forces and cell phenotype (not genotype). This link between regional variations in environmental properties and cellular adaptive strategies may permit clinical imaging to be used to assess and monitor intratumoral evolution in individual patients. This approach is enabled by new methods that extract, report, and analyze quantitative, reproducible, and mineable clinical imaging data. However, most current quantitative metrics lack spatialness, expressing quantitative radiologic features as a single value for a region of interest encompassing the whole tumor. In contrast, spatially explicit image analysis recognizes that tumors are heterogeneous but not well mixed and defines regionally distinct habitats, some of which appear to harbor tumor populations that are more aggressive and less treatable than others. By identifying regional variations in key environmental selection forces and evidence of cellular adaptation, clinical imaging can enable us to define intratumoral Darwinian dynamics before and during therapy. Advances in image analysis will place clinical imaging in an increasingly central role in the development of evolution-based patient-specific cancer therapy.

© RSNA, 2013

 

Introduction

Cancers are heterogeneous across a wide range of temporal and spatial scales. Morphologic heterogeneity between and within cancers is readily apparent in clinical imaging, and subjective descriptors of these differences, such as necrotic, spiculated, and enhancing, are common in the radiology lexicon. In the past several years, radiology research has increasingly focused on quantifying these imaging variations in an effort to understand their clinical and biologic implications (1,2). In parallel, technical advances now permit extensive molecular characterization of tumor cells in individual patients. This has led to increasing emphasis on personalized cancer therapy, in which treatment is based on the presence of specific molecular targets (3). However, recent studies (4,5) have shown that multiple genetic subpopulations coexist within cancers, reflecting extensive intratumoral somatic evolution. This heterogeneity is a clear barrier to therapy based on molecular targets, since the identified targets do not always represent the entire population of tumor cells in a patient (6,7). It is ironic that cancer, a disease extensively and primarily analyzed genetically, is also the most genetically flexible of all diseases and, therefore, least amenable to such an approach.

Genetic variations in tumors are typically ascribed to a mutator phenotype that generates new clones, some of which expand into large populations (8). However, although identification of genotypes is of substantial interest, it is insufficient for complete characterization of tumor dynamics because evolution is governed by the interactions of environmental selection forces with the phenotypic, not genotypic, properties of populations, as shown, for example, by evolutionary convergence to identical phenotypes among cave fish even when they are from different species (9–11). This connection between tissue selection forces and cellular properties has the potential to provide a strong bridge between medical imaging and the cellular and molecular properties of cancers.

We postulate that differences within tumors at different spatial scales (ie, at the radiologic, cellular, and molecular [genetic] levels) are related. Tumor characteristics observable at clinical imaging reflect molecular-, cellular-, and tissue-level dynamics; thus, they may be useful in understanding the underlying evolving biology in individual patients. A challenge is that such mapping across spatial and temporal scales requires not only objective reproducible metrics for imaging features but also a theoretical construct that bridges those scales (Fig 1).


Figure 1a: Computed tomographic (CT) scan of right upper lobe lung cancer in a 50-year-old woman.


Figure 1b: Isoattenuation map shows regional heterogeneity at the tissue scale (measured in centimeters).


Figure 1c & 1d: (c, d) Whole-slide digital images (original magnification, ×3) of a histologic slice of the same tumor at the mesoscopic scale (measured in millimeters) (c) coupled with a masked image of regional morphologic differences showing spatial heterogeneity (d).


Figure 1e: Subsegment of the whole slide image shows the microscopic scale (measured in micrometers) (original magnification, ×50).


Figure 1f: Pattern recognition masked image shows regional heterogeneity. In a, the CT image of non–small cell lung cancer can be analyzed to display gradients of attenuation, which reveals heterogeneous and spatially distinct environments (b). Histologic images in the same patient (c, e) reveal heterogeneities in tissue structure and density on the same scale as seen in the CT images. These images can be analyzed at much higher definition to identify differences in morphologies of individual cells (3), and these analyses reveal clusters of cells with similar morphologic features (d, f). An important goal of radiomics is to bridge radiologic data with cellular and molecular characteristics observed microscopically.

To promote the development and implementation of quantitative imaging methods, protocols, and software tools, the National Cancer Institute has established the Quantitative Imaging Network. One goal of this program is to identify reproducible quantifiable imaging features of tumors that will permit data mining and explicit examination of links between the imaging findings and the underlying molecular and cellular characteristics of the tumors. In the quest for more personalized cancer treatments, these quantitative radiologic features potentially represent nondestructive temporally and spatially variable predictive and prognostic biomarkers that readily can be obtained in each patient before, during, and after therapy.

Quantitative imaging requires computational technologies that can be used to reliably extract mineable data from radiographic images. This feature information can then be correlated with molecular and cellular properties by using bioinformatics methods. Most existing methods are agnostic and focus on statistical descriptions of existing data, without presupposing the existence of specific relationships. Although this is a valid approach, a more profound understanding of quantitative imaging information may be obtained with a theoretical hypothesis-driven framework. Such models use links between observable tumor characteristics and microenvironmental selection factors to make testable predictions about emergent phenotypes. One such theoretical framework is the developing paradigm of cancer as an ecologic and evolutionary process.

For decades, landscape ecologists have studied the effects of heterogeneity in physical features on interactions between populations of organisms and their environments, often by using observation and quantification of images at various scales (12–14). We propose that analytic models of this type can easily be applied to radiologic studies of cancer to uncover underlying molecular, cellular, and microenvironmental drivers of tumor behavior and specifically, tumor adaptations and responses to therapy (15).

In this article, we review recent developments in quantitative imaging metrics and discuss how they correlate with underlying genetic data and clinical outcomes. We then introduce the concept of using ecology and evolutionary models for spatially explicit image analysis as an exciting potential avenue of investigation.

 

Quantitative Imaging and Radiomics

In patients with cancer, quantitative measurements are commonly limited to measurement of tumor size with one-dimensional (Response Evaluation Criteria in Solid Tumors [or RECIST]) or two-dimensional (World Health Organization) long-axis measurements (16). These measures do not reflect the complexity of tumor morphology or behavior, and in many cases, changes in these measures are not predictive of therapeutic benefit (17). In contrast, radiomics (18) is a high-throughput process in which a large number of shape, edge, and texture imaging features are extracted, quantified, and stored in databases in an objective, reproducible, and mineable form (Figs 1, 2). Once transformed into a quantitative form, radiologic tumor properties can be linked to underlying genetic alterations (the field is called radiogenomics) (19–21) and to medical outcomes (22–27). Researchers are currently working to develop both a standardized lexicon to describe tumor features (28,29) and a standard method to convert these descriptors into quantitative mineable data (30,31) (Fig 3).
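A toy sketch of the radiomics step described above: reducing a tumor region of interest to a single mineable row of quantitative features (first-order intensity statistics plus simple shape descriptors). The image, mask, and feature names here are synthetic placeholders, not a standardized radiomics lexicon.

```python
# Reduce a synthetic ROI to one record of first-order and shape features.
import numpy as np
from skimage.measure import label, regionprops

rng = np.random.default_rng(5)
image = rng.normal(40, 10, (128, 128))              # stand-in attenuation map (HU)
yy, xx = np.mgrid[:128, :128]
mask = (yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2    # circular "tumour" ROI

roi = image[mask]
counts, _ = np.histogram(roi, bins=32)
p = counts / counts.sum()
region = regionprops(label(mask.astype(np.uint8)))[0]

feature_row = {
    "mean_HU": float(roi.mean()),
    "std_HU": float(roi.std()),
    "entropy_bits": float(-(p[p > 0] * np.log2(p[p > 0])).sum()),
    "area_px": int(region.area),
    "perimeter_px": float(region.perimeter),
    "eccentricity": float(region.eccentricity),
}
print(feature_row)                                   # one mineable record per tumour
```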


Figure 2: Contrast-enhanced CT scans show non–small cell lung cancer (left) and corresponding cluster map (right). Subregions within the tumor are identified by clustering pixels based on the attenuation of pixels and their cumulative standard deviation across the region. While the entire region of interest of the tumor, lacking the spatial information, yields a weighted mean attenuation of 859.5 HU with a large and skewed standard deviation of 243.64 HU, the identified subregions have vastly different statistics. Mean attenuation was 438.9 HU ± 45 in the blue subregion, 210.91 HU ± 79 in the yellow subregion, and 1077.6 HU ± 18 in the red subregion.
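The subregion statistics in this caption can be reproduced in spirit, on synthetic numbers rather than the study's data, by clustering ROI pixel values and comparing per-cluster statistics with the whole-ROI mean and standard deviation; the figure's actual analysis also uses a cumulative standard-deviation feature, which is omitted here.

```python
# Cluster synthetic ROI pixel values into "habitats" and compare statistics.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
# Three intermixed pixel populations standing in for tumour subregions (made-up values):
pixels = np.concatenate([rng.normal(35, 8, 4000),
                         rng.normal(90, 20, 3000),
                         rng.normal(160, 15, 3000)])

print(f"whole ROI: mean = {pixels.mean():.1f} HU, std = {pixels.std():.1f} HU")

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pixels.reshape(-1, 1))
for k in range(3):
    sub = pixels[labels == k]
    print(f"subregion {k}: mean = {sub.mean():.1f} HU, std = {sub.std():.1f} HU")
```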

 


Figure 3: Chart shows the five processes in radiomics.

Several recent articles underscore the potential power of feature analysis. After manually extracting more than 100 CT image features, Segal and colleagues found that a subset of 14 features predicted 80% of the gene expression pattern in patients with hepatocellular carcinoma (21). A similar extraction of features from contrast agent–enhanced magnetic resonance (MR) images of glioblastoma was used to predict immunohistochemically identified protein expression patterns (22). Other radiomic features, such as texture, can be used to predict response to therapy in patients with renal cancer (32) and prognosis in those with metastatic colon cancer (33).

These pioneering studies were relatively small because the image analysis was performed manually, and the studies were consequently underpowered. Thus, recent work in radiomics has focused on technical developments that permit automated extraction of image features with the potential for high throughput. Such methods, which rely heavily on novel machine learning algorithms, can more completely cover the range of quantitative features that can describe tumor heterogeneity, such as texture, shape, or margin gradients or, importantly, different environments, or niches, within the tumors.

Generally speaking, texture in a biomedical image is quantified by identifying repeating patterns. Texture analyses fall into two broad categories based on the concepts of first- and second-order spatial statistics. First-order statistics are computed by using individual pixel values, and no relationships between neighboring pixels are assumed or evaluated. Texture analysis methods based on first-order statistics usually involve calculating cumulative statistics of pixel values and their histograms across the region of interest. Second-order statistics, on the other hand, are used to evaluate the likelihood of observing spatially correlated pixels (34). Hence, second-order texture analyses focus on the detection and quantification of nonrandom distributions of pixels throughout the region of interest.
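To make the first-order versus second-order distinction concrete, the sketch below computes histogram-based first-order statistics and a few grey-level co-occurrence matrix (GLCM) properties on a synthetic image with scikit-image; it illustrates the two categories only, not any specific published texture pipeline.

```python
# First-order (histogram) vs. second-order (GLCM) texture statistics on a toy image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(7)
image = (rng.random((64, 64)) * 8).astype(np.uint8)     # 8 grey levels

# First-order: no spatial relationships between pixels are assumed.
counts = np.bincount(image.ravel(), minlength=8)
p = counts / counts.sum()
first_order = {"mean": image.mean(),
               "variance": image.var(),
               "entropy": -(p[p > 0] * np.log2(p[p > 0])).sum()}
print("first-order:", {k: round(float(v), 3) for k, v in first_order.items()})

# Second-order: likelihood of grey-level pairs at a given offset (here 1 px to the right).
glcm = graycomatrix(image, distances=[1], angles=[0], levels=8,
                    symmetric=True, normed=True)
second_order = {prop: float(graycoprops(glcm, prop)[0, 0])
                for prop in ("contrast", "homogeneity", "correlation")}
print("second-order:", {k: round(v, 3) for k, v in second_order.items()})
```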

The technical developments that permit second-order texture analysis in tumors by using regional enhancement patterns on dynamic contrast-enhanced MR images were reviewed recently (35). One such technique that is used to measure heterogeneity of contrast enhancement uses the Factor Analysis of Medical Image Sequences (or FAMIS) algorithm, which divides tumors into regions based on their patterns of enhancement (36). Factor Analysis of Medical Image Sequences–based analyses yielded better prognostic information when compared with region of interest–based methods in numerous cancer types (19–21,37–39), and they were a precursor to the Food and Drug Administration–approved three-time-point method (40). A number of additional promising methods have been developed. Rose and colleagues showed that a structured fractal-based approach to texture analysis improved differentiation between low- and high-grade brain cancers by orders of magnitude (41). Ahmed and colleagues used gray level co-occurrence matrix analyses of dynamic contrast-enhanced images to distinguish benign from malignant breast masses with high diagnostic accuracy (area under the receiver operating characteristic curve, 0.92) (26). Others have shown that Minkowski functional structured methods that convolve images with differently kernelled masks can be used to distinguish subtle differences in contrast enhancement patterns and can enable significant differentiation between treatment groups (42).

It is not surprising that analyses of heterogeneity in enhancement patterns can improve diagnosis and prognosis, as this heterogeneity is fundamentally based on perfusion deficits, which generate significant microenvironmental selection pressures. However, texture analysis is not limited to enhancement patterns. For example, measures of heterogeneity in diffusion-weighted MR images can reveal differences in cellular density in tumors, which can be matched to histologic findings (43). Measures of heterogeneity in T1- and T2-weighted images can be used to distinguish benign from malignant soft-tissue masses (23). CT-based texture features have been shown to be highly significant independent predictors of survival in patients with non–small cell lung cancer (24).

Texture analyses can also be applied to positron emission tomographic (PET) data, where they can provide information about metabolic heterogeneity (25,26). In a recent study, Nair and colleagues identified 14 quantitative PET imaging features that correlated with gene expression (19). This led to an association of metagene clusters to imaging features and yielded prognostic models with hazard ratios near 6. In a study of esophageal cancer, in which 38 quantitative features describing fluorodeoxyglucose uptake were extracted, measures of metabolic heterogeneity at baseline enabled prediction of response with significantly higher sensitivity than any whole region of interest standardized uptake value measurement (22). It is also notable that these extensive texture-based features are generally more reproducible than simple measures of the standardized uptake value (27), which can be highly variable in a clinical setting (44).

 

Spatially Explicit Analysis of Tumor Heterogeneity

Although radiomic analyses have shown high prognostic power, they are not inherently spatially explicit. Quantitative border, shape, and texture features are typically generated over a region of interest that comprises the entire tumor (45). This approach implicitly assumes that tumors are heterogeneous but well mixed. However, spatially explicit subregions of cancers are readily apparent on contrast-enhanced MR or CT images, as perfusion can vary markedly within the tumor, even over short distances, with changes in tumor cell density and necrosis.

An example is shown in Figure 2, which shows a contrast-enhanced CT scan of non–small cell lung cancer. Note that there are many subregions within this tumor that can be identified with attenuation gradient (attenuation per centimeter) edge detection algorithms. Each subregion has a characteristic quantitative attenuation, with a narrow standard deviation, whereas the mean attenuation over the entire region of interest is a weighted average of the values across all subregions, with a correspondingly large and skewed distribution. We contend that these subregions represent distinct habitats within the tumor, each with a distinct set of environmental selection forces.

These observations, along with the recent identification of regional variations in the genetic properties of tumor cells, indicate the need to abandon the conceptual model of cancers as bounded organlike structures. Rather than a single self-organized system, cancers represent a patchwork of habitats, each with a unique set of environmental selection forces and cellular evolution strategies. For example, regions of the tumor that are poorly perfused can be populated by only those cells that are well adapted to low-oxygen, low-glucose, and high-acid environmental conditions. Such adaptive responses to regional heterogeneity result in microenvironmental selection and hence, emergence of genetic variations within tumors. The concept of adaptive response is an important departure from the traditional view that genetic heterogeneity is the product of increased random mutations, which implies that molecular heterogeneity is fundamentally unpredictable and, therefore, chaotic. The Darwinian model proposes that genetic heterogeneity is the result of a predictable and reproducible selection of successful adaptive strategies to local microenvironmental conditions.

Current cross-sectional imaging modalities can be used to identify regional variations in selection forces by using contrast-enhanced, cell density–based, or metabolic features. Clinical imaging can also be used to identify evidence of cellular adaptation. For example, if a region of low perfusion on a contrast-enhanced study is necrotic, then an adaptive population is absent or minimal. However, if the poorly perfused area is cellular, then there is presumptive evidence of an adapted proliferating population. While the specific genetic properties of this population cannot be determined, the phenotype of the adaptive strategy is predictable since the environmental conditions are more or less known. Thus, standard medical images can be used to infer specific emergent phenotypes and, with ongoing research, these phenotypes can be associated with underlying genetic changes.

This area of investigation will likely be challenging. As noted earlier, the most obvious spatially heterogeneous imaging feature in tumors is perfusion heterogeneity on contrast-enhanced CT or MR images. It has generally been assumed that the links between contrast enhancement, blood flow, perfusion, and tumor cell characteristics are straightforward: tumor regions with decreased blood flow will exhibit low perfusion, low cell density, and high necrosis. In reality, the dynamics are much more complex. As shown in Figure 4, when multiple superimposed MR imaging sequences of malignant gliomas are examined, regions of tumor that are poorly perfused on contrast-enhanced T1-weighted images may exhibit low or high water content on T2-weighted images and low or high diffusion on diffusion-weighted images. Thus, high or low cell densities can coexist within poorly perfused volumes, creating perfusion-diffusion mismatches. Regions with poor perfusion but high cell density are of particular clinical interest because they represent a cell population that is apparently adapted to the microenvironmental conditions associated with poor perfusion. The associated hypoxia, acidosis, and nutrient deprivation select for cells that are resistant to apoptosis and thus are likely to be resistant to therapy (46,47).
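A perfusion-diffusion mismatch map of the kind described above could be approximated as follows. This is only a sketch: it assumes co-registered contrast-enhancement and apparent diffusion coefficient (ADC) maps plus a binary tumor mask, and the quartile cutoffs are arbitrary illustrative choices, not values from the literature.

```python
# Sketch of a perfusion-diffusion mismatch mask; cutoffs are illustrative assumptions.
import numpy as np

def perfusion_diffusion_mismatch(enhancement, adc, tumor_mask):
    """Flag tumor voxels with low contrast enhancement (poor perfusion)
    but low ADC (restricted diffusion, suggesting high cell density)."""
    low_perfusion = enhancement < np.percentile(enhancement[tumor_mask], 25)
    high_cell_density = adc < np.percentile(adc[tumor_mask], 25)
    return tumor_mask & low_perfusion & high_cell_density
```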


Figure 4: Left: Contrast-enhanced T1-weighted image from subject TCGA-02-0034 in The Cancer Genome Atlas–Glioblastoma Multiforme repository of MR volumes of glioblastoma multiforme cases. Right: Spatial distribution of MR imaging–defined habitats within the tumor. The blue region (low postgadolinium T1 signal, low fluid-attenuated inversion-recovery signal) is particularly notable because it presumably represents a habitat with low blood flow but high cell density, indicating a population adapted to hypoxic, acidic conditions.

Furthermore, other selection forces not related to perfusion are likely to be present within tumors. For example, evolutionary models suggest that cancer cells, even in stable microenvironments, tend to speciate into “engineers,” which maximize tumor cell growth by promoting angiogenesis, and “pioneers,” which proliferate by invading normal tissue and co-opting its blood supply. These invasive tumor phenotypes can exist only at the tumor edge, where movement into a normal tissue microenvironment is rewarded by increased proliferation. This evolutionary dynamic may contribute to the distinct differences between tumor edges and tumor cores that are frequently seen on cross-sectional images (Fig 5).


Figure 5a: CT images from two patients with non–small cell lung cancer, processed with conventional entropy filtering, show no apparent textural differences and similar entropy values across all sections.


Figure 5b: Contour plots obtained after the CT scans were convolved with the entropy filter. Further subdividing each section of the tumor stack into edge and core regions (dotted black contour) reveals varying textural behavior across sections. Two distinct patterns emerge, and preliminary analysis shows that the change in mean entropy between core and edge regions correlates negatively with survival.
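A simplified version of the edge-versus-core entropy comparison described in Figure 5b might look like the sketch below, which applies a local-entropy rank filter to a CT section, erodes the tumor mask to define a core, and compares mean entropy in the core with the surrounding edge band; the filter radius and edge width are assumed parameters, not values from the study.

```python
# Sketch of an edge-vs-core entropy comparison; parameters are assumptions.
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk, binary_erosion
from skimage.util import img_as_ubyte

def edge_core_entropy(ct_slice, tumor_mask, edge_width=5, radius=3):
    """Compare mean local entropy in the tumor core versus an edge band."""
    # Rescale HU to [0, 1] before the 8-bit conversion required by the rank filter
    scaled = (ct_slice - ct_slice.min()) / (np.ptp(ct_slice) + 1e-9)
    ent = entropy(img_as_ubyte(scaled), disk(radius))      # local entropy map
    core = binary_erosion(tumor_mask, disk(edge_width))    # shrink mask to define the core
    edge = tumor_mask & ~core                              # band between core and boundary
    return {"core_mean_entropy": float(ent[core].mean()),
            "edge_mean_entropy": float(ent[edge].mean()),
            "edge_minus_core": float(ent[edge].mean() - ent[core].mean())}
```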

Interpretation of the subsegmentation of tumors will require computational models to understand and predict the complex nonlinear dynamics that lead to heterogeneous combinations of radiographic features. We have exploited ecologic methods and models to investigate regional variations in cancer environmental and cellular properties that lead to specific imaging characteristics. Conceptually, this approach assumes that regional variations in tumors can be viewed as a coalition of distinct ecologic communities or habitats of cells in which the environment is governed, at least to first order, by variations in vascular density and blood flow. The environmental conditions that result from alterations in blood flow, such as hypoxia, acidosis, immune response, growth factors, and glucose, represent evolutionary selection forces that give rise to local-regional phenotypic adaptations. Phenotypic alterations can result from epigenetic, genetic, or chromosomal rearrangements, and these in turn will affect prognosis and response to therapy. Changes in habitats or the relative abundance of specific ecologic communities over time and in response to therapy may be a valuable metric with which to measure treatment efficacy and emergence of resistant populations.

 

Emerging Strategies for Tumor Habitat Characterization

A method for converting images to spatially explicit tumor habitats is shown in Figure 4. Here, three-dimensional MR imaging data sets from a glioblastoma are segmented. Each voxel in the tumor is characterized by its image intensity in each of several sequences. In this case, the imaging sets are from (a) a contrast-enhanced T1 sequence, (b) a fast spin-echo T2 sequence, and (c) a fluid-attenuated inversion-recovery (FLAIR) sequence. Voxels in each sequence can be classified as high or low relative to the mean signal value. Using just two sequences, the contrast-enhanced T1 sequence and the FLAIR sequence, we can define four habitats: high or low postgadolinium T1 signal, each subdivided into high or low FLAIR signal. When these voxel habitats are projected into the tumor volume, we find that they cluster into spatially distinct regions. These habitats can be evaluated both in terms of their relative contributions to total tumor volume and in terms of their interactions with one another, based on the imaging characteristics at the interfaces between regions. A similar spatially explicit analysis can be performed with CT scans (Fig 5).
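As a concrete sketch of this habitat assignment (assuming co-registered volumes and a binary tumor mask; the variable names are hypothetical), each tumor voxel can be labeled by comparing its post-gadolinium T1 and FLAIR intensities with the within-tumor mean of each sequence:

```python
# Sketch of four-habitat labeling from two co-registered MR sequences.
import numpy as np

def label_habitats(t1_post, flair, tumor_mask):
    """Label each tumor voxel as one of four habitats by thresholding
    post-gadolinium T1 and FLAIR at their within-tumor mean intensities."""
    t1_high = t1_post > t1_post[tumor_mask].mean()
    flair_high = flair > flair[tumor_mask].mean()
    habitats = np.zeros(t1_post.shape, dtype=np.uint8)
    habitats[tumor_mask & t1_high & flair_high] = 1     # high T1-post / high FLAIR
    habitats[tumor_mask & t1_high & ~flair_high] = 2    # high T1-post / low FLAIR
    habitats[tumor_mask & ~t1_high & flair_high] = 3    # low T1-post / high FLAIR
    habitats[tumor_mask & ~t1_high & ~flair_high] = 4   # low T1-post / low FLAIR
    # Relative contribution of each habitat to the total tumor volume
    volume_fractions = {k: float((habitats == k).sum()) / float(tumor_mask.sum())
                        for k in (1, 2, 3, 4)}
    return habitats, volume_fractions
```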

Analysis of spatial patterns in cross-sectional images will ultimately require methods that bridge spatial scales from microns to millimeters. One possible method is a general class of numeric tools that is already widely used in terrestrial and marine ecology research to link species occurrence or abundance with environmental parameters. Species distribution models (48–51) are used to gain ecologic and evolutionary insights and to predict distributions of species or morphs across landscapes, sometimes extrapolating in space and time. They can easily be used to link the environmental selection forces in MR imaging–defined habitats to the evolutionary dynamics of cancer cells.
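By way of analogy only, a species-distribution-style model could be fit with ordinary logistic regression, relating per-voxel environmental covariates derived from imaging (for example, enhancement and ADC) to the occurrence of a phenotype of interest. The sketch below uses synthetic data and hypothetical variable names purely to show the shape of such a model; it is not a validated species distribution model.

```python
# Analogy sketch: occurrence ~ environment, fit as a logistic regression.
# Synthetic data and hypothetical names only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
env = rng.normal(size=(500, 2))              # columns: standardized enhancement, ADC
signal = 0.8 * env[:, 0] - 1.2 * env[:, 1]   # arbitrary synthetic relationship
occurrence = (signal + rng.normal(scale=0.5, size=500)) > 0   # e.g., viable tumor present

model = LogisticRegression().fit(env, occurrence)
occurrence_probability = model.predict_proba(env)[:, 1]      # per-voxel predicted occurrence
print(model.coef_, occurrence_probability[:5])
```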

Summary

Imaging can have an enormous role in the development and implementation of patient-specific therapies in cancer. The achievement of this goal will require new methods that expand and ultimately replace the current subjective qualitative assessments of tumor characteristics. The need for quantitative imaging has been clearly recognized by the National Cancer Institute and has resulted in formation of the Quantitative Imaging Network. A critical objective of this imaging consortium is to use objective, reproducible, and quantitative feature metrics extracted from clinical images to develop patient-specific imaging-based prognostic models and personalized cancer therapies.

It is increasingly clear that tumors are not homogeneous organlike systems. Rather, they contain regional coalitions of ecologic communities that consist of evolving cancer, stroma, and immune cell populations. The clinical consequence of such niche variations is that spatial and temporal variations of tumor phenotypes will inevitably evolve and present substantial challenges to targeted therapies. Hence, future research in cancer imaging will likely focus on spatially explicit analysis of tumor regions.

Clinical imaging can readily characterize regional variations in blood flow, cell density, and necrosis. When viewed in a Darwinian evolutionary context, these features reflect regional variations in environmental selection forces and can, at least in principle, be used to predict the likely adaptive strategies of the local cancer population. Hence, analyses of radiologic data can be used to inform evolutionary models and then can be mapped to regional population dynamics. Ecologic and evolutionary principles may provide a theoretical framework to link imaging to the cellular and molecular features of cancer cells and ultimately lead to a more comprehensive understanding of specific cancer biology in individual patients.

 

Essentials

  • Marked heterogeneity in the genetic properties of different cells in the same tumor is typical and reflects ongoing intratumoral evolution.
  • Evolution within tumors is governed by Darwinian dynamics, with identifiable environmental selection forces that interact with phenotypic (not genotypic) properties of tumor cells in a predictable and reproducible manner; clinical imaging is uniquely suited to measuring the temporal and spatial heterogeneity within tumors that is both a cause and a consequence of this evolution.
  • Subjective radiologic descriptors of cancers are inadequate to capture this heterogeneity and must be replaced by quantitative metrics that enable statistical comparisons among features describing intratumoral heterogeneity, clinical outcomes, and molecular properties.
  • Spatially explicit mapping of tumor regions, for example by superimposing multiple imaging sequences, may permit patient-specific characterization of intratumoral evolution and ecology, leading to patient- and tumor-specific therapies.
  • We summarize current information on the quantitative analysis of radiologic images and propose that future quantitative imaging must become spatially explicit, to identify intratumoral habitats before and during therapy.

Disclosures of Conflicts of Interest: R.A.G. No relevant conflicts of interest to disclose. O.G. No relevant conflicts of interest to disclose. R.J.G. No relevant conflicts of interest to disclose.

 

Acknowledgments

The authors thank Mark Lloyd, MS; Joel Brown, PhD; Dmitry Goldgof, PhD; and Larry Hall, PhD, for their input on image analysis and for their lively and informative discussions.

Footnotes

  • Received December 18, 2012; revision requested February 5, 2013; revision received March 11; accepted April 9; final version accepted April 29.
  • Funding: This research was supported by the National Institutes of Health (grants U54CA143970-01, U01CA143062, R01CA077575, and R01CA170595).

References

    1. Kurland BF, Gerstner ER, Mountz JM, et al. Promise and pitfalls of quantitative imaging in oncology clinical trials. Magn Reson Imaging 2012;30(9):1301–1312.
    2. Levy MA, Freymann JB, Kirby JS, et al. Informatics methods to enable sharing of quantitative imaging research data. Magn Reson Imaging 2012;30(9):1249–1256.
    3. Mirnezami R, Nicholson J, Darzi A. Preparing for precision medicine. N Engl J Med 2012;366(6):489–491.
    4. Yachida S, Jones S, Bozic I, et al. Distant metastasis occurs late during the genetic evolution of pancreatic cancer. Nature 2010;467(7319):1114–1117.
    5. Gerlinger M, Rowan AJ, Horswell S, et al. Intratumor heterogeneity and branched evolution revealed by multiregion sequencing. N Engl J Med 2012;366(10):883–892.
    6. Gerlinger M, Swanton C. How Darwinian models inform therapeutic failure initiated by clonal heterogeneity in cancer medicine. Br J Cancer 2010;103(8):1139–1143.
    7. Kern SE. Why your new cancer biomarker may never work: recurrent patterns and remarkable diversity in biomarker failures. Cancer Res 2012;72(23):6097–6101.
    8. Nowell PC. The clonal evolution of tumor cell populations. Science 1976;194(4260):23–28.
    9. Greaves M, Maley CC. Clonal evolution in cancer. Nature 2012;481(7381):306–313.
    10. Vincent TL, Brown JS. Evolutionary game theory, natural selection and Darwinian dynamics. Cambridge, England: Cambridge University Press, 2005.
    11. Gatenby RA, Gillies RJ. A microenvironmental model of carcinogenesis. Nat Rev Cancer 2008;8(1):56–61.
    12. Bowers MA, Matter SF. Landscape ecology of mammals: relationships between density and patch size. J Mammal 1997;78(4):999–1013.
    13. Dorner BK, Lertzman KP, Fall J. Landscape pattern in topographically complex landscapes: issues and techniques for analysis. Landscape Ecol 2002;17(8):729–743.
    14. González-García I, Solé RV, Costa J. Metapopulation dynamics and spatial heterogeneity in cancer. Proc Natl Acad Sci U S A 2002;99(20):13085–13089.
    15. Patel LR, Nykter M, Chen K, Zhang W. Cancer genome sequencing: understanding malignancy as a disease of the genome, its conformation, and its evolution. Cancer Lett 2012 Oct 27. [Epub ahead of print]
    16. Jaffe CC. Measures of response: RECIST, WHO, and new alternatives. J Clin Oncol 2006;24(20):3245–3251.
    17. Burton A. RECIST: right time to renovate? Lancet Oncol 2007;8(6):464–465.
    18. Lambin P, Rios-Velazquez E, Leijenaar R, et al. Radiomics: extracting more information from medical images using advanced feature analysis. Eur J Cancer 2012;48(4):441–446.
    19. Nair VS, Gevaert O, Davidzon G, et al. Prognostic PET 18F-FDG uptake imaging features are associated with major oncogenomic alterations in patients with resected non-small cell lung cancer. Cancer Res 2012;72(15):3725–3734.
    20. Diehn M, Nardini C, Wang DS, et al. Identification of noninvasive imaging surrogates for brain tumor gene-expression modules. Proc Natl Acad Sci U S A 2008;105(13):5213–5218.
    21. Segal E, Sirlin CB, Ooi C, et al. Decoding global gene expression programs in liver cancer by noninvasive imaging. Nat Biotechnol 2007;25(6):675–680.
    22. Tixier F, Le Rest CC, Hatt M, et al. Intratumor heterogeneity characterized by textural features on baseline 18F-FDG PET images predicts response to concomitant radiochemotherapy in esophageal cancer. J Nucl Med 2011;52(3):369–378.
    23. Pang KK, Hughes T. MR imaging of the musculoskeletal soft tissue mass: is heterogeneity a sign of malignancy? J Chin Med Assoc 2003;66(11):655–661.
    24. Ganeshan B, Panayiotou E, Burnand K, Dizdarevic S, Miles K. Tumour heterogeneity in non-small cell lung carcinoma assessed by CT texture analysis: a potential marker of survival. Eur Radiol 2012;22(4):796–802.
    25. Asselin MC, O’Connor JP, Boellaard R, Thacker NA, Jackson A. Quantifying heterogeneity in human tumours using MRI and PET. Eur J Cancer 2012;48(4):447–455.
    26. Ahmed A, Gibbs P, Pickles M, Turnbull L. Texture analysis in assessment and prediction of chemotherapy response in breast cancer. J Magn Reson Imaging. Published online December 13, 2012. doi:10.1002/jmri.23971.
    27. Kawata Y, Niki N, Ohmatsu H, et al. Quantitative classification based on CT histogram analysis of non-small cell lung cancer: correlation with histopathological characteristics and recurrence-free survival. Med Phys 2012;39(2):988–1000.
    28. Rubin DL. Creating and curating a terminology for radiology: ontology modeling and analysis. J Digit Imaging 2008;21(4):355–362.
    29. Opulencia P, Channin DS, Raicu DS, Furst JD. Mapping LIDC, RadLex™, and lung nodule image features. J Digit Imaging 2011;24(2):256–270.
    30. Channin DS, Mongkolwat P, Kleper V, Rubin DL. The Annotation and Image Mark-up project. Radiology 2009;253(3):590–592.
    31. Rubin DL, Mongkolwat P, Kleper V, Supekar K, Channin DS. Medical imaging on the semantic web: annotation and image markup. Presented at the AAAI Spring Symposium Series, Semantic Scientific Knowledge Integration, Palo Alto, Calif, March 26–28, 2008.
    32. Goh V, Ganeshan B, Nathan P, Juttla JK, Vinayan A, Miles KA. Assessment of response to tyrosine kinase inhibitors in metastatic renal cell cancer: CT texture as a predictive biomarker. Radiology 2011;261(1):165–171.
    33. Miles KA, Ganeshan B, Griffiths MR, Young RC, Chatwin CR. Colorectal cancer: texture analysis of portal phase hepatic CT images as a potential marker of survival. Radiology 2009;250(2):444–452.
    34. Haralick RM, Shanmugam K, Dinstein I. Textural features for image classification. IEEE Trans Syst Man Cybern 1973;3(6):610–621.
    35. Yang X, Knopp MV. Quantifying tumor vascular heterogeneity with dynamic contrast-enhanced magnetic resonance imaging: a review. J Biomed Biotechnol 2011;2011:732848.
    36. Frouin F, Bazin JP, Di Paola M, Jolivet O, Di Paola R. FAMIS: a software package for functional feature extraction from biomedical multidimensional images. Comput Med Imaging Graph 1992;16(2):81–91.
    37. Frouge C, Guinebretière JM, Contesso G, Di Paola R, Bléry M. Correlation between contrast enhancement in dynamic magnetic resonance imaging of the breast and tumor angiogenesis. Invest Radiol 1994;29(12):1043–1049.
    38. Zagdanski AM, Sigal R, Bosq J, Bazin JP, Vanel D, Di Paola R. Factor analysis of medical image sequences in MR of head and neck tumors. AJNR Am J Neuroradiol 1994;15(7):1359–1368.
    39. Bonnerot V, Charpentier A, Frouin F, Kalifa C, Vanel D, Di Paola R. Factor analysis of dynamic magnetic resonance imaging in predicting the response of osteosarcoma to chemotherapy. Invest Radiol 1992;27(10):847–855.
    40. Furman-Haran E, Grobgeld D, Kelcz F, Degani H. Critical role of spatial resolution in dynamic contrast-enhanced breast MRI. J Magn Reson Imaging 2001;13(6):862–867.
    41. Rose CJ, Mills SJ, O’Connor JPB, et al. Quantifying spatial heterogeneity in dynamic contrast-enhanced MRI parameter maps. Magn Reson Med 2009;62(2):488–499.
    42. Canuto HC, McLachlan C, Kettunen MI, et al. Characterization of image heterogeneity using 2D Minkowski functionals increases the sensitivity of detection of a targeted MRI contrast agent. Magn Reson Med 2009;61(5):1218–1224.
    43. Lloyd MC, Allam-Nandyala P, Purohit CN, Burke N, Coppola D, Bui MM. Using image analysis as a tool for assessment of prognostic and predictive biomarkers for breast cancer: how reliable is it? J Pathol Inform 2010;1:29–36.
    44. Kumar V, Nath K, Berman CG, et al. Variance of SUVs for FDG-PET/CT is greater in clinical practice than under ideal study settings. Clin Nucl Med 2013;38(3):175–182.
    45. Walker-Samuel S, Orton M, Boult JK, Robinson SP. Improving apparent diffusion coefficient estimates and elucidating tumor heterogeneity using Bayesian adaptive smoothing. Magn Reson Med 2011;65(2):438–447.
    46. Thews O, Nowak M, Sauvant C, Gekle M. Hypoxia-induced extracellular acidosis increases p-glycoprotein activity and chemoresistance in tumors in vivo via p38 signaling pathway. Adv Exp Med Biol 2011;701:115–122.
    47. Thews O, Dillenburg W, Rösch F, Fellner M. PET imaging of the impact of extracellular pH and MAP kinases on the p-glycoprotein (Pgp) activity. Adv Exp Med Biol 2013;765:279–286.
    48. Araújo MB, Peterson AT. Uses and misuses of bioclimatic envelope modeling. Ecology 2012;93(7):1527–1539.
    49. Larsen PE, Gibbons SM, Gilbert JA. Modeling microbial community structure and functional diversity across time and space. FEMS Microbiol Lett 2012;332(2):91–98.
    50. Shenton W, Bond NR, Yen JD, Mac Nally R. Putting the “ecology” into environmental flows: ecological dynamics and demographic modelling. Environ Manage 2012;50(1):1–10.
    51. Clark MC, Hall LO, Goldgof DB, Velthuizen R, Murtagh FR, Silbiger MS. Automatic tumor segmentation using knowledge-based techniques. IEEE Trans Med Imaging 1998;17(2):187–201.
