Mimicking vaginal cells and microbiome interactions in on-chip microfluidic culture
Reporter and Curator: Dr. Sudipta Saha, Ph.D.
Scientists at Harvard University’s Wyss Institute for Biologically Inspired Engineering have developed the world’s first “vagina-on-a-chip,” which uses living cells and bacteria to mimic the microbial environment of the human vagina. It could help to test drugs against bacterial vaginosis, a common microbial imbalance that makes millions of people more susceptible to sexually transmitted diseases and puts them at risk of preterm delivery when pregnant. Vaginal health is difficult to study in a laboratory setting, partly because laboratory animals have “totally different microbiomes” than humans. To address this, the scientists created a unique chip: an inch-long, rectangular polymer case containing live human vaginal tissue from a donor and a flow of estrogen-carrying material to simulate vaginal mucus.
Organs-on-a-chip mimic real bodily functions, making it easier to study diseases and test drugs; previous examples include models of the lungs and the intestines. In this case, the tissue acts like that of a real vagina in several important ways. It responds to changes in estrogen by adjusting the expression of certain genes, and it can grow a human-like microbiome dominated by “good” or “bad” bacteria. The researchers have demonstrated that Lactobacilli growing on the chip’s tissue help to maintain a low pH by producing lactic acid. Conversely, if the researchers introduce Gardnerella, the chip develops a higher pH, cell damage and increased inflammation: classic signs of bacterial vaginosis. Thus, the chip can demonstrate how a healthy or unhealthy microbiome affects the vagina.
The next step is personalization: subject-specific cultures derived from individual donors. The chip is a real leap forward, with the prospect of testing how typical antibiotic treatments against bacterial vaginosis affect different bacterial strains. Critics of organ-on-a-chip technology often point out that it models organs in isolation from the rest of the body, and there are limitations here as well: many researchers are interested in the vaginal microbiome changes that occur during pregnancy because of the link between bacterial vaginosis and labor complications, and although the chip’s tissue responds to estrogen, it does not fully mimic pregnancy without feedback loops from other organs. The researchers are already working on connecting the vagina chip to a cervix chip, which could better represent the larger reproductive system.
All this information indicates that the human vagina chip offers a new model to study host-vaginal microbiome interactions in both optimal and non-optimal states, as well as a human-relevant preclinical model for the development and testing of reproductive therapeutics, including live biotherapeutic products for bacterial vaginosis. Because this microfluidic human vagina chip enables flow through an open epithelial lumen, it also offers a unique advantage for studies on the effect of cervicovaginal mucus on vaginal health, as clinical mucus samples or commercially available mucins can be flowed through this channel. The role of resident and circulating immune cells in host-microbiome interactions can also be explored by incorporating these cells into the vagina chip in the future, as has been done successfully in various other organ chip models.
Medical Device Technology for Alzheimer’s Disease
Reporter: Danielle Smolyar
Alzheimer’s disease is said to be caused by proteins that are overproduced and accumulate around brain cells. It is an irreversible disease that, over time, erodes a person’s memory and ability to perform tasks, making day-to-day life difficult because even simple daily activities become hard to manage. No cure has yet been found, and scientists and researchers continue to run trials in search of one because so many people, unfortunately, suffer from the disease.
“an estimated 5.3 million Americans are currently living with Alzheimer’s disease. By 2025, that number is expected to increase to more than seven million. Doctors diagnose dementia in around 10 million people every year, and 60–70% of these new diagnoses detect Alzheimer’s disease.”
The reality of the disease is tragic, and the fact that the numbers keep growing adds urgency to the search for a cure to help those living with it. With society’s new technological and medical advancements, researchers have been working on finding a cure or developing medical devices to help people with Alzheimer’s. The article also states, “Dr. Thom Wilcockson, from the UK’s Loughborough University, found that eye-tracking technology could help identify mild cognitive impairment (MCI) in patients who might go on to develop Alzheimer’s disease in the future.”
Given how advanced technology has become, it could eventually contribute to finding a cure. A device like this could make a considerable difference for people who develop Alzheimer’s, helping them prepare for the illness or prevent their condition from getting worse. Ultimately, such technology can also educate the world about this condition and the strides being made toward preventing it.
Dr. Thom Wilcockson stated that detecting MCI can serve as a benchmark, a sign that doctors can look for to catch the early development of Alzheimer’s:
Dr. Wilcockson and the research team worked with 42 patients with a diagnosis of aMCI, 47 with a diagnosis of naMCI, 68 people with dementia caused by Alzheimer’s disease, and 92 healthy controls as part of their study. During the study, the participants were instructed to complete antisaccade tasks, simple computer tests in which participants are told to look away from a distractor stimulus. The researchers found that they were able to differentiate between the two forms of MCI by looking at the eye-tracking results.
This modern technique of pinpointing a specific measure that differentiates patients and their conditions from one another could cause a massive shift in the Alzheimer’s field. One step at a time, doctors, scientists, and researchers are learning more about Alzheimer’s and inching closer to finding a cure in the near future.
SOURCES:
World Alzheimer’s Month: Exploring latest research and devices for early detection
Microglia are part of a larger class of cells—known collectively as glia—that carry out an array of functions in the brain, guiding its development and serving as its immune system by gobbling up diseased or damaged cells and carting away debris. Along with her frequent collaborator and mentor, Stanford biologist Ben Barres, and a growing cadre of other scientists, Stevens, 45, is showing that these long-overlooked cells are more than mere support workers for the neurons they surround. Her work has raised a provocative suggestion: that brain disorders could somehow be triggered by our own bodily defenses gone bad.
In one groundbreaking paper, in January, Stevens and researchers at the Broad Institute of MIT and Harvard showed that aberrant microglia might play a role in schizophrenia—causing or at least contributing to the massive cell loss that can leave people with devastating cognitive defects. Crucially, the researchers pointed to a chemical pathway that might be targeted to slow or stop the disease. Last week, Stevens and other researchers published a similar finding for Alzheimer’s.
This might be just the beginning. Stevens is also exploring the connection between these tiny structures and other neurological diseases—work that earned her a $625,000 MacArthur Foundation “genius” grant last September.
All of this raises intriguing questions. Is it possible that many common brain disorders, despite their wide-ranging symptoms, are caused or at least worsened by the same culprit, a component of the immune system? If so, could many of these disorders be treated in a similar way—by stopping these rogue cells?
Complement and microglia mediate early synapse loss in Alzheimer mouse models. Soyon Hong, Victoria F. Beja-Glasser, Bianca M. Nfonoyim, …, Ben A. Barres, Cynthia A. Lemere, Dennis J. Selkoe, Beth Stevens. Science, 31 Mar 2016. http://dx.doi.org/10.1126/science.aad8373
Synapse loss in Alzheimer’s disease (AD) correlates with cognitive decline. Involvement of microglia and complement in AD has been attributed to neuroinflammation, prominent late in disease. Here we show in mouse models that complement and microglia mediate synaptic loss early in AD. C1q, the initiating protein of the classical complement cascade, is increased and associated with synapses before overt plaque deposition. Inhibition of C1q, C3 or the microglial complement receptor CR3, reduces the number of phagocytic microglia as well as the extent of early synapse loss. C1q is necessary for the toxic effects of soluble β-amyloid (Aβ) oligomers on synapses and hippocampal long-term potentiation (LTP). Finally, microglia in adult brains engulf synaptic material in a CR3-dependent process when exposed to soluble Aβ oligomers. Together, these findings suggest that the complement-dependent pathway and microglia that prune excess synapses in development are inappropriately activated and mediate synapse loss in AD.
Genome-wide association studies (GWAS) implicate microglia and complement-related pathways in AD (1). Previous research has demonstrated both beneficial and detrimental roles of complement and microglia in plaque-related neuropathology (2, 3); however, their roles in synapse loss, a major pathological correlate of cognitive decline in AD (4), remain to be identified. Emerging research implicates microglia and immune-related mechanisms in brain wiring in the healthy brain (1). During development, C1q and C3 localize to synapses and mediate synapse elimination by phagocytic microglia (5–7). We hypothesized that this normal developmental synaptic pruning pathway is activated early in the AD brain and mediates synapse loss.
Scientists have known about glia for some time. In the 1800s, the pathologist Rudolf Virchow noted the presence of small round cells packing the spaces between neurons and named them “nervenkitt” or “neuroglia,” which can be translated as nerve putty or glue. One variety of these cells, known as astrocytes, was defined in 1893. And then in the 1920s, the Spanish scientist Pio del Río Hortega developed novel ways of staining cells taken from the brain. This led him to identify and name two more types of glial cells, including microglia, which are far smaller than the others and are characterized by their spidery shape and multiple branches. It is only when the brain is damaged in adulthood, he suggested, that microglia spring to life—rushing to the injury, where it was thought they helped clean up the area by eating damaged and dead cells. Astrocytes often appeared on the scene as well; it was thought that they created scar tissue.
This emergency convergence of microglia and astrocytes was dubbed “gliosis,” and by the time Ben Barres entered medical school in the late 1970s, it was well established as a hallmark of neurodegenerative diseases, infection, and a wide array of other medical conditions. But no one seemed to understand why it occurred. That intrigued Barres, then a neurologist in training, who saw it every time he looked under a microscope at neural tissue in distress. “It was just really fascinating,” he says. “The great mystery was: what is the point of this gliosis? Is it good? Is it bad? Is it driving the disease process, or is it trying to repair the injured brain?”
Barres began looking for the answer. He learned how to grow glial cells in a dish and apply a new recording technique to them. He could measure their electrical qualities, which determine the biochemical signaling that all brain cells use to communicate and coördinate activity.
Barres’s group had begun to identify the specific compounds astrocytes secreted that seemed to cause neurons to grow synapses. And eventually, they noticed that these compounds also stimulated production of a protein called C1q.
Conventional wisdom held that C1q was activated only in sick cells—the protein marked them to be eaten up by immune cells—and only outside the brain. But Barres had found it in the brain. And it was in healthy neurons that were arguably at their most robust stage: in early development. What was the C1q protein doing there?
The answer lies in the fact that marking cells for elimination is not something that happens only in diseased brains; it is also essential for development. As brains develop, their neurons form far more synaptic connections than they will eventually need. Only the ones that are used are allowed to remain. This pruning allows for the most efficient flow of neural transmissions in the brain, removing noise that might muddy the signal.
Microglia play a major role in the cellular response associated with the pathological lesions of Alzheimer’s disease. As brain-resident macrophages, microglia elaborate and operate under several guises that seem reminiscent of circulating and tissue monocytes of the leucocyte repertoire. Although microglia bear the capacity to synthesize amyloid β, current evidence is most consistent with their phagocytic role. This largely involves the removal of cerebral amyloid and possibly the transformation of amyloid β into fibrils. The phagocytic functions also encompass the generation of cytokines, reactive oxygen and nitrogen species, and various proteolytic enzymes, events that may exacerbate neuronal damage rather than incite outgrowth or repair mechanisms. Microglia do not appear to function as true antigen-presenting cells. However, there is circumstantial evidence that suggests functional heterogeneity within microglia. Pharmacological agents that suppress microglial activation or reduce microglial-mediated oxidative damage may prove useful strategies to slow the progression of Alzheimer’s disease.
The most visible and, until very recently, the only hypothesis regarding the involvement of microglial cells in Alzheimer’s disease (AD) pathogenesis is centered around the notion that activated microglia are neurotoxin-producing immune effector cells actively involved in causing the neurodegeneration that is the cause for AD dementia. The concept of detrimental neuroinflammation has gained a strong foothold in the AD arena and is being expanded to other neurodegenerative diseases. This review takes a comprehensive and critical look at the overall evidence supporting the neuroinflammation hypothesis and points out some weaknesses. The current work also reviews evidence for an alternative theory, the microglial dysfunction hypothesis, which, although eliminating some of the shortcomings, does not necessarily negate the amyloid/neuroinflammation theory. The microglial dysfunction theory offers a different perspective on the identity of activated microglia and their role in AD pathogenesis taking into account the most recent insights gained from studying basic microglial biology.
Review – Part of the Special Issue: Alzheimer’s Disease – Amyloid, Tau and Beyond. Biochemical Pharmacology 15 Apr 2014; 88(4):594–604 doi:10.1016/j.bcp.2014.01.008
Microglia, the immune cells of the central nervous system, have long been a subject of study in the Alzheimer’s disease (AD) field due to their dramatic responses to the pathophysiology of the disease. With several large-scale genetic studies in the past year implicating microglial molecules in AD, the potential significance of these cells has become more prominent than ever before. As a disease that is tightly linked to aging, it is perhaps not entirely surprising that microglia of the AD brain share some phenotypes with aging microglia. Yet the relative impacts of both conditions on microglia are less frequently considered in concert. Furthermore, microglial “activation” and “neuroinflammation” are commonly analyzed in studies of neurodegeneration but are somewhat ill-defined concepts that in fact encompass multiple cellular processes. In this review, we have enumerated six distinct functions of microglia and discuss the specific effects of both aging and AD. By calling attention to the commonalities of these two states, we hope to inspire new approaches for dissecting microglial mechanisms.
A Olmos-Alonso, STT Schetters, S Sri, K Askew, …, VH Perry, D Gomez-Nicola. Pharmacological targeting of CSF1R inhibits microglial proliferation and prevents the progression of Alzheimer’s-like pathology. Brain 8 Jan 2016. http://dx.doi.org/10.1093/brain/awv379
The proliferation and activation of microglial cells is a hallmark of several neurodegenerative conditions. This mechanism is regulated by the activation of the colony-stimulating factor 1 receptor (CSF1R), thus providing a target that may prevent the progression of conditions such as Alzheimer’s disease. However, the study of microglial proliferation in Alzheimer’s disease and validation of the efficacy of CSF1R-inhibiting strategies have not yet been reported. In this study we found increased proliferation of microglial cells in human Alzheimer’s disease, in line with an increased upregulation of the CSF1R-dependent pro-mitogenic cascade, correlating with disease severity. Using a transgenic model of Alzheimer’s-like pathology (APPswe, PSEN1dE9; APP/PS1 mice) we define a CSF1R-dependent progressive increase in microglial proliferation, in the proximity of amyloid-β plaques. Prolonged inhibition of CSF1R in APP/PS1 mice by an orally available tyrosine kinase inhibitor (GW2580) resulted in the blockade of microglial proliferation and the shifting of the microglial inflammatory profile to an anti-inflammatory phenotype. Pharmacological targeting of CSF1R in APP/PS1 mice resulted in an improved performance in memory and behavioural tasks and a prevention of synaptic degeneration, although these changes were not correlated with a change in the number of amyloid-β plaques. Our results provide the first proof of the efficacy of CSF1R inhibition in models of Alzheimer’s disease, and validate the application of a therapeutic strategy aimed at modifying CSF1R activation as a promising approach to tackle microglial activation and the progression of Alzheimer’s disease.
The neuropathology of Alzheimer’s disease shows a robust innate immune response characterized by the presence of activated microglia, with increased or de novo expression of diverse macrophage antigens (Akiyama et al., 2000; Edison et al., 2008), and production of inflammatory cytokines (Dickson et al., 1993; Fernandez-Botran et al., 2011). Evidence indicates that non-steroidal anti-inflammatory drugs (NSAIDs) protect from the onset or progression of Alzheimer’s disease (Hoozemans et al., 2011), suggestive of the idea that inflammation is a causal component of the disease rather than simply a consequence of the neurodegeneration. In fact, inflammation (Holmes et al., 2009), together with tangle pathology (Nelson et al., 2012) or neurodegeneration-related biomarkers (Wirth et al., 2013) correlate better with cognitive decline than amyloid-β accumulation, but the underlying mechanisms of the sequence of events that contribute to the clinical symptoms are poorly understood. The contribution of inflammation to disease pathogenesis is supported by recent genome-wide association studies, highlighting immune-related genes such as CR1 (Jun et al., 2010), TREM2 (Guerreiro et al., 2013; Jonsson et al., 2013) or HLA-DRB5–HLA-DRB1 in association with Alzheimer’s disease (European Alzheimer’s Disease et al., 2013). Additionally, a growing body of evidence suggests that systemic inflammation may interact with the innate immune response in the brain to act as a ‘driver’ of disease progression and exacerbate symptoms (Holmes et al., 2009, 2011). Microglial cells are the master regulators of the neuroinflammatory response associated with brain disease (Gomez-Nicola and Perry, 2014a, b). Activated microglia have been demonstrated in transgenic models of Alzheimer’s disease (LaFerla and Oddo, 2005; Jucker, 2010) and have been recently shown to dominate the gene expression landscape of patients with Alzheimer’s disease (Zhang et al., 2013). Recently, microglial activation through the transcription factor PU.1 has been reported to be capital for the progression of Alzheimer’s disease, highlighting the role of microglia in the disease-initiating steps (Gjoneska et al., 2015). Results from our group, using a murine model of chronic neurodegeneration (prion disease), show large numbers of microglia with an activated phenotype (Perry et al., 2010) and a cytokine profile similar to that of Alzheimer’s disease (Cunningham et al., 2003). The expansion of the microglial population during neurodegeneration is almost exclusively dependent upon proliferation of resident cells (Gomez-Nicola et al., 2013, 2014a; Li et al., 2013). An increased microglial proliferative activity has also been described in a mouse model of Alzheimer’s disease (Kamphuis et al., 2012) and in post-mortem samples from patients with Alzheimer’s disease (Gomez-Nicola et al., 2013, 2014b). This proliferative activity is regulated by the activation of the colony stimulating factor 1 receptor (CSF1R; Gomez-Nicola et al., 2013). Pharmacological strategies inhibiting the kinase activity of CSF1R provide beneficial effects on the progression of chronic neurodegeneration, highlighting the detrimental contribution of microglial proliferation (Gomez-Nicola et al., 2013).
The presence of a microglial proliferative response with neurodegeneration is also supported by microarray analysis correlating clinical scores of incipient Alzheimer’s disease with the expression of Cebpa and Spi1 (PU.1), key transcription factors controlling microglial lineage commitment and proliferation (Blalock et al., 2004). Consistent with these data, Csf1r is upregulated in mouse models of amyloidosis (Murphy et al., 2000), as well as in human post-mortem samples from patients with Alzheimer’s disease (Akiyama et al., 1994). Although these ideas would lead to the evaluation of the efficacy of CSF1R inhibitors in Alzheimer’s disease, we have little evidence regarding the level of microglial proliferation in Alzheimer’s disease or the effects of CSF1R targeting in animal models of Alzheimer’s disease-like pathology. In this study, we set out to define the microglial proliferative response in both human Alzheimer’s disease and a mouse model of Alzheimer’s disease-like pathology, as well as the activation of the CSF1R pathway. We provide evidence for a consistent and robust activation of a microglial proliferative response, associated with the activation of CSF1R. We provide proof-of-target engagement and efficacy of an orally available CSF1R inhibitor (GW2580), which inhibits microglial proliferation and partially prevents the pathological progression of Alzheimer’s disease-like pathology, supporting the evaluation of CSF1R-targeting approaches as a therapy for Alzheimer’s disease.
Post-mortem samples of Alzheimer’s disease
For immunohistochemical analysis, human brain autopsy tissue samples (temporal cortex, paraffin-embedded, formalin-fixed, 96% formic acid-treated, 6-µm sections) from the National CJD Surveillance Unit Brain Bank (Edinburgh, UK) were obtained from cases of Alzheimer’s disease (five females and five males, age 58–76) or age-matched controls (four females and five males, age 58–79), in whom consent for use of autopsy tissues for research had been obtained. All cases fulfilled the criteria for the pathological diagnosis of Alzheimer’s disease. Ethical permission for research on autopsy materials stored in the National CJD Surveillance Unit was obtained from the Lothian Region Ethics Committee.
Figure 1 Characterization of the microglial proliferative response in Alzheimer’s disease. (A–C) Immunohistochemical analysis and quantification of the number of total microglial cells (Iba1+; A) or proliferating microglial cells (Iba1+Ki67+; B) in the grey (GM) and white matter (WM) of the temporal cortex of Alzheimer’s disease cases (AD) and age-matched non-demented controls (NDC). (C) Representative pictures of the localization of a marker of proliferation (Ki67, dark blue) in microglial cells (Iba1+, brown) in the grey matter of the temporal cortex of non-demented controls or Alzheimer’s disease cases. (D) RT-PCR analysis of the mRNA expression of CSF1R, CSF1, IL34, SPI1 (PU.1), CEBPA, RUNX1 and PCNA in the temporal cortex of Alzheimer’s disease cases and age-matched non-demented controls. Expression of mRNA represented as mean ± SEM and indicated as relative expression to the normalization factor (geometric mean of four housekeeping genes: GAPDH, HPRT, 18S and GUSB) using the 2^(-ΔCt) method. Statistical differences: *P < 0.05, **P < 0.01, ***P < 0.001. Data were analysed with a two-way ANOVA and a post hoc Tukey test (A and B) or with a two-tailed Fisher t-test (D). Scale bar in C = 50 µm.
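For readers who want to see the arithmetic behind the normalization described in this legend, below is a minimal sketch in Python of the 2^(-ΔCt) calculation against the geometric mean of the four housekeeping genes. The Ct values are hypothetical placeholders, not data from the paper, and the snippet is an illustration rather than the authors’ analysis pipeline.

# Minimal sketch: relative expression via the 2^(-ΔCt) method, normalized to the
# geometric mean of four housekeeping genes (hypothetical Ct values shown below).
from statistics import mean

def relative_expression(ct_target, ct_housekeeping):
    # Dividing 2^(-Ct_target) by the geometric mean of the housekeeping 2^(-Ct) values
    # is equivalent to subtracting the arithmetic mean of the housekeeping Ct values.
    delta_ct = ct_target - mean(ct_housekeeping)
    return 2 ** (-delta_ct)

housekeeping_cts = [18.2, 22.5, 9.8, 24.1]   # assumed Cts for GAPDH, HPRT, 18S, GUSB
print(relative_expression(26.0, housekeeping_cts))  # e.g. a target gene such as CSF1R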
Increased microglial proliferation and CSF1R activity are closely associated with the progression of Alzheimer’s disease-like pathology
Pharmacological targeting of CSF1R activation with an orally-available inhibitor blocks microglial proliferation in APP/PS1 mice
CSF1R inhibition prevents the progression of Alzheimer’s disease-like pathology
The innate immune component has a clear influence over the onset and progression of Alzheimer’s disease. The analysis of therapeutic approaches aimed at controlling neuroinflammation in Alzheimer’s disease is moving forward at the preclinical and clinical level, with several clinical trials aimed at modulating inflammatory components of the disease. We have previously demonstrated that the proliferation of microglial cells is a core component of the neuroinflammatory response in a model of prion disease, another chronic neurodegenerative disease, and is controlled by the activation of CSF1R (Gomez-Nicola et al., 2013). This aligns with recent reports pinpointing the causative effect of the activation of the microglial proliferative response on the neurodegenerative events of human and mouse Alzheimer’s disease, highlighting the activity of the master regulator PU.1 (Gjoneska et al., 2015). Our results provide a proof of efficacy of CSF1R inhibition for the blockade of microglial proliferation in a model of Alzheimer’s disease-like pathology. Treatment with the orally available CSF1R kinase-inhibitor (GW2580) proves to be an effective disease-modifying approach, partially improving memory and behavioural performance, and preventing synaptic degeneration. These results support the previously reported link of the inflammatory response generated by microglia in models of Alzheimer’s disease with the observed synaptic and behavioural deficits, regardless of amyloid deposition (Jones and Lynch, 2014).
Our findings support the relevance of CSF1R signalling and microglial proliferation in chronic neurodegeneration and validate the evaluation of CSF1R inhibitors in clinical trials for Alzheimer’s disease. They also show that the inhibition of microglial proliferation in a model of Alzheimer’s disease-like pathology does not modify the burden of amyloid-β plaques, suggesting an uncoupling of the amyloidogenic process from the pathological progression of the disease.
Other Related Articles published in this Open Access Online Scientific Journal include the following:
Transdermal implant releases antibodies to trigger immune system to clear Alzheimer’s plaques March 21, 2016
Test with mice over 39 weeks showed dramatic reduction of amyloid beta plaque load in the brain and reduced phosphorylation of the protein tau, two signs of Alzheimer’s
An implant that can prevent Alzheimer’s disease. A new capsule can be implanted under the skin to release antibodies that “tag” amyloid beta, signalling the patient’s immune system to clear it before it forms Alzheimer’s plaques. (credit: École polytechnique fédérale de Lausanne)
EPFL scientists have developed an implantable capsule containing genetically engineered cells that can recruit a patient’s immune system to combat Alzheimer’s disease.
Placed under the skin, the capsule releases antibody proteins that make their way to the brain and “tag” amyloid beta proteins, signalling the patient’s own immune system to attack and clear the amyloid beta proteins, which are toxic to neurons.
To be most effective, this treatment has to be given as early as possible, before the first signs of cognitive decline. Currently, this requires repeated vaccine injections, which can cause side effects. The new implant can deliver a steady, safe flow of antibodies.
Protection from immune-system rejection
Cell encapsulation device for long-term subcutaneous therapeutic antibody delivery. (B) Macroscopic view of the encapsulation device, composed of a transparent frame supporting polymer permeable membranes and reinforced with an outer polyester mesh. (C) Dense neovascularization develops around a device containing antibody-secreting C2C12 myoblasts, 8 months after implantation in the mouse subcutaneous tissue. (D and E) Representative photomicrographs showing encapsulated antibody-secreting C2C12 myoblasts surviving at high density within the flat sheet device 39 weeks after implantation. (E) Higher magnification: note that the cells produce a collagen-rich matrix stained in blue with Masson’s trichrome protocol. Asterisk: polypropylene porous membrane. Scale bars = 750 µm (B and C), 100 µm (D), 50 µm (E). (credit: Aurelien Lathuiliere et al./BRAIN)
The lab of Patrick Aebischer at EPFL designed the “macroencapsulation device” (capsule) with two permeable membranes sealed together with a polypropylene frame, containing a hydrogel that facilitates cell growth. All the materials used are biocompatible and the device is reproducible for large-scale manufacturing.
The cells of choice are taken from muscle tissue, and the permeable membranes let them interact with the surrounding tissue to get all the nutrients and molecules they need. The cells have to be compatible with the patient to avoid triggering the immune system against them, like a transplant can. To do that, the capsule’s membranes shield the cells from being identified and attacked by the immune system. This protection also means that cells from a single donor can be used on multiple patients.
The researchers tested the device in mice of a genetic line commonly used to simulate Alzheimer’s disease over a course of 39 weeks, showing a dramatic reduction of amyloid beta plaque load in the brain. The treatment also reduced the phosphorylation of the protein tau, another sign of Alzheimer’s observed in these mice.
“The proof-of-concept work demonstrates clearly that encapsulated cell implants can be used successfully and safely to deliver antibodies to treat Alzheimer’s disease and other neurodegenerative disorders that feature defective proteins,” according to the researchers.
The work is published in the journal BRAIN. It involved a collaboration between EPFL’s Neurodegenerative Studies Laboratory (Brain Mind Institute), the Swiss Light Source (Paul Scherrer Institute), and F. Hoffmann-La Roche. It was funded by the Swiss Commission for Technology and Innovation and F. Hoffmann-La Roche Ltd.
Abstract of A subcutaneous cellular implant for passive immunization against amyloid-β reduces brain amyloid and tau pathologies
Passive immunization against misfolded toxic proteins is a promising approach to treat neurodegenerative disorders. For effective immunotherapy against Alzheimer’s disease, recent clinical data indicate that monoclonal antibodies directed against the amyloid-β peptide should be administered before the onset of symptoms associated with irreversible brain damage. It is therefore critical to develop technologies for continuous antibody delivery applicable to disease prevention. Here, we addressed this question using a bioactive cellular implant to deliver recombinant anti-amyloid-β antibodies in the subcutaneous tissue. An encapsulating device permeable to macromolecules supports the long-term survival of myogenic cells over more than 10 months in immunocompetent allogeneic recipients. The encapsulated cells are genetically engineered to secrete high levels of anti-amyloid-β antibodies. Peripheral implantation leads to continuous antibody delivery to reach plasma levels that exceed 50 µg/ml. In a proof-of-concept study, we show that the recombinant antibodies produced by this system penetrate the brain and bind amyloid plaques in two mouse models of the Alzheimer’s pathology. When encapsulated cells are implanted before the onset of amyloid plaque deposition in TauPS2APP mice, chronic exposure to anti-amyloid-β antibodies dramatically reduces amyloid-β40 and amyloid-β42 levels in the brain, decreases amyloid plaque burden, and most notably, prevents phospho-tau pathology in the hippocampus. These results support the use of encapsulated cell implants for passive immunotherapy against the misfolded proteins, which accumulate in Alzheimer’s disease and other neurodegenerative disorders.
Passive immunization using monoclonal antibodies has recently emerged for the treatment of neurological diseases. In particular, monoclonal antibodies can be administered to target the misfolded proteins that progressively aggregate and propagate in the CNS and contribute to the histopathological signature of neurodegenerative diseases. Alzheimer’s disease is the most prevalent proteinopathy, characterized by the deposition of amyloid plaques and neurofibrillary tangles. According to the ‘amyloid cascade hypothesis’, which is supported by strong genetic evidence (Goate and Hardy, 2012), the primary pathogenic event in Alzheimer’s disease is the accumulation and aggregation of amyloid-β into insoluble extracellular plaques in addition to cerebral amyloid angiopathy (Hardy and Selkoe, 2002). High levels of amyloid-β may cause a cascade of deleterious events, including neurofibrillary tangle formation, neuronal dysfunction and death. Anti-amyloid-β antibodies have been developed to interfere with the amyloid-β cascade. Promising data obtained in preclinical studies have validated immunotherapy against Alzheimer’s disease, prompting a series of clinical trials (Bard et al., 2000;Bacskai et al., 2002; Oddo et al., 2004; Wilcock et al., 2004a; Bohrmann et al., 2012). Phase III trials using monoclonal antibodies directed against soluble amyloid-β (bapineuzumab and solanezumab) in patients with mild-to-moderate Alzheimer’s disease showed some effects on biomarkers that are indicative of target engagement. These trials, however, missed the primary endpoints, and it is therefore believed that anti-amyloid-β immunotherapy should be administered at the early presymptomatic stage (secondary prevention) to better potentiate therapeutic effects (Doody et al., 2014; Salloway et al., 2014). For the treatment of Alzheimer’s disease, it is likely that long-term treatment using a high dose of monoclonal antibody will be required. However, bolus administration of anti-amyloid-β antibodies may aggravate dose-dependent adverse effects such as amyloid-related imaging abnormalities (ARIA) (Sperling et al., 2012). In addition, the cost of recombinant antibody production and medical burden associated with repeated subcutaneous or intravenous bolus injections may represent significant constraints, especially in the case of preventive immunotherapy initiated years before the onset of clinical symptoms in patients predisposed to develop Alzheimer’s disease.
Therefore, alternative methods need to be developed for the continuous, long-term administration of antibodies. Here, we used an implant based on a high-capacity encapsulated cell technology (ECT) (Lathuiliere et al., 2014b). The ECT device contains myogenic cells genetically engineered for antibody production. Macromolecules can be exchanged between the implanted cells and the host tissue through a permeable polymer membrane. As the membrane shields the implanted cells from immune rejection in allogeneic conditions, it is possible to use a single donor cell source for multiple recipients. We demonstrate that anti-amyloid immunotherapy using an ECT device implanted in the subcutaneous tissue can achieve therapeutic effects inside the brain. Chronic exposure to anti-amyloid-β monoclonal antibodies produced in vivo using the ECT technology leads to a significant reduction of the amyloid brain pathology in two mouse models of Alzheimer’s disease.
Microglial phagocytosis study
The measurement of antibody-mediated amyloid-β phagocytosis was performed as proposed previously (Webster et al., 2001). In this study, we used either purified preparations of full mAb-11 IgG2a antibody, or a purified Fab antibody fragment. A suspension of 530 µM fluorescent fibrillar amyloid-β42 was prepared in 10 mM HEPES (pH 7.4) by stirring overnight at room temperature. The resulting suspension contained 30 µM fluorescein-conjugated amyloid-β42 and 500 µM unconjugated amyloid-β42 (Bachem). IgG-fibrillar amyloid-β42 immune complexes were obtained by preincubating fluorescent fibrillar amyloid-β42 at a concentration of 50 µM in phosphate-buffered saline (PBS) with various concentrations of purified mAb-11 IgG2a or Fab antibody fragment for 30 min at 37 °C. The immune complexes were washed twice by centrifugation for 5 min at 14 000 g and resuspended in the initial volume to obtain a fluorescent fibrillar amyloid-β42 solution (total amyloid-β42 concentration: 530 µM). The day before the experiment, 8 × 10^4 C8-B4 cells were plated in 24-well plates. The medium was replaced with serum-free DMEM before the addition of the peptides. The cells were incubated for 30 min with fibrillar amyloid-β42 or IgG-fibrillar amyloid-β42 added to the culture medium. Next, the cells were washed twice with Hank’s Balanced Salt Solution (HBSS) and subsequently detached by trypsinization, which also eliminates surface-bound fibrillar amyloid-β42. The cells were fixed for 10 min in 4% paraformaldehyde and finally resuspended in PBS. The cell fluorescence was determined with a flow cytometer (Accuri C6; BD Biosciences), and the data were analysed using the FlowJo software (TreeStar Inc.). To determine the effect of the anti-amyloid-β antibodies on amyloid-β phagocytosis, the concentration of fluorescent fibrillar amyloid-β42 was set at 1.5 µM, which is in the linear region of the dose-response curve depicting fibrillar amyloid-β42 phagocytosis in C8-B4 cells (Fig. 2C). All experiments were performed in duplicate.
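As a side note on the dilution arithmetic used throughout this protocol, the short sketch below applies the standard C1·V1 = C2·V2 relation to the step where the 530 µM fibrillar amyloid-β42 stock is brought to the 1.5 µM working concentration; the 500 µl well volume is an assumed example value, not taken from the paper.

# Simple dilution helper (C1*V1 = C2*V2), with an assumed 500 µl final volume.
def stock_volume_needed(stock_conc_um, final_conc_um, final_volume_ul):
    # Volume of stock (µl) required to reach the final concentration in the final volume
    return final_conc_um * final_volume_ul / stock_conc_um

print(stock_volume_needed(530, 1.5, 500))  # ≈ 1.4 µl of stock per well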
The implantation of genetically engineered cells within a retrievable subcutaneous device leads to the continuous production of monoclonal antibodies in vivo. This technology achieves steady therapeutic monoclonal antibody levels in the plasma, offering an effective alternative to bolus injections for passive immunization against chronic diseases. Peripheral delivery of anti-amyloid-β monoclonal antibody by ECT leads to a significant reduction of amyloid burden in two mouse models of Alzheimer’s disease. The effect of the ECT treatment is more pronounced when passive immunization is preventively administered in TauPS2APP mice, most notably decreasing the phospho-tau pathology.
With the recent development of biomarkers to monitor Alzheimer’s pathology, it is recognized that a steady increase in cerebral amyloid over the course of decades precedes the appearance of the first cognitive symptoms (reviewed in Sperling et al., 2011). The current consensus therefore suggests applying anti-amyloid-β immunotherapy during this long asymptomatic phase to avoid the downstream consequences of amyloid deposition and to leverage neuroprotective effects. Several preventive clinical trials have been recently initiated for Alzheimer’s disease. The Alzheimer’s Prevention Initiative (API) and the Dominantly Inherited Alzheimer Network (DIAN) will test antibody candidates in presymptomatic dominant mutation carriers, while the Anti-Amyloid treatment in the Asymptomatic Alzheimer’s disease (A4) trial enrols asymptomatic subjects after risk stratification. If individuals with a high risk of developing Alzheimer’s disease can be identified using current biomarker candidates, these patients are the most likely to benefit from chronic long-term anti-amyloid-β immunotherapy. However, such a treatment may pose a challenge to healthcare systems, as the production capacity of the antibody and its related cost would become a challenging issue (Skoldunger et al., 2012). Therefore, the development of alternative technologies to chronically administer anti-amyloid-β antibody is an important aspect for therapeutic interventions at preclinical disease stages.
Here, we show that the ECT technology for the peripheral delivery of anti-amyloid-β monoclonal antibodies can significantly reduce cerebral amyloid pathology in two mouse models of Alzheimer’s disease. The subcutaneous tissue is a site of implantation easily accessible and therefore well adapted to preventive treatment. It is, however, challenging to reach therapeutic efficacy, as only a small fraction of the produced anti-amyloid-β monoclonal antibodies are expected to cross the blood–brain barrier, although they can next persist in the brain for several months (Wang et al., 2011; Bohrmann et al., 2012). Our results are consistent with previous reports, which have shown that the systemic administration of anti-amyloid-β antibodies can decrease brain amyloid burden in preclinical Alzheimer’s disease models (Bard et al., 2000, 2003; DeMattos et al., 2001; Wilcock et al., 2004a, b; Buttini et al., 2005; Adolfsson et al., 2012).
Remarkably, striking differences exist among therapeutic anti-amyloid-β antibodies in their ability to clear already existing plaques. Soluble amyloid-β species can saturate the small fraction of pan-amyloid-β antibodies entering the CNS and inhibit further target engagement (Demattos et al., 2012). Therefore, antibodies recognizing soluble amyloid-β may fail to bind and clear insoluble amyloid deposits (Das et al., 2001; Racke et al., 2005; Levites et al., 2006; Bohrmann et al., 2012). Furthermore, antibody-amyloid-β complexes are drained towards blood vessels, promoting cerebral amyloid angiopathy (CAA) and subsequent microhaemorrhages. The mAb-11 antibody used in the present study is similar to gantenerumab, which is highly specific for amyloid plaques and reduces amyloid burden in patients with Alzheimer’s disease (Bohrmann et al., 2012; Demattos et al., 2012; Ostrowitzki et al., 2012). We find that the murine IgG2a mAb-11 antibody efficiently enhances the phagocytosis of amyloid-β fibrils by microglial cells. In addition, ECT administration of the mAb-11 F(ab’)2 fragment lacking the Fc region fails to recruit microglial cells, and leads only to a trend towards clearance of the amyloid plaques. Therefore, our results suggest a pivotal role for microglial cells in the clearance of amyloid plaques following mAb-11 delivery by ECT. Importantly, we do not find any evidence that this treatment may cause microhaemorrhages in the mouse models used in this study. It remains entirely possible that direct binding to amyloid plaques of a F(ab’)2 fragment lacking effector functionality can contribute to therapeutic efficacy, as suggested by previous studies using antibody fragments (Bacskai et al., 2002;Tamura et al., 2005; Wang et al., 2010; Cattepoel et al., 2011). However, compared to IgG2a, the lower plasma levels achieved with F(ab’)2 are likely to limit the efficacy of peripheral ECT-mediated immunization. The exact role of the effector domain and its interaction with immune cells expressing Fc receptors, could be determined by comparing the therapeutic effects of a control antibody carrying a mutated Fc portion, similar to a previous study which addressed this question using deglycosylated anti-amyloid-β antibodies (Wilcock et al., 2006a; Fuller et al., 2014).
Remarkably, continuous administration of mAb-11 initiated before plaque deposition had a dramatic effect on the amyloid pathology in TauPS2APP mice, underlining the efficacy of preventive anti-amyloid-β treatments. In this mouse model, where tau hyperphosphorylation is enhanced by amyloid-β (Grueninger et al., 2010), the treatment decreases the number of AT8- and phospho-S422-positive neurons in the hippocampus. Furthermore, the number of MC1-positive hippocampal neurons is significantly reduced, which also indicates an effect of anti-amyloid-β immunotherapy on the accumulation of misfolded tau. These results highlight the effect of amyloid-β clearance on other manifestations of the Alzheimer’s pathology. In line with these findings, previous studies have shown evidence for a decrease in tau hyperphosphorylation following immunization against amyloid-β, both in animal models and in patients with Alzheimer’s disease (Oddo et al., 2004; Wilcock et al., 2009;Boche et al., 2010; Serrano-Pozo et al., 2010; Salloway et al., 2014).
Similar to the subcutaneous injection of recombinant proteins (Schellekens, 2005), ECT implants can elicit significant immune responses against the secreted recombinant antibody. An anti-drug antibody response was detected in half of the mice treated with the mAb-11-releasing devices, in the absence of any anti-CD4 treatment. The glycosylation profile of the mAb-11 synthesized in C2C12 myoblasts is comparable to standard material produced by myeloma or HEK293 cells (Lathuiliere et al., 2014a). Although we cannot exclude that local release by ECT leads to antibody aggregation and denaturation, it is unlikely that this mode of administration further contributes to compound immunogenicity. Because the Fab regions of the chimeric recombinant mAb-11 IgG2a contain human CDRs, it remains to be determined whether the ECT-mediated delivery of antibodies fully matched with the host species would trigger an anti-drug antibody response.
Further developments will be needed to scale up this delivery system to humans. The possibility of using a single allogeneic cell source for all intended recipients is a crucial advantage of the ECT technology to standardize monoclonal antibody delivery. However, the development of renewable cell sources of human origin will be essential to ECT application in the clinic. Although the ARPE-19 cell line has been successfully adapted to ECT and used in clinical trials (Dunn et al., 1996; Zhang et al., 2011), the development of human myogenic cells (Negroni et al., 2009) is an attractive alternative that is worth exploring. Based on the PK analysis of recombinant mAb-11 antibody subcutaneously injected in mice (Supplementary material), we estimate that the flat sheet devices chronically release mAb-11 at a rate of 6.8 and 11.8 µg/h, to reach a plasma level of 50 µg/ml in the implanted animals. In humans, injected IgG1 has a longer half-life (21–25 days), with a volume of distribution of ∼100 ml/kg and an estimated clearance of 0.2 ml/h/kg. These values indicate that the predicted antibody exposure in humans, based on the rate of mAb-11 secretion achieved by ECT in mice, would be only 10 to 20-fold lower than the typical regimens based on monthly bolus injection of 1 mg/kg anti-amyloid-β monoclonal antibody. Hence, it is realistic to consider ECT for therapeutic monoclonal antibody delivery in humans, as the flat sheet device could be scaled up to contain higher amounts of cells. Furthermore, recent progress to engineer antibodies for increased penetration into the brain will enable lowering dosing of biotherapeutics to achieve therapeutic efficacy (Bien-Ly et al., 2014; Niewoehner et al., 2014). For some applications, intrathecal implantation could be preferred to chronically deliver monoclonal antibodies directly inside the CNS (Aebischer et al., 1996; Marroquin Belaunzaran et al., 2011).
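To make the scale-up estimate above concrete, here is a minimal back-of-the-envelope sketch using the figures quoted in the text (human IgG clearance of about 0.2 ml/h/kg, a device secretion rate in the range measured in mice, and a monthly 1 mg/kg bolus regimen) together with an assumed 70 kg body weight; the numbers are illustrative only and not part of the authors’ analysis.

# Steady-state exposure from continuous release vs. average exposure from a monthly bolus.
def steady_state_conc(secretion_ug_per_h, clearance_ml_per_h):
    # Zero-order (continuous) release: Css = rate / clearance
    return secretion_ug_per_h / clearance_ml_per_h

def average_conc_bolus(dose_ug, clearance_ml_per_h, interval_h):
    # Average concentration over a dosing interval: C_avg = dose / (CL * tau)
    return dose_ug / (clearance_ml_per_h * interval_h)

weight_kg = 70                              # assumed adult body weight
cl_ml_per_h = 0.2 * weight_kg               # clearance quoted above (~0.2 ml/h/kg)
ect_css = steady_state_conc(10.0, cl_ml_per_h)                          # ~10 µg/h device secretion
bolus_avg = average_conc_bolus(1000 * weight_kg, cl_ml_per_h, 30 * 24)  # monthly 1 mg/kg bolus
print(round(ect_css, 2), round(bolus_avg, 2))  # ≈ 0.71 vs ≈ 6.94 µg/ml, roughly a 10-fold gap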
Overall, ECT provides a novel approach for the local and systemic delivery of recombinant monoclonal antibodies in the CNS. It will expand the possible therapeutic options for immunotherapy against neurodegenerative disorders associated with the accumulation of misfolded proteins, including Alzheimer’s and Parkinson’s diseases, dementia with Lewy bodies, frontotemporal lobar dementia and amyotrophic lateral sclerosis (Gros-Louis et al., 2010; Bae et al., 2012; Rosenmann, 2013).
The most powerful part of the article is the methodology of the subcutaneous implant of genetically engineered cells, which appears to provide a constant stream of antibody therapy. The authors could have highlighted this more, because the protein delivery system is great work. However, it is a shame that the Alzheimer’s field is still relying on these mouse models, which appear to be leading the field down a path of failed clinical trials. Lilly has just dropped its Alzheimer’s program, and unless the field shifts to a different hypothesis of disease etiology, I feel the whole field will be stuck where it is.
REFERENCES
et al. An effector-reduced anti-beta-amyloid (Abeta) antibody with unique Abeta binding properties promotes neuroprotection and glial engulfment of Abeta. J Neurosci 2012; 32: 9677–89.
et al. Intrathecal delivery of CNTF using encapsulated genetically modified xenogeneic cells in amyotrophic lateral sclerosis patients. Nat Med 1996; 2: 696–9.
A detailed study of gold-nanoparticle loaded cells using X-ray based techniques for cell-tracking applications with single-cell sensitivity. Nanoscale 2013; 5: 3337–45.
et al. Epitope and isotype specificities of antibodies to beta-amyloid peptide for protection against Alzheimer’s disease-like neuropathology. Proc Natl Acad Sci USA 2003; 100: 2023–8.
et al. Peripherally administered antibodies against amyloid beta-peptide enter the central nervous system and reduce pathology in a mouse model of Alzheimer disease. Nat Med 2000; 6: 916–19.
et al. Reduction of aggregated Tau in neuronal processes but not in the cell bodies after Abeta42 immunisation in Alzheimer’s disease. Acta Neuropathol 2010; 120: 13–20.
et al. Gantenerumab: a novel human anti-Abeta antibody demonstrates sustained cerebral amyloid-beta binding and elicits cell-mediated removal of human amyloid-beta. J Alzheimers Dis 2012; 28: 49–69.
Chronic intranasal treatment with an anti-Abeta(30-42) scFv antibody ameliorates amyloid pathology in a transgenic mouse model of Alzheimer’s disease. PLoS ONE 2011; 6: e18296.
et al. A lentiviral vector encoding the human Wiskott-Aldrich syndrome protein corrects immune and cytoskeletal defects in WASP knockout mice. Gene Ther 2005; 12: 597–606.
Peripheral anti-A beta antibody alters CNS and plasma A beta clearance and decreases brain A beta burden in a mouse model of Alzheimer's disease. Proc Natl Acad Sci USA 2001; 98: 8850–5.
Intracerebroventricular infusion of monoclonal antibody or its derived Fab fragment against misfolded forms of SOD1 mutant delays mortality in a mouse model of ALS. J Neurochem 2010; 113: 1188–99.
et al. Genetic engineering of cell lines using lentiviral vectors to achieve antibody secretion following encapsulated implantation. Biomaterials 2014a; 35: 792–802.
A high-capacity cell macroencapsulation system supporting the long-term survival of genetically engineered allogeneic cells. Biomaterials 2014b; 35: 779–91.
et al. Intraneuronal beta-amyloid aggregates, neurodegeneration, and neuron loss in transgenic mice with five familial Alzheimer’s disease mutations: potential factors in amyloid plaque formation. J Neurosci 2006; 26: 10129–40.
et al. Exacerbation of cerebral amyloid angiopathy-associated microhemorrhage in amyloid precursor protein transgenic mice by immunotherapy is dependent on antibody recognition of deposited forms of amyloid beta. J Neurosci 2005; 25: 629–36.
et al. Human combinatorial Fab library yielding specific and functional antibodies against the human fibroblast growth factor receptor 3. J Biol Chem 2003; 278: 38194–205.
et al. Recommendations for the validation of immunoassays used for detection of host antibodies against biotechnology products. J Pharm Biomed Anal 2008; 48: 1267–81.
et al. Amyloid-related imaging abnormalities in patients with Alzheimer’s disease treated with bapineuzumab: a retrospective analysis. Lancet Neurol 2012; 11: 241–9.
et al. Toward defining the preclinical stages of Alzheimer’s disease: recommendations from the National Institute on Aging-Alzheimer’s Association workgroups on diagnostic guidelines for Alzheimer’s disease. Alzheimers Dement 2011; 7: 280–92.
Robust amyloid clearance in a mouse model of Alzheimer’s disease provides novel insights into the mechanism of amyloid-beta immunotherapy. J Neurosci 2011; 31: 4124–36.
et al. Intramuscular delivery of a single chain antibody gene prevents brain Abeta deposition and cognitive impairment in a mouse model of Alzheimer’s disease. Brain Behav Immun 2010; 24: 1281–93.
Amyloid reduction by amyloid-beta vaccination also reduces mouse tau pathology and protects from neuron loss in two mouse models of Alzheimer’s disease. J Neurosci 2009; 29: 7957–65.
et al. Passive amyloid immunotherapy clears amyloid and transiently activates microglia in a transgenic mouse model of amyloid deposition. J Neurosci 2004a; 24: 6144–51.
et al. Passive immunotherapy against Abeta in aged APP-transgenic mice reverses cognitive deficits and depletes parenchymal amyloid deposits in spite of increased vascular amyloid and microhemorrhage. J Neuroinflammation 2004b; 1: 24.
et al. Ciliary neurotrophic factor delivered by encapsulated cell intraocular implants for treatment of geographic atrophy in age-related macular degeneration. Proc Natl Acad Sci USA 2011; 108: 6241–5.
Recently a German inventor, Clemens Bimek, developed a novel, reversible, hormone-free, uncomplicated and lifelong contraceptive device for controlling male fertility. His invention, named the Bimek SLV, is essentially a valve that stops the flow of sperm through the vas deferens with the literal flip of a mechanical switch inside the scrotum, rendering its user temporarily sterile. Toggled through the skin of the scrotum, the device stays closed for three months to prevent accidental switching, and the switch cannot open on its own. The tiny valves are less than an inch long and weigh less than a tenth of an ounce. They are surgically implanted on the vas deferens, the ducts that carry sperm from the testicles, in a simple half-hour operation.
The valves are made of PEEK OPTIMA, a medical-grade polymer that has long been employed as a material for implants. The device was patented back in 2000 and is scheduled to undergo clinical trials at the beginning of this year. The inventor claims that the Bimek SLV’s efficacy is similar to that of vasectomy, that it does not impact the ability to gain and maintain an erection, and that ejaculation will be normal, only devoid of sperm cells. The valve’s design enables sperm to exit the side of the vas deferens when it is closed, so semen is not blocked; leaked sperm cells are broken down by the immune system. After the valve is switched closed, it takes about three months or 30 ejaculations before remaining sperm are cleared and the contraceptive effect is complete. After switching the sperm flow back on, the inventor suggests consulting a urologist to ensure that all blocked sperm cells have been cleared from the device. The recovery time after switching the sperm flow back on is only one day, according to Bimek SLV, although men are encouraged to wait one week before resuming sexual activities.
Before the patented technology can be brought to market, it must undergo a rigorous series of clinical trials. Bimek and his business partners are currently looking for men interested in testing the device. If the clinical trials are successful, this will be the first invention of its kind that gives men switchable control over their fertility, and because it is reversible it may well be preferred over vasectomy.
The Vibrant Philly Biotech Scene: Focus on KannaLife Sciences and the Discipline and Potential of Pharmacognosy
Curator and Interviewer: Stephen J. Williams, Ph.D.
This post is the third in a series of posts highlighting interviews with Philadelphia area biotech startup CEOs, showing how a vibrant biotech startup scene is evolving in the city as well as in the Delaware Valley area. Philadelphia has been home to some of the nation’s oldest biotechs, including Cephalon and Centocor, and to hundreds of spinouts from a multitude of universities, as well as being home of the first cloned animal (a frog), the first transgenic mouse, and Nobel laureates in the fields of molecular biology and genetics. Although some recent disheartening news about Philadelphia’s fall in the rankings of biotech hubs and recent remarks by CEOs of former area companies have dominated the news, biotech incubators like the University City Science Center and the Bucks County Biotechnology Center, as well as a reinvigorated investment community (like PCCI and MABA), are bringing Philadelphia back. And although much work is needed to return the Philadelphia area to its former glory days (including political will at the state level), there are many bright spots, such as the innovative young companies outlined in these posts.
Below is the interview with Dr. Kinney and Mr. Kikis of KannaLife Sciences, conducted by Leaders in Pharmaceutical Business Intelligence (LPBI).
PA Biotech Questions answered by Dr. William Kinney, Chief Scientific Officer of KannaLife Sciences
LPBI: Your parent company is based in New York. Why did you choose the Bucks County Pennsylvania Biotechnology Center?
Dr. Kinney: The Bucks County Pennsylvania Biotechnology Center has several aspects that were attractive to us. They have a rich talent pool of pharmaceutically trained medicinal chemists, an NIH trained CNS pharmacologist, a scientific focus on liver disease, and a premier natural product collection.
LPBI: The Blumberg Institute and Natural Products Discovery Institute have acquired a massive phytochemical library. How does this resource benefit the present and future plans for KannaLife?
Dr. Kinney: KannaLife is actively mining this collection for new sources of neuroprotective agents and is in the process of characterizing the active components of a specific biologically active plant extract. Jason Clement of the NPDI has taken a lead on these scientific studies and is on our Advisory Board.
LPBI: Did the state of Pennsylvania and local industry groups support KannaLife’s move into the Doylestown incubator?
Dr. Kinney: The move was not influenced by the state or by industry groups.
LPBI: Has the partnership with Ben Franklin Partners and the Center provided you with investment opportunities?
Dr. Kinney: Ben Franklin Partners has not yet been consulted as a source of capital.
LPBI: The discipline of pharmacognosy, although over a century old, has relied on individual investigators and mainly academic laboratories to make initial discoveries on medicinal uses of natural products. Although there have been many great successes (taxol, many antibiotics, glycosides, etc.), many big pharmaceutical companies have abandoned this strategy, considering it a slow, ineffective process. Given the access you have to the chemical library at the Bucks County Biotechnology Center and the potential you have identified for cannabinoids in diseases related to oxidative stress, how can KannaLife enhance the efficiency of finding therapeutic and potentially preventive uses for natural products?
Dr. Kinney: KannaLife has the opportunity to improve upon natural molecules that have shown medical utility but have limitations related to safety and bioavailability. By applying industry-standard medicinal chemistry optimization and assay methods, progress is being made in improving upon nature. In addition, KannaLife has access to one of the most commercially successful natural products scientists and collections in the industry.
LPBI: How does the clinical & regulatory experience in the Philadelphia area help a company like Kannalife?
Dr. Kinney: Within the region, KannaLife has access to professionals in all areas of drug development either by hiring displaced professionals or partnering with regional contract research organizations.
LPBI: You are focusing on an interesting mechanism of action (oxidative stress), and we find your direction appealing (find compounds to reverse it, determine relevant disease states {like HE}, then screen these compounds in those disease models {in hippocampal slices}). As oxidative stress is related to many diseases, are you trying to develop your natural products as preventative strategies, even though those types of clinical trials usually require massive numbers of trial participants, or are you looking to partner with a larger company to do this?
Dr. Kinney: Our strategy is to initially pursue Hepatic Encephalopathy (HE) as the lead orphan disease indication and then partner with other organizations to broaden into other areas that would benefit from a neuroprotective agent. It is expected that HE will be responsive to an acute treatment regimen. We are pursuing both natural products and new chemical entities for this development path.
General Questions answered by Thoma Kikis, Founder/CMO of KannaLife Sciences
LPBI: How did KannaLife get the patent from the National Institutes of Health?
My name is Thoma Kikis; I’m the co-founder of KannaLife Sciences. In 2010, my partner Dean Petkanas and I founded KannaLife and set course applying for the exclusive license of the ‘507 patent held by the US Government Health and Human Services and National Institutes of Health (NIH). We spent close to 2 years working on acquiring an exclusive license from NIH to commercially develop Patent 6,630,507, “Cannabinoids as Antioxidants and Neuroprotectants.” In 2012, we were granted exclusivity from NIH to develop a treatment for a disease called Hepatic Encephalopathy (HE), a brain-liver disease that stems from cirrhosis.
Cannabinoids are a class of chemical compounds found in the Cannabis plant; over 85 distinct cannabinoids have been isolated from Cannabis. The plant as a whole is a repository of chemicals, containing over 400 different compounds. We are currently working on non-psychoactive cannabinoids, with cannabidiol at the forefront.
As we started our work on HE and saw promising results in the area of neuroprotection, we sought another license from the NIH on the same patent to treat CTE (Chronic Traumatic Encephalopathy); in August of 2014 we were granted the additional license. CTE is a concussion-related traumatic brain disease with long-term effects, mostly suffered by contact-sports players, including football, hockey, soccer, lacrosse and boxing athletes, as well as active military soldiers.
To date we are the only license holders of the US Government held patent on cannabinoids.
LPBI: How long has this project been going on?
We have been working on the overall project since 2010. We first started work on early research for CTE in early-2013.
LPBI: Tell me about the project. What are the goals?
Our focus has always been on treating diseases that affect the brain. Currently we are looking for solutions in therapeutic agents designed to reduce oxidative stress and act as immuno-modulators and neuroprotectants.
KannaLife has an overall commitment to discover and understand new phytochemicals. This diversification of scientific and commercial interests strongly indicates a balanced and thoughtful approach to our goals of providing standardized, safer and more effective medicines in a socially responsible way.
Currently our research has focused on the non-psychoactive cannabinoid cannabidiol (CBD), exploring its appropriate uses and limitations and improving its safety and metered dosing. CBD has a limited therapeutic window and poor bioavailability upon oral dosing, making delivery of a consistent therapeutic dose challenging. We are also developing new CBD-like molecules to overcome these limitations and evaluating new phytochemicals from non-regulated plants.
KannaLife’s research is led by experienced, pharmaceutically trained professionals. Our scientific team out of the Pennsylvania Biotechnology Center is led by Dr. William Kinney and Dr. Douglas Brenneman, both with decades of experience in pharmaceutical R&D.
LPBI: How do cannabinoids help neurological damage? -What sort of neurological damage do they help?
So far our pre-clinical results show that cannabidiol is a good candidate as a neuroprotectant as the patent attests to. Our current studies have been to protect neuronal cells from toxicity. For HE we have been looking specifically at ammonia and ethanol toxicity.
LPBI: How did it go from treating general neurological damage to treating CTE? Is there any proof yet that cannabinoids can help prevent CTE? What proof?
We started examining toxicity first with ammonia and ethanol in HE and then posed the question; If CBD is a neuroprotectant against toxicity then we need to examine what it can do for other toxins. We looked at CTE and the toxin that causes it, tau. We just acquired the license in August from the NIH for CTE and are beginning our pre-clinical work in the area of CTE now with Dr. Ron Tuma and Dr. Sara Jane Ward at Temple University in Philadelphia.
LPBI: How long until a treatment could be ready? What’s the timeline?
We will have research findings in the coming year. We plan on filing an IND (Investigational New Drug application) with the FDA for CBD and our molecules in 2015 for HE and file for CTE once our studies are done.
LPBI: What other groups are you working with regarding CTE?
We are getting good support from former NFL players who want solutions to the problem of concussions and CTE. This is a very frightening topic for many players, especially with the controversy and lawsuits surrounding it. I have personally spoken to several former NFL players, some who have CTE and many are frightened at what the future holds.
We enrolled a former player, Marvin Washington. Marvin was an 11-year NFL vet with the NY Jets and SF 49ers, and won a Super Bowl with the 1998 Denver Broncos. He has been leading the charge on KannaLife’s behalf to raise awareness of a potential solution for CTE.
We tried approaching the NFL in 2013 but they didn’t want to meet. I can understand that they don’t want to take a position. But ultimately, they’re going to have to make a decision and look into different research to treat concussions. They have already given the NIH $30 Million for research into football related injuries and we hold a license with the NIH, so we wanted to have a discussion. But currently cannabinoids are part of their substance abuse policy connected to marijuana. Our message to the NFL is that they need to lead the science, not follow it.
Can you imagine the NFL’s stance on marijuana treating concussions and CTE? These are topics they don’t want to touch but will have to at some point.
LPBI: Thank you both Dr. Kinney and Mr. Kikis.
Please look for future posts in this series on the Philly Biotech Scene on this site
Also, if you would like your Philadelphia biotech startup to be highlighted in this series please contact me or
Can Mobile Health Apps Improve Oral-Chemotherapy Adherence? The Benefit of Gamification.
A report on how gamification mobile applications, like CyberDoctor’s PatientPartner, may improve patient adherence to oral chemotherapy.
(includes interviews with CyberDoctor’s CEO Akhila Satish and various oncologists)
Writer/Curator: Stephen J. Williams, Ph.D.
UPDATE 5/15/2019
Please see below for an UPDATE on this post, including results from the poll conducted here on the value of a gamification strategy for oral-chemotherapy patient adherence, as well as a paper describing the well-designed development of an application specifically addressing this clinical problem.
Studies have pointed to a growing need to monitor and improve medical adherence, especially with outpatient prescription drugs across many diseases, including cancer.
The trend to develop oral chemotherapies, so that patients can take their medications in the convenience of their own home, has produced a unique problem concerning cancer patient medication adherence. Traditionally, chemotherapies were administered by a parenteral (for example, intravenous) route by clinic staff; however, as noted by Jennifer M. Gangloff in her article Troubling Trend: Medication Adherence:
with the trend of cancer patients taking their oral medication at home, the burden of adherence has shifted from clinicians to the patients and their families.
A few highlights from Jennifer Gangloff’s article highlight the degree and scope of the problem:
There is a wide range of adherence for oral chemotherapy: rates from as low as 16% up to 100% have been seen in multiple studies
High cost in lives and money: US estimates of 125,000 deaths and $300 billion in healthcare costs due to medication nonadherence, including nonadherence to oral anticancer medications
Factors not related to the patient can contribute to nonadherence including lack of information provided by the healthcare system and socioeconomic factors
Numerous methods to improve adherence issues (hospital informative seminars, talking pill bottles, reminder phone calls etc.) have met with mixed results.
More strikingly, patient adherence rates can drastically decline over treatment, with one study showing an adherence rate drop from 87% to 50% over 4 years of adjuvant tamoxifen therapy.
Tackling The Oral Chemotherapy-Patient Adherence Problem
Documented factors leading to non-adherence to oral oncology medications include
Patient feels better so stops taking the drug
Patient feels worse so stops taking the drug
Confusing and complicated dosing regimen
Inability to afford medications
Poor provider-patient relationships
Adverse effects of medication
Cognitive impairment (“chemo fog,” mental impairment due to chemotherapy)
Inadequate education/instruction at discharge
There are many examples of each reason why a patient stopped taking medication. One patient was prescribed capecitabine for her metastatic breast cancer and, upon feeling nausea, started to use antacids, which precipitated toxicities as a result of increased plasma levels of capecitabine.
This review also documented the difficulties in accurately measuring patient adherence including:
Inaccuracy of self-reporting
Lack of applicability of external measurements such as pill counts
Hawthorne effect, i.e., the act of documenting pills reminds patients to take the next dose
The group suggests that using medication therapy management (MTM) programs, especially telephony systems involving oncology nurses and pharmacists, and utilizing:
Therapy support (dosing reminders)
Education
Side effect management
may be a cost-efficient methodology to improve medical adherence.
Although nurses are an important intermediary in educating patients about their oral chemotherapies, it does not appear that relying solely on nurses to monitor patient adherence will be sufficient, as indicated in a survey-based Japanese study.
Survey results indicated that 90% of nurses reported asking patients on oral chemotherapy about emergency contacts, side effects, and family/friend support. Nurses also provided patients with education materials on their assigned medication.
However, less than one-third of nurses asked if their patients felt confident about managing their oral chemotherapy.
“Nurses were less likely to ask adherence-related questions of patients with refilled prescriptions than of new patients,” the researchers wrote. “Regarding unused doses of anticancer agents, 35.5% of nurses reported that they did not confirm the number of unused doses when patients had refilled prescriptions.”
From the Roswell Park Cancer Institute blog post Making Mobile Health Work
US physicians are recognizing the need to adopt mobile tools in their practice, but the choice of apps and mobile strategies must be carefully examined before implementation. In addition, most physicians are using mobile communications as a free, complimentary service, and these physicians are not being reimbursed for their time.
Some companies are providing their own oncology-related mobile app services:
San Francisco, August 13, 2013 – CollabRx, Inc. (NASDAQ: CLRX), a healthcare information technology company focused on informing clinical decision making in molecular medicine, today announced a multi-year agreement with Everyday Health’s MedPage Today. The forthcoming app, which will target oncologists and pathologists, will focus on the molecular aspects of laboratory testing and therapy development. Over time, the expectation is that this app will serve as a comprehensive point of care resource for physicians and patients to obtain highly credible, expert-vetted and dynamically updated information to guide cancer treatment planning.
The McKesson Foundation’s Mobilizing for Health initiative
has awarded a grant to Partners HealthCare’s Center for Connected Health to develop a mobile health program that uses a smartphone application to help patients with cancer adhere to oral chemotherapy treatments and monitor their symptoms, FierceMobileHealthcare reports.
CancerNet announces mobile application (from cancer.net)
The report suggests that there are too many apps either offering information, suggesting behavior/lifestyle changes, or measuring compliance data, but little evidence that any of these are working the way they were intended. The article suggests the plethora of apps may just be adding to the confusion.
MyCyberDoctor™, a True Gamification App, Shows Great Results in Improving Diabetic Patients’ Medication Adherence and Health Outcomes
Most of the mobile health apps discussed above would be classified as tracking apps, because the applications simply record a patient’s actions, whether filling a prescription, interacting with a doctor, nurse, or pharmacist, or going to a website to gain information. However, as discussed before, there is no hard evidence that this is really impacting health outcomes.
Another type of application, termed gamification apps, rely on role-playing by the patient to affect patient learning and ultimately behavior.
An interesting twist on this method was designed by Akhila Satish, CEO and developer of CyberDoctor and its complementary application PatientPartner.
As reported here, the PatientPartner application was used in the first IRB-approved mHealth clinical trial to see if a gamification app could improve medication adherence and outcomes in diabetic patients. PatientPartner is a story-driven game aimed at changing health behavior and biomarkers (blood glucose levels in this trial). In the clinical trial, 100 non-adherent patients with diabetes played the PatientPartner game for 15 minutes. Results were striking: the trial demonstrated an increase in patient adherence after only 15 minutes of game play.
Results from the study
Patients with diabetes who used PatientPartner showed significant improvement in three key areas – medication, diet, and exercise:
Medication adherence increased by 37%, from 58% to 95% – equivalent to three additional days of medication adherence per week.
Diet adherence increased by 24% – equivalent to two days of additional adherence a week.
Exercise adherence increased by 14% – equivalent to one additional day of adherence per week.
HbA1c (a blood sugar measure) decreased from 10.7% to 9.7%.
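As a quick sanity check on the day-per-week figures quoted above, the reported percentage-point increases can be converted into additional adherent days per week simply by multiplying by seven. The short Python sketch below reproduces that arithmetic; the adherence figures are taken from the list above, while the conversion itself is only an illustration.

```python
# Convert the reported adherence increases (percentage points) into approximate
# additional adherent days per week, using the figures quoted in the study summary.
gains = {"medication": 0.37, "diet": 0.24, "exercise": 0.14}  # fractional increases

for behavior, gain in gains.items():
    extra_days = gain * 7  # 7 days in a week
    print(f"{behavior}: +{gain:.0%} adherence ≈ {extra_days:.1f} extra days per week")

# Rounded output: medication ≈ 2.6, diet ≈ 1.7, exercise ≈ 1.0 extra days per week,
# consistent with the "three", "two" and "one" additional days quoted above.
```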
As mentioned in the article:
The unique, universal, non-disease specific approach allows PatientPartner to be effective in improving adherence in all patient populations.
PatientPartner is available in the iTunes store and works on the iPhone and iPod Touch. For information on PatientPartner, visit www.mypatientpartner.com.
Ms. Satish, who was named one of the top female CEOs at the Health Conference, graciously offered to answer a few questions for Leaders in Pharmaceutical Business Intelligence (LPBI) on the feasibility of using such a game (role-playing) application to improve medication adherence in the oncology field.
LPBI: The results you had obtained with patient-compliance in the area of diabetes are compelling and the clinical trial well-designed. In the oncology field, due to the increase in use of oral chemotherapeutics, patient-compliance has become a huge issue. Other than diabetes, are there plans for MyCyberDoctor and PatientPartner to be used in other therapeutic areas to assist with patient-compliance and patient-physician relations?
Ms. Satish: Absolutely! We tested the application in diabetes because we wanted to measure adherence from an objective blood marker (hbA1c). However, the method behind PatientPartner- teaching patients how to make healthy choices- is universal and applicable across therapeutic areas.
LPBI: Recently, there has been a plethora of apps developed which claim to impact patient compliance and provide information. Some of these apps have been niche (for example, only providing prescription information tied to pharmacy records and company databases). Your app seems to be the only one with robust clinical data behind it, and it approaches the problem from a different angle, namely adjusting behavior through a gamifying experience and teaching the patient the importance of compliance. How do you feel this approach, geared more toward patient education, sets PatientPartner apart from other compliance-based apps?
Ms. Satish: PatientPartner really focuses on the how of patient decision making, rather than the specifics of each decision that is made. It’s a unique approach, and part of the reason PatientPartner works so effectively with such a short initial intervention! We are able to achieve more with less “app” time as a result of this method.
LPBI: There have been multiple studies attempting to correlate patient adherence, decision-making, and health outcome to socioeconomic status. In some circumstances there is a socioeconomic correlation while other cases such as patient-decision to undergo genetic testing or compliance to breast cancer treatment in rural areas, level of patient education may play a bigger role. Do you have data from your diabetes trial which would suggest any differences in patient adherence, outcome to any socioeconomic status? Do you feel use of PatientPartner would break any socioeconomic barriers to full patient adherence?
Ms. Satish: Within our trial, we had several different clinical sites. This helped us test the product out in a broad, socioeconomically diverse population. It is our hope that with a tool as easy to scale and use as PatientPartner we have the opportunity to see the product used widely, even in populations that are traditionally harder to reach.
LPBI: There has been a big push for the development of individual, personalized physician networks which use the internet as the primary point of contact between a primary physician and the patient. Individuals may sign up to these networks bypassing the traditional insurance-based networks. How would your application assist in these types of personalized networks?
Ms. Satish: PatientPartner can easily be plugged into any existing framework of communication between patient and provider. We facilitate patient awareness, engagement and accountability- all of which are important regardless of the network structure.
LPBI: Thank you, Akhila!
A debate has begun about regulating mobile health applications; although that will be the subject of another post, I would just like to summarize a nice article in the May 2014 Oncology Times by Sarah Digiulo, “Mobile Health Apps: Should They Be Regulated?”
In general, in the US there are HIPAA regulations about the dissemination of health related information between a patient and physician. Most of the concerns are related to personal health information made public in an open-access platform such as Twitter or Facebook.
In addition, according to Dr. Don Dizon M.D., Director of the Oncology Sexual Health Clinic at Massachusetts General Hospital, it may be more difficult to design applications directed against a vast, complex disease like cancer with its multiple subtypes than for diabetes.
Mobile Health Applications on Rise in Developing World: Worldwide Opportunity
According to International Telecommunication Union (ITU) statistics, world-wide mobile phone use has expanded tremendously in the past 5 years, reaching almost 6 billion subscriptions. By the end of this year it is estimated that over 95% of the world’s population will have access to mobile phones/devices, including smartphones.
This presents a tremendous and cost-effective opportunity in developing countries, and especially rural areas, for physicians to reach patients using mHealth platforms.
Drs. Clara Aranda-Jan, Neo Mohutsiwa-Dibe, and Svetla Loukanova conducted a systematic review of the literature on mHealth projects in Africa [1] to assess the reliability of mobile phones and applications in assisting patient-physician relationships and health outcomes. The authors reviewed forty-four studies on mHealth projects in Africa, determining their:
strengths
weaknesses
opportunities
threats
to patient outcomes using these mHealth projects. In general, the authors found that mHealth projects were beneficial for health-related outcomes and that their success was related to
accessibility
acceptance and low-cost
adaptation to local culture
government involvement
while threats to such projects could include
lack of funding
unreliable infrastructure
unclear healthcare system responsibilities
Dr. Sreedhar Tirunagari, an oncologist in India, agrees that mHealth, and especially gamification applications, could greatly foster better patient education and adherence, although he notes that mHealth applications are not widely used in India and may not be of much use for oncology patients living in rural areas, as cell phone use there is not as prevalent as in the bigger cities such as Delhi and Calcutta.
1) Do you see a use for such apps, which either track drug compliance or use gamification systems to teach patients the importance of continuing their full schedule of drug therapy?
2) Do you feel that patient drug-compliance issues in oncology practice are due to a lack of information available to the patient or to issues related to drug side effects?
“I think that apps could help in this setting; we are in the informatics era, but the main question is that chronic patients are special ones. Cancer patients have to deal with prognosis, even in therapies with curative intent. Aromatase inhibitors, for example, are potent drugs that can cure, but only in the future will the patient know; meanwhile he or she has to deal with side effects every day. A PC can help, but suffering these symptoms is a real problem, believe me!”
“The main app is his/her doctor”
I would like to invite all oncologists to answer the poll question ABOVE about the use of such gamification apps, like PatientPartner, for improving medical adherence to oral chemotherapy.
UPDATE 5/15/2019
The results of the above poll, although limited, revealed some interesting insights. Although only five oncologists answered the poll on whether they felt gamification applications could help with oral-chemotherapy patient adherence, all agreed it would be worthwhile to develop apps based on gamification to assist in the outpatient setting. In addition, one oncologist felt that the success of a mobile patient-adherence application would depend on the type of cancer. None of the oncologists who answered the survey thought that gamification apps would have no positive effect on patient adherence to chemotherapy. With this in mind, a recent paper by Joel Fishbein of the University of Colorado and Joseph Greer of Massachusetts General Hospital describes the development of a mobile application, now in a clinical trial, to promote patient adherence to oral chemotherapy.
Mobile Applications to Promote Adherence to Oral Chemotherapy and Symptom Management: A Protocol for Design and Development
Oral chemotherapy is increasingly used in place of traditional intravenous chemotherapy to treat patients with cancer. While oral chemotherapy includes benefits such as ease of administration, convenience, and minimization of invasive infusions, patients receive less oversight, support, and symptom monitoring from clinicians. Additionally, adherence is a well-documented challenge for patients with cancer prescribed oral chemotherapy regimens. With the ever-growing presence of smartphones and potential for efficacious behavioral intervention technology, we created a mobile health intervention for medication and symptom management.
OBJECTIVE:
The objective of this study was to develop and evaluate the usability and acceptability of a smartphone app to support adherence to oral chemotherapy and symptom management in patients with cancer.
METHODS:
We used a 5-step development model to create a comprehensive mobile app with theoretically informed content. The research and technical development team worked together to develop and iteratively test the app. In addition to the research team, key stakeholders including patients and family members, oncology clinicians, health care representatives, and practice administrators contributed to the content refinement of the intervention. Patient and family members also participated in alpha and beta testing of the final prototype to assess usability and acceptability before we began the randomized controlled trial.
RESULTS:
We incorporated app components based on the stakeholder feedback we received in focus groups and alpha and beta testing. App components included medication reminders, self-reporting of medication adherence and symptoms, an education library including nutritional information, Fitbit integration, social networking resources, and individually tailored symptom management feedback. We are conducting a randomized controlled trial to determine the effectiveness of the app in improving adherence to oral chemotherapy, quality of life, and burden of symptoms and side effects. At every stage in this trial, we are engaging stakeholders to solicit feedback on our progress and next steps.
CONCLUSIONS:
To our knowledge, we are the first to describe the development of an app designed for people taking oral chemotherapy. The app addresses many concerns with oral chemotherapy, such as medication adherence and symptom management. Soliciting feedback from stakeholders with broad perspectives and expertise ensured that the app was acceptable and potentially beneficial for patients, caregivers, and clinicians. In our development process, we instantiated 7 of the 8 best practices proposed in a recent review of mobile health app development. Our process demonstrated the importance of effective communication between research groups and technical teams, as well as meticulous planning of technical specifications before development begins. Future efforts should consider incorporating other proven strategies in software, such as gamification, to bolster the impact of mobile health apps. Forthcoming results from our randomized controlled trial will provide key data on the effectiveness of this app in improving medication adherence and symptom management.
In this paper, Fishbein et al. describe the methodology of the development of a mobile application to promote oral-chemotherapy adherence. This mobile app intervention was named CORA, or ChemOtheRapyAssistant.
Of the approximately 325,000 health-related apps on the market (as of 2017), the US Food and Drug Administration (FDA) has reviewed only approximately 20 per year and, as of 2016, had cleared only about 36 health-related apps.
According to industry estimates, 500 million smartphone users worldwide will be using a health care application by 2015, and by 2018, 50 percent of the more than 3.4 billion smartphone and tablet users will have downloaded mobile health applications. However, there is not much scientific literature providing a framework for design and creation of quality health related mobile applications.
Methods
The investigators separated the app development into two phases: Phase 1 consisted of the mobile application development process and initial results of alpha and beta testing to determine acceptability among the major stakeholders including patients, caregivers, oncologists, nurses, pharmacists, pharmacologists, health payers, and patient advocates. Phase 1 methodology and results were the main focus of this paper. Phase 2 consists of an ongoing clinical trial to determine efficacy and reliability of the application in a larger number of patients at different treatment sites and among differing tumor types.
The 5 step development process in phase 1 consisted of identifying features, content, and functionality of a mobile app in an iterative process, including expert collaboration and theoretical framework to guide initial development.
There were two distinct teams: a research team and a technical team. The multidisciplinary research team consisted of the principal investigator, co-investigators (experts in oncology, psychology and psychiatry), a project director, and 3 research assistants.
The technical team consisted of programmers and project managers at Partners HealthCare Connected Health. Stakeholders, including oncologists, health care representatives, practice administrators, patients, and family members (caregivers), served as expert consultants. All were given questionnaires (HIPAA compliant), and all were involved in alpha and beta testing of the product.
There were 5 steps in the development process
Implementing a theoretical framework: Patients and their family caregivers now bear the primary responsibility for medication adherence, especially to oral chemotherapy, which is now more frequently administered in the home rather than in the clinical setting. Four factors were identified as the most important barriers to oral-chemotherapy adherence: complexity of medication regimens, symptom burden, poor self-management of side effects, and low clinical support. These four factors were integral to the design of the mobile app and made up the conceptual framework for its design.
Conducting Initial Focus Group Interviews with key stakeholders: Stakeholders were drawn from within and outside the local community. In all, 32 stakeholders served as study collaborators, including 8 patients/family members, 8 oncologists/clinicians, 8 cancer practice administrators, and 8 representatives of the health system, community, and overall society. The goal of these focus groups was to obtain feedback on the proposed study and design, including the perceived importance of monitoring adherence to oral chemotherapy, barriers to communication between patients and oncology teams regarding side effects and medication adherence, the potential role of mobile apps in addressing barriers to quality cancer care, potential feasibility, acceptability, and usage, and feedback on the overall study design.
Creation of Wireframes (like storyboards or page designs) and Collecting Initial Feedback: The research and design team, in conjunction with stakeholder input, created content wireframes (or screen blueprints) to provide a visual guide to what the app would look like. These wireframes also served as the basis for what the patient interviews would look like on the application. A total of 10 MGH (Massachusetts General Hospital) patients (6 female, 4 male), most with higher education (BS or higher), participated in the interviews and the design of the wireframes. Eight MGH clinicians also participated in this phase of wireframe design.
Developing, Programming, and Refining the App: CORA was designed to be supported by PHP/MySQL databases, to run on LAMP hosts (Linux, Apache, MySQL, Perl/PHP/Python), and to be fully HIPAA compliant. Alpha testing was conducted with the various stakeholders, and the app was refined by the development (technical) team after feedback.
Final beta testing and App prototype for clinical trial: The research team considered the first 5 participants enrolled in the subsequent clinical trial for finalization of the app prototype.
There were 7 updated versions of the app during the initial clinical trial phase and 4 updates addressed technical issues related to smartphone operating system upgrades.
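The paper describes CORA’s PHP/MySQL back end only at the architectural level, so the actual schema is not published. Purely as an illustration of the kind of adherence and symptom self-report logging described in the development steps above, here is a minimal sketch using Python’s standard-library sqlite3; the table and column names are our own assumptions, not CORA’s implementation.

```python
# Hypothetical sketch of an adherence/symptom self-report log, loosely inspired by the
# data CORA is described as collecting. Table and field names are illustrative only.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")  # a real deployment would use a persistent, secured database
conn.execute("""
    CREATE TABLE self_report (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        patient_id TEXT NOT NULL,      -- de-identified patient code
        reported_at TEXT NOT NULL,     -- ISO-8601 timestamp
        dose_taken INTEGER NOT NULL,   -- 1 = taken, 0 = missed
        symptom TEXT,                  -- e.g. 'nausea'; NULL if none reported
        symptom_severity INTEGER       -- self-rated 0-10 scale
    )
""")

def log_report(patient_id, dose_taken, symptom=None, severity=None):
    """Insert one daily self-report row."""
    conn.execute(
        "INSERT INTO self_report (patient_id, reported_at, dose_taken, symptom, symptom_severity) "
        "VALUES (?, ?, ?, ?, ?)",
        (patient_id, datetime.now(timezone.utc).isoformat(), int(dose_taken), symptom, severity),
    )
    conn.commit()

log_report("pt-001", True)
log_report("pt-001", False, symptom="nausea", severity=6)
adherence = conn.execute(
    "SELECT AVG(dose_taken) FROM self_report WHERE patient_id = ?", ("pt-001",)
).fetchone()[0]
print(f"adherence so far: {adherence:.0%}")
```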
Finally, the investigators list a few limitations in the design and study of this application. First, the patient population was homogeneous, as all patients were from an academic hospital setting. Second, most of the patients were of Caucasian ethnic background and most were highly educated, all of which may introduce study bias. In addition, CORA was available on smartphone and tablet only, so patients who either have no access to these devices or are not technically savvy may experience issues related to this limitation.
In addition other articles on this site related to Mobile Health applications and Health Outcomes include
Aranda-Jan CB, Mohutsiwa-Dibe N, Loukanova S: Systematic review on what works, what does not work and why of implementation of mobile health (mHealth) projects in Africa. BMC public health 2014, 14:188.
Summary of Translational Medicine – e-Series A: Cardiovascular Diseases, Volume Four – Part 1
Author and Curator: Larry H Bernstein, MD, FCAP
and
Curator: Aviva Lev-Ari, PhD, RN
Part 1 of Volume 4 in the e-series A: Cardiovascular Diseases and Translational Medicine provides a foundation for grasping a rapidly developing scientific endeavor that is transcending laboratory hypothesis testing and providing guidelines to:
Target genomes and multiple nucleotide sequences involved in either coding or in regulation that might have an impact on complex diseases, not necessarily genetic in nature.
Target signaling pathways that are demonstrably maladjusted, activated or suppressed in many common and complex diseases, or in their progression.
Enable a reduction in failure due to toxicities in the later stages of clinical drug trials as a result of this science-based understanding.
Enable a reduction in complications from the improvement of mechanical devices that have already had an impact on the practice of interventional procedures in cardiology, cardiac surgery, and radiological imaging, as well as improving laboratory diagnostics at the molecular level.
Enable the discovery of new drugs in the continuing emergence of drug resistance.
Enable the construction of critical pathways and better guidelines for patient management based on population outcomes data, which will be critically dependent on computational methods and large databases.
What has been presented can be essentially viewed in the following Table:
Summary Table for TM – Part 1
There are some developments that deserve further elaboration:
1. The importance of mitochondrial function, and of the activity state of the mitochondria, in cellular work (combustion) is understood, and impairments of that function have been identified in diseases of muscle, cardiac contraction, nerve conduction, ion transport, water balance, and the cytoskeleton, beyond the disordered metabolism seen in cancer. A more detailed explanation of the energetics elucidated from the electron transport chain might also be in order.
2. The processes that are enabling a fuller application of technology, both to a host of problems in the environment we live in and to disease modification, are growing rapidly and will change the face of medicine and its allied health sciences.
Electron Transport and Bioenergetics
Deferred for metabolomics topic
Synthetic Biology
Introduction to Synthetic Biology and Metabolic Engineering
http://www.ibiology.org. Lecturers generously donate their time to prepare these lectures. The project is funded by NSF and NIGMS, and is supported by the ASCB and HHMI.
Prather 1: Synthetic Biology and Metabolic Engineering (2/6/14)
Lecture Overview: In the first part of her lecture, Dr. Prather explains that synthetic biology involves applying engineering principles to biological systems to build “biological machines”. The key material in building these machines is synthetic DNA. Synthetic DNA can be added in different combinations to biological hosts, such as bacteria, turning them into chemical factories that can produce small molecules of choice. In Part 2, Prather describes how her lab used design principles to engineer E. coli that produce glucaric acid from glucose. Glucaric acid is not naturally produced in bacteria, so Prather and her colleagues “bioprospected” enzymes from other organisms and expressed them in E. coli to build the needed enzymatic pathway. Prather walks us through the many steps of optimizing the timing, localization and levels of enzyme expression to produce the greatest yield.
Speaker Bio: Kristala Jones Prather received her S.B. degree from the Massachusetts Institute of Technology and her PhD from the University of California, Berkeley, both in chemical engineering. Upon graduation, Prather joined the Merck Research Labs for 4 years before returning to academia. Prather is now an Associate Professor of Chemical Engineering at MIT and an investigator with the multi-university Synthetic Biology Engineering Research Center (SynBERC). Her lab designs and constructs novel synthetic pathways in microorganisms, converting them into tiny factories for the production of small molecules. Dr. Prather has received numerous awards both for her innovative research and for excellence in teaching.
Calcium Cycling in Synthetic and Contractile Phasic or Tonic Vascular Smooth Muscle Cells
in INTECH
Current Basic and Pathological Approaches to the Function of Muscle Cells and Tissues – From Molecules to Humans
Larissa Lipskaia, Isabelle Limon, Regis Bobe and Roger Hajjar
Additional information is available at the end of the chapter http://dx.doi.org/10.5772/48240
1. Introduction
Calcium ions (Ca2+) are present in low concentrations in the cytosol (~100 nM) and in high concentrations (in the mM range) in both the extracellular medium and the intracellular stores (mainly the sarco/endoplasmic reticulum, SR). This differential allows the calcium ion to act as a messenger carrying information for processes as diverse as contraction, metabolism, apoptosis, proliferation and/or hypertrophic growth. The mechanisms responsible for generating a Ca2+ signal differ greatly from one cell type to another.
In the different types of vascular smooth muscle cells (VSMC), enormous variations exist with regard to the mechanisms responsible for generating the Ca2+ signal. In each VSMC phenotype (synthetic/proliferating or contractile [1], tonic or phasic), the Ca2+ signaling system is adapted to its particular function and depends on the specific patterns of expression and regulation of the Ca2+ handling proteins.
For instance, in contractile VSMCs, the initiation of contractile events is driven by membrane depolarization, and the principal entry point for extracellular Ca2+ is the voltage-operated L-type calcium channel (LTCC). In contrast, in synthetic/proliferating VSMCs, the principal way in for extracellular Ca2+ is the store-operated calcium (SOC) channel.
Whatever the cell type, the calcium signal consists of elevations of cytosolic free calcium ions that are limited in time and space. The calcium pump, the sarco/endoplasmic reticulum Ca2+ ATPase (SERCA), has a critical role in determining the frequency of SR Ca2+ release by re-uploading cytosolic Ca2+ into the sarcoplasmic reticulum, thereby setting the sensitivity of the SR calcium channels, the ryanodine receptor (RyR) and the inositol trisphosphate receptor (IP3R).
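To make the cycling described above a bit more concrete, the following is a minimal toy model, not taken from the chapter, in which an IP3R/RyR-like release flux (with a weak Ca2+-induced-release term) is balanced by SERCA re-uptake. All rate constants and concentrations are invented for illustration; the point is simply that the pump keeps the cytosolic Ca2+ concentration low while the SR stays loaded.

```python
# Minimal toy sketch (not from the chapter) of Ca2+ cycling between cytosol and SR.
# Release (leak + weak Ca2+-induced release) is balanced by SERCA re-uptake.
# All parameter values are illustrative assumptions, not measured quantities.
import numpy as np

def simulate(t_end=120.0, dt=1e-3):
    c, s = 0.1, 100.0                            # cytosolic and SR [Ca2+], in uM (assumed)
    k_leak, k_cicr, K_cicr = 0.005, 0.02, 0.3    # hypothetical release constants (1/s, 1/s, uM)
    v_serca, K_serca = 5.0, 0.5                  # hypothetical SERCA parameters (uM/s, uM)
    c_trace = []
    for _ in range(int(t_end / dt)):
        release = (k_leak + k_cicr * c**2 / (c**2 + K_cicr**2)) * s  # SR -> cytosol
        uptake = v_serca * c / (c + K_serca)                         # cytosol -> SR (SERCA)
        c += dt * (release - uptake)
        s += dt * (uptake - release)
        c_trace.append(c)
    return np.array(c_trace), s

if __name__ == "__main__":
    trace, sr_final = simulate()
    print(f"cytosolic Ca2+ settles near {trace[-1]:.3f} uM; SR retains ~{sr_final:.1f} uM")
```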
Synthetic VSMCs have a fibroblast appearance, proliferate readily, and synthesize increased levels of various extracellular matrix components, particularly fibronectin, collagen types I and III, and tropoelastin [1].
Contractile VSMCs have a muscle-like or spindle-shaped appearance and well-developed contractile apparatus resulting from the expression and intracellular accumulation of thick and thin muscle filaments [1].
Figure 1. Schematic representation of Calcium Cycling in Contractile and Proliferating VSMCs.
Left panel: schematic representation of calcium cycling in quiescent/contractile VSMCs. The contractile response is initiated by extracellular Ca2+ influx due to activation of receptor-operated Ca2+ channels (through a phosphoinositol-coupled receptor) or activation of L-type calcium channels (through an increase in luminal pressure). A small increase in cytosolic Ca2+, due to IP3 binding to IP3R (a “puff”) or to RyR activation by LTCC- or ROC-dependent Ca2+ influx, leads to a large SR Ca2+ release through IP3R or RyR clusters (“Ca2+-induced Ca2+ release”). Cytosolic Ca2+ is then re-sequestered by the SR calcium pumps (both SERCA2a and SERCA2b are expressed in quiescent VSMCs), maintaining a high concentration of luminal (SR) Ca2+ and setting the sensitivity of RyR or IP3R for the next spike.
Contraction of VSMCs occurs during the oscillatory Ca2+ transient.
Middle panel: schematic representation of the atherosclerotic vessel wall. Contractile VSMCs are located in the media layer; synthetic VSMCs are located in the sub-endothelial intima.
Right panel: schematic representation of calcium cycling in synthetic/proliferating VSMCs. Agonist binding to a phosphoinositol-coupled receptor leads to activation of IP3R, resulting in a large increase in cytosolic Ca2+, which is removed by the SR calcium pumps (only SERCA2b, which has a low turnover and low affinity for Ca2+, is expressed in proliferating VSMCs). SR Ca2+ depletion leads to translocation of the SR Ca2+ sensor STIM1 towards the plasma membrane, resulting in extracellular Ca2+ influx through opening of the store-operated channel (CRAC). The resulting steady-state Ca2+ transient is critical for activation of proliferation-related transcription factors (NFAT).
Abbreviations: PLC – phospholipase C; PM – plasma membrane; PP2B – Ca2+/calmodulin-activated protein phosphatase 2B (calcineurin); ROC – receptor-operated channel; IP3 – inositol-1,4,5-trisphosphate; IP3R – inositol-1,4,5-trisphosphate receptor; RyR – ryanodine receptor; NFAT – nuclear factor of activated T-lymphocytes; VSMC – vascular smooth muscle cells; SERCA – sarco(endo)plasmic reticulum Ca2+ ATPase; SR – sarcoplasmic reticulum.
Time for New DNA Synthesis and Sequencing Cost Curves
By Rob Carlson
I’ll start with the productivity plot, as this one isn’t new. For a discussion of the substantial performance increase in sequencing compared to Moore’s Law, as well as the difficulty of finding this data, please see this post. If nothing else, keep two features of the plot in mind: 1) the consistency of the pace of Moore’s Law, and 2) the inconsistency of the pace of sequencing productivity. Illumina appears to be the primary driver, and beneficiary, of improvements in productivity at the moment, especially if you are looking at share prices. It looks like the recently announced NextSeq and HiSeq instruments will provide substantially higher productivities (hand waving, I would say the next datum will come in another order of magnitude higher), but I think I need a bit more data before officially putting another point on the plot.
Figure: Cost of oligo and gene synthesis.
Illumina’s instruments are now responsible for such a high percentage of sequencing output that the company is effectively setting prices for the entire industry. Illumina is being pushed by competition to increase performance, but this does not necessarily translate into lower prices. It doesn’t behoove Illumina to drop prices at this point, and we won’t see any substantial decrease until a serious competitor shows up and starts threatening Illumina’s market share. The absence of real competition is the primary reason sequencing prices have flattened out over the last couple of data points.
Note that the oligo prices above are for column-based synthesis, and that oligos synthesized on arrays are much less expensive. However, array synthesis comes with the usual caveat that the quality is generally lower, unless you are getting your DNA from Agilent, which probably means you are getting your dsDNA from Gen9.
Note also that the distinction between the price of oligos and the price of double-stranded sDNA is becoming less useful. Whether you are ordering from Life/Thermo or from your local academic facility, the cost of producing oligos is now, in most cases, independent of their length. That’s because the cost of capital (including rent, insurance, labor, etc) is now more significant than the cost of goods. Consequently, the price reflects the cost of capital rather than the cost of goods. Moreover, the cost of the columns, reagents, and shipping tubes is certainly more than the cost of the atoms in the sDNA you are ostensibly paying for. Once you get into longer oligos (substantially larger than 50-mers) this relationship breaks down and the sDNA is more expensive. But, at this point in time, most people aren’t going to use longer oligos to assemble genes unless they have a tricky job that doesn’t work using short oligos.
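A rough way to see why the quoted price per oligo can be nearly independent of length is to compare a fixed per-order cost of capital with a per-base cost of goods. The sketch below uses invented dollar figures purely for illustration; they are not vendor prices.

```python
# Toy cost model: when fixed per-order costs (labor, rent, instruments) dominate the
# per-base cost of reagents, the price of an oligo barely depends on its length.
# All dollar values below are invented for illustration only.
fixed_cost_per_order = 8.00    # hypothetical cost of capital allocated to each order, $
reagent_cost_per_base = 0.02   # hypothetical cost of goods, $ per base

for length in (20, 40, 60, 100):
    total = fixed_cost_per_order + reagent_cost_per_base * length
    goods_share = reagent_cost_per_base * length / total
    print(f"{length:>3}-mer: total ≈ ${total:.2f} (goods are only {goods_share:.0%} of the price)")
```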
Looking forward, I suspect oligos aren’t going to get much cheaper unless someone sorts out how to either 1) replace the requisite human labor and thereby reduce the cost of capital, or 2) finally replace the phosphoramidite chemistry that the industry relies upon.
IDT’s gBlocks come at prices that are constant across quite substantial ranges in length. Moreover, part of the decrease in price for these products is embedded in the fact that you are buying smaller chunks of DNA that you then must assemble and integrate into your organism of choice.
Someone who has purchased and assembled an absolutely enormous amount of sDNA over the last decade, suggested that if prices fell by another order of magnitude, he could switch completely to outsourced assembly. This is a potentially interesting “tipping point”. However, what this person really needs is sDNA integrated in a particular way into a particular genome operating in a particular host. The integration and testing of the new genome in the host organism is where most of the cost is. Given the wide variety of emerging applications, and the growing array of hosts/chassis, it isn’t clear that any given technology or firm will be able to provide arbitrary synthetic sequences incorporated into arbitrary hosts.
Dr. Jon Rowley and Dr. Uplaksh Kumar, Co-Founders of RoosterBio, Inc., a newly formed biotech startup located in Frederick, are paving the way for even more innovation in the rapidly growing fields of Synthetic Biology and Regenerative Medicine. Synthetic Biology combines engineering principles with basic science to build biological products, including regenerative medicines and cellular therapies. Regenerative medicine is a broad definition for innovative medical therapies that will enable the body to repair, replace, restore and regenerate damaged or diseased cells, tissues and organs. Regenerative therapies that are in clinical trials today may enable repair of damaged heart muscle following heart attack, replacement of skin for burn victims, restoration of movement after spinal cord injury, regeneration of pancreatic tissue for insulin production in diabetics and provide new treatments for Parkinson’s and Alzheimer’s diseases, to name just a few applications.
While the potential of the field is promising, the pace of development has been slow. One main reason for this is that the living cells required for these therapies are cost-prohibitive and not supplied at volumes that support many research and product development efforts. RoosterBio will manufacture large quantities of standardized primary cells at high quality and low cost, which will quicken the pace of scientific discovery and translation to the clinic. “Our goal is to accelerate the development of products that incorporate living cells by providing abundant, affordable and high quality materials to researchers that are developing and commercializing these regenerative technologies,” says Dr. Rowley.
NHMU Lecture featuring – J. Craig Venter, Ph.D. Founder, Chairman, and CEO – J. Craig Venter Institute; Co-Founder and CEO, Synthetic Genomics Inc.
J. Craig Venter, Ph.D., is Founder, Chairman, and CEO of the J. Craig Venter Institute (JCVI), a not-for-profit research organization dedicated to human, microbial, plant, synthetic and environmental research. He is also Co-Founder and CEO of Synthetic Genomics Inc. (SGI), a privately held company dedicated to commercializing genomic-driven solutions to address global needs.
In 1998, Dr. Venter founded Celera Genomics to sequence the human genome using new tools and techniques he and his team developed. This research culminated with the February 2001 publication of the human genome in the journal Science. Dr. Venter and his team at JCVI continue to blaze new trails in genomics. They have created a bacterial cell constructed with synthetic DNA, putting humankind at the threshold of a new phase of biological research. Whereas we could previously only read the genetic code (by sequencing genomes), we can now write the genetic code for designing new species.
The science of synthetic genomics will have a profound impact on society, including new methods for chemical and energy production, human health and medical advances, clean water, and new food and nutritional products. One of the most prolific scientists of the 21st century for his numerous pioneering advances in genomics, he guides us through this emerging field, detailing its origins, current challenges, and the potential positive advances.
His work on synthetic biology truly embodies the theme of “pushing the boundaries of life.” Essentially, Venter is seeking to “write the software of life” to create microbes designed by humans rather than only through evolution. The potential benefits and risks of this new technology are enormous. It also requires us to examine, both scientifically and philosophically, the question of “What is life?”
J Craig Venter wants to digitize DNA and transmit the signal to teleport organisms
A technology trend analyst offers an overview of synthetic biology, its potential applications, obstacles to its development, and prospects for public approval.
In addition to boosting the economy, synthetic biology projects currently in development could have profound implications for the future of manufacturing, sustainability, and medicine.
Before society can fully reap the benefits of synthetic biology, however, the field requires development and faces a series of hurdles in the process. Do researchers have the scientific know-how and technical capabilities to develop the field?
Biology + Engineering = Synthetic Biology
Bioengineers aim to build synthetic biological systems using compatible standardized parts that behave predictably. Bioengineers synthesize DNA parts—oligonucleotides of 50–100 base pairs—that are combined into specialized components, which in turn make up a biological system. As biology becomes a true engineering discipline, bioengineers will create genomes from mass-produced modular units, much as the microelectronics and computer industries do.
Currently, bioengineering projects cost millions of dollars and take years to develop products. For synthetic biology to become a Schumpeterian revolution, smaller companies will need to be able to afford to use bioengineering concepts for industrial applications. This will require standardized and automated processes.
A major challenge to developing synthetic biology is the complexity of biological systems. When bioengineers assemble synthetic parts, they must prevent cross talk between signals in other biological pathways. Until researchers better understand these undesired interactions that nature has already worked out, applications such as gene therapy will have unwanted side effects. Scientists do not fully understand the effects of environmental and developmental interaction on gene expression. Currently, bioengineers must repeatedly use trial and error to create predictable systems.
Similar to physics, synthetic biology requires the ability to model systems and quantify relationships between variables in biological systems at the molecular level.
The second major challenge to ensuring the success of synthetic biology is the development of enabling technologies. With genomes having billions of nucleotides, this requires fast, powerful, and cost-efficient computers. Moore’s law, named for Intel co-founder Gordon Moore, posits that computing power progresses at a predictable rate and that the number of components in integrated circuits doubles each year until its limits are reached. Since Moore’s prediction, computer power has increased at an exponential rate while pricing has declined.
DNA sequencers and synthesizers are necessary to identify genes and make synthetic DNA sequences. Bioengineer Robert Carlson calculated that the capabilities of DNA sequencers and synthesizers have followed a pattern similar to computing. This pattern, referred to as the Carlson Curve, projects that scientists are approaching the ability to sequence a human genome for $1,000, perhaps in 2020. Carlson calculated that the costs of reading and writing new genes and genomes are falling by a factor of two every 18–24 months. (see recent Carlson comment on requirement to read and write for a variety of limiting conditions).
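To make the quoted rate concrete, the short sketch below projects how a per-base cost would fall if it halves every 18 to 24 months, as the Carlson Curve describes. It is a minimal illustration only; the starting price and time horizons are hypothetical, not actual market figures.

```python
# Minimal sketch (not from the article): projecting how a per-base cost falls
# if it halves every 18-24 months, as the Carlson Curve describes.

def projected_cost(initial_cost, years, halving_period_years):
    """Cost after `years`, assuming it halves every `halving_period_years`."""
    return initial_cost * 0.5 ** (years / halving_period_years)

# Illustrative numbers only (hypothetical starting price of $1.00 per base):
start = 1.0
for years in (3, 6, 9):
    fast = projected_cost(start, years, 1.5)   # halving every 18 months
    slow = projected_cost(start, years, 2.0)   # halving every 24 months
    print(f"after {years} yrs: ${fast:.3f} (18-mo halving) to ${slow:.3f} (24-mo halving)")
```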
Startup to Strengthen Synthetic Biology and Regenerative Medicine Industries with Cutting Edge Cell Products
Synthetic Biology: On Advanced Genome Interpretation for Gene Variants and Pathways: What is the Genetic Base of Atherosclerosis and Loss of Arterial Elasticity with Aging
Futurists have touted the twenty-first century as the century of biology based primarily on the promise of genomics. Medical researchers aim to use variations within genes as biomarkers for diseases, personalized treatments, and drug responses. Currently, we are experiencing a genomics bubble, but with advances in understanding biological complexity and the development of enabling technologies, synthetic biology is reviving optimism in many fields, particularly medicine.
Michael Brooks holds a PhD in quantum physics. He writes a weekly science column for the New Statesman, and his most recent book is The Secret Anarchy of Science.
The basic idea is that we take an organism – a bacterium, say – and re-engineer its genome so that it does something different. You might, for instance, make it ingest carbon dioxide from the atmosphere, process it and excrete crude oil.
That project is still under construction, but others, such as using synthesised DNA for data storage, have already been achieved. As evolution has proved, DNA is an extraordinarily stable medium that can preserve information for millions of years. In 2012, the Harvard geneticist George Church proved its potential by taking a book he had written, encoding it in a synthesised strand of DNA, and then making DNA sequencing machines read it back to him.
When we first started achieving such things it was costly and time-consuming and demanded extraordinary resources, such as those available to the millionaire biologist Craig Venter. Venter’s team spent most of the past two decades and tens of millions of dollars creating the first artificial organism, nicknamed “Synthia”. Using computer programs and robots that process the necessary chemicals, the team rebuilt the genome of the bacterium Mycoplasma mycoides from scratch. They also inserted a few watermarks and puzzles into the DNA sequence, partly as an identifying measure for safety’s sake, but mostly as a publicity stunt.
What they didn’t do was redesign the genome to do anything interesting. When the synthetic genome was inserted into an eviscerated bacterial cell, the new organism behaved exactly the same as its natural counterpart. Nevertheless, the fact that Synthia was, as Venter put it at the 2010 press conference announcing the research, “the first self-replicating species we’ve had on the planet whose parent is a computer” made it a standout achievement.
Today, however, we have entered another era in synthetic biology and Venter faces stiff competition. The Steve Jobs to Venter’s Bill Gates is Jef Boeke, who researches yeast genetics at New York University.
Boeke wanted to redesign the yeast genome so that he could strip out various parts to see what they did. Because it took a private company a year to complete just a small part of the task, at a cost of $50,000, he realised he should go open-source. By teaching an undergraduate course on how to build a genome and teaming up with institutions all over the world, he has assembled a skilled workforce that, tinkering together, has made a synthetic chromosome for baker’s yeast.
Stepping into DIYbio and Synthetic Biology at ScienceHack
We got a crash course on genetics and protein pathways, and then set out to design and build our own pathways using both the “Genomikon: Violacein Factory” kit and Synbiota platform. With Synbiota’s software, we dragged and dropped the enzymes to create the sequence that we were then going to build out. After a process of sketching ideas, mocking up pathways, and writing hypotheses, we were ready to start building!
The night stretched long, and at midnight we were forced to vacate the school. Not quite finished, we loaded our delicate bacteria, incubator, and boxes of gloves onto the bus and headed back to complete our bacterial transformation in one of our hotel rooms. Jammed in between the beds and the mini-fridge, we heat-shocked our bacteria in the hotel ice bucket. It was a surreal moment.
While waiting for our bacteria, we held an “unconference” where we explored bioethics, security and risk related to synthetic biology, 3D printing on Mars, patterns in juggling (with live demonstration!), and even did a Google Hangout with Rob Carlson. Every few hours, we would excitedly check in on our bacteria, looking for bacterial colonies and the purple hue characteristic of violacein.
Most impressive was the wildly successful and seamless integration of a diverse set of people: in a matter of hours, we were transformed from individual experts and practitioners in assorted fields into cohesive and passionate teams of DIY biologists and science hackers. The ability of everyone to connect and learn was a powerful experience, and over the course of just one weekend we were able to challenge each other and grow.
Returning to work on Monday, we were hungry for more. We wanted to find a way to bring the excitement and energy from the weekend into the studio and into the projects we’re working on. It struck us that there are strong parallels between design and DIYbio, and we knew there was an opportunity to bring some of the scientific approaches and curiosity into our studio.
The importance of spatially-localized and quantified image interpretation in cancer management
Writer & reporter: Dror Nir, PhD
I became involved in the development of quantified imaging-based tissue characterization more than a decade ago. From the start, it was clear to me that what clinicians need will not be answered by simply identifying whether a certain organ harbors cancer. If imaging devices are to play a significant role in future medicine, as a complementary source of information to biomarkers and gene sequencing, the minimum expected of them is accurate direction of biopsy needles and treatment tools to the malignant locations in the organ. Therefore, in designing the first Prostate-HistoScanning (“PHS”) version, I went to the trouble of characterizing localized volumes of tissue at the level of approximately 0.1 cc (1 x 1 x 1 mm). Thanks to that, the imaging-interpretation overlay of PHS localizes suspicious lesions with an accuracy of 5 mm within the prostate gland; see: Detection, localisation and characterisation of prostate cancer by prostate HistoScanning™.
I then started more ambitious research aimed at exploring the feasibility of identifying sub-structures within the cancer lesion itself. The preliminary results of this exploration were so promising that they surprised not only the clinicians I was working with but also me. It seems that, using quality ultrasound, one can find imaging biomarkers that allow differentiation of the internal structures of a cancerous lesion. Unfortunately for everyone involved in this work, including me, this scientific effort was interrupted by financial constraints before reaching maturity.
My short introduction was made to explain why I find the publication below important enough to post and bring to your attention.
From the Departments of Radiology and Cancer Imaging and Metabolism, Moffitt Cancer Center, 12902 Magnolia Dr, Tampa, FL 33612. Address correspondence to R.A.G. (e-mail: Robert.Gatenby@Moffitt.org).
Abstract
Cancer therapy, even when highly targeted, typically fails because of the remarkable capacity of malignant cells to evolve effective adaptations. These evolutionary dynamics are both a cause and a consequence of cancer system heterogeneity at many scales, ranging from genetic properties of individual cells to large-scale imaging features. Tumors of the same organ and cell type can have remarkably diverse appearances in different patients. Furthermore, even within a single tumor, marked variations in imaging features, such as necrosis or contrast enhancement, are common. Similar spatial variations recently have been reported in genetic profiles. Radiologic heterogeneity within tumors is usually governed by variations in blood flow, whereas genetic heterogeneity is typically ascribed to random mutations. However, evolution within tumors, as in all living systems, is subject to Darwinian principles; thus, it is governed by predictable and reproducible interactions between environmental selection forces and cell phenotype (not genotype). This link between regional variations in environmental properties and cellular adaptive strategies may permit clinical imaging to be used to assess and monitor intratumoral evolution in individual patients. This approach is enabled by new methods that extract, report, and analyze quantitative, reproducible, and mineable clinical imaging data. However, most current quantitative metrics lack spatialness, expressing quantitative radiologic features as a single value for a region of interest encompassing the whole tumor. In contrast, spatially explicit image analysis recognizes that tumors are heterogeneous but not well mixed and defines regionally distinct habitats, some of which appear to harbor tumor populations that are more aggressive and less treatable than others. By identifying regional variations in key environmental selection forces and evidence of cellular adaptation, clinical imaging can enable us to define intratumoral Darwinian dynamics before and during therapy. Advances in image analysis will place clinical imaging in an increasingly central role in the development of evolution-based patient-specific cancer therapy.
Cancers are heterogeneous across a wide range of temporal and spatial scales. Morphologic heterogeneity between and within cancers is readily apparent in clinical imaging, and subjective descriptors of these differences, such as necrotic, spiculated, and enhancing, are common in the radiology lexicon. In the past several years, radiology research has increasingly focused on quantifying these imaging variations in an effort to understand their clinical and biologic implications (1,2). In parallel, technical advances now permit extensive molecular characterization of tumor cells in individual patients. This has led to increasing emphasis on personalized cancer therapy, in which treatment is based on the presence of specific molecular targets (3). However, recent studies (4,5) have shown that multiple genetic subpopulations coexist within cancers, reflecting extensive intratumoral somatic evolution. This heterogeneity is a clear barrier to therapy based on molecular targets, since the identified targets do not always represent the entire population of tumor cells in a patient (6,7). It is ironic that cancer, a disease extensively and primarily analyzed genetically, is also the most genetically flexible of all diseases and, therefore, least amenable to such an approach.
Genetic variations in tumors are typically ascribed to a mutator phenotype that generates new clones, some of which expand into large populations (8). However, although identification of genotypes is of substantial interest, it is insufficient for complete characterization of tumor dynamics because evolution is governed by the interactions of environmental selection forces with the phenotypic, not genotypic, properties of populations as shown, for example, by evolutionary convergence to identical phenotypes among cave fish even when they are from different species (9–11). This connection between tissue selection forces and cellular properties has the potential to provide a strong bridge between medical imaging and the cellular and molecular properties of cancers.
We postulate that differences within tumors at different spatial scales (ie, at the radiologic, cellular, and molecular [genetic] levels) are related. Tumor characteristics observable at clinical imaging reflect molecular-, cellular-, and tissue-level dynamics; thus, they may be useful in understanding the underlying evolving biology in individual patients. A challenge is that such mapping across spatial and temporal scales requires not only objective reproducible metrics for imaging features but also a theoretical construct that bridges those scales (Fig 1).
Figure 1a: Computed tomographic (CT) scan of right upper lobe lung cancer in a 50-year-old woman.
Figure 1b: Isoattenuation map shows regional heterogeneity at the tissue scale (measured in centimeters).
Figure 1c & 1d: (c, d) Whole-slide digital images (original magnification, ×3) of a histologic slice of the same tumor at the mesoscopic scale (measured in millimeters) (c), coupled with a masked image of regional morphologic differences showing spatial heterogeneity (d).
Figure 1e: Subsegment of the whole-slide image shows the microscopic scale (measured in micrometers) (original magnification, ×50).
Figure 1f: Pattern recognition masked image shows regional heterogeneity. In a, the CT image of non–small cell lung cancer can be analyzed to display gradients of attenuation, which reveals heterogeneous and spatially distinct environments (b). Histologic images in the same patient (c, e) reveal heterogeneities in tissue structure and density on the same scale as seen in the CT images. These images can be analyzed at much higher definition to identify differences in morphologies of individual cells (3), and these analyses reveal clusters of cells with similar morphologic features (d, f). An important goal of radiomics is to bridge radiologic data with cellular and molecular characteristics observed microscopically.
To promote the development and implementation of quantitative imaging methods, protocols, and software tools, the National Cancer Institute has established the Quantitative Imaging Network. One goal of this program is to identify reproducible quantifiable imaging features of tumors that will permit data mining and explicit examination of links between the imaging findings and the underlying molecular and cellular characteristics of the tumors. In the quest for more personalized cancer treatments, these quantitative radiologic features potentially represent nondestructive temporally and spatially variable predictive and prognostic biomarkers that readily can be obtained in each patient before, during, and after therapy.
Quantitative imaging requires computational technologies that can be used to reliably extract mineable data from radiographic images. This feature information can then be correlated with molecular and cellular properties by using bioinformatics methods. Most existing methods are agnostic and focus on statistical descriptions of existing data, without presupposing the existence of specific relationships. Although this is a valid approach, a more profound understanding of quantitative imaging information may be obtained with a theoretical hypothesis-driven framework. Such models use links between observable tumor characteristics and microenvironmental selection factors to make testable predictions about emergent phenotypes. One such theoretical framework is the developing paradigm of cancer as an ecologic and evolutionary process.
For decades, landscape ecologists have studied the effects of heterogeneity in physical features on interactions between populations of organisms and their environments, often by using observation and quantification of images at various scales (12–14). We propose that analytic models of this type can easily be applied to radiologic studies of cancer to uncover underlying molecular, cellular, and microenvironmental drivers of tumor behavior and specifically, tumor adaptations and responses to therapy (15).
In this article, we review recent developments in quantitative imaging metrics and discuss how they correlate with underlying genetic data and clinical outcomes. We then introduce the concept of using ecology and evolutionary models for spatially explicit image analysis as an exciting potential avenue of investigation.
Quantitative Imaging and Radiomics
In patients with cancer, quantitative measurements are commonly limited to measurement of tumor size with one-dimensional (Response Evaluation Criteria in Solid Tumors [or RECIST]) or two-dimensional (World Health Organization) long-axis measurements (16). These measures do not reflect the complexity of tumor morphology or behavior, and in many cases, changes in these measures are not predictive of therapeutic benefit (17). In contrast, radiomics (18) is a high-throughput process in which a large number of shape, edge, and texture imaging features are extracted, quantified, and stored in databases in an objective, reproducible, and mineable form (Figs 1, 2). Once transformed into a quantitative form, radiologic tumor properties can be linked to underlying genetic alterations (the field is called radiogenomics) (19–21) and to medical outcomes (22–27). Researchers are currently working to develop both a standardized lexicon to describe tumor features (28,29) and a standard method to convert these descriptors into quantitative mineable data (30,31) (Fig 3).
Figure 2: Contrast-enhanced CT scans show non–small cell lung cancer (left) and corresponding cluster map (right). Subregions within the tumor are identified by clustering pixels based on the attenuation of pixels and their cumulative standard deviation across the region. While the entire region of interest of the tumor, lacking the spatial information, yields a weighted mean attenuation of 859.5 HU with a large and skewed standard deviation of 243.64 HU, the identified subregions have vastly different statistics. Mean attenuation was 438.9 HU ± 45 in the blue subregion, 210.91 HU ± 79 in the yellow subregion, and 1077.6 HU ± 18 in the red subregion.
Figure 3: Chart shows the five processes in radiomics.
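As a rough illustration of the radiomics workflow outlined above, the sketch below extracts a handful of simple shape and first-order intensity features from a segmented tumor region and stores them as a single mineable record. The arrays, feature names, and the function simple_radiomic_features are hypothetical stand-ins, not the pipeline used by the authors or by the Quantitative Imaging Network.

```python
# Illustrative sketch only: turning a segmented tumor region into a small,
# mineable record of quantitative features, in the spirit of a radiomics
# pipeline. The image/mask arrays and feature names are hypothetical.
import numpy as np

def simple_radiomic_features(image, mask):
    """Return a dict of basic shape and first-order intensity features."""
    voxels = image[mask > 0]
    return {
        "volume_voxels": int(mask.sum()),          # crude shape/size feature
        "mean_intensity": float(voxels.mean()),    # first-order statistics
        "std_intensity": float(voxels.std()),
        "p10_intensity": float(np.percentile(voxels, 10)),
        "p90_intensity": float(np.percentile(voxels, 90)),
    }

# Toy example: a synthetic 3D "image" and a spherical "tumor" mask.
rng = np.random.default_rng(0)
image = rng.normal(100, 20, size=(64, 64, 64))
zz, yy, xx = np.indices(image.shape)
mask = ((zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2) < 15 ** 2

record = simple_radiomic_features(image, mask)
print(record)  # one row of a mineable feature table
```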
Several recent articles underscore the potential power of feature analysis. After manually extracting more than 100 CT image features, Segal and colleagues found that a subset of 14 features predicted 80% of the gene expression pattern in patients with hepatocellular carcinoma (21). A similar extraction of features from contrast agent–enhanced magnetic resonance (MR) images of glioblastoma was used to predict immunohistochemically identified protein expression patterns (22). Other radiomic features, such as texture, can be used to predict response to therapy in patients with renal cancer (32) and prognosis in those with metastatic colon cancer (33).
These pioneering studies were relatively small because the image analysis was performed manually, and the studies were consequently underpowered. Thus, recent work in radiomics has focused on technical developments that permit automated extraction of image features with the potential for high throughput. Such methods, which rely heavily on novel machine learning algorithms, can more completely cover the range of quantitative features that can describe tumor heterogeneity, such as texture, shape, or margin gradients or, importantly, different environments, or niches, within the tumors.
Generally speaking, texture in a biomedical image is quantified by identifying repeating patterns. Texture analyses fall into two broad categories based on the concepts of first- and second-order spatial statistics. First-order statistics are computed by using individual pixel values, and no relationships between neighboring pixels are assumed or evaluated. Texture analysis methods based on first-order statistics usually involve calculating cumulative statistics of pixel values and their histograms across the region of interest. Second-order statistics, on the other hand, are used to evaluate the likelihood of observing spatially correlated pixels (34). Hence, second-order texture analyses focus on the detection and quantification of nonrandom distributions of pixels throughout the region of interest.
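The distinction between first- and second-order texture statistics can be made concrete with a minimal sketch: first-order measures are computed from the pixel histogram alone, while a gray-level co-occurrence matrix (GLCM) tallies how often pairs of gray levels occur at a fixed spatial offset. The implementation below is a simplified illustration with synthetic data, not the analysis code referenced in this review.

```python
# Minimal sketch (assumptions, not the authors' code): first-order texture
# statistics ignore pixel neighborhoods, while a gray-level co-occurrence
# matrix (GLCM) captures how often pairs of gray levels occur side by side.
import numpy as np

def first_order_stats(region):
    """Histogram-based statistics: no spatial relationships considered."""
    return {"mean": region.mean(), "std": region.std(),
            "skewness": ((region - region.mean()) ** 3).mean() / (region.std() ** 3 + 1e-9)}

def glcm_horizontal(region, levels=8):
    """Second-order statistics: co-occurrence of quantized gray levels
    for horizontally adjacent pixels (offset (0, 1))."""
    q = np.floor(region / region.max() * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    glcm /= glcm.sum()
    contrast = sum(glcm[i, j] * (i - j) ** 2 for i in range(levels) for j in range(levels))
    return glcm, contrast

region = np.random.default_rng(1).integers(0, 255, size=(32, 32)).astype(float)
print(first_order_stats(region))
print("GLCM contrast:", glcm_horizontal(region)[1])
```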
The technical developments that permit second-order texture analysis in tumors by using regional enhancement patterns on dynamic contrast-enhanced MR images were reviewed recently (35). One such technique that is used to measure heterogeneity of contrast enhancement uses the Factor Analysis of Medical Image Sequences (or FAMIS) algorithm, which divides tumors into regions based on their patterns of enhancement (36). Factor Analysis of Medical Image Sequences–based analyses yielded better prognostic information when compared with region of interest–based methods in numerous cancer types (19–21,37–39), and they were a precursor to the Food and Drug Administration–approved three-time-point method (40). A number of additional promising methods have been developed. Rose and colleagues showed that a structured fractal-based approach to texture analysis improved differentiation between low- and high-grade brain cancers by orders of magnitude (41). Ahmed and colleagues used gray level co-occurrence matrix analyses of dynamic contrast-enhanced images to distinguish benign from malignant breast masses with high diagnostic accuracy (area under the receiver operating characteristic curve, 0.92) (26). Others have shown that Minkowski functional structured methods that convolve images with differently kernelled masks can be used to distinguish subtle differences in contrast enhancement patterns and can enable significant differentiation between treatment groups (42).
It is not surprising that analyses of heterogeneity in enhancement patterns can improve diagnosis and prognosis, as this heterogeneity is fundamentally based on perfusion deficits, which generate significant microenvironmental selection pressures. However, texture analysis is not limited to enhancement patterns. For example, measures of heterogeneity in diffusion-weighted MR images can reveal differences in cellular density in tumors, which can be matched to histologic findings (43). Measures of heterogeneity in T1- and T2-weighted images can be used to distinguish benign from malignant soft-tissue masses (23). CT-based texture features have been shown to be highly significant independent predictors of survival in patients with non–small cell lung cancer (24).
Texture analyses can also be applied to positron emission tomographic (PET) data, where they can provide information about metabolic heterogeneity (25,26). In a recent study, Nair and colleagues identified 14 quantitative PET imaging features that correlated with gene expression (19). This led to an association of metagene clusters to imaging features and yielded prognostic models with hazard ratios near 6. In a study of esophageal cancer, in which 38 quantitative features describing fluorodeoxyglucose uptake were extracted, measures of metabolic heterogeneity at baseline enabled prediction of response with significantly higher sensitivity than any whole region of interest standardized uptake value measurement (22). It is also notable that these extensive texture-based features are generally more reproducible than simple measures of the standardized uptake value (27), which can be highly variable in a clinical setting (44).
Spatially Explicit Analysis of Tumor Heterogeneity
Although radiomic analyses have shown high prognostic power, they are not inherently spatially explicit. Quantitative border, shape, and texture features are typically generated over a region of interest that comprises the entire tumor (45). This approach implicitly assumes that tumors are heterogeneous but well mixed. However, spatially explicit subregions of cancers are readily apparent on contrast-enhanced MR or CT images, as perfusion can vary markedly within the tumor, even over short distances, with changes in tumor cell density and necrosis.
An example is shown in Figure 2, which shows a contrast-enhanced CT scan of non–small cell lung cancer. Note that there are many subregions within this tumor that can be identified with attenuation gradient (attenuation per centimeter) edge detection algorithms. Each subregion has a characteristic quantitative attenuation, with a narrow standard deviation, whereas the mean attenuation over the entire region of interest is a weighted average of the values across all subregions, with a correspondingly large and skewed distribution. We contend that these subregions represent distinct habitats within the tumor, each with a distinct set of environmental selection forces.
These observations, along with the recent identification of regional variations in the genetic properties of tumor cells, indicate the need to abandon the conceptual model of cancers as bounded organlike structures. Rather than a single self-organized system, cancers represent a patchwork of habitats, each with a unique set of environmental selection forces and cellular evolution strategies. For example, regions of the tumor that are poorly perfused can be populated by only those cells that are well adapted to low-oxygen, low-glucose, and high-acid environmental conditions. Such adaptive responses to regional heterogeneity result in microenvironmental selection and hence, emergence of genetic variations within tumors. The concept of adaptive response is an important departure from the traditional view that genetic heterogeneity is the product of increased random mutations, which implies that molecular heterogeneity is fundamentally unpredictable and, therefore, chaotic. The Darwinian model proposes that genetic heterogeneity is the result of a predictable and reproducible selection of successful adaptive strategies to local microenvironmental conditions.
Current cross-sectional imaging modalities can be used to identify regional variations in selection forces by using contrast-enhanced, cell density–based, or metabolic features. Clinical imaging can also be used to identify evidence of cellular adaptation. For example, if a region of low perfusion on a contrast-enhanced study is necrotic, then an adaptive population is absent or minimal. However, if the poorly perfused area is cellular, then there is presumptive evidence of an adapted proliferating population. While the specific genetic properties of this population cannot be determined, the phenotype of the adaptive strategy is predictable since the environmental conditions are more or less known. Thus, standard medical images can be used to infer specific emergent phenotypes and, with ongoing research, these phenotypes can be associated with underlying genetic changes.
This area of investigation will likely be challenging. As noted earlier, the most obvious spatially heterogeneous imaging feature in tumors is perfusion heterogeneity on contrast-enhanced CT or MR images. It generally has been assumed that the links between contrast enhancement, blood flow, perfusion, and tumor cell characteristics are straightforward. That is, tumor regions with decreased blood flow will exhibit low perfusion, low cell density, and high necrosis. In reality, however, the dynamics are much more complex. As shown in Figure 4, when using multiple superimposed sequences from MR imaging of malignant gliomas, regions of tumor that are poorly perfused on contrast-enhanced T1-weighted images may exhibit areas of low or high water content on T2-weighted images and low or high diffusion on diffusion-weighted images. Thus, high or low cell densities can coexist in poorly perfused volumes, creating perfusion-diffusion mismatches. Regions with poor perfusion but high cell density are of particular clinical interest because they represent a cell population that is apparently adapted to microenvironmental conditions associated with poor perfusion. The associated hypoxia, acidosis, and nutrient deprivation select for cells that are resistant to apoptosis and thus are likely to be resistant to therapy (46,47).
Figure 4: Left: Contrast-enhanced T1 image from subject TCGA-02-0034 in The Cancer Genome Atlas–Glioblastoma Multiforme repository of MR volumes of glioblastoma multiforme cases. Right: Spatial distribution of MR imaging–defined habitats within the tumor. The blue region (low T1 postgadolinium, low fluid-attenuated inversion recovery) is particularly notable because it presumably represents a habitat with low blood flow but high cell density, indicating a population presumably adapted to hypoxic acidic conditions.
Furthermore, other selection forces not related to perfusion are likely to be present within tumors. For example, evolutionary models suggest that cancer cells, even in stable microenvironments, tend to speciate into “engineers” that maximize tumor cell growth by promoting angiogenesis and “pioneers” that proliferate by invading normal tissue and co-opting the blood supply. These invasive tumor phenotypes can exist only at the tumor edge, where movement into a normal tissue microenvironment can be rewarded by increased proliferation. This evolutionary dynamic may contribute to distinct differences between the tumor edges and the tumor cores, which frequently can be seen at analysis of cross-sectional images (Fig 5).
Figure 5a: CT images obtained with conventional entropy filtering in two patients with non–small cell lung cancer with no apparent textural differences show similar entropy values across all sections.
Figure 5b: Contour plots obtained after the CT scans were convolved with the entropy filter. Further subdividing each section in the tumor stack into tumor edge and core regions (dotted black contour) reveals varying textural behavior across sections. Two distinct patterns have emerged, and preliminary analysis shows that the change of mean entropy value between core and edge regions correlates negatively with survival.
Interpretation of the subsegmentation of tumors will require computational models to understand and predict the complex nonlinear dynamics that lead to heterogeneous combinations of radiographic features. We have exploited ecologic methods and models to investigate regional variations in cancer environmental and cellular properties that lead to specific imaging characteristics. Conceptually, this approach assumes that regional variations in tumors can be viewed as a coalition of distinct ecologic communities or habitats of cells in which the environment is governed, at least to first order, by variations in vascular density and blood flow. The environmental conditions that result from alterations in blood flow, such as hypoxia, acidosis, immune response, growth factors, and glucose, represent evolutionary selection forces that give rise to local-regional phenotypic adaptations. Phenotypic alterations can result from epigenetic, genetic, or chromosomal rearrangements, and these in turn will affect prognosis and response to therapy. Changes in habitats or the relative abundance of specific ecologic communities over time and in response to therapy may be a valuable metric with which to measure treatment efficacy and emergence of resistant populations.
Emerging Strategies for Tumor Habitat Characterization
A method for converting images to spatially explicit tumor habitats is shown in Figure 4. Here, three-dimensional MR imaging data sets from a glioblastoma are segmented. Each voxel in the tumor is defined by a scale that includes its image intensity in different sequences. In this case, the imaging sets are from (a) a contrast-enhanced T1 sequence, (b) a fast spin-echo T2 sequence, and (c) a fluid-attenuated inversion-recovery (or FLAIR) sequence. Voxels in each sequence can be defined as high or low based on their value compared with the mean signal value. By using just two sequences, a contrast-enhanced T1 sequence and a fluid-attenuated inversion-recovery sequence, we can define four habitats: high or low postgadolinium T1 divided into high or low fluid-attenuated inversion recovery. When these voxel habitats are projected into the tumor volume, we find they cluster into spatially distinct regions. These habitats can be evaluated both in terms of their relative contributions to the total tumor volume and in terms of their interactions with each other, based on the imaging characteristics at the interfaces between regions. Similar spatially explicit analysis can be performed with CT scans (Fig 5).
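A minimal sketch of the habitat-labelling scheme just described is given below: each tumor voxel is classified as high or low on each of two co-registered sequences relative to the within-tumor mean, yielding four habitats whose volumes can then be tallied. The arrays are synthetic placeholders for contrast-enhanced T1 and fluid-attenuated inversion-recovery volumes; thresholding at the mean is one simple choice among several.

```python
# Sketch of the habitat-labelling scheme described above, using synthetic
# arrays in place of co-registered contrast-enhanced T1 and FLAIR volumes.
import numpy as np

def label_habitats(t1_post, flair, tumor_mask):
    """Assign each tumor voxel to one of four habitats:
    (high/low T1 post-gadolinium) x (high/low FLAIR),
    thresholding each sequence at its mean value within the tumor."""
    t1_high = t1_post > t1_post[tumor_mask].mean()
    fl_high = flair > flair[tumor_mask].mean()
    habitats = np.zeros(t1_post.shape, dtype=int)        # 0 = outside tumor
    habitats[tumor_mask & t1_high & fl_high] = 1          # high T1, high FLAIR
    habitats[tumor_mask & t1_high & ~fl_high] = 2         # high T1, low FLAIR
    habitats[tumor_mask & ~t1_high & fl_high] = 3         # low T1, high FLAIR
    habitats[tumor_mask & ~t1_high & ~fl_high] = 4        # low T1, low FLAIR
    return habitats

rng = np.random.default_rng(2)
shape = (40, 40, 40)
t1_post, flair = rng.normal(size=shape), rng.normal(size=shape)
tumor_mask = np.zeros(shape, dtype=bool)
tumor_mask[10:30, 10:30, 10:30] = True

habitats = label_habitats(t1_post, flair, tumor_mask)
volumes = {h: int((habitats == h).sum()) for h in range(1, 5)}
print("voxels per habitat:", volumes)  # relative contribution to tumor volume
```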
Analysis of spatial patterns in cross-sectional images will ultimately require methods that bridge spatial scales from microns to millimeters. One possible method is a general class of numeric tools that is already widely used in terrestrial and marine ecology research to link species occurrence or abundance with environmental parameters. Species distribution models (48–51) are used to gain ecologic and evolutionary insights and to predict distributions of species or morphs across landscapes, sometimes extrapolating in space and time. They can easily be used to link the environmental selection forces in MR imaging-defined habitats to the evolutionary dynamics of cancer cells.
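By way of illustration only, the sketch below shows the flavor of a species-distribution-model analysis transplanted to imaging-defined habitats: a simple logistic model links habitat-level environmental variables (here, hypothetical perfusion and cell-density surrogates) to the observed presence of an aggressive subpopulation, and then predicts occupancy in a new habitat. All data are synthetic, and the choice of logistic regression is an assumption, not a method prescribed by the cited ecology literature.

```python
# Hedged illustration only: a species-distribution-model-style fit, with
# synthetic data standing in for habitat-level environmental variables
# (a perfusion surrogate and a cell-density surrogate) and for the
# observed presence/absence of an aggressive subpopulation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 200
perfusion = rng.uniform(0, 1, n)       # hypothetical habitat feature
cell_density = rng.uniform(0, 1, n)    # hypothetical habitat feature

# Synthetic "ground truth": aggressive cells more likely where perfusion is
# low but cellularity remains high (a perfusion-diffusion mismatch pattern).
logit = 3 * cell_density - 3 * perfusion
present = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([perfusion, cell_density])
model = LogisticRegression().fit(X, present)

new_habitat = np.array([[0.2, 0.9]])   # poorly perfused, densely cellular
print("predicted probability of aggressive population:",
      model.predict_proba(new_habitat)[0, 1])
```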
Summary
Imaging can have an enormous role in the development and implementation of patient-specific therapies in cancer. The achievement of this goal will require new methods that expand and ultimately replace the current subjective qualitative assessments of tumor characteristics. The need for quantitative imaging has been clearly recognized by the National Cancer Institute and has resulted in formation of the Quantitative Imaging Network. A critical objective of this imaging consortium is to use objective, reproducible, and quantitative feature metrics extracted from clinical images to develop patient-specific imaging-based prognostic models and personalized cancer therapies.
It is increasingly clear that tumors are not homogeneous organlike systems. Rather, they contain regional coalitions of ecologic communities that consist of evolving cancer, stroma, and immune cell populations. The clinical consequence of such niche variations is that spatial and temporal variations of tumor phenotypes will inevitably evolve and present substantial challenges to targeted therapies. Hence, future research in cancer imaging will likely focus on spatially explicit analysis of tumor regions.
Clinical imaging can readily characterize regional variations in blood flow, cell density, and necrosis. When viewed in a Darwinian evolutionary context, these features reflect regional variations in environmental selection forces and can, at least in principle, be used to predict the likely adaptive strategies of the local cancer population. Hence, analyses of radiologic data can be used to inform evolutionary models and then can be mapped to regional population dynamics. Ecologic and evolutionary principles may provide a theoretical framework to link imaging to the cellular and molecular features of cancer cells and ultimately lead to a more comprehensive understanding of specific cancer biology in individual patients.
Essentials
• Marked heterogeneity in genetic properties of different cells in the same tumor is typical and reflects ongoing intratumoral evolution.
• Evolution within tumors is governed by Darwinian dynamics, with identifiable environmental selection forces that interact with phenotypic (not genotypic) properties of tumor cells in a predictable and reproducible manner; clinical imaging is uniquely suited to measure temporal and spatial heterogeneity within tumors that is both a cause and a consequence of this evolution.
• Subjective radiologic descriptors of cancers are inadequate to capture this heterogeneity and must be replaced by quantitative metrics that enable statistical comparisons between features describing intratumoral heterogeneity and clinical outcomes and molecular properties.
• Spatially explicit mapping of tumor regions, for example by superimposing multiple imaging sequences, may permit patient-specific characterization of intratumoral evolution and ecology, leading to patient- and tumor-specific therapies.
• We summarize current information on quantitative analysis of radiologic images and propose that future quantitative imaging must become spatially explicit to identify intratumoral habitats before and during therapy.
Disclosures of Conflicts of Interest: R.A.G. No relevant conflicts of interest to disclose. O.G. No relevant conflicts of interest to disclose. R.J.G. No relevant conflicts of interest to disclose.
Acknowledgments
The authors thank Mark Lloyd, MS; Joel Brown, PhD; Dmitry Goldgoff, PhD; and Larry Hall, PhD, for their input to image analysis and for their lively and informative discussions.
Footnotes
Received December 18, 2012; revision requested February 5, 2013; revision received March 11; accepted April 9; final version accepted April 29.
Funding: This research was supported by the National Institutes of Health (grants U54CA143970-01, U01CA143062, R01CA077575, and R01CA170595).
Cancer genome sequencing: understanding malignancy as a disease of the genome, its conformation, and its evolution. Cancer Lett 2012 Oct 27. [Epub ahead of print]
Prognostic PET 18F-FDG uptake imaging features are associated with major oncogenomic alterations in patients with resected non-small cell lung cancer. Cancer Res 2012;72(15):3725–3734.
Intratumor heterogeneity characterized by textural features on baseline 18F-FDG PET images predicts response to concomitant radiochemotherapy in esophageal cancer. J Nucl Med 2011;52(3):369–378.
Texture analysis in assessment and prediction of chemotherapy response in breast cancer. J Magn Reson Imaging 2012. doi:10.1002/jmri.23971. Published online December 13, 2012.
Quantitative classification based on CT histogram analysis of non-small cell lung cancer: correlation with histopathological characteristics and recurrence-free survival. Med Phys 2012;39(2):988–1000.
Medical imaging on the semantic web: annotation and image markup. Presented at the AAAI Spring Symposium Series, Semantic Scientific Knowledge Integration, Palo Alto, Calif, March 26–28, 2008.
Assessment of response to tyrosine kinase inhibitors in metastatic renal cell cancer: CT texture as a predictive biomarker. Radiology 2011;261(1):165–171.
Correlation between contrast enhancement in dynamic magnetic resonance imaging of the breast and tumor angiogenesis. Invest Radiol 1994;29(12):1043–1049.
Characterization of image heterogeneity using 2D Minkowski functionals increases the sensitivity of detection of a targeted MRI contrast agent. Magn Reson Med 2009;61(5):1218–1224.
Using image analysis as a tool for assessment of prognostic and predictive biomarkers for breast cancer: how reliable is it? J Pathol Inform 2010;1:29–36.
Hypoxia-induced extracellular acidosis increases p-glycoprotein activity and chemoresistance in tumors in vivo via p38 signaling pathway. Adv Exp Med Biol 2011;701:115–122.
Recent comprehensive review on the role of ultrasound in breast cancer management
Writer, reporter and curator: Dror Nir, PhD
Word Cloud Created by Noam Steiner Tomer 8/10/2020
The paper below by R. Hooley is a beautifully written review of how ultrasound could (and should) be practiced to better support breast cancer screening, staging, and treatment. The authors also took the trouble to describe the benefits of combining ultrasonography with the other frequently used imaging modalities, i.e., mammography, tomosynthesis, and MRI. Post-treatment use of ultrasound is not discussed, although this is a major task for this modality.
I would like to recommend giving attention to two very small (but for me very important) paragraphs: “Speed of Sound Imaging” and “Lesion Annotation”
Ultrasonography (US) has become an indispensable tool in breast imaging. Breast US was first introduced in the 1950s by using radar techniques adapted from the U.S. Navy (1). Over the next several decades, US in breast imaging was primarily used to distinguish cystic from solid masses. This was clinically important, as a simple breast cyst is a benign finding that does not require further work-up. However, most solid breast lesions remained indeterminate and required biopsy, as US was not adequately specific in differentiating benign from malignant solid breast masses. However, recent advances in US technology have allowed improved characterization of solid masses.
In 1995, Stavros et al (2) published a landmark study demonstrating that solid breast lesions could be confidently characterized as benign or malignant by using high-resolution gray-scale US imaging. Benign US features include few (two or three) gentle lobulations, ellipsoid shape, and a thin capsule, as well as a homogeneously echogenic echotexture. Malignant US features include spiculation, taller-than-wide orientation, angular margins, microcalcifications, and posterior acoustic shadowing. With these sonographic features, a negative predictive value of 99.5% and a sensitivity of 98.4% for the diagnosis of malignancy were achieved. These results have subsequently been validated by others (3,4) and remain the cornerstone of US characterization of breast lesions today. These features are essential in the comprehensive US assessment of breast lesions, described by the Breast Imaging Reporting and Data System (BI-RADS) (5).
US is both an adjunct and a complement to mammography. Advances in US technology include harmonic imaging, compound imaging, power Doppler, faster frame rates, higher resolution transducers, and, more recently, elastography and three-dimensional (3D) US. Currently accepted clinical indications include evaluation of palpable abnormalities and characterization of masses detected at mammography and magnetic resonance (MR) imaging. US may also be used as an adjuvant breast cancer screening modality in women with dense breast tissue and a negative mammogram. These applications of breast US have broadened the spectrum of sonographic features currently assessed, even allowing detection of noninvasive disease, a huge advance beyond the early simplistic cyst-versus-solid assessment. In addition, US is currently the primary imaging modality recommended to guide interventional breast procedures.
The most subtle US features of breast cancers are likely to be best detected by physicians who routinely synthesize findings from multiple imaging modalities and clinical information, as well as perform targeted US to correlate with lesions detected at mammography or MR imaging. Having a strong understanding of the technical applications of US and image optimization, in addition to strong interpretive and interventional US skills, is essential for today’s breast imager.
Optimal Imaging Technique
US is operator dependent, and meticulous attention to scanning technique as well as knowledge of the various technical options available are imperative for an optimized and accurate breast US examination. US is an interactive, dynamic modality. Although breast US scanning may be performed by a sonographer or mammography technologist, the radiologist also benefits greatly from hands-on scanning (Fig 1). Berg et al (6) demonstrated that US interpretive performance was improved if the radiologist had direct experience performing breast US scanning, including rescanning after the technologist. Real-time scanning also provides the opportunity for thorough evaluation of lesions and permits detailed lesion analysis compared with analyzing static images on a workstation. Subtle irregular or indistinct margins, artifacts, and architectural distortions may be difficult to capture on static images. Real-time scanning also allows the operator to assess lesion mobility, location, and relationship to adjacent structures and allows direct assessment of palpable lesions and other clinical findings. Moreover, careful review of any prior imaging studies is imperative to ensure accurate lesion correlation.
The US examination is generally well tolerated by the patient. Gentle but firm transducer pressure and optimal patient positioning are essential, with the patient’s arm relaxed and flexed behind the head. Medial lesions should generally be scanned in the supine position, and lateral lesions, including the axilla, should usually be scanned with the patient in the contralateral oblique position. This allows for elimination of potential artifact secondary to inadequate compression of breast tissue.
Gray-Scale Imaging
Typical US transducers used in breast imaging today have between 192 and 256 elements along the long axis. When scanning the breast, a linear 12–5-MHz transducer is commonly used. However, in small-breasted women (with breast thickness < 3 cm) or when performing targeted US to evaluate a superficial lesion, a linear 17–5-MHz transducer may be used. Such high-frequency transducers provide superb spatial and soft-tissue resolution, permitting substantially improved differentiation of subtle shades of gray, margin resolution, and lesion conspicuity in the background of normal breast tissue (Fig 2). However, the cost of such a high insonating frequency is decreased penetration due to attenuation of the ultrasound beam, making visualization of deep posterior tissue difficult (ie, greater than 3 cm in depth by using a linear 17–5-MHz transducer or greater than 5 cm in depth by using a linear 12–5-MHz transducer).
During the initial US survey of the region of interest in the breast, the depth should be set so that the pectoralis muscle is visualized along the posterior margin of the field of view. Initial gain settings should be adjusted so that fat at all levels is displayed as a midlevel gray. Simple cysts are anechoic. Compared with breast fat, most solid masses are hypoechoic, while the skin, Cooper ligaments, and fibrous tissue are echogenic. Time gain compensation, which adjusts image brightness at different depths from the skin to compensate for attenuation of the ultrasound beam as it penetrates into the breast tissue, may be set manually or, with appropriate equipment, may be adjusted automatically during real-time scanning or even during postprocessing of the image.
When searching for a lesion initially identified at mammography or MR imaging, careful correlation with lesion depth and surrounding anatomic structures is imperative. Lesion location may be affected by the patient’s position, which differs during mammography, US, and MR imaging examinations. Attention to surrounding background tissue may assist in accurate lesion correlation across multiple modalities. If a mass identified at mammography or MR imaging is surrounded entirely by fat or fibroglandular tissue, at US it should also be surrounded by hypoechoic fat or echogenic fibroglandular tissue, respectively. Similarly, careful attention to the region of clinical concern is necessary when scanning a palpable abnormality to ensure that the correct area is scanned. The examiner should place a finger on the palpable abnormality and then place the transducer directly over the region. Occasionally, the US examination may be performed in the sitting position if a breast mass can only be palpated when the patient is upright.
After a lesion is identified, or while searching for a subtle finding, the depth or field of view may be adjusted as needed. The depth should be decreased to better visualize more superficial structures or increased to better visualize deeper posterior lesions. The use of multiple focal zones also improves resolution at multiple depths simultaneously and should be used, if available. Although this reduces the frame rate, the reduction is typically negligible when scanning relatively superficial structures within the breast. If a single focal zone is selected to better evaluate a single lesion, the focal zone should be centered at the same level as the area of interest or minimally posterior to the area of interest, for optimal visualization.
Spatial Compounding, Speckle Reduction, and Harmonic Imaging
Spatial compound imaging and speckle reduction are available on most high-end US units and should be routinely utilized throughout the breast US examination. Unlike standard US imaging, in which ultrasound pulses are transmitted in a single direction perpendicular to the long axis of the transducer, spatial compounding utilizes electronic beam steering to acquire multiple images obtained from different angles within the plane of imaging (7–9). A single composite image is then obtained in real-time by averaging frames obtained by ultrasound beams acquired from these multiple angles (10). Artifactual echoes, including speckle and other spurious noise, as well as posterior acoustic patterns, including posterior enhancement (characteristic of simple cysts) and posterior acoustic shadowing (characteristic of some solid masses), are substantially reduced. However, returning echoes from real structures are enhanced, providing improved contrast resolution (9) so that ligaments, edge definition, and lesion margins, including spiculations, echogenic halos, posterior and lateral borders, as well as microcalcifications, are better defined. Speckle reduction is a real-time postprocessing technique that also enhances contrast resolution, improves border definition, is complementary to spatial compounding, and can be used simultaneously.
When a lesion is identified, harmonic imaging may also be applied—usually along with spatial compounding—to better characterize a cyst or a subtle solid mass. The simultaneous use of spatial compounding and harmonic imaging may decrease the frame rate, although this usually does not impair real-time evaluation. Harmonic imaging relies on filtering the multiple higher harmonic frequencies, which are multiples of the fundamental frequencies. All tissue is essentially nonlinear to sound propagation and the ultrasound pulse is distorted as it travels through breast tissue, creating harmonic frequencies (9). The returning ultrasound signal therefore contains both the original fundamental frequency and its multiples, or harmonics. Harmonic imaging allows the higher harmonic frequencies to be selected and used to create the gray-scale images (8, 9). Lower-frequency superficial reverberation echoes are thereby reduced, allowing improved characterization of simple cysts (particularly if small) through the elimination of artifactual internal echoes often seen in fluid. Harmonic imaging also improves lateral resolution (10) and may also improve contrast between fatty tissue and subtle lesions, allowing better definition of subtle lesion margins and posterior shadowing (Fig 3).
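The following toy sketch conveys the core idea of harmonic imaging: a returning echo containing the fundamental transmit frequency plus its second harmonic is taken into the frequency domain, and only the band around the second harmonic is kept. The sampling rate, transmit frequency, and one-dimensional sine-wave "echo" are illustrative assumptions; real scanners apply such filtering (or pulse-inversion schemes) to beamformed radiofrequency data.

```python
# Toy sketch of the harmonic-imaging idea: a returning echo containing a
# fundamental frequency plus its second harmonic is separated in the
# frequency domain, keeping only the harmonic band. Highly simplified.
import numpy as np

fs = 40e6                      # sampling rate (Hz), illustrative
t = np.arange(0, 20e-6, 1 / fs)
f0 = 5e6                       # fundamental transmit frequency (Hz)

# Simulated distorted echo: fundamental plus a weaker second harmonic.
echo = np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 2 * f0 * t)

spectrum = np.fft.rfft(echo)
freqs = np.fft.rfftfreq(len(echo), d=1 / fs)

# Band-pass around the second harmonic (2*f0), suppressing the fundamental.
band = (freqs > 1.5 * f0) & (freqs < 2.5 * f0)
harmonic_only = np.fft.irfft(spectrum * band, n=len(echo))

dominant = freqs[np.argmax(np.abs(np.fft.rfft(harmonic_only)))]
print(f"dominant frequency after filtering: {dominant / 1e6:.1f} MHz")  # ~10 MHz = 2*f0
```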
Speed of Sound Imaging
Conventional US systems set the speed of sound in tissue at a uniform 1540 m/sec (10). However, the speed of sound in tissues of different composition is variable and this variability may compromise US image quality. Breast tissue usually contains fat, and the speed of sound in fat, of approximately 1430–1470 m/sec, is slower than the assumed standard (11). Accurate speed of sound imaging, in which the US transducer may be optimized for the presence of fat within breast tissue, has been shown to improve lateral resolution (12). Additionally, it can be used to better characterize tissue interfaces, lesion margins, and microcalcifications (13) and may also be useful to identify subtle hypoechoic lesions surrounded by fatty breast tissue. Speed of sound imaging is available on most high-end modern US units and is an optional adjustment, depending on whether predominately fatty, predominately dense, or mixed breast tissue is being scanned.
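A short worked example (with assumed, round numbers) shows why the fixed 1540 m/sec assumption matters: if an echo actually travels through fat at roughly 1450 m/sec, the scanner converts its arrival time into a depth that is several percent too deep, which degrades focusing and image quality.

```python
# Worked example (illustrative numbers): depth bias when a scanner assumes
# 1540 m/s but the echo actually travelled through fat at ~1450 m/s.
assumed_c = 1540.0   # m/s, conventional system assumption
true_c = 1450.0      # m/s, roughly mid-range for fat (1430-1470 m/s)

true_depth_mm = 30.0
round_trip_time = 2 * (true_depth_mm / 1000) / true_c        # seconds
displayed_depth_mm = assumed_c * round_trip_time / 2 * 1000  # what the image shows

print(f"true depth: {true_depth_mm:.1f} mm, displayed: {displayed_depth_mm:.2f} mm")
# ~31.9 mm displayed: about a 6% overestimate of depth for this echo.
```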
Lesion Annotation
When a mass is identified and the US settings are optimized, the mass should be scanned with US “sweeps” through the entire lesion in multiple planes. Images of the lesion in the radial and antiradial views should be captured and annotated with “right” or “left,” clock face position, and centimeters from the nipple. Radial and anti-radial scanning planes are preferred over standard transverse and sagittal scanning planes because scanning the breast along the normal axis of the mammary ducts and lobar tissues allows improved understanding of the site of lesion origin and better visualization of ductal extension and helps narrow the differential diagnosis (14). Images should be captured with and without calipers to allow margin assessment on static images. Lesion size should be measured in three dimensions, reporting the longest horizontal diameter first, followed by the anteroposterior diameter, then the orthogonal horizontal.
Extended-Field-of-View Imaging
Advanced US technology permits extended-field-of-view imaging beyond the footprint of the transducer. By using a freehand technique, the operator slides the transducer along the desired region to be imaged. The resultant images are stored in real-time and, by applying pattern recognition, a single large-field-of-view image is obtained (7). This can be helpful in measuring very large lesions as well as the distance between multiple structures in the breast and for assessing the relationship of multifocal disease (located in the same quadrant as the index cancer or within 4–5 cm of the index cancer, along the same duct system) and/or multicentric disease (located in a different quadrant than the index cancer, or at a distance greater than 4–5 cm, along a different duct system).
Doppler US
Early studies investigating the use of color, power, and quantitative spectral Doppler US in the breast reported that increased vascularity, as well as changes in the pulsatility and resistive indexes, could be used to reliably characterize malignant lesions (15,16). However, other investigators have demonstrated substantial overlap of many of these Doppler characteristics in both benign and malignant breast lesions (17). Gokalp et al (18) also demonstrated that the addition of power Doppler US and spectral analysis to BI-RADS US features of solid breast masses did not improve specificity. While the current BI-RADS US lexicon recommends evaluation of lesion vascularity, it is not considered mandatory (5).
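For context, the resistive and pulsatility indexes mentioned above are standard spectral Doppler quantities derived from the peak-systolic, end-diastolic, and time-averaged mean velocities of a sampled waveform. The minimal sketch below shows the usual definitions; the velocity values are arbitrary illustrative numbers, not diagnostic thresholds from the cited studies.

```python
# Standard spectral Doppler indexes:
#   RI = (PSV - EDV) / PSV
#   PI = (PSV - EDV) / TAMV
# The example velocities below are purely illustrative.
def resistive_index(psv: float, edv: float) -> float:
    """Resistive index from peak-systolic and end-diastolic velocities (cm/s)."""
    return (psv - edv) / psv

def pulsatility_index(psv: float, edv: float, tamv: float) -> float:
    """Pulsatility index, normalized by the time-averaged mean velocity (cm/s)."""
    return (psv - edv) / tamv

psv, edv, tamv = 25.0, 5.0, 12.0   # cm/s, illustrative waveform measurements
print(f"RI = {resistive_index(psv, edv):.2f}, "
      f"PI = {pulsatility_index(psv, edv, tamv):.2f}")
```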
Power Doppler is generally more sensitive than color Doppler to the low-flow volumes typical of breast lesions. Light transducer pressure is necessary to prevent occlusion of slow flow owing to compression of the vessel lumen. Currently both power and color Doppler are complementary tools to gray-scale imaging, and power Doppler may improve sensitivity in detecting malignant breast lesions (18,19). Demonstration of irregular branching central or penetrating vascularity within a solid mass raises suspicion of malignant neovascularity (20). Recently, the parallel artery and vein sign has been described as a reliable predictor of benignity in solid masses, potentially allowing biopsy to be avoided. In a single study, a paired artery and vein was present in 13.2% of over 1000 masses at US-guided core needle biopsy (CNB); although an infrequent finding, the specificity for benignity was 99.3% and the false-negative rate was only 1.4%, with two malignancies among 142 masses in which the parallel artery and vein sign was identified (21).
Color and power Doppler US are also useful to evaluate cysts and complex cystic masses that contain a solid component. High-grade invasive cancer and metastatic lymph nodes may occasionally appear anechoic. Demonstration of flow within an otherwise simple-appearing cyst, a complicated cyst, or a complex mass confirms the presence of a suspicious solid component, which requires biopsy. In addition, twinkle artifact seen with color Doppler US is useful to identify a biopsy marker clip or subtle echogenic microcalcifications (Fig 4). This Doppler color artifact occurs secondary to the presence of a strong reflecting granular surface and results in a rapidly changing mix of color adjacent to and behind the reflector (22). Care must be taken to avoid mistaking twinkle artifact for true vascular flow and, if in doubt, a spectral Doppler tracing can be obtained, as a normal vascular waveform will not be seen with a twinkle artifact.
Elastography
At physical examination, it has long been recognized that malignant tumors tend to feel hard when compared with benign lesions. US elastography can be used to measure tissue stiffness, with the potential to improve specificity in the diagnosis of breast masses. Two forms of US elastography are currently available: strain and shear-wave. With either technique, acoustic information regarding lesion stiffness is converted into a black-and-white or color-scaled image that can be superimposed on the B-mode gray-scale image.
Strain elastography requires gentle compression with the US probe or natural patient motion (such as heartbeat, vascular pulsation, or respiration), which results in tissue displacement, or strain. Strain (ie, tissue compression and motion) is decreased in hard tissues compared with soft tissue (23). Strain elastography is primarily qualitative, although strain ratios may be calculated by comparing the strain of a lesion with that of the surrounding normal tissue. Benign breast lesions generally have lower strain ratios than malignant lesions (24,25).
Shear-wave elastography is based on the principle of acoustic radiation force. With use of light transducer pressure, transient automatic pulses can be generated by the US probe, inducing transversely oriented shear waves in tissue. The US system captures the velocity of these shear waves, which travel faster in hard tissue compared with soft tissue (26). Shear-wave elastography provides quantitative information because the elasticity of the tissue can be measured in meters per second or in kilopascals, a unit of pressure.
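Although stiffness may be reported either as a speed or in kilopascals, the two reporting conventions are commonly related through the standard soft-tissue approximation E ≈ 3ρc², where ρ is tissue density (about 1000 kg/m³) and c is the shear-wave speed; this conversion is a general assumption, not a statement from the article. The sketch below applies it to a few illustrative speeds.

```python
# Standard soft-tissue approximation (assumed here, not from the article):
# for nearly incompressible tissue, Young's modulus E ~ 3 * rho * c^2,
# where c is the shear-wave speed reported by the scanner.
RHO = 1000.0  # kg/m^3, assumed tissue density

def youngs_modulus_kpa(shear_wave_speed_m_s: float) -> float:
    """Convert a shear-wave speed (m/s) to an approximate Young's modulus (kPa)."""
    return 3.0 * RHO * shear_wave_speed_m_s**2 / 1000.0

for c in (1.5, 3.0, 5.2, 7.3):   # illustrative speeds spanning soft to stiff tissue
    print(f"c = {c:.1f} m/s  ->  E ~ {youngs_modulus_kpa(c):.0f} kPa")
```

Under this approximation, the 80-kPa and 160-kPa stiffness values discussed below correspond to shear-wave speeds of roughly 5.2 and 7.3 m/sec, respectively.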
Elastography features such as strain ratios, size ratios, shape, homogeneity, and maximum lesion stiffness may complement conventional US in the analysis of breast lesions. Malignant masses tend to appear more irregular and heterogeneous at elastography and typically appear larger than at gray-scale imaging (Fig 5) (27,28). Although malignant lesions generally also exhibit maximum stiffness greater than 80–100 kPa (28,29), caution is necessary when applying these numerical values to lesion analysis. Berg et al (28) reported three cancers among 115 masses with maximum stiffness between 20 and 30 kPa, for a 2.6% malignancy rate; 25 cancers among 281 masses with maximum stiffness between 30 and 80 kPa, for an 8.9% malignancy rate; and 61 cancers among 153 masses with maximum stiffness between 80 and 160 kPa, for a 39.9% malignancy rate (28). Invasive cancers with high histologic grade, large tumor size, nodal involvement, and vascular invasion have also been shown to be significantly correlated with high mean stiffness at shear-wave elastography (30).
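The malignancy rates quoted from Berg et al can be reproduced directly from the counts given above; the short check below uses only those published counts and no additional data.

```python
# Arithmetic check of the Berg et al counts quoted above
# (cancers / masses per maximum-stiffness band).
bands = {
    "20-30 kPa": (3, 115),
    "30-80 kPa": (25, 281),
    "80-160 kPa": (61, 153),
}
for band, (cancers, masses) in bands.items():
    print(f"{band}: {cancers}/{masses} = {100 * cancers / masses:.1f}% malignant")
```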
Elastography may be useful in improving the specificity of US evaluation of BI-RADS 3 and 4A lesions, including complicated cysts. Berg and colleagues (28) showed that by using qualitative shear-wave elastography and color assessment of lesion stiffness, oval shape, and a maximum elasticity value of less than 80 kPa, unnecessary biopsy of low-suspicion BI-RADS 4A masses could be reduced without a significant loss in sensitivity. Several investigators have proposed a variety of imaging classifications using strain elastography, mostly based on the color pattern (27,31,32). A “bull’s eye” artifact has also been described as a characteristic feature present in benign breast cysts, which may appear as a round or oval lesion with a stiff rim associated with two soft spots, one located centrally and the other posteriorly (33).
Despite these initial promising studies regarding the role of US elastography in the analysis of breast lesions, limitations do exist. Strain and shear-wave elastography are quite different methods of measuring breast tissue stiffness, and the application of these methods varies across different commercial manufacturers. Inter- and intraobserver variability may be relatively high because the elastogram may be affected by differences in degree and method of compression. With strain elastography, a quality indicator, such as an associated color bar or numerical value, may be helpful to ensure proper light compression. Shear-wave elastography has been shown to be less operator-dependent, as tissue compression is initiated by the US probe in a standard, reproducible fashion (34) and only light transducer pressure is necessary. In addition, there is currently no universal color-coding standard and, depending on the manufacturer and/or operator preference, stiff lesions may be arbitrarily coded to appear red while soft lesions appear blue, or vice versa. Some elastography features such as the “bull’s eye” artifact are only seen on specific US systems. Lesions deeper than 2 cm are less accurately characterized by means of elastography. Moreover, one must be aware that soft cancers and hard benign lesions exist. Therefore, careful correlation of elastography with B-mode US features and mammography is essential. Future studies and further technical advances, including the creation of more uniformity across different US manufacturers, will ultimately determine the usefulness of elastography in clinical practice.
Three-dimensional US
Both handheld and automated high-resolution linear 3D transducers are now available for use in breast imaging. With a single pass of the ultrasound beam, a 3D reconstructed image can be formed in the coronal, sagittal, and transverse planes, potentially allowing more accurate assessment of anatomic structures and tumor margins (Fig 6). Few studies regarding the performance of 3D US in the breast exist, but a preliminary study demonstrated improved characterization of malignant lesions (35). Automated supine whole-breast US using 3D technology is now widely available for use in the screening setting (see section on screening breast US). Three-dimensional US may also be used in addition to computed tomography for image-guided radiation therapy (36) and has a potential role in assessing tumor response to neoadjuvant chemotherapy.
US Features of Benign and Malignant Breast Lesions
Cysts
Although for many years the main function of breast US was to differentiate cysts from solid masses, this differentiation can at times be problematic, particularly if the lesion is small or located deep in the breast. Simple cysts are defined as circumscribed, anechoic masses with a thin, imperceptible wall and enhanced through-transmission (provided spatial compounding is not used). By convention, simple cysts may also contain up to a single thin septation. Simple cysts are confidently characterized with virtually 100% accuracy at US (14,37), provided that they are not very small (<5 mm in size) or located in deep tissue. Complicated cysts are hypoechoic with no discernible Doppler flow, contain internal echoes, and may also exhibit indistinct margins and/or lack posterior acoustic enhancement. Clustered microcysts consist of a cluster of tiny (<2–3 mm in size) anechoic foci with thin (<0.5 mm in thickness) intervening septations.
Complicated cysts are very common sonographic findings and the majority are benign. In multiple studies, which evaluated over 1400 complicated cysts and microcysts, the malignancy rate ranged from 0% to 0.8% (38–44). Most complicated cysts and clustered microcysts with a palpable or mammographic correlate are classified as BI-RADS 3 and require short-interval imaging follow-up or, occasionally, US-guided aspiration. However, in the screening US setting, if multiple and bilateral complicated and simple cysts are present (ie, at least three cysts with at least one cyst in each breast), these complicated cysts can be assessed as benign, BI-RADS 2, requiring no additional follow-up (38).
Complicated cysts should never demonstrate internal vascularity at color Doppler interrogation. The presence of a solid component, mural nodule, thickened septation, or thickened wall within a cystic mass precludes the diagnosis of a benign complicated cyst. These complex masses require biopsy, as some cancers may have cystic components. The application of compound imaging and harmonics, color Doppler, and potentially elastography may help differentiate benign complicated cysts from malignant cystic-appearing masses and reduce the need for additional follow-up or biopsy.
Solid Masses
Sonographic features of benign-appearing solid masses include an oval or ellipsoid shape, “wider-than-tall” orientation parallel to the skin, circumscribed margins, gentle and smooth (fewer than three) lobulations, as well as absence of any malignant features (2,45) (Fig 2b). Lesions with these features are commonly fibroadenomas or other benign masses and can often be safely followed, even if the mass is palpable (46–48). Malignant features of solid masses include spiculations, angular margins, marked hypoechogenicity, posterior acoustic shadowing, microcalcifications, ductal extension, branching pattern, and 1–2-mm microlobulations (2,45) (Figs 1b, 5, 6). These are also often taller-than-wide lesions with a nonparallel orientation to the skin and may occasionally be associated with thickened Cooper ligaments and/or skin thickening. Most cancers have more than one malignant feature, spiculation being the most specific and angular margins the most common (2).
There is, however, considerable overlap between these benign and malignant US features, and careful scanning technique, as well as direct correlation with mammography, is essential. For example, some high-grade invasive ductal carcinomas with central necrosis, as well as the well-differentiated mucinous and medullary subtypes, may present as circumscribed, oval, hypoechoic masses that can resemble complicated cysts with low-level internal echoes at US. Benign focal fibrous breast tissue or postoperative scars can appear as irregular shadowing masses on US images. Furthermore, while echogenic lesions are often benign and frequently represent lipomas or fibrous tissue, echogenic cancers do occur, albeit rarely (Figs 7, 8) (49,50). The presence of a single malignant feature, despite the presence of multiple benign features, precludes a benign classification and mandates biopsy, with the exception of fat necrosis and postoperative scars exhibiting typical benign mammographic features. Likewise, a mass with a benign US appearance should be biopsied if it exhibits any suspicious mammographic features.
Ductal Carcinoma in Situ
Ductal carcinoma in situ (DCIS) is characteristically associated with microcalcifications detected at mammography, but it may also be detected at US, since the calcifications are often associated with a subtle hypoechoic mass that may indicate a mammographically occult invasive component. US features associated with DCIS most commonly include a hypoechoic mass with an irregular shape, microlobulated margins, no posterior acoustic features, and no internal vascularity. Ductal abnormalities, intracystic lesions, and architectural distortions may also be present (51–53). Noncalcified DCIS manifesting as a solid mass at US is more frequently found in non–high-grade than high-grade DCIS, which is more often associated with microcalcifications and ductal changes (54). US can depict microcalcifications, particularly those in clusters greater than 10 mm in size and located in a hypoechoic mass or a ductlike structure (Fig 9) (55). Malignant calcifications are more likely to be detected sonographically than are benign calcifications, which may be obscured by surrounding echogenic breast tissue (55,56). Although US is inferior to mammography in the detection of suspicious microcalcifications, the main benefit of US detection of DCIS is to identify the invasive component and guide biopsy procedures.
Breast US in Clinical Practice
Current indications for breast US as recommended by the American College of Radiology Practice Guidelines include the evaluation of palpable abnormalities or other breast symptoms, assessment of mammographic or MR imaging–detected abnormalities, and evaluation of breast implants (57). Additionally, US is routinely used for guidance during interventional procedures, treatment planning for radiation therapy, screening in certain groups of women, and evaluation of axillary lymph nodes. Much literature has been written on these uses and a comprehensive discussion is beyond the scope of this article. A few important and timely topics, however, will be reviewed.
BI-RADS US
The BI-RADS US lexicon was introduced in 2003, and subsequently there have been several studies assessing the accuracy of BI-RADS US classification of breast lesions. Low to moderate interobserver agreement has been found in the description of margins (especially noncircumscribed margins), echogenicity, and posterior acoustic features. Abdullah et al (58) reported low interobserver agreement, especially for small masses and for malignant masses. Given the importance of margin analysis in the characterization of benign and malignant lesions, this variability is potentially problematic. Studies have also shown variable results in the use of the final assessment categories. In clinical settings, Raza et al (46) showed inconsistent use of the BI-RADS 3 (probably benign) category, with biopsy recommended in 14.0% of cases despite the probably benign assessment. Abdullah et al also demonstrated fair to poor interobserver agreement for the BI-RADS 4 (suspicious for malignancy) a, b, and c subcategories (58). However, Henig et al (59) reported more promising results, with malignancy rates in categories 3, 4, and 5 similar to those seen with mammographic categorization (1.2%, 17%, and 94%, respectively).
Evaluation of Mammographic Findings
Targeted US is complementary to diagnostic mammography because of its ability to differentiate cystic and solid lesions. US is also useful in the workup of subtle asymmetries, as it can help identify or exclude the presence of an underlying mass. True hypoechoic lesions can often be differentiated from prominent fat lobules by scanning in multiple planes, because true lesions usually do not blend or elongate into adjacent tissue. With the introduction of digital breast tomosynthesis for mammographic imaging, US will play yet another important role. Because lesions detected on digital breast tomosynthesis images can often be localized and adequately assessed for margins on the 3D images, patients with such lesions detected at screening may often be referred directly to US, avoiding additional mammographic imaging and its associated costs and radiation exposure (Fig 10). This will place an even greater importance on high-quality US.
Evaluation of the Symptomatic Patient: Palpable Masses, Breast Pain, and Nipple Discharge
US is essential in the evaluation of patients with the common clinical complaint of either a palpable mass or focal persistent breast pain. Unlike focal breast pain, which may occasionally be associated with benign or malignant lesions, diffuse breast pain (bilateral or unilateral), as well as cyclic breast pain, requires only clinical follow-up, as it is usually physiologic with an extremely low likelihood of malignancy (60,61). In patients with isolated focal breast pain, the role of sonography may be limited to patient reassurance (61). In women younger than 30 years of age with a palpable lump or focal breast pain, US is the primary imaging test, with a sensitivity and negative predictive value of nearly 100% (62). Symptomatic women older than 30 years usually require both US and mammography, and in these patients, the negative predictive value approaches 100% (63,64). Lehman et al (65) demonstrated that in symptomatic women aged 30–39 years, the risk of malignancy was 1.9% and the added value of adjunct mammography in addition to US was low. Identification of a benign-appearing solid lesion at US in a symptomatic woman can negate the need for needle biopsy, as many of these masses can safely be monitored with short-interval follow-up US (46–48), usually performed at 6 months. A suspicious mass identified at US can promptly undergo biopsy with US guidance.
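The near-perfect negative predictive values cited above follow from the standard 2x2 definitions once the pretest probability of malignancy is low. The sketch below computes a negative predictive value from sensitivity, specificity, and the 1.9% malignancy risk reported for symptomatic women aged 30–39 years; the sensitivity and specificity values are assumed purely for illustration and are not taken from the cited studies.

```python
# Standard 2x2 definitions only; the prevalence (1.9%) is from the text,
# while the sensitivity and specificity below are illustrative assumptions.
def negative_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    true_negative = specificity * (1 - prevalence)
    false_negative = (1 - sensitivity) * prevalence
    return true_negative / (true_negative + false_negative)

npv = negative_predictive_value(sensitivity=0.97, specificity=0.90, prevalence=0.019)
print(f"NPV ~ {npv:.3f}")   # ~0.999 with these assumed inputs
```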
US can also be used as an alternative or an addition to ductography in patients who present with unilateral, spontaneous bloody, clear, or serosanguinous nipple discharge (66). Among women with worrisome nipple discharge, ductography can demonstrate an abnormality in 59%–82% of women (67,68), MR imaging may demonstrate a suspicious abnormality in 34% of women (68), and US has been shown to demonstrate a subareolar mass or an intraductal mass or filling defect in up to 14% of women (67). If US can be used to identify a retroareolar mass or an intraductal mass, US-guided biopsy can be performed and ductography may be avoided (Fig 11). US may be limited, however, as small peripherally located intraductal masses or masses without an associated dilated duct may not be identified. Therefore, galactography, MR imaging, and/or major duct excision may still be necessary in the symptomatic patient with a negative US examination.
Finally, in the pregnant or lactating patient who presents with a palpable breast mass, focal breast pain, or bloody nipple discharge, US is also the initial imaging modality of choice. Targeted US examination in these patients can be used to identify most benign and malignant masses, including fibroadenomas, galactocoeles, lactating adenomas, abscesses, and invasive carcinomas. In a recent study by Robbins et al (69), a negative predictive value of 100% was found among 122 lesions evaluated with US in lactating, pregnant, or postpartum women. This is much higher than the pregnancy-associated breast cancer sensitivity of mammography, which has been reported in the range of 78%–87% (70,71). The diminished sensitivity of mammography is likely due to increased parenchymal density seen in these patients. However, since lactating breast parenchyma is more echogenic than most breast masses, hypoechoic breast cancers are more readily detected at US in pregnant patients.
Supplemental Screening Breast US
Because of the known limitations of mammography, particularly in women with dense breast tissue, supplemental screening with whole-breast US, in addition to mammography, is gaining increasingly widespread acceptance. Numerous independent studies have demonstrated that the addition of a single screening whole-breast US examination in women with dense breast tissue at mammography will yield an additional 2.3–4.6 mammographically occult cancers per 1000 women (72–80). Mammographically occult cancers detected on US images are generally small node-negative invasive cancers (Fig 12) (81). However, few studies have investigated the performance of incident screening breast US, and the optimal screening US interval is unknown. Berg and colleagues (82) recently demonstrated that incident annual supplemental screening US in intermediate- and high-risk women with mammographically dense breast tissue enabled detection of an additional 3.7 cancers per 1000 women screened.
Handheld screening breast US is highly operator-dependent and the majority of screening breast US studies have relied on physician-performed examinations. As per the ACRIN 6666 protocol, a normal screening US examination should consist of a minimum of one image in each quadrant and one behind the nipple (83). Two studies have also demonstrated that technologist-performed handheld screening breast US can achieve similar cancer detection rates (76,78).
Automated whole-breast US is a recently developed alternative to traditional handheld screening breast US, in which standardized, uniform image sets may be readily obtained by a nonradiologist. Automated whole-breast US systems may utilize a standard US unit and a linear-array transducer attached to a computer-guided mechanical arm or a dedicated screening US unit with a 15-cm wide transducer (84,85). With these systems, over 3000 overlapping sagittal, transverse, and coronal images are obtained and available for later review by the radiologist, with associated 3D reconstruction. The advantages include less operator dependence, increased radiologist efficiency, and increased reproducibility, which could aid in follow-up of lesions.
A multi-institutional study has shown that supplemental automated whole-breast US can depict an additional 3.6 cancers per 1000 women screened, similar to physician-performed handheld screening US (85). However, disadvantages include the limited ability to scan the entire breast, particularly posterior regions in large breasts, time-consuming review of a large number of images by the radiologist, and the need to recall patients for a second US examination to re-evaluate indeterminate findings. Moreover, few investigators have compared the use of handheld with automated breast US screening. A single small recent study by Chang et al (86) demonstrated that of 14 cancers initially detected at handheld screening, only 57%–79% were also detected by three separate readers on automated whole-breast US images; the two cancers missed by all three readers at automated whole-breast US were each less than 1 cm in size.
The use of supplemental screening breast US, performed in addition to mammography, remains controversial despite proof of the ability to detect small mammographically occult cancers. US has limited value for the detection of small clustered microcalcifications without an associated mass lesion. Low positive predictive values, of less than 12%, for biopsies prompted by screening US have been consistently reported (77,87). No outcome study has been able to demonstrate a direct decrease in patient mortality due to the detection of these additional small and mammographically occult cancers; this would require a long, randomized screening trial, which is not feasible. Rationally, however, the early detection and treatment of additional small breast cancers should improve outcomes and reduce overall morbidity and mortality. Many insurance companies will not reimburse for screening breast US and, historically, this examination has not been widely accepted in the United States.
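The supplemental yield and biopsy positive predictive value figures discussed above are simple ratios, and the sketch below shows how they are derived. The counts used are hypothetical round numbers chosen only so that the resulting ratios fall within the ranges reported; they are not data from any cited trial.

```python
# Hypothetical counts for illustration only (not from any cited study):
women_screened = 10000
additional_cancers_found = 35      # mammographically occult cancers found at screening US
us_prompted_biopsies = 400         # biopsies generated by screening US findings

yield_per_1000 = 1000 * additional_cancers_found / women_screened
biopsy_ppv = additional_cancers_found / us_prompted_biopsies

print(f"Supplemental cancer detection rate: {yield_per_1000:.1f} per 1000 women")
print(f"Positive predictive value of US-prompted biopsy: {biopsy_ppv:.1%}")
```

With these assumed counts the yield is 3.5 cancers per 1000 women and the biopsy positive predictive value is 8.8%, consistent with the 2.3–4.6 per 1000 and less-than-12% figures quoted above.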
Nevertheless, because of both the known efficacy of supplemental screening breast US and overall increased breast cancer awareness, more patients and clinicians are requesting this examination. In fact, some states now mandate that radiologists inform women of their breast density and advise them to discuss supplemental screening with their doctors. Although supplemental screening breast MR imaging is usually preferred for women who are at very high risk for breast cancer (ie, women with a lifetime risk of over 20%, for example those women who are BRCA positive or have multiple first-degree relatives with a history of premenopausal breast cancer), screening breast US should be considered in women at very high risk for breast cancer who cannot tolerate breast MR imaging, as well as those women with dense breast tissue and intermediate risk (ie, lifetime risk of 15%–20%, for example those women whose only risk factor is a personal history of breast cancer or previous biopsy of a high-risk lesion), or even average risk. Future studies are needed to establish strategies to reduce false-positive results and continue to optimize both technologist-performed handheld screening US and automated whole-breast US in women with mammographically dense breast tissue.
Use of US for MR Imaging–depicted Abnormalities
MR imaging of the breast is now an integral part of breast imaging, most commonly performed to screen high-risk women and to further assess the stage of disease in patients with newly diagnosed breast cancer. While MR imaging has a higher sensitivity than mammography for detecting breast cancer, its specificity is relatively low (88). Lesions detected on MR images are often mammographically occult, but many can be detected with targeted US (Fig 13). Besides further US characterization of an MR imaging–detected lesion, US may be used to guide intervention for lesions initially detected at MR imaging. US-guided biopsies are considerably less expensive, less time consuming, and more comfortable for the patient than MR imaging–guided biopsies.
Some suspicious lesions detected at MR imaging will represent invasive ductal or lobular cancers, but many may prove to be intraductal disease, which can be challenging to detect at US. Meticulous scanning technique is required for an MR imaging–directed US examination, with knowledge of subtle sonographic signs and close correlation with the MR imaging findings and location. Precontrast T1 images are helpful to facilitate localization of lesions in relation to fibroglandular tissue (89). Because MR imaging abnormalities tend to be vascular, increased vascularity may also assist in detection of a subtle sonographic correlate (90). Having the MR images available for simultaneous review while performing the US examination will ideally permit such associative correlation. At the authors’ facility, computer monitors displaying images from the picture archiving and communication system are available in all US rooms for this purpose.
Recent studies have shown that 46%–71% of lesions at MR imaging can be detected with focused US (90–94). Enhancing masses detected on MR images are identified on focused US images in 58%–65% of cases compared with nonmass enhancement, which is identified on focused US images in only 12%–32% of cases (90–92). Some studies have shown that US depiction of an MR imaging correlate was independent of size (91,93,95). However, Meissnitzer et al (92) showed that size dependence is also important: For masses 5 mm or smaller, only 50% were seen, versus 56% for masses 6–10 mm, 73% for masses 11–15 mm, and 86% for masses larger than 15 mm. Likewise, this study also demonstrated that for nonmass lesions, a US correlate was found for 13% of those measuring 6–10 mm, 25% of those 11–15 mm, and 42% of those larger than 15 mm (92). In addition, many of these studies determined that when a sonographic correlate was discovered, the probability of malignancy was increased (90–92). Since typical US malignant features such as spiculation and posterior shadowing may be absent and the pretest probability is higher for MR imaging–detected lesions, a lower threshold for biopsy should be considered when performing MR imaging–directed US compared with routine targeted US (90) or screening US.
Because lesions are often very subtle at MR-directed US examination and because of differences in patient positioning during the two examinations, careful imaging–histologic correlation is required when performing US-guided biopsy of MR imaging–detected abnormalities. For lesions sampled with a vacuum-assisted device and US guidance, Sakamoto et al (96) found a higher rate of false-negative biopsy results for MR imaging–detected lesions than for US-detected lesions, suggesting that precise US-MR imaging correlation may not have occurred. Meissnitzer et al (92) showed that although 91% of MR imaging–detected lesions had an accurate US correlate, 9% were found to be inaccurate. With ever-improving techniques and experience in breast US, the US visualization of MR imaging–detected abnormalities will likely continue to improve. Nevertheless, if a suspicious lesion is not identified sonographically, MR imaging–guided biopsy should still be performed, because the malignancy rate of sonographically occult MR imaging–detected lesions has been shown to range from 14% to 22% (91,95).
Preoperative Staging of Cancer with US
Breast MR imaging has been shown to be more sensitive than US in the detection of additional foci of mammographically occult disease in women with newly diagnosed breast cancer (97–99). Nevertheless, when a highly suspicious mass is identified at mammography and US, immediate US evaluation of the remainder of the ipsilateral breast, the contralateral breast, and the axilla should be considered. If additional lesions are identified, preoperative staging with MR imaging can be avoided and US-guided biopsy can be promptly performed, saving the patient valuable time and expense (100). In a study by Moon et al (101), of 201 patients with newly diagnosed breast cancer, staging US demonstrated mammographically occult multifocal or multicentric disease in 28 patients (14%) and contralateral breast cancers in eight patients (4%), resulting in a change in therapy in 32 patients (16%).
US can also be used to identify abnormal axillary, supraclavicular, and internal mammary lymph nodes. Abnormal lymph nodes characteristically demonstrate focal or diffuse cortical thickening (≥3 mm in thickness), a round (rather than oval or reniform) shape, loss of the echogenic fatty hilum, and/or nonhilar, disorganized, irregular blood vessels (102,103) (Fig 14). A positive US-guided CNB or fine-needle aspiration of a clinically abnormal axillary lymph node in a patient with known breast cancer can aid patient management by avoiding the need for sentinel node biopsy and allowing the patient instead to proceed directly to axillary lymph node dissection or neoadjuvant chemotherapy.
Interventional Breast US
US-guided interventional procedures have increased in volume in recent years and US is now the primary biopsy guidance technique used in many breast imaging centers. Most palpable lesions, as well as lesions detected at mammography, MR imaging, or screening US, can be sampled with US. With current high-resolution transducers, even suspicious intraductal microcalcifications may be detected and sampled.
US-guided procedures require technical skills that must be developed and can be challenging; once mastered, however, the technique allows precise real-time sampling of the lesion, which is not possible with either stereotactic or MR imaging–guided procedures. US-guided procedures do not require ionizing radiation or intravenous contrast material. US procedures are more tolerable for patients than stereotactic (104) or MR imaging–guided procedures because they are faster and more comfortable, as breast compression and uncomfortable biopsy coils or tables are not necessary and the procedure may be performed with the patient supine (104–106).
Most literature has shown that automated 14-gauge CNB devices are adequate for the majority of US-guided biopsies (107–115). Image-guided CNB is preferable to fine-needle aspiration cytology of breast masses because of superior sensitivity, specificity, and diagnostic accuracy (116). The presence of DCIS or invasive malignancy, as well as the hormone receptor status of invasive breast cancers, can be determined with CNB samples but not with fine-needle aspiration cytology. Fine-needle aspiration may be performed, however, in complicated cysts and symptomatic simple cysts. In these cases, the cyst aspirate fluid can often be discarded; cytology is usually only necessary if the fluid is frankly bloody (117).
The choice of performing fine-needle aspiration or CNB of a suspicious axillary lymph node depends on radiologist preference and the availability of an experienced cytopathologist, although CNB is usually more accurate than fine-needle aspiration biopsy (118,119). Fine-needle aspiration may be preferred for suspicious deep lymph nodes in proximity to the axillary vessels, whereas CNB may be preferred in large nodes with thickened cortices, particularly if determination of hormone receptor status or immunohistochemistry is desired, since more tissue is required for these assays. If lymphoma is suspected, a core should be placed in saline in addition to the cores placed in conventional formalin.
While the underestimation rate of malignancy can be considerable for high-risk lesions such as atypical hyperplasia, such histology is not commonly found in lesions undergoing US-guided CNB. Multiple studies have shown a false-negative rate for US-guided CNB of around 2%–3% (107–115). Although the contiguous and larger samples obtained with a vacuum-assisted biopsy device undoubtedly reduce sampling error, vacuum-assisted biopsy is a more expensive and more invasive procedure (109). In the authors’ experience, vacuum-assisted US-guided biopsy should be considered for small masses, intraductal or intracystic lesions, or lesions with subtle microcalcifications, which may be difficult to sample adequately with a spring-loaded automatic firing device. Alternatively, for more accurate sampling of such challenging cases, as well as some axillary lymph nodes and masses smaller than 1 cm in size, automated CNB needles designed to place the inner trough of the needle within a lesion before firing can be utilized (Fig 15). With this technique, the sampling trough of the CNB needle can be clearly visualized within the lesion before the overlying outer sheath is fired. Regardless of needle choice, a postbiopsy marker clip should be placed, followed by a postbiopsy mammogram to document clip position. This will assist with follow-up imaging, facilitating mammography and/or MR imaging correlation.
There has been recent interest in the percutaneous removal of benign breast lesions by using US-guided vacuum-assisted biopsy. While in general, proved benign concordant lesions can safely remain in the breast, some patients desire removal. Percutaneous US-guided removal with a vacuum-assisted biopsy device can replace surgical removal in some cases, particularly for small lesions (1 cm in size or less). Several reports have shown promising results demonstrating rates of complete lesion excision, varying from 61% to 94% (120–124). Dennis et al (125) demonstrated that vacuum-assisted US-guided biopsy could be used to excise intraductal lesions resulting in resolution of problematic nipple discharge in 97% of patients. Even on long-term follow-up, most studies show low rates of residual masses, more commonly observed in larger fibroadenomas.
Intraoperative Breast US
The use of two-dimensional and 3D intraoperative US may decrease the incidence of positive margins and decrease re-excision rates (126–130), particularly in the setting of lumpectomy for palpable cancers, when US is used to assess the adequacy of surgical margins and determine the need for additional tissue removal. Similarly, intraoperative US has also been utilized to improve detection and removal of metastatic lymph nodes during sentinel lymph node assessment (131).
Future Directions
Intravenous US microbubble contrast agents have been used to enhance US diagnosis through analysis of enhancement patterns, rates of uptake and washout, and identification of tumor angiogenesis. In addition, preliminary research has shown that intravenous US contrast agents may be able to depict tissue function, with the potential to deliver targeted gene therapy to selected tumor cells (132). However, there are currently no intravenous US contrast agents approved by the U.S. Food and Drug Administration for use in breast imaging. Other potential advances in breast US include fusion imaging, which involves the direct overlay of correlative MR imaging with targeted US. Another evolving area is US computer-aided detection, which may be of particular benefit when combined with automated whole-breast screening US.
Summary
Technical advances in US now allow comprehensive US diagnosis, management, and treatment of breast lesions. Optimal use of US technology, meticulous scanning technique with careful attention to lesion morphology, and recognition and synthesis of findings from multiple imaging modalities are essential for optimal patient management. In the future, as radiologists utilize US for an ever-increasing scope of indications, become aware of the more subtle sonographic findings of breast cancer, and apply newly developing tools, the value of breast US will likely continue to increase and evolve.
Essentials
• Breast US is operator dependent; knowledge and understanding of the various technical options currently available are important for image optimization and accurate diagnosis.
• US is an interactive, dynamic modality and real-time scanning is necessary to assess subtle findings associated with malignancy.
• Ability to synthesize the information obtained from the breast US examination with concurrent mammography, MR imaging, and clinical breast examination is necessary for accurate diagnosis.
• The use of screening breast US in addition to mammography, particularly in women with dense breast tissue, is becoming more widely accepted in the United States.
• Breast US guidance is the primary biopsy method used in most breast imaging practices, and the radiologist should be familiar with various biopsy devices and techniques to adequately sample any breast mass identified at US.
Disclosures of Conflicts of Interest: R.J.H. No relevant conflicts of interest to disclose. L.M.S. Financial activities related to the present article: none to disclose. Financial activities not related to the present article: educational consultant in vascular US to Philips Healthcare; payment for lectures on breast US from Educational Symposia; payment for development of educational presentations from Philips Healthcare. Other relationships: none to disclose. L.E.P. Financial activities related to the present article: none to disclose. Financial activities not related to the present article: consultant to Hologic. Other relationships: none to disclose.
Abbreviations:
BI-RADS = Breast Imaging Reporting and Data System
References:
Training the ACRIN 6666 Investigators and effects of feedback on breast ultrasound interpretive performance and agreement in BI-RADS ultrasound feature analysis. AJR Am J Roentgenol 2012;199(1):224–235.
Quantitative vascularity of breast masses by Doppler imaging: regional variations and diagnostic implications. J Ultrasound Med 2000;19(7):427–440; quiz 441–442.
Does power Doppler ultrasonography improve the BI-RADS category assessment and diagnostic accuracy of solid breast lesions? Acta Radiol 2011;52(7):706–710.
Breast ultrasound elastography: results of 193 breast lesions in a prospective study with histopathologic correlation. Eur J Radiol 2011;77(3):450–456.
Pattern classification of ShearWave™ Elastography images for differential diagnosis between benign and malignant solid breast masses. Acta Radiol 2011;52(10):1069–1075.
The contribution of three-dimensional power Doppler imaging in the preoperative assessment of breast tumors: a preliminary report. Obstet Gynecol Int 2009;2009:530579.
Short-term follow-up of palpable breast lesions with benign imaging features: evaluation of 375 lesions in 320 women. AJR Am J Roentgenol 2009;193(6):1723–1730.
Ultrasound findings and histological features of ductal carcinoma in situ detected by ultrasound examination alone. Breast Cancer 2010;17(2):136–141.
Targeted ultrasound in women younger than 30 years with focal breast signs or symptoms: outcomes analyses and management implications. AJR Am J Roentgenol 2010;195(6):1472–1477.
Accuracy and value of breast ultrasound for primary imaging evaluation of symptomatic women 30–39 years of age. AJR Am J Roentgenol 2012;199(5):1169–1177.
Role of breast magnetic resonance imaging (MRI) in patients with unilateral nipple discharge: preliminary study. Radiol Med (Torino) 2008;113(2):249–264.
Comparison of the performance of screening mammography, physical examination, and breast US and evaluation of factors that influence them: an analysis of 27,825 patient evaluations. Radiology 2002;225(1):165–175.
Clinically and mammographically occult breast lesions: detection and classification with high-resolution sonography. Semin Ultrasound CT MR 2000;21(4):325–336.
Mammography and subsequent whole-breast sonography of nonpalpable breast cancers: the importance of radiologic breast density. AJR Am J Roentgenol 2003;180(6):1675–1679.
Breast screening with ultrasound in women with mammography-negative dense breasts: evidence on incremental cancer detection and false positives, and associated cost. Eur J Cancer 2008;44(4):539–544.
Detection of breast cancer with addition of annual screening ultrasound or a single screening MRI to mammography in women with elevated breast cancer risk. JAMA 2012;307(13):1394–1404.
Breast cancer detection: radiologists’ performance using mammography with and without automated whole-breast ultrasound. Eur Radiol 2010;20(11):2557–2564.
Breast cancers initially detected by hand-held ultrasound: detection performance of radiologists using automated breast ultrasound data. Acta Radiol 2011;52(1):8–14.
MR-directed (“second-look”) ultrasound examination for breast lesions detected initially on MRI: MR and sonographic findings. AJR Am J Roentgenol 2010;194(2):370–377.
Targeted ultrasound of the breast in women with abnormal MRI findings for whom biopsy has been recommended. AJR Am J Roentgenol 2009;193(4):1025–1029.
Utility of second-look ultrasound in the management of incidental enhancing lesions detected by breast MR imaging. Radiol Med (Torino) 2010;115(8):1234–1245.
False-negative ultrasound-guided vacuum-assisted biopsy of the breast: difference with US-detected and MRI-detected lesions. Breast Cancer 2010;17(2):110–117.
Diagnostic accuracy of mammography, clinical examination, US, and MR imaging in preoperative assessment of breast cancer. Radiology 2004;233(3):830–849.
Multifocal, multicentric, and contralateral breast cancers: bilateral whole-breast US in the preoperative evaluation of patients. Radiology 2002;224(2):569–576.
Axillary ultrasound and fine-needle aspiration in the preoperative evaluation of the breast cancer patient: an algorithm based on tumor size and lymph node appearance. AJR Am J Roentgenol 2010;195(5):1261–1267.
Cortical morphologic features of axillary lymph nodes as a predictor of metastasis in breast cancer: in vitro sonographic study. AJR Am J Roentgenol 2008;191(3):646–652.
Preferential use of sonographically guided biopsy to minimize patient discomfort and procedure time in a percutaneous image-guided breast biopsy program. J Ultrasound Med 2002;21(11):1221–1226.
Comparison of automated versus vacuum-assisted biopsy methods for sonographically guided core biopsy of the breast. AJR Am J Roentgenol 2003;180(2):347–351.
False-negative core needle biopsies of the breast: an analysis of clinical, radiologic, and pathologic findings in 27 consecutive cases of missed breast cancer. Cancer 2003;97(8):1824–1831.
Accuracy of sonographically guided 14-gauge core-needle biopsy: results of 715 consecutive breast biopsies with at least two-year follow-up of benign lesions. J Clin Ultrasound 2005;33(2):47–52.
The accuracy of ultrasound, stereotactic, and clinical core biopsies in the diagnosis of breast cancer, with an analysis of false-negative cases. Ann Surg 2005;242(5):701–707.
Ultrasound-guided diagnostic breast biopsy methodology: retrospective comparison of the 8-gauge vacuum-assisted biopsy approach versus the spring-loaded 14-gauge core biopsy approach. World J Surg Oncol 2011;9:87.
A comparative analysis of core needle biopsy and fine-needle aspiration cytology in the evaluation of palpable and mammographically detected suspicious breast lesions. Diagn Cytopathol 2007;35(11):681–689.
Incidental treatment of nipple discharge caused by benign intraductal papilloma through diagnostic Mammotome biopsy. AJR Am J Roentgenol 2000;174(5):1263–1268.
Optimising surgical accuracy in palpable breast cancer with intra-operative breast ultrasound: feasibility and surgeons’ learning curve. Eur J Surg Oncol 2011;37(12):1044–1050.