
Archive for the ‘RNA Biology, Cancer and Therapeutics’ Category

Mindful Discoveries

Larry H. Bernstein, MD, FCAP, Curator

LPBI

Schizophrenia and the Synapse

Genetic evidence suggests that overactive synaptic pruning drives development of schizophrenia.

By Ruth Williams | January 27, 2016

http://www.the-scientist.com/?articles.view/articleNo/45189/title/Schizophrenia-and-the-Synapse/


3.2.4   Mindful Discoveries, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 2: CRISPR for Gene Editing and DNA Repair

http://www.the-scientist.com/images/News/January2016/Schizophrenia.jpg

C4 (green) at synapses of human neurons

Compared to the brains of healthy individuals, those of people with schizophrenia have higher expression of a gene called C4, according to a paper published in Nature today (January 27). The gene encodes an immune protein that moonlights in the brain as an eradicator of unwanted neural connections (synapses). The findings, which suggest increased synaptic pruning is a feature of the disease, are a direct extension of genome-wide association studies (GWASs) that pointed to the major histocompatibility complex (MHC) locus as a key region associated with schizophrenia risk.

“The MHC [locus] is the first and the strongest genetic association for schizophrenia, but many people have said this finding is not useful,” said psychiatric geneticist Patrick Sullivan of the University of North Carolina School of Medicine who was not involved in the study. “The value of [the present study is] to show that not only is it useful, but it opens up new and extremely interesting ideas about the biology and therapeutics of schizophrenia.”

Schizophrenia has a strong genetic component—it runs in families—yet, because of the complex nature of the condition, no specific genes or mutations have been identified. The pathological processes driving the disease remain a mystery.

Researchers have turned to GWASs in the hope of finding specific genetic variations associated with schizophrenia, but even these have not provided clear candidates.

“There are some instances where genome-wide association will literally hit one base [in the DNA],” explained Sullivan. While a 2014 schizophrenia GWAS highlighted the MHC locus on chromosome 6 as a strong risk area, the association spanned hundreds of possible genes and did not reveal specific nucleotide changes. In short, any hope of pinpointing the MHC association was going to be “really challenging,” said geneticist Steve McCarroll of Harvard who led the new study.

Nevertheless, McCarroll and colleagues zeroed in on the particular region of the MHC with the highest GWAS score—the C4 gene—and set about examining how the area’s structural architecture varied in patients and healthy people.

The C4 gene can exist in multiple copies (from one to four) on each copy of chromosome 6, and has four different forms: C4A-short, C4B-short, C4A-long, and C4B-long. The researchers first examined the “structural alleles” of the C4 locus—that is, the combinations and copy numbers of the different C4 forms—in healthy individuals. They then examined how these structural alleles related to expression of both C4A and C4B messenger RNAs (mRNAs) in postmortem brain tissues. From this the researchers had a clear picture of how the architecture of the C4 locus affected expression of C4A and C4B.

Next, they compared DNA from roughly 30,000 schizophrenia patients with that from 35,000 healthy controls, and a correlation emerged: the alleles most strongly associated with schizophrenia were also those that were associated with the highest C4A expression. Measuring C4A mRNA levels in the brains of 35 schizophrenia patients and 70 controls then revealed that, on average, C4A levels in the patients’ brains were 1.4-fold higher.

C4 is an immune system “complement” factor—a small secreted protein that assists immune cells in the targeting and removal of pathogens. The discovery of C4’s association to schizophrenia, said McCarroll, “would have seemed random and puzzling if it wasn’t for work . . . showing that other complement components regulate brain wiring.” Indeed, complement protein C3 locates at synapses that are going to be eliminated in the brain, explained McCarroll, “and C4 was known to interact with C3 . . . so we thought well, actually, this might make sense.”

McCarroll’s team went on to perform studies in mice that revealed C4 is necessary for C3 to be deposited at synapses. They also showed that the more copies of the C4 gene present in a mouse, the more the animal’s neurons were pruned. Synaptic pruning is a normal part of development and is thought to reflect the process of learning, where the brain strengthens some connections and eradicates others.
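The copy-number logic described above can be sketched as a toy model. This is purely illustrative (the allele names follow the study's nomenclature, but the copy numbers and the linear expression assumption are simplifications, not the study's data): each structural allele carries some number of C4A and C4B copies, expression is assumed proportional to copy number, and the allele predicted to drive the most C4A expression is the one found most strongly associated with risk.

```python
# Toy model of C4 structural alleles (illustrative only, not the study's data).
# Each allele is a combination of C4A/C4B gene copies on one chromosome 6;
# "L"/"S" denote the long and short forms of each gene.
C4_ALLELES = {
    "BS":    {"C4A": 0, "C4B": 1},  # a single C4B-short copy
    "AL-BS": {"C4A": 1, "C4B": 1},  # one C4A-long + one C4B-short
    "AL-BL": {"C4A": 1, "C4B": 1},  # one C4A-long + one C4B-long
    "AL-AL": {"C4A": 2, "C4B": 0},  # two C4A-long copies
}

def predicted_expression(allele, gene):
    """Assume mRNA expression scales linearly with gene copy number."""
    return C4_ALLELES[allele][gene]  # arbitrary units

# Rank alleles by predicted C4A expression: the study reported that risk
# tracked this ranking (more C4A expression -> stronger association).
ranked = sorted(C4_ALLELES, key=lambda a: predicted_expression(a, "C4A"),
                reverse=True)
print(ranked[0])  # AL-AL: highest predicted C4A expression
```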
Interestingly, the brains of deceased schizophrenia patients exhibit reduced synapse density. The new results, therefore, “make a lot of sense,” said Cardiff University’s Andrew Pocklington, who did not participate in the work. They also make sense “in terms of the time period when synaptic pruning is occurring, which sort of overlaps with the period of onset for schizophrenia: around adolescence and early adulthood,” he added.

“[C4] has not been on anybody’s radar for having anything to do with schizophrenia, and now it is and there’s a whole bunch of really neat stuff that could happen,” said Sullivan. For one, he suggested, “this molecule could be something that is amenable to therapeutics.”

A. Sekar et al., “Schizophrenia risk from complex variation of complement component 4,” Nature, doi:10.1038/nature16549, 2016.


Schizophrenia: From genetics to physiology at last

Ryan S. Dhindsa & David B. Goldstein

Nature (2016). doi:10.1038/nature16874

The identification of a set of genetic variations that are strongly associated with the risk of developing schizophrenia provides insights into the neurobiology of this destructive disease.

http://www.nytimes.com/2016/01/28/health/schizophrenia-cause-synaptic-pruning-brain-psychiatry.html

Genetic study provides first-ever insight into biological origin of schizophrenia

Suspect gene may trigger runaway synaptic pruning during adolescence — NIH-funded study

NIH/NATIONAL INSTITUTE OF MENTAL HEALTH


http://media.eurekalert.org/multimedia_prod/pub/web/107629_web.jpg

The site in chromosome 6 harboring the gene C4 towers far above other risk-associated areas on schizophrenia’s genomic “skyline,” marking its strongest known genetic influence. The new study is the first to explain how specific gene versions work biologically to confer schizophrenia risk. Credit: Psychiatric Genomics Consortium

Versions of a gene linked to schizophrenia may trigger runaway pruning of the teenage brain’s still-maturing communications infrastructure, NIH-funded researchers have discovered. People with the illness show fewer such connections between neurons, or synapses. The gene switched on more in people with the suspect versions, who faced a higher risk of developing the disorder, characterized by hallucinations, delusions and impaired thinking and emotions.

“Normally, pruning gets rid of excess connections we no longer need, streamlining our brain for optimal performance, but too much pruning can impair mental function,” explained Thomas Lehner, Ph.D., director of the Office of Genomics Research Coordination of the NIH’s National Institute of Mental Health (NIMH), which co-funded the study along with the Stanley Center for Psychiatric Research at the Broad Institute and other NIH components. “It could help explain schizophrenia’s delayed age-of-onset of symptoms in late adolescence/early adulthood and shrinkage of the brain’s working tissue. Interventions that put the brakes on this pruning process-gone-awry could prove transformative.”

The gene, called C4 (complement component 4), sits in by far the tallest tower on schizophrenia’s genomic “skyline” (see graph below) of more than 100 chromosomal sites harboring known genetic risk for the disorder. Affecting about 1 percent of the population, schizophrenia is known to be as much as 90 percent heritable, yet discovering how specific genes work to confer risk has proven elusive, until now.

A team of scientists led by Steve McCarroll, Ph.D., of the Broad Institute and Harvard Medical School, Boston, leveraged the statistical power conferred by analyzing the genomes of 65,000 people, 700 postmortem brains, and the precision of mouse genetic engineering to discover the secrets of schizophrenia’s strongest known genetic risk. C4’s role represents the most compelling evidence, to date, linking specific gene versions to a biological process that could cause at least some cases of the illness.

“Since schizophrenia was first described over a century ago, its underlying biology has been a black box, in part because it has been virtually impossible to model the disorder in cells or animals,” said McCarroll. “The human genome is providing a powerful new way in to this disease. Understanding these genetic effects on risk is a way of prying open that black box, peering inside and starting to see actual biological mechanisms.”

McCarroll’s team, including Harvard colleagues Beth Stevens, Ph.D., Michael Carroll, Ph.D., and Aswin Sekar, report on their findings online Jan. 27, 2016 in the journal Nature.

A swath of chromosome 6 encompassing several genes known to be involved in immune function emerged as the strongest signal associated with schizophrenia risk in genome-wide analyses by the NIMH-funded Psychiatric Genomics Consortium over the past several years. Yet conventional genetics failed to turn up any specific gene versions there linked to schizophrenia.

To discover how the immune-related site confers risk for the mental disorder, McCarroll’s team mounted a search for “cryptic genetic influences” that might generate “unconventional signals.” C4, a gene with known roles in immunity, emerged as a prime suspect because it is unusually variable across individuals. It is not unusual for people to have different numbers of copies of the gene and distinct DNA sequences that result in the gene working differently.

The researchers dug deeply into the complexities of how such structural variation relates to the gene’s level of expression and how that, in turn, might relate to schizophrenia. They discovered structurally distinct versions that affect expression of two main forms of the gene in the brain. The more a version resulted in expression of one of the forms, called C4A, the more it was associated with schizophrenia. The more a person had the suspect versions, the more C4 switched on and the higher their risk of developing schizophrenia. Moreover, in the human brain, the C4 protein turned out to be most prevalent in the cellular machinery that supports connections between neurons.

Adapting mouse molecular genetics techniques for studying synaptic pruning and C4’s role in immune function, the researchers also discovered a previously unknown role for C4 in brain development. During critical periods of postnatal brain maturation, C4 tags a synapse for pruning by depositing a sister protein in it called C3. Again, the more C4 got switched on, the more synapses got eliminated.

In humans, such streamlining/pruning occurs as the brain develops to full maturity in the late teens/early adulthood – conspicuously corresponding to the age-of-onset of schizophrenia symptoms.

Future treatments designed to suppress excessive levels of pruning by counteracting runaway C4 in at-risk individuals might nip in the bud a process that could otherwise develop into psychotic illness, the researchers suggest. And thanks to the head start gained in understanding the role of such complement proteins in immune function, such agents are already in development, they note.

“This study marks a crucial turning point in the fight against mental illness. It changes the game,” added acting NIMH director Bruce Cuthbert, Ph.D. “Thanks to this genetic breakthrough, we can finally see the potential for clinical tests, early detection, new treatments and even prevention.”

###

VIDEO: Opening Schizophrenia’s Black Box https://youtu.be/s0y4equOTLg

Reference: Sekar A, Bialas AR, de Rivera H, Davis A, Hammond TR, Kamitaki N, Tooley K, Presumey J, Baum M, Van Doren V, Genovese G, Rose SA, Handsaker RE, Schizophrenia Working Group of the Psychiatric Genomics Consortium, Daly MJ, Carroll MC, Stevens B, McCarroll SA. Schizophrenia risk from complex variation of complement component 4. Nature. Jan 27, 2016. DOI: 10.1038/nature16549.

Schizophrenia risk from complex variation of complement component 4

Aswin Sekar, Allison R. Bialas, Heather de Rivera, Avery Davis, Timothy R. Hammond, …, Michael C. Carroll, Beth Stevens & Steven A. McCarroll

Nature (2016). doi:10.1038/nature16549

Schizophrenia is a heritable brain illness with unknown pathogenic mechanisms. Schizophrenia’s strongest genetic association at a population level involves variation in the major histocompatibility complex (MHC) locus, but the genes and molecular mechanisms accounting for this have been challenging to identify. Here we show that this association arises in part from many structurally diverse alleles of the complement component 4 (C4) genes. We found that these alleles generated widely varying levels of C4A and C4B expression in the brain, with each common C4 allele associating with schizophrenia in proportion to its tendency to generate greater expression of C4A. Human C4 protein localized to neuronal synapses, dendrites, axons, and cell bodies. In mice, C4 mediated synapse elimination during postnatal development. These results implicate excessive complement activity in the development of schizophrenia and may help explain the reduced numbers of synapses in the brains of individuals with schizophrenia.

Figure 1: Structural variation of the complement component 4 (C4) gene.

http://www.nature.com/nature/journal/vaop/ncurrent/carousel/nature16549-f1.jpg

a, Location of the C4 genes within the major histocompatibility complex (MHC) locus on human chromosome 6. b, Human C4 exists as two paralogous genes (isotypes), C4A and C4B; the encoded proteins are distinguished at a key site

http://www.nature.com/nature/journal/vaop/ncurrent/carousel/nature16549-f3.jpg

http://www.nature.com/nature/journal/vaop/ncurrent/carousel/nature16549-sf8.jpg

Gene Study Points Toward Therapies for Common Brain Disorders

University of Edinburgh    http://www.dddmag.com/news/2016/01/gene-study-points-toward-therapies-common-brain-disorders

Scientists have pinpointed the cells that are likely to trigger common brain disorders, including Alzheimer’s disease, Multiple Sclerosis and intellectual disabilities.

It is the first time researchers have been able to identify the particular cell types that malfunction in a wide range of brain diseases.

Scientists say the findings offer a roadmap for the development of new therapies to target the conditions.

The researchers from the University of Edinburgh’s Centre for Clinical Brain Sciences used advanced gene analysis techniques to investigate which genes were switched on in specific types of brain cells.

They then compared this information with genes that are known to be linked to each of the most common brain conditions — Alzheimer’s disease, anxiety disorders, autism, intellectual disability, multiple sclerosis, schizophrenia and epilepsy.

Their findings reveal that for some conditions, the support cells rather than the neurons that transmit messages in the brain are most likely to be the first affected.

Alzheimer’s disease, for example, is characterised by damage to the neurons. Previous efforts to treat the condition have focused on trying to repair this damage.

The study found that a different cell type, microglial cells, is responsible for triggering Alzheimer’s, and that damage to the neurons is a secondary symptom of disease progression.

Researchers say that developing medicines that target microglial cells could offer hope for treating the illness.

The approach could also be used to find new treatment targets for other diseases that have a genetic basis, the researchers say.

Dr Nathan Skene, who carried out the study with Professor Seth Grant, said: “The brain is the most complex organ made up from a tangle of many cell types and sorting out which of these cells go wrong in disease is of critical importance to developing new medicines.”

Professor Seth Grant said: “We are in the midst of scientific revolution where advanced molecular methods are disentangling the Gordian Knot of the brain and completely unexpected new pathways to solving diseases are emerging. There is a pressing need to exploit the remarkable insights from the study.”

Quantitative multimodal multiparametric imaging in Alzheimer’s disease

Qian Zhao, Xueqi Chen, Yun Zhou. Brain Informatics. http://link.springer.com/article/10.1007/s40708-015-0028-9

Alzheimer’s disease (AD) is a progressive neurodegenerative disorder causing changes in memory, thinking, and other brain functions. More and more people are suffering from the disease, and early neuroimaging techniques for AD need to be developed. This review provides a preliminary summary of the various neuroimaging techniques that have been explored for in vivo imaging of AD. Recent advances in magnetic resonance (MR) techniques, such as functional MR imaging (fMRI) and diffusion MRI, make it possible to display not only anatomy and atrophy of the medial temporal lobe, but also microstructural alterations and perfusion disturbance within AD lesions. Positron emission tomography (PET) imaging has become the subject of intense research for the diagnosis of AD and for facilitating drug development, in both animal models and human trials, owing to its non-invasive and translational character. Fluorodeoxyglucose (FDG) PET and amyloid PET are applied in clinics and research departments. Amyloid beta (Aβ) imaging using PET has been recognized as one of the most important methods for the early diagnosis of AD, and numerous candidate compounds have been tested for Aβ imaging. Besides in vivo imaging, many ex vivo modalities are used in AD research; multiphoton laser scanning microscopy, neuroimaging of metals, and several metal bioimaging methods are also mentioned here. Multimodal and multiparametric neuroimaging techniques should improve our understanding of brain function and open new insights into the pathophysiology of AD. We expect exciting results to emerge from new neuroimaging applications that will provide scientific and medical benefits.

Keywords: Alzheimer’s disease · Neuroimaging · PET · MRI · Amyloid beta · Multimodal

Alzheimer’s disease (AD) is a progressive neurodegenerative disorder that gradually destroys brain cells, causing changes in memory, thinking, and other brain functions [1]. AD is considered to have a prolonged preclinical stage in which neuropathological changes precede the clinical symptoms [2]. An estimated 35 million people worldwide are living with this disease. If effective treatments are not discovered in a timely fashion, the number of AD cases is anticipated to rise to 113 million by 2050 [3].

Amyloid beta (Aβ) and tau are two of the major biomarkers of AD, and have important and distinct roles in the progression of AD pathophysiology. Jack et al. established hypothetical models of the major AD biomarkers. In updating and modifying the models, they proposed that the two major proteinopathies underlying AD biomarker changes, Aβ and tau, may be initiated independently in late-onset AD, and hypothesized that incident Aβ pathophysiology can accelerate an antecedent limbic and brainstem tauopathy [4]. MRI was used in that work, revealing that the level of Aβ load was associated with a shorter time to progression of AD [5]. This warrants an urgent need to develop early neuroimaging techniques for AD neuropathology that can detect and predict the disease before the onset of dementia and can monitor therapeutic efficacy in halting or slowing progression in the earlier stages of the disease.

There have been various reports on imaging assessments of AD. Some measurements reflect the pathology of AD directly, including positron emission tomography (PET) amyloid imaging and cerebrospinal fluid (CSF) beta-amyloid 42 (Aβ42), while others reflect the neuronal injury associated with AD indirectly, including CSF tau (total and phosphorylated tau), fluorodeoxy-d-glucose (FDG)-PET, and MRI. The AD Neuroimaging Initiative (ADNI) was established to define the optimal panel of clinical assessments, MRI and PET imaging measures, and other biomarkers from blood and CSF to inform clinical trial design for AD therapeutic development. At the same time, it has been highly productive in generating a wealth of data for elucidating disease mechanisms occurring during the early preclinical and prodromal stages of AD [6].

A single neuroimaging modality often captures only limited information about AD. As a result, multimodal neuroimaging is widely used in neuroscience research, as it overcomes the limitations of individual modalities. Multimodal multiparametric imaging means combining different imaging techniques, such as PET and MRI, simultaneously or separately. It enables visualization and quantitative analysis of alterations in brain structure and function, as in PET/CT and PET/MRI [7]. In this review article, we summarize and discuss the main applications, findings, and perspectives, as well as the advantages and challenges, of different neuroimaging approaches in AD, especially MRI and PET imaging.

2 Magnetic resonance imaging

MRI demonstrates specific volume loss and cortical atrophy patterns with disease progression in AD patients [8–10]. Several MRI techniques and analysis methods are used in clinical and scientific research on AD. Recent advances in MR techniques, such as functional MRI (fMRI) and diffusion MRI, depict not only the anatomy and atrophy of the medial temporal lobe (MTL), but also microstructural alterations and perfusion disturbance within this region.

2.1 Functional MRI

Because of cognitive reserve (CR), the severity of AD patients’ brain damage and the corresponding clinical symptoms are not always parallel [11, 12]. Recently, resting-state fMRI (RS-fMRI) has become popular for its ability to map brain functional connectivity non-invasively [13]. Using RS-fMRI, Bozzali et al. reported that CR modulates the effect of AD pathology on default mode network functional connectivity, which accounts for the variable clinical symptoms of AD [14]. Moreover, more highly educated AD patients were able to recruit compensatory neural mechanisms, which can be measured using RS-fMRI. Arterial spin-labeled (ASL) MRI is another functional brain imaging modality; it measures cerebral blood flow (CBF) using magnetically labeled arterial blood water flowing through the carotid and vertebral arteries as an endogenous contrast medium. Several studies have characterized CBF changes in AD patients using ASL-MRI [15–17].

At some point, sufficient brain damage accumulates to produce cognitive symptoms and impairment. Mild cognitive impairment (MCI) is a condition in which subjects are usually only mildly impaired in memory, with relative preservation of other cognitive domains and functional activities, and do not meet the criteria for dementia [18]; it can also be regarded as the prodromal state of AD [19]. MCI patients are at higher risk of developing AD, and up to 15% convert to AD per year [18]. Binnewijzend et al. reported that pseudocontinuous ASL could distinguish both MCI and AD from healthy controls, and could be used in the early diagnosis of AD [20]. In a follow-up study, they used quantitative whole-brain pseudocontinuous ASL to compare regional CBF (rCBF) distribution patterns in different types of dementia, and concluded that ASL-MRI could be a non-invasive and easily accessible alternative to FDG-PET for assessing CBF in AD patients [21].

2.2 Structure MRI

Structural MRI (sMRI) is already a reliable imaging method in the clinical diagnosis of AD, which is characterized by gray matter reduction and ventricular enlargement on standard T1-weighted sequences [9]. Degeneration of the locus coeruleus (LC) and substantia nigra (SN) is seen in AD. Using a new quantitative approach, Chen et al. presented a neuromelanin MRI method for simultaneous measurement of the LC and SN of the brainstem in living human subjects [22]; their approach offered advantages in image acquisition, pre-processing, and quantitative analysis. Numerous transgenic animal models of amyloidosis are available, which recapitulate many neuropathological features of AD progression arising from the deposition of β-amyloid [23]. Braakman et al. demonstrated the dynamics of amyloid plaque formation and development in a serial MRI study in a transgenic mouse model [24]. Increased iron accumulation in gray matter is frequently observed in AD, and because of the paramagnetic nature of iron, MRI shows considerable potential for investigating iron levels in AD [25]. Quantitative MRI has shown high sensitivity and specificity in mapping cerebral iron deposition, aiding research on AD diagnosis [26].

Imaging patterns are often associated with pathologic changes, such as specific protein markers. Spencer et al. demonstrated the relationship between quantitative T1 and T2 relaxation time changes and three immunohistochemical markers in AD transgenic mice: β-amyloid, neuron-specific nuclear protein (a marker of neuronal cell load), and myelin basic protein (a marker of myelin load) [27].

High-field MRI has been successfully applied to imaging plaques in transgenic mice for over a decade without contrast agents [24, 28–30]. Sillerud et al. devised a method using blood–brain barrier-penetrating, amyloid-targeted superparamagnetic iron oxide nanoparticles (SPIONs) for better imaging of amyloid plaques [31]. They then successfully used SPION-MRI to assess drug effects on the 3D distribution of Aβ plaques in transgenic AD mice [32].

2.3 Diffusion MRI

Diffusion-weighted imaging (DWI) is a sensitive tool for quantifying physiological alterations in water diffusion that result from microscopic structural changes.

Diffusion tensor imaging (DTI) is a well-established and commonly employed diffusion MRI technique in clinical and research neuroimaging, based on a Gaussian model of the diffusion process [33]. In general, AD is associated with widespread reductions in fractional anisotropy (FA) and increases in mean diffusivity (MD) relative to healthy controls in several regions, most prominently the frontal and temporal lobes and along the cingulum, corpus callosum, uncinate fasciculus, superior longitudinal fasciculus, and MTL-associated tracts [34–37]. Acosta-Cabronero et al. reported increased axial diffusivity and MD in the splenium as the earliest abnormalities in AD [38]. FA and radial diffusivity (DR) differences in the corpus callosum, cingulum, and fornix were found to separate individuals with MCI who converted to AD from non-converters [39]. DTI was also found to be a better predictor of AD-specific MTL atrophy than CSF biomarkers [40]. These findings suggest the potential clinical utility of DTI measures as early biomarkers of AD and its progression. However, an increase in MD and DR and a decrease in FA with advancing age in selective brain regions have been previously reported [41, 42]. Diffusion MRI can also be used to classify the various stages of AD: a multimodal classification method combining fMRI and DTI separated more MCI patients from healthy controls than single approaches did [43].

In recent years, tau has emerged as a potential target for therapeutic intervention. Tau plays a critical role in the neurodegenerative process, forming the neurofibrillary tangles that are a major hallmark of AD and correlate with clinical disease progression. Wells et al. applied multiparametric MRI, combining high-resolution structural MRI (sMRI), a novel chemical exchange saturation transfer (CEST) MRI, DTI, ASL, and glucose CEST, to measure changes in tau pathology in AD transgenic mice [44].

Besides DWI, perfusion-weighted imaging (PWI) is another advanced MR technique, which measures cerebral hemodynamics at the capillary level. Zimny et al. evaluated the correlation of MTL measures with both DWI and PWI in AD and MCI patients [45].

3 Positron emission tomography

PET is an imaging technique applied in research on brain function and neurochemistry in small animals, medium-sized animals, and human subjects [46–48]. PET imaging has become the subject of intense research for the diagnosis of AD and for facilitating drug development, in both animal models and human trials, owing to its non-invasive and translational character. PET with various radiotracers is considered a standard non-invasive quantitative imaging technique to measure CBF, glucose metabolism, and β-amyloid and tau deposition.

3.1 FDG-PET

To date, 18F-FDG is one of the best and most widely used PET neuroimaging tracers, employed for research and clinical assessment of AD [49]. In AD brains, FDG metabolism is typically reduced in the precuneus, posterior cingulate, and temporal and parietal cortices, progressing to whole-brain reductions as the disease advances [50, 51]. FDG-PET reflects cerebral glucose metabolism and neuronal injury, providing indirect evidence on cognitive function and progression that cannot be obtained from amyloid PET imaging.

Schraml et al. [52] identified a significant association between the hypometabolic convergence index and phenotypes using ADNI data. Some researchers have also used 18F-FDG-PET together with genetic information and multiple biomarkers to classify AD status and to predict cognitive decline or MCI-to-AD conversion [53–55]. Trzepacz et al. [56] reported a multimodal AD neuroimaging study using MRI, 11C-PiB PET, and 18F-FDG-PET, together with APOE genotype, to predict MCI-to-AD conversion. Zhang et al. [57] compared the genetic modality of single-nucleotide polymorphisms (SNPs) with sMRI, 18F-FDG-PET, and CSF biomarkers for differentiating healthy controls, MCI, and AD, and found FDG-PET to be the most accurate single modality.

3.2 Amyloid beta PET

Aβ, the primary constituent of senile plaques, and tau tangles are hypothesized to play a primary role in the pathogenesis of AD, but the fundamental mechanisms remain hard to identify [58–60]. Aβ plaques in the brain are one of the pathological hallmarks of AD [61, 62], and accumulation of Aβ peptide in the cerebral cortex is considered one cause of dementia in AD [63]. Numerous studies have used in vivo PET imaging to assess cortical β-amyloid burden [64–66].

Aβ imaging using PET has been recognized as one of the most important methods for the early diagnosis of AD [67]. Numerous candidate compounds have been tested for Aβ imaging, including 11C-PiB [68], 18F-FDDNP [69], 11C-SB-13 [70], 18F-BAY94-9172 [71], 18F-AV-45 [72], 18F-flutemetamol [73, 74], 11C-AZD2184 [75], 18F-AZD4694 [76], and 11C-BF227 and 18F-FACT [77].

Several amyloid PET studies have examined genotypes, phenotypes, or gene–gene interactions. Ramanan et al. [78] reported, for the first time, GWAS results with 18F-AV-45 reflecting cerebral amyloid metabolism in AD. Swaminathan et al. [79] revealed an association between plasma Aβ from peripheral blood and cortical amyloid deposition on 11C-PiB. Hohman et al. [80] reported the relationship between SNPs involved in amyloid and tau pathophysiology and 18F-AV-45 PET.

Among the PET tracers, 11C-PiB, which has high affinity for fibrillar Aβ, is a reliable biomarker of underlying AD pathology [68, 81]; its cortical uptake parallels AD pathology well [82, 83]. Amyloid PET tracers have recently entered clinical use: the first 18F-labeled tracer was approved by the Food and Drug Administration (FDA) in April 2012 and by the European Medicines Agency in January 2013, and 18F-GE-067 (flutemetamol) and 18F-BAY94-9172 (florbetaben) have also been approved by the US FDA in the last 2 years [84, 85].

18F-Florbetapir (also known as 18F-AV-45) exhibits high-affinity specific binding to amyloid plaques and labels Aβ plaques in brain sections from patients with pathologically confirmed AD [72].

Several research groups have reported that 18F-AV-45 PET imaging is reliable for both qualitative and quantitative assessment in AD patients, and that amyloid positivity increases across diagnostic categories (healthy control < MCI < AD) [82, 86, 87]. Johnson et al. used 18F-AV-45 PET to evaluate amyloid deposition in MCI and AD patients both qualitatively and quantitatively, and found that amyloid burden increased with diagnostic category (MCI < AD), age, and APOE ε4 carrier status [88]. Payoux et al. reported that equivocal 18F-AV-45 amyloid PET scans were associated with a specific pattern of clinical signs in a large population of non-demented adults older than 70 years [89].

Increasingly, researchers combine and compare multiple PET tracers targeting amyloid plaques. Bruck et al. compared the prognostic ability of 11C-PiB PET, 18F-FDG-PET, and quantitative hippocampal volumes measured with MR imaging in predicting MCI-to-AD conversion, and found that FDG-PET and 11C-PiB PET were the better predictors [90]. Hatashita et al. used 11C-PiB and FDG-PET imaging to identify MCI due to AD; 11C-PiB showed a higher sensitivity (96.6 %), and FDG-PET added diagnostic value in predicting AD over a short period [91].

In addition, new Aβ imaging agents have been radiosynthesized. Yousefi et al. radiosynthesized a new Aβ imaging agent, 18F-FIBT, and compared three different Aβ-targeted radiopharmaceuticals for PET imaging: 18F-FIBT, 18F-florbetaben, and 11C-PiB [92]. 11C-AZD2184 is another new PET tracer developed for imaging amyloid senile plaques; its kinetic behavior is suitable for quantitative analysis, and it can be used in clinical examination without an input function [75, 93, 94].

4 Multimodality imaging: PET/MRI

Several diagnostic techniques, including MRI and PET, are employed for the diagnosis and monitoring of AD [95]. Multimodal imaging can provide more information on the formation and key molecular events of AD than any single method. It drives progress in neuroimaging research, owing to growing recognition of the clinical benefits of multimodal data [96] and to better access to hybrid devices such as PET/MRI [97].

Maier et al. evaluated the dynamics of 11C-PiB PET, 15O-H2O PET, and ASL-MRI in transgenic AD mice and concluded that the AD-related decline of rCBF was caused by cerebral Aβ angiopathy [98]. Edison et al. systematically compared 11C-PiB PET and MRI in AD patients, MCI patients, and controls, concluding that 11C-PiB PET was adequate for clinical diagnostic purposes, while MRI remained more appropriate for clinical research [99]. Zhou et al. investigated the interactions between multimodal PET/MRI measures in elderly patients with MCI, AD, and healthy controls, and confirmed the value of amyloid PET and MRI in the early diagnosis of AD [100]. Kim et al. reported that Aβ-weighted cortical thickness, which incorporates data from both MRI and amyloid PET imaging, is a consistent and objective imaging biomarker in AD [101].

5 Other imaging modalities

Multiphoton non-linear optical microscope imaging systems using ultrafast lasers have powerful advantages, such as label-free detection, deep penetration of thick samples, high sensitivity, subcellular spatial resolution, 3D optical sectioning, chemical specificity, and minimal sample destruction [102, 103]. Coherent anti-Stokes Raman scattering (CARS), two-photon excited fluorescence (TPEF), and second-harmonic generation (SHG) microscopy are the most widely used of these biomedical imaging techniques [104–106].

Quantitative electroencephalographic and neuropsychological investigation of an alternative measure of frontal lobe executive functions: the Figure Trail Making Test

Paul S. Foster, Valeria Drago, Brad J. Ferguson, Patti Kelly Harrison, David W. Harrison

Brain Informatics    http://dx.doi.org/10.1007/s40708-015-0025-z    http://link.springer.com/article/10.1007/s40708-015-0025-z/fulltext.html

The most frequently used measures of executive functioning are sensitive either to left frontal lobe functioning or to bilateral frontal functioning. Relatively little is known about right frontal lobe contributions to executive functioning, given the paucity of measures sensitive to right frontal functioning. The present investigation reports the development and initial validation of a new measure designed to be sensitive to right frontal lobe functioning, the Figure Trail Making Test (FTMT). The FTMT, the classic Trail Making Test (TMT), and the Ruff Figural Fluency Test (RFFT) were administered to 42 right-handed men. The results indicated a significant relationship between the FTMT and both the TMT and the RFFT. Performance on the FTMT was also related to high beta EEG over the right frontal lobe. Thus, the FTMT appears to be an equivalent measure of executive functioning that may be sensitive to right frontal lobe functioning. Applications for use in frontotemporal dementia, Alzheimer’s disease, and other patient populations are discussed.

Keywords – Frontal lobes, Executive functioning, Trail making test, Sequencing, Behavioral speed, Designs, Nonverbal, Neuropsychological assessment, Regulatory control, Effortful control

A recent survey indicated that the vast majority of neuropsychologists frequently assess executive functioning as part of their neuropsychological evaluations [1]. Surveys of neuropsychologists have indicated that the Trail Making Test (TMT), Controlled Oral Word Association Test (COWAT), Wisconsin Card Sorting Test (WCST), and the Stroop Color-Word Test (SCWT) are among the most commonly used instruments [1, 2]. Further, the Rabin et al. [1] survey indicated that these same tests are among the most frequently used by neuropsychologists when specifically assessing executive or frontal lobe functioning. The frequent use of the TMT, WCST, and the SCWT, as well as the assumption that they are measures of executive functioning, led Demakis (2003–2004) to conduct a series of meta-analyses to determine the sensitivity of these tests to detect frontal lobe dysfunction, particularly lateralized frontal lobe dysfunction. The findings indicated that the SCWT and Part A of the TMT [3], as well as the WCST [4], were all sensitive to frontal lobe dysfunction. However, only the SCWT differentiated between left and right frontal lobe dysfunction, with the worst performance among those with left frontal lobe dysfunction [3].

The finding of the Demakis [4] meta-analysis, that the WCST was not sensitive to lateralized frontal lobe dysfunction, is not surprising given the equivocal findings that have been reported. Whereas performance on the WCST is sensitive to frontal lobe dysfunction [5, 6], demonstration of lateralized frontal dysfunction has been quite problematic. Unilateral left or right dorsolateral frontal dysfunction has been associated with impaired performance on the WCST [6]. Fallgatter and Strik [7] found bilateral frontal lobe activation during performance of the WCST. However, other imaging studies have found right lateralized frontal lobe activation [8] and left lateralized frontal activation [9] in response to performance on the WCST. Further, left frontal lobe alpha power is negatively correlated with performance on the WCST [10]. Finally, patients with left frontal lobe tumors exhibit more impaired performance on the WCST than those with right frontal tumors [11].

Unlike the data for the WCST, more consistent findings have been reported regarding lateralized frontal lobe functioning for the other commonly used measures of executive functioning. For instance, as with the Demakis [3] study, many investigations have found the SCWT to be sensitive to left frontal lobe functioning, although the precise localization within the left frontal lobe has varied. Impaired performance on the SCWT results from left frontal lesions [12] and specifically from lesions localized to the left dorsolateral frontal lobe [13, 14], though bilateral frontal lesions have also yielded impaired performance [13, 14]. Further, studies using neuroimaging to investigate the neural basis of performance on the SCWT have indicated involvement of the left anterior cingulate cortex [15], left lateral prefrontal cortex [16], left inferior precentral sulcus [17], and the left dorsolateral frontal lobe [18].

Wide agreement exists among investigations of the frontal lateralization of verbal or lexical fluency to confrontation. Specifically, patients with left frontal lobe lesions are known to exhibit impaired performance on lexical fluency to confrontation tasks, relative to either patients with right frontal lesions [12, 19, 20] or controls [21]. A recent meta-analysis also indicated that the largest deficits in performance on measures of lexical fluency are associated with left frontal lobe lesions [22]. Tröster et al. [23] found that, relative to patients with right pallidotomy, patients with left pallidotomy exhibited more impaired lexical fluency. Several neuroimaging investigations have further supported the role of the left frontal lobe in lexical fluency tasks [15, 24–27]. Performance on lexical fluency tasks also varies as a function of lateral frontal lobe asymmetry, as assessed by electroencephalography [28].

The Trail Making Test is certainly among the most widely used tests [1] and perhaps the most widely researched. Various norms exist for the TMT (see [29]), with Tombaugh [30] providing the most recent comprehensive set of normative data. Different methods of analyzing and interpreting the data have also been proposed and used, including error analysis [13, 14, 31–33], subtraction scores [13, 14, 34], and ratio scores [13, 14, 35].
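The subtraction and ratio scores mentioned above are simple arithmetic on the Part A and Part B completion times. A minimal sketch, with made-up example times in seconds:

```python
# Illustrative computation of two derived TMT scores discussed in the
# literature: the subtraction score (B - A) and the ratio score (B / A).
# The completion times below are invented example values, not study data.

def tmt_derived_scores(time_a, time_b):
    """Return (subtraction, ratio) scores from Part A and Part B times."""
    if time_a <= 0:
        raise ValueError("Part A time must be positive")
    return time_b - time_a, time_b / time_a

sub, ratio = tmt_derived_scores(time_a=30.0, time_b=75.0)
print(sub, ratio)  # → 45.0 2.5
```

The subtraction score is commonly read as the set-shifting cost over and above simple sequencing speed, while the ratio score normalizes that cost by baseline speed.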

Several different language versions of the test have been developed and reported, including Arabic [36], Chinese [37, 38], Greek [39], and Hebrew [40]. Numerous alternative versions of the TMT have been developed to address perceived shortcomings of the original TMT. For instance, the Symbol Trail Making Test [41] was developed to reduce the cultural confounds associated with the use of the Arabic numeral system and English alphabet in the original TMT. The Color Trails Test (CTT; [42]) was also developed to control for cultural confounds, although mixed results have been reported regarding whether the CTT is indeed analogous to the TMT [43–45]. A version of the TMT for preschool children, the TRAILS-P, has also been reported [46].

Additionally, the Comprehensive Trail Making Test [47] was developed to control for perceived psychometric shortcomings of the original TMT (for a review, see [48]), and the Oral Trail Making Test (OTMT; [49]) was developed to reduce confounds associated with motor speed and visual search abilities, with research supporting the OTMT as an equivalent measure [50, 51]. Alternate forms of the TMT have also been developed to permit successive administrations [32, 52] and to assess the relative contributions of the requisite cognitive skills [53].

Delis et al. [54] stated that the continued development of new instrumentation for improving diagnosis and treatment is a critical undertaking in all health-related fields. Further, in their view, the field of neuropsychology has recognized the importance of continually striving to develop new clinical measures. Delis and colleagues developed the extensive Delis-Kaplan Executive Functioning System (D-KEFS; [55]) in the spirit of advancing the instrumentation of neuropsychology. The D-KEFS includes a Trail Making Test consisting of five separate conditions. The Number-Letter Switching condition involves a sequencing procedure similar to that of the classic TMT. The other four conditions are designed to assess the component processes involved in completing the Number-Letter Switching condition so that a precise analysis of the nature of any underlying dysfunction may be accomplished. Specifically, these additional components include Visual Scanning, Number Sequencing, Letter Sequencing, and Motor Speed.

Given that the TMT comprises numbers and letters and is a measure of executive functioning, it may preferentially involve the left frontal lobe. Although the literature is somewhat controversial, neuropsychological and neuroimaging studies seem to provide support for the sensitivity of the TMT to detect left frontal dysfunction [56]. Recent clinically oriented studies investigating frontal lobe involvement of the TMT using transcranial magnetic stimulation (TMS) and near-infrared spectroscopy (NIRS) also support this localization [57]. Performance on Part B of the TMT improved following repetitive TMS applied to the left dorsolateral frontal lobe [57].

With 9–13-year-old boys performing TMT Part B, Weber et al. [58] found a left lateralized increase in the prefrontal cortex in deoxygenated hemoglobin, an indicator of increased oxygen consumption. Moll et al. [59] demonstrated increased activation specific to the prefrontal cortex, especially the left prefrontal region, in healthy controls performing Part B of the TMT. Foster et al. [60] found a significant positive correlation between performance on Part A of the TMT and low beta (13–21 Hz) magnitude (μV) at the left lateral frontal lobe, but not at the right lateral frontal lobe. Finally, Stuss et al. [13, 14] found that patients with left dorsolateral frontal dysfunction evidenced more errors than patients with lesions in other areas of the frontal lobes and those patients with left frontal lesions were the slowest to complete the test.

Taken together, the possibility exists that the aforementioned tests are largely associated with left frontal lobe activity and that the TMT, in particular, provides information concerning mental processing speed as well as cognitive flexibility and set-shifting. While some studies have found that deficits in visuomotor set-shifting are specific to frontal lobe damage [61], other investigators have reported such impairment in patients with posterior brain lesions and widespread cerebral dysfunction, including cerebellar damage [62] and Alzheimer’s disease [63]. Thus, it remains unclear whether impairments in visuomotor set-shifting are specific to frontal lobe dysfunction or whether they are non-specific and can result from more posterior or widespread brain dysfunction.

Compared to the collective knowledge we have regarding the cognitive roles of the left frontal lobe, relatively little is known about right frontal lobe contributions to executive functioning. This is likely a result of the dearth of tests that are associated with right frontal activity. The Ruff Figural Fluency Test (RFFT; [64]) is among the few standardized tests of right frontal lobe functioning and was listed as the 14th most commonly used instrument to assess executive functioning in the Rabin et al. [1] survey. The RFFT is known to be sensitive to right frontal lobe functioning ([65, 66]; see also [67], pp. 297–298), as is a measure based on the RFFT [19].

The present investigation, with the same intent and spirit as that reported by Delis et al. [54], sought to develop and initially validate a measure of right frontal lobe functioning in an effort to attain a greater understanding of right frontal contributions to executive functioning and to advance the instrumentation of neuropsychology. To meet this objective, a version of the Trail Making Test comprising figures, as opposed to numbers and letters, was developed. The TMT was used as a model for the new test, referred to as the Figure Trail Making Test (FTMT), due to the high frequency of use, the volume of research conducted, and the ease of administration of the TMT. Given that the TMT and the FTMT are both measuring executive functioning, we felt that a moderate correlation would exist between these two measures. Specifically, we hypothesized that performance on the FTMT would be positively correlated with performance on the TMT, in terms of the total time required to complete each part of the tests, an additive and subtractive score, and a ratio score. The total time required to complete each part of the FTMT was also hypothesized to be negatively correlated with the total number of unique designs produced on the RFFT and positively correlated with the number of perseverative errors committed on the RFFT and the perseverative error ratio. We also sought to determine whether the TMT and the FTMT were measuring different constructs by conducting a factor analysis, anticipating that the two tests would load on separate factors.

Additionally, we sought to obtain neurophysiological evidence that the FTMT is sensitive to right frontal lobe functioning. Specifically, we used quantitative electroencephalography (QEEG) to measure electrical activity over the left and right frontal lobes. A previous investigation we conducted found that performance on Part A of the TMT was related to left frontal lobe (F7) low beta magnitude [60]. For the present investigation, we predicted that significant negative correlations would exist between performance on Parts A and B of the TMT and both low and high beta magnitude at the F7 electrode site. We further predicted that significant negative correlations would exist between performance on Parts C and D of the FTMT and both low and high beta magnitude at the F8 electrode site.
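The predicted brain–behavior relationships above are ordinary Pearson correlations between completion times and EEG beta magnitudes at single electrode sites. A hedged sketch on synthetic data (the "F8 high-beta" values and the strength of the relationship are invented for illustration, not taken from the study):

```python
import math
import random

random.seed(1)

# Hedged sketch of the correlational analysis described: Pearson r
# between task completion time and EEG beta magnitude at one electrode.
# The "FTMT completion time" and "F8 high-beta magnitude" values are
# synthetic, generated so that higher magnitude co-occurs with faster
# completion; they are not data from the study.

def pearson_r(xs, ys):
    """Plain Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

beta_f8 = [random.uniform(1.0, 5.0) for _ in range(42)]            # μV
ftmt_time = [120 - 10 * b + random.gauss(0, 5) for b in beta_f8]   # s

r = pearson_r(ftmt_time, beta_f8)
print(r < -0.5)  # a strongly negative correlation, as hypothesized
```

A significant negative r of this kind is what the study's hypotheses predict: faster completion (lower times) accompanying greater beta magnitude over the relevant frontal site.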

3 Discussion

The need for additional measures of executive function, and especially for instruments that can speak to cerebral laterality, is clear. In particular, a void remains for neuropsychological instruments using a TMT format that provide information about the functional integrity of the right frontal region. Consistent with the hypotheses put forward, significant correlations were found between performance on the TMT and the FTMT, in terms of the raw time required to complete each respective part of the tests as well as the additive and subtraction scores. The fact that the ratio scores were not significantly correlated is not surprising, given that research has generally indicated a lack of clinical utility for this score [13, 14, 35]. Given the present findings, the TMT and the FTMT appear to be equivalent measures of executive functioning. Further, the present findings not only suggest that the FTMT may be a measure of executive functioning but also extend the realm of executive functioning to the sequencing and set-shifting of nonverbal stimuli.

However, the finding of significant correlations between the TMT and the FTMT represents somewhat of a caveat, in that the TMT has been found to be sensitive to left frontal lobe functioning [13, 14, 57, 59]. This would seem to suggest the possibility that the FTMT is also sensitive to left frontal lobe functioning. That possibility is tempered, though, by the fact that many of the hypothesized correlations between performance on the RFFT and the FTMT were also significant. Performance on the RFFT is related to right frontal lobe functioning [65, 66]. Thus, the significant correlations between the RFFT and the FTMT suggest that the FTMT may also be sensitive to right frontal lobe functioning. Additionally, it should be noted that the TMT was not significantly correlated with performance on the RFFT, with the exception of the significant correlation between performance on TMT Part A and the total number of unique designs produced on the RFFT. Taken together, the results suggest that the FTMT may be a measure of right frontal executive functioning.

Additional support for the sensitivity of the FTMT to right frontal lobe functioning is provided by the finding of a significant negative correlation between performance on Part D of the FTMT and high beta magnitude. We have previously used QEEG to provide neurophysiological validation of the RFFT [65] and the Rey Auditory Verbal Learning Test [70], and the present findings provide further support for the use of QEEG in validating neuropsychological tests. The lack of significant correlations between the TMT and either low or high beta magnitude may be related to a restricted range of scores on the TMT. As a whole, performance on the FTMT was more variable than performance on the TMT, and this relatively restricted range for the TMT may have impacted the obtained correlations. The present findings, together with those of the Foster et al. [65, 70] investigations, thus further support the use of EEG in establishing neurophysiological validation for neuropsychological tests.

The results from the factor analysis support the contention that the FTMT may be a measure of right frontal lobe activity and also provide initial discriminant validity for the FTMT. Specifically, Parts C and D of the FTMT were found to load on the same factor as the number of designs generated on the RFFT, although the time required to complete Part A of the TMT also loaded on this factor. Additionally, the number of errors committed on Parts C and D of the FTMT comprised a single factor, separate from either the TMT or the RFFT. Although these results support the FTMT as a measure of nonverbal executive functioning, it would be helpful to conduct an additional factor analysis including further measures of right frontal functioning, and perhaps other measures of right hemisphere functioning, as marker variables.

We sought to develop a measure sensitive to right frontal lobe functioning due to the paucity of such tests and the potentially important uses that right frontal lobe tests may have clinically. Tests of right frontal lobe functioning may, for instance, be useful in identifying and distinguishing left versus right frontotemporal dementia (FTD). Research has indicated that FTD is associated with cerebral atrophy at the right dorsolateral frontal and left premotor cortices [71]. Fukui and Kertesz [72] found right frontal lobe volume reduction in FTD relative to Alzheimer’s disease and progressive nonfluent aphasia. Some have suggested that FTD should not be considered as a unitary disorder and that neuropsychological testing may aid in differentially diagnosing left versus right FTD [73].

Whereas right FTD has been associated with more errors and perseverative responses on the Wisconsin Card Sorting Test (WCST), left FTD has been associated with significantly worse performance on the Boston Naming Test (BNT) and the Stroop Color-Word test [73]. Razani et al. [74] also distinguished between left and right FTD in finding that left FTD performed worse on the BNT and the right FTD patients performed worse on the WCST. However, as noted earlier, the WCST has been associated with left frontal activity [9], right frontal activation [8], and bilateral frontal activation [7]. Further, patients with left frontal tumors perform worse than those with right frontal tumors [11].

Patients with FTD that predominantly involves the right frontotemporal region show behavioral and emotional abnormalities, whereas those with predominantly left frontotemporal damage show a loss of lexical-semantic knowledge. Patients in whom neural degeneration begins on the left side often present to clinicians at an early stage of the disease because of language abnormalities, while their emotion-processing abilities are maintained as long as the right anterior temporal lobe is preserved. As the disease advances, however, it may progress to the right frontotemporal regions. Tests sensitive to right frontal lobe functioning may be useful tools for anticipating the course of the disease, enabling immediate and specific treatment and informing caregivers about its likely future course.

A potentially more important use of tests sensitive to right frontal lobe functioning, though, may be in predicting which dementia patients will develop significant and disruptive behavioral deficits. Research has found that approximately 92 % of right-sided FTD patients exhibit socially undesirable behaviors as their initial symptom, compared with only 11 % of left-sided FTD patients [75]. Behavioral deficits in FTD are associated with gray matter loss in the dorsomedial frontal region, particularly on the right [76].

Alzheimer’s disease (AD) is also often associated with significant behavioral disturbances. Even AD patients with mild dementia are noted to exhibit behavioral deficits such as delusions, hallucinations, agitation, dysphoria, anxiety, apathy, and irritability [77]. Indeed, Shimabukuro et al. [77] found that regardless of dementia severity, over half of all AD patients exhibited apathy, delusions, irritability, dysphoria, and anxiety. Delusions in AD patients are associated with relative right frontal hypoperfusion as indicated by SPECT imaging [78, 79]. Further, positron emission tomography (PET) has indicated that AD patients with delusions exhibit hypometabolism in the right superior dorsolateral frontal region and the right inferior frontal pole [80].

Although research clearly implicates right frontal lobe dysfunction in the expression of behavioral deficits, data from neuropsychological testing are not as clear. Negative symptoms in patients with AD and FTD have been related to measures of nonverbal and verbal executive functioning as well as verbal memory [81]. Positive symptoms, in contrast, were related to constructional skills and attention. However, Staff et al. [78] failed to dissociate patients with delusions from those without delusions based on neuropsychological test performance, despite significant differences in right frontal and limbic functioning revealed by functional imaging. The inclusion of other measures of right frontal lobe functioning may improve neuropsychological differentiation of dementia patients with and without significant behavioral disturbances. Further, with improved measures of right frontal functioning, it may be possible to predict, early in the disease process, which patients will ultimately develop behavioral disturbances. Predicting who may develop behavioral problems would permit earlier treatment and give families more time to prepare for the potential emergence of such difficulties. Certainly, future research incorporating measures of right and left frontal lobe functioning in regression analyses is needed to determine the plausibility of such prediction.

Tests sensitive to right frontal lobe functioning may also be useful in identifying more subtle right frontal lobe dysfunction and the cognitive and behavioral changes that follow. The right frontal lobe mediates language melody (prosody), the formation of cohesive discourse, and the interpretation of abstract and inferred meaning in spoken and written language. Subtle difficulties in interpreting abstract meaning in communication, comprehending metaphors, and even understanding jokes, often seen in right frontal lobe stroke patients, may go unnoticed by the family and may also be underdiagnosed by clinicians [82]. Further, patients with right frontal lobe lesions are generally more euphoric and unconcerned, often minimizing their symptoms [82] or denying the illness, which may delay referral to a clinician and diagnosis.

Attention deficit hyperactivity disorder (ADHD) is a neurological disease characterized by motor inhibition deficit, problems with cognitive flexibility, social disruption, and emotional disinhibition [83, 84]. Functional MRI studies reveal reduced right prefrontal activation during “frontal tasks,” such as go/no-go [85], Stroop [86], and attention task performance [87]. The right frontal lobe deficit hypothesis is further supported by structural studies [88, 89]. Tests of right frontal lobe functioning may be useful in further characterizing the nature of this deficit and in specifying the likely hemispheric locus of dysfunction.

To summarize, we feel that right frontal lobe functioning has been relatively neglected in neuropsychological assessment and that many uses for such tests exist. Our intent was to develop a test purportedly sensitive to right frontal functioning that would be easy and quick to administer in a clinical setting. However, we certainly do not mean to assert that our FTMT would be applicable in all the aforementioned conditions. Additional research should be conducted to determine the precise clinical utility of the FTMT.

Further validation of the FTMT should also be undertaken. Establishing convergent validation may involve correlating tests measuring the same domain, such as executive functioning. This was initially accomplished in the present investigation through the significant correlations between the TMT and the FTMT. Additionally, convergent validation may also involve correlating tests that purportedly measure the same region of the brain. This was also initially accomplished in the present investigation through the significant correlations between the FTMT and the RFFT. However, additional convergent validation certainly needs to be obtained, as well as validation using patient populations and neurophysiological validation.

We are currently collecting data that we hope will provide neurophysiological validation of the FTMT. It is hoped that the present investigation will not only stimulate further research seeking to validate the FTMT and provide more comprehensive normative data, but also stimulate research investigating whether the FTMT or other measures of right frontal lobe functioning may be used to predict which patients will develop behavioral disturbances.

World’s Greatest Literature Reveals Multifractals, Cascades of Consciousness

http://www.scientificcomputing.com/news/2016/01/worlds-greatest-literature-reveals-multifractals-cascades-consciousness

http://www.scientificcomputing.com/sites/scientificcomputing.com/files/Worlds_Greatest_Literature_Reveals_Multifractals_Cascades_of_Consciousness_440.jpg

Multifractal analysis of Finnegans Wake by James Joyce. The ideal shape of the graph is virtually indistinguishable from the results for purely mathematical multifractals. The horizontal axis represents the degree of singularity, and the vertical axis shows the spectrum of singularity. Courtesy of IFJ PAN

Arthur Conan Doyle, Charles Dickens, James Joyce, William Shakespeare and JRR Tolkien. Regardless of the language they were working in, some of the world’s greatest writers appear to be, in some respects, constructing fractals. Statistical analysis, however, revealed something even more intriguing. The composition of works from within a particular genre was characterized by the exceptional dynamics of a cascading (avalanche) narrative structure. This type of narrative turns out to be multifractal. That is, fractals of fractals are created.

As far as many bookworms are concerned, advanced equations and graphs are the last things that would hold their interest, but there’s no escape from the math. Physicists from the Institute of Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) in Cracow, Poland, performed a detailed statistical analysis of more than one hundred famous works of world literature, written in several languages and representing various literary genres. The books, analyzed for correlations in the variation of sentence length, proved to be governed by the dynamics of a cascade, meaning that the construction of these books is, in fact, fractal. In the case of several works, their mathematical complexity proved exceptional, comparable to the structure of complex mathematical objects considered multifractal. Interestingly, within the analyzed pool of works, one genre turned out to be exceptionally multifractal in nature.
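The basic measurement underlying such an analysis is the sequence of sentence lengths of a text. A rough sketch (the naive punctuation-based splitter below is an assumption for illustration; real studies tokenize sentences far more carefully):

```python
import re

# Sketch of the basic measurement underlying the analysis described:
# the sequence of sentence lengths (in words) of a text. The naive
# splitter on ., ! and ? is an illustrative simplification.

def sentence_lengths(text):
    """Word counts per sentence, splitting on runs of ., ! and ?."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

sample = ("The study involved many books. Each was long. "
          "Sentence length varied considerably from one sentence to the next!")
print(sentence_lengths(sample))  # → [5, 3, 10]
```

Fractal and multifractal analyses are then run on this length series, examining how its fluctuations scale across window sizes.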

Fractals are self-similar mathematical objects: when we begin to expand one fragment or another, what eventually emerges is a structure that resembles the original object. Typical fractals, especially those widely known as the Sierpinski triangle and the Mandelbrot set, are monofractals, meaning that the scaling is the same everywhere: if one region must be magnified x times to reveal a structure similar to the original, magnifying any other region by the same factor will also reveal a similar structure.
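The monofractal scaling described above can be checked numerically. The sketch below (an illustration, not from the paper) builds a Sierpinski triangle and estimates its fractal dimension by box counting; the estimate matches the theoretical value log 3 / log 2 ≈ 1.585:

```python
import numpy as np

def sierpinski(n_iter=7):
    """Build a boolean image of the Sierpinski triangle by iterated tiling."""
    img = np.array([[True]])
    for _ in range(n_iter):
        z = np.zeros_like(img)
        img = np.block([[img, z], [img, img]])
    return img

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate fractal dimension as minus the slope of log N(s) vs log s."""
    counts = []
    h, w = img.shape
    for s in sizes:
        # Count boxes of side s that contain at least one filled pixel
        boxed = img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(boxed.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

dim = box_counting_dimension(sierpinski())
print(round(dim, 2))  # → 1.58, i.e. log 3 / log 2
```

Because the same magnification factor works everywhere, a single exponent (the dimension) fully characterizes the scaling of a monofractal; a multifractal, by contrast, needs a whole spectrum of such exponents.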

Multifractals are more highly advanced mathematical structures: fractals of fractals. They arise from fractals ‘interwoven’ with each other in an appropriate manner and in appropriate proportions. Multifractals are not simply the sum of fractals and cannot be divided back into their original components, because the way they weave is fractal in nature. The result is that, in order to see a structure similar to the original, different portions of a multifractal need to expand at different rates. A multifractal is, therefore, non-linear in nature.

“Analyses on multiple scales, carried out using fractals, allow us to neatly grasp information on correlations among data at various levels of complexity of tested systems. As a result, they point to the hierarchical organization of phenomena and structures found in nature. So, we can expect natural language, which represents a major evolutionary leap of the natural world, to show such correlations as well. Their existence in literary works, however, had not yet been convincingly documented. Meanwhile, it turned out that, when you look at these works from the proper perspective, these correlations appear to be not only common, but in some works they take on a particularly sophisticated mathematical complexity,” says Professor Stanislaw Drozdz, IFJ PAN, Cracow University of Technology.

The study involved 113 literary works written in English, French, German, Italian, Polish, Russian and Spanish by such famous figures as Honore de Balzac, Arthur Conan Doyle, Julio Cortazar, Charles Dickens, Fyodor Dostoevsky, Alexandre Dumas, Umberto Eco, George Eliot, Victor Hugo, James Joyce, Thomas Mann, Marcel Proust, Wladyslaw Reymont, William Shakespeare, Henryk Sienkiewicz, JRR Tolkien, Leo Tolstoy and Virginia Woolf, among others. The selected works were no less than 5,000 sentences long, in order to ensure statistical reliability.

To convert the texts to numerical sequences, sentence length was measured by the number of words (an alternative method of counting characters in the sentence turned out to have no major impact on the conclusions). The data were then searched for dependences, beginning with the simplest, i.e. linear ones. The question posed was: if a sentence of a given length is x times longer than sentences of other lengths, is the same ratio preserved when looking at correspondingly longer or shorter sentences?
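Turning a text into such a numerical sequence is straightforward; a minimal sketch (using a naive punctuation-based sentence splitter, not the researchers’ actual tokenizer) might look like:

```python
import re

def sentence_lengths(text):
    """Split text into sentences (naive: ., !, ? boundaries) and
    return the length of each sentence in words."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    return [len(s.split()) for s in sentences if s]

sample = ("Call me Ishmael. Some years ago, never mind how long precisely, "
          "I went to sea. Why? It is a way I have of driving off the spleen.")
print(sentence_lengths(sample))  # → [3, 12, 1, 11]
```

The resulting series of sentence lengths is what the fractal and multifractal analyses below operate on.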

“All of the examined works showed self-similarity in terms of organization of the lengths of sentences. Some were more expressive — here The Ambassadors by Henry James stood out — while others far less so, as in the case of the French seventeenth-century romance Artamene ou le Grand Cyrus. However, correlations were evident and, therefore, these texts were fractal in construction,” comments Dr. Pawel Oswiecimka (IFJ PAN), who also noted that the fractality of a literary text will, in practice, never be as perfect as in the world of mathematics. Mathematical fractals can be magnified up to infinity, while the number of sentences in each book is finite and, at a certain stage of scaling, there will always be a cut-off in the form of the end of the dataset.

Things took a particularly interesting turn when physicists from IFJ PAN began tracking non-linear dependence, which in most of the studied works was present to a slight or moderate degree. However, more than a dozen works revealed a very clear multifractal structure, and almost all of these proved to be representative of one genre, that of stream of consciousness. The only exception was the Bible, specifically the Old Testament, which has, so far, never been associated with this literary genre.
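The non-linear, scale-dependent correlations the physicists tracked are commonly quantified with multifractal detrended fluctuation analysis (MFDFA). The snippet below is a simplified order-1 MFDFA run on synthetic noise; it illustrates the method in general, not the team’s actual pipeline. A wide spread of the generalized Hurst exponents h(q) across q signals multifractality, while a flat h(q) is monofractal:

```python
import numpy as np

def mfdfa_h(series, q_values, scales):
    """Order-1 multifractal DFA: generalized Hurst exponents h(q)."""
    profile = np.cumsum(series - np.mean(series))
    variances = []
    for s in scales:
        n_seg = len(profile) // s
        segs = profile[:n_seg * s].reshape(n_seg, s)
        x = np.arange(s)
        coeffs = np.polyfit(x, segs.T, 1)            # linear trend per segment
        trend = coeffs[0] * x[:, None] + coeffs[1]   # (s, n_seg)
        v = np.mean((segs - trend.T) ** 2, axis=1)   # detrended variance
        variances.append(v[v > 0])
    log_s = np.log(scales)
    h = []
    for q in q_values:
        # q-th order fluctuation function; q == 0 uses the log-average limit
        log_F = [0.5 * np.mean(np.log(v)) if q == 0
                 else np.log(np.mean(v ** (q / 2))) / q
                 for v in variances]
        h.append(np.polyfit(log_s, log_F, 1)[0])     # slope = h(q)
    return np.array(h)

rng = np.random.default_rng(0)
h = mfdfa_h(rng.normal(size=4096), q_values=[-2, 0, 2],
            scales=[16, 32, 64, 128, 256])
print(h.round(2))  # all values near 0.5 for uncorrelated noise
```

For a monofractal input like the white noise above, h(q) stays roughly constant; in a multifractal series such as the sentence lengths of Finnegans Wake, h(q) varies markedly with q, producing the broad singularity spectrum shown in the figure.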

“The absolute record in terms of multifractality turned out to be Finnegans Wake by James Joyce. The results of our analysis of this text are virtually indistinguishable from ideal, purely mathematical multifractals,” says Drozdz.

The most multifractal works also included A Heartbreaking Work of Staggering Genius by Dave Eggers, Rayuela by Julio Cortazar, the U.S.A. trilogy by John Dos Passos, The Waves by Virginia Woolf, 2666 by Roberto Bolano, and Joyce’s Ulysses. At the same time, many works usually regarded as stream of consciousness turned out to show little multifractality; it was hardly noticeable in books such as Atlas Shrugged by Ayn Rand and A la recherche du temps perdu by Marcel Proust.

“It is not entirely clear whether stream of consciousness writing actually reveals the deeper qualities of our consciousness, or rather the imagination of the writers. It is hardly surprising that ascribing a work to a particular genre is, for whatever reason, sometimes subjective. We see, moreover, the possibility of an interesting application of our methodology: it may someday help in a more objective assignment of books to one genre or another,” notes Drozdz.

Multifractal analyses of literary texts carried out by IFJ PAN have been published in Information Sciences, a computer science journal. The publication underwent rigorous verification: given the interdisciplinary nature of the subject, the editors appointed as many as six reviewers.

Citation: “Quantifying origin and character of long-range correlations in narrative texts” S. Drożdż, P. Oświęcimka, A. Kulig, J. Kwapień, K. Bazarnik, I. Grabska-Gradzińska, J. Rybicki, M. Stanuszek; Information Sciences, vol. 331, 32–44, 20 February 2016; DOI: 10.1016/j.ins.2015.10.023

New Quantum Approach to Big Data could make Impossibly Complex Problems Solvable

David L. Chandler, MIT

http://www.scientificcomputing.com/news/2016/01/new-quantum-approach-big-data-could-make-impossibly-complex-problems-solvable

http://www.scientificcomputing.com/sites/scientificcomputing.com/files/New_Quantum_Approach_to_Big_Data_could_make_Impossibly_Complex_Problems_Solvable_440.jpg

This diagram demonstrates the simplified results that can be obtained by using quantum analysis on enormous, complex sets of data. Shown here are the connections between different regions of the brain in a control subject (left) and a subject under the influence of the psychedelic compound psilocybin (right). This demonstrates a dramatic increase in connectivity, which explains some of the drug’s effects (such as “hearing” colors or “seeing” smells). Such an analysis, involving billions of brain cells, would be too complex for conventional techniques, but could be handled easily by the new quantum approach, the researchers say. Courtesy of the researchers

From gene mapping to space exploration, humanity continues to generate ever-larger sets of data — far more information than people can actually process, manage or understand.

Machine learning systems can help researchers deal with this ever-growing flood of information. Some of the most powerful of these analytical tools are based on a strange branch of geometry called topology, which deals with properties that stay the same even when something is bent and stretched every which way.

Such topological systems are especially useful for analyzing the connections in complex networks, such as the internal wiring of the brain, the U.S. power grid, or the global interconnections of the Internet. But even with the most powerful modern supercomputers, such problems remain daunting and impractical to solve. Now, a new approach that would use quantum computers to streamline these problems has been developed by researchers at MIT, the University of Waterloo, and the University of Southern California.

The team describes their theoretical proposal this week in the journal Nature Communications. Seth Lloyd, the paper’s lead author and the Nam P. Suh Professor of Mechanical Engineering, explains that algebraic topology is key to the new method. This approach, he says, helps to reduce the impact of the inevitable distortions that arise every time someone collects data about the real world.

In a topological description, basic features of the data (How many holes does it have? How are the different parts connected?) are considered the same no matter how much they are stretched, compressed, or distorted. Lloyd explains that it is often these fundamental topological attributes “that are important in trying to reconstruct the underlying patterns in the real world that the data are supposed to represent.”

It doesn’t matter what kind of dataset is being analyzed, he says. The topological approach to looking for connections and holes “works whether it’s an actual physical hole, or the data represents a logical argument and there’s a hole in the argument. This will find both kinds of holes.”

Using conventional computers, that approach is too demanding for all but the simplest situations. Topological analysis “represents a crucial way of getting at the significant features of the data, but it’s computationally very expensive,” Lloyd says. “This is where quantum mechanics kicks in.” The new quantum-based approach, he says, could exponentially speed up such calculations.

Lloyd offers an example to illustrate that potential speedup: If you have a dataset with 300 points, a conventional approach to analyzing all the topological features in that system would require “a computer the size of the universe,” he says. That is, it would take 2^300 (two to the 300th power) processing units — approximately the number of all the particles in the universe. In other words, the problem is simply not solvable in that way.
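That figure is easy to sanity-check: 2^300 is a 91-digit number, comfortably exceeding the roughly 10^80 particles usually estimated for the observable universe:

```python
# 2**300 processing units, versus ~10**80 particles in the observable universe
n = 2 ** 300
print(len(str(n)))   # → 91 decimal digits
print(n > 10 ** 80)  # → True
```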

“That’s where our algorithm kicks in,” he says. Solving the same problem with the new system, using a quantum computer, would require just 300 quantum bits — and a device this size may be achieved in the next few years, according to Lloyd.

“Our algorithm shows that you don’t need a big quantum computer to kick some serious topological butt,” he says.

There are many important kinds of huge datasets where the quantum-topological approach could be useful, Lloyd says, for example understanding interconnections in the brain. “By applying topological analysis to datasets gleaned by electroencephalography or functional MRI, you can reveal the complex connectivity and topology of the sequences of firing neurons that underlie our thought processes,” he says.

The same approach could be used for analyzing many other kinds of information. “You could apply it to the world’s economy, or to social networks, or almost any system that involves long-range transport of goods or information,” Lloyd says. But the limits of classical computation have prevented such approaches from being applied before.

While this work is theoretical, “experimentalists have already contacted us about trying prototypes,” he says. “You could find the topology of simple structures on a very simple quantum computer. People are trying proof-of-concept experiments.”

Ignacio Cirac, a professor at the Max Planck Institute of Quantum Optics in Munich, Germany, who was not involved in this research, calls it “a very original idea, and I think that it has a great potential.” He adds “I guess that it has to be further developed and adapted to particular problems. In any case, I think that this is top-quality research.”

The team also included Silvano Garnerone of the University of Waterloo in Ontario, Canada, and Paolo Zanardi of the Center for Quantum Information Science and Technology at the University of Southern California. The work was supported by the Army Research Office, Air Force Office of Scientific Research, Defense Advanced Research Projects Agency, Multidisciplinary University Research Initiative of the Office of Naval Research, and the National Science Foundation.

Beyond Chess: Computer Beats Human in Ancient Chinese Game

http://www.rdmag.com/news/2016/01/beyond-chess-computer-beats-human-ancient-chinese-game

http://www.rdmag.com/sites/rdmag.com/files/rd1601_chess.jpg

A player places a black stone while his opponent waits to place a white one as they play Go, a game of strategy, in the Seattle Go Center, Tuesday, April 30, 2002. The game, which originated in China more than 2,500 years ago, involves two players who take turns putting markers on a grid. The object is to surround more area on the board with the markers than one’s opponent, as well as capturing the opponent’s pieces by surrounding them. A paper released Wednesday, Jan. 27, 2016 describes how a computer program has beaten a human master at the complex board game, marking significant advance for development of artificial intelligence. (AP Photo/Cheryl Hatch)

A computer program has beaten a human champion at the ancient Chinese board game Go, marking a significant advance for development of artificial intelligence.

The program had taught itself how to win, and its developers say its learning strategy may someday let computers help solve real-world problems like making medical diagnoses and pursuing scientific research.

The program and its victory are described in a paper released Wednesday by the journal Nature.

Computers previously have surpassed humans for other games, including chess, checkers and backgammon. But among classic games, Go has long been viewed as the most challenging for artificial intelligence to master.

Go, which originated in China more than 2,500 years ago, involves two players who take turns putting markers on a checkerboard-like grid. The object is to surround more area on the board with the markers than one’s opponent, as well as capturing the opponent’s pieces by surrounding them.

While the rules are simple, playing it well is not. It’s “probably the most complex game ever devised by humans,” Demis Hassabis of Google DeepMind in London, one of the study authors, told reporters Tuesday.

The new program, AlphaGo, defeated the European champion in all five games of a match in October, the Nature paper reports.

In March, AlphaGo will face legendary player Lee Sedol in Seoul, South Korea, for a $1 million prize, Hassabis said.

Martin Mueller, a computing science professor at the University of Alberta in Canada who has worked on Go programs for 30 years but didn’t participate in AlphaGo, said the new program “is really a big step up from everything else we’ve seen…. It’s a very, very impressive piece of work.”

Biological Origin of Schizophrenia

Excessive ‘pruning’ of connections between neurons in brain predisposes to disease

http://hms.harvard.edu/sites/default/files/uploads/news/McCarroll_C4_600x400.jpg

Imaging studies showed C4 (in green) located at the synapses of primary human neurons. Image: Heather de Rivera, McCarroll lab

 PAUL GOLDSMITH    http://hms.harvard.edu/news/biological-origin-schizophrenia

The risk of schizophrenia increases if a person inherits specific variants in a gene related to “synaptic pruning”—the elimination of connections between neurons—according to a study from Harvard Medical School, the Broad Institute and Boston Children’s Hospital. The findings were based on genetic analysis of nearly 65,000 people.

The study represents the first time that the origin of this psychiatric disease has been causally linked to specific gene variants and a biological process.


It also helps explain two decades-old observations: synaptic pruning is particularly active during adolescence, which is the typical period of onset for symptoms of schizophrenia, and the brains of schizophrenic patients tend to show fewer connections between neurons.

The gene, complement component 4 (C4), plays a well-known role in the immune system. It has now been shown to also play a key role in brain development and schizophrenia risk. The insight may allow future therapeutic strategies to be directed at the disorder’s roots, rather than just its symptoms.

The study, which appears online Jan. 27 in Nature, was led by HMS researchers at the Broad Institute’s Stanley Center for Psychiatric Research and Boston Children’s. They include senior author Steven McCarroll, HMS associate professor of genetics and director of genetics for the Stanley Center; Beth Stevens, HMS assistant professor of neurology at Boston Children’s and institute member at the Broad; Michael Carroll, HMS professor of pediatrics at Boston Children’s; and first author Aswin Sekar, an MD-PhD student at HMS.

The study has the potential to reinvigorate translational research on a debilitating disease. Schizophrenia afflicts approximately 1 percent of people worldwide and is characterized by hallucinations, emotional withdrawal and a decline in cognitive function. These symptoms most frequently begin in patients when they are teenagers or young adults.


First described more than 130 years ago, schizophrenia lacks highly effective treatments and has seen few biological or medical breakthroughs over the past half-century.

In the summer of 2014, an international consortium led by researchers at the Stanley Center identified more than 100 regions in the human genome that carry risk factors for schizophrenia.

The newly published study now reports the discovery of the specific gene underlying the strongest of these risk factors and links it to a specific biological process in the brain.

“Since schizophrenia was first described over a century ago, its underlying biology has been a black box, in part because it has been virtually impossible to model the disorder in cells or animals,” said McCarroll. “The human genome is providing a powerful new way in to this disease. Understanding these genetic effects on risk is a way of prying open that black box, peering inside and starting to see actual biological mechanisms.”

“This study marks a crucial turning point in the fight against mental illness,” said Bruce Cuthbert, acting director of the National Institute of Mental Health. “Because the molecular origins of psychiatric diseases are little-understood, efforts by pharmaceutical companies to pursue new therapeutics are few and far between. This study changes the game. Thanks to this genetic breakthrough we can finally see the potential for clinical tests, early detection, new treatments and even prevention.”

The path to discovery

The discovery involved the collection of DNA from more than 100,000 people, detailed analysis of complex genetic variation in more than 65,000 human genomes, development of an innovative analytical strategy, examination of postmortem brain samples from hundreds of people and the use of animal models to show that a protein from the immune system also plays a previously unsuspected role in the brain.

Over the past five years, Stanley Center geneticists and collaborators around the world collected more than 100,000 human DNA samples from 30 different countries to locate regions of the human genome harboring genetic variants that increase the risk of schizophrenia. The strongest signal by far was on chromosome 6, in a region of DNA long associated with infectious disease. This caused some observers to suggest that schizophrenia might be triggered by an infectious agent. But researchers had no idea which of the hundreds of genes in the region was actually responsible or how it acted.

Based on analyses of the genetic data, McCarroll and Sekar focused on a region containing the C4 gene. Unlike most genes, C4 has a high degree of structural variability. Different people have different numbers of copies and different types of the gene.

McCarroll and Sekar developed a new molecular technique to characterize the C4 gene structure in human DNA samples. They also measured C4 gene activity in nearly 700 post-mortem brain samples.

They found that the C4 gene structure (DNA) could predict the C4 gene activity (RNA) in each person’s brain. They then used this information to infer C4 gene activity from genome data from 65,000 people with and without schizophrenia.

These data revealed a striking correlation. People who had particular structural forms of the C4 gene showed higher expression of that gene and, in turn, had a higher risk of developing schizophrenia.
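The chain of inference described here (gene structure predicts expression, which in turn tracks with risk) can be illustrated with a toy simulation. Everything below is synthetic and purely illustrative; none of the numbers come from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic, purely illustrative numbers (not the study's data):
# each person carries 0-4 copies of a gene, expression scales with copy
# number, and disease probability rises with expression via a logistic link.
copies = rng.integers(0, 5, size=10_000)
expression = copies + rng.normal(0.0, 0.5, size=copies.size)
p_disease = 1.0 / (1.0 + np.exp(-(0.8 * expression - 2.5)))
disease = rng.random(copies.size) < p_disease

# Observed disease frequency climbs with copy number
freq = [disease[copies == c].mean() for c in range(5)]
print([round(f, 2) for f in freq])
```

In this toy model, as in the correlation the researchers report, groups carrying more copies show higher expression and a correspondingly higher observed disease frequency.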

Connecting cause and effect through neuroscience

But how exactly does C4—a protein known to mark infectious microbes for destruction by immune cells—affect the risk of schizophrenia?

Answering this question required synthesizing genetics and neurobiology.

Stevens, a recent recipient of a MacArthur Foundation “genius grant,” had found that other complement proteins in the immune system also played a role in brain development. These results came from studying an experimental model of synaptic pruning in the mouse visual system.


Carroll had long studied C4 for its role in immune disease, and developed mice with different numbers of copies of C4.

The three labs set out to study the role of C4 in the brain.

They found that C4 played a key role in pruning synapses during maturation of the brain. In particular, they found that C4 was necessary for another protein—a complement component called C3—to be deposited onto synapses as a signal that the synapses should be pruned. The data also suggested that the more C4 activity an animal had, the more synapses were eliminated in its brain at a key time in development.

The findings may help explain the longstanding mystery of why the brains of people with schizophrenia tend to have a thinner cerebral cortex (the brain’s outer layer, responsible for many aspects of cognition) with fewer synapses than do brains of unaffected individuals. The work may also help explain why the onset of schizophrenia symptoms tends to occur in late adolescence.

The human brain normally undergoes widespread synapse pruning during adolescence, especially in the cerebral cortex. Excessive synaptic pruning during adolescence and early adulthood, due to increased complement (C4) activity, could lead to the cognitive symptoms seen in schizophrenia.

“Once we had the genetic findings in front of us we started thinking about the possibility that complement molecules are excessively tagging synapses in the developing brain,” Stevens said.

“This discovery enriches our understanding of the complement system in brain development and in disease, and we could not have made that leap without the genetics,” she said. “We’re far from having a treatment based on this, but it’s exciting to think that one day we might be able to turn down the pruning process in some individuals and decrease their risk.”

Opening a path toward early detection and potential therapies

Beyond providing the first insights into the biological origins of schizophrenia, the work raises the possibility that therapies might someday be developed that could turn down the level of synaptic pruning in people who show early symptoms of schizophrenia.

This would be a dramatically different approach from current medical therapies, which address only a specific symptom of schizophrenia—psychosis—rather than the disorder’s root causes, and which do not stop cognitive decline or other symptoms of the illness.

The researchers emphasize that therapies based on these findings are still years down the road. Still, the fact that much is already known about the role of complement proteins in the immune system means that researchers can tap into a wealth of existing knowledge to identify possible therapeutic approaches. For example, anticomplement drugs are already under development for treating other diseases.

“In this area of science, our dream has been to find disease mechanisms that lead to new kinds of treatments,” said McCarroll. “These results show that it is possible to go from genetic data to a new way of thinking about how a disease develops—something that has been greatly needed.”

This work was supported by the Broad Institute’s Stanley Center for Psychiatric Research and by the National Institutes of Health (grants U01MH105641, R01MH077139 and T32GM007753).

Adapted from a Broad Institute news release.

 

Scientists open the ‘black box’ of schizophrenia with dramatic genetic discovery

Amy Ellis Nutt    https://www.washingtonpost.com/news/speaking-of-science/wp/2016/01/27/scientists-open-the-black-box-of-schizophrenia-with-dramatic-genetic-finding/

Scientists Prune Away Schizophrenia’s Hidden Genetic Mechanisms

http://www.genengnews.com/gen-news-highlights/scientists-prune-away-schizophrenia-s-hidden-genetic-mechanisms/81252297/

https://youtu.be/s0y4equOTLg

A landmark study has revealed that a person’s risk of schizophrenia is increased if they inherit specific variants in a gene related to “synaptic pruning”—the elimination of connections between neurons. The findings represent the first time that the origin of this devastating psychiatric disease has been causally linked to specific gene variants and a biological process.

http://www.genengnews.com/Media/images/GENHighlight/thumb_107629_web2209513618.jpg

The site in Chromosome 6 harboring the gene C4 towers far above other risk-associated areas on schizophrenia’s genomic “skyline,” marking its strongest known genetic influence. The new study is the first to explain how specific gene versions work biologically to confer schizophrenia risk. [Psychiatric Genomics Consortium]

  • A new study by researchers at the Broad Institute’s Stanley Center for Psychiatric Research, Harvard Medical School, and Boston Children’s Hospital genetically analyzed nearly 65,000 people and revealed that an individual’s risk of schizophrenia is increased if they inherited distinct variants in a gene related to “synaptic pruning”—the elimination of connections between neurons. This new data represents the first time that the origin of this psychiatric disease has been causally linked to particular gene variants and a biological process.

The investigators discovered that versions of a gene commonly thought to be involved in immune function might trigger a runaway pruning of an adolescent brain’s still-maturing communications infrastructure, a scenario consistent with the observation that patients with schizophrenia show fewer such connections between neurons, or synapses.

“Normally, pruning gets rid of excess connections we no longer need, streamlining our brain for optimal performance, but too much pruning can impair mental function,” explained Thomas Lehner, Ph.D., director of the Office of Genomics Research Coordination at the NIH’s National Institute of Mental Health (NIMH), which co-funded the study along with the Stanley Center for Psychiatric Research at the Broad Institute and other NIH components. “It could help explain schizophrenia’s delayed age-of-onset of symptoms in late adolescence and early adulthood and shrinkage of the brain’s working tissue. Interventions that put the brakes on this pruning process-gone-awry could prove transformative.”

The gene the research team implicated, dubbed C4 (complement component 4), was associated with the largest risk for the disorder. C4’s role represents some of the most compelling evidence to date linking specific gene versions to a biological process that could cause at least some cases of the illness.

The findings from this study were published recently in Nature through an article entitled “Schizophrenia risk from complex variation of complement component 4.”

“Since schizophrenia was first described over a century ago, its underlying biology has been a black box, in part because it has been virtually impossible to model the disorder in cells or animals,” noted senior study author Steven McCarroll, Ph.D., director of genetics for the Stanley Center and an associate professor of genetics at Harvard Medical School. “The human genome is providing a powerful new way into this disease. Understanding these genetic effects on risk is a way of prying open that black box, peering inside and starting to see actual biological mechanisms.”

Dr. McCarroll and his colleagues found that a stretch of chromosome 6 encompassing several genes known to be involved in immune function emerged as the strongest signal associated with schizophrenia risk in genome-wide analyses. Yet conventional genetics failed to turn up any specific gene versions there that were linked to schizophrenia.

In order to uncover how the immune-related site confers risk for the mental disorder, the scientists mounted a search for cryptic genetic influences that might generate unconventional signals. C4, a gene with known roles in immunity, emerged as a prime suspect because it is unusually variable across individuals.

Upon further investigation into the complexities of how such structural variation relates to the gene’s level of expression and how that, in turn, might link to schizophrenia, the team discovered structurally distinct versions that affect expression of two main forms of the gene within the brain. The more a version resulted in expression of one of the forms, called C4A, the more it was associated with schizophrenia. The greater number of copies an individual had of the suspect versions, the more C4 switched on and the higher their risk of developing schizophrenia. Furthermore, the C4 protein turned out to be most prevalent within the cellular machinery that supports connections between neurons.

“Once we had the genetic findings in front of us we started thinking about the possibility that complement molecules are excessively tagging synapses in the developing brain,” remarked co-author Beth Stevens, Ph.D. a neuroscientist and assistant professor of neurology at Boston Children’s Hospital and institute member at the Broad. “This discovery enriches our understanding of the complement system in brain development and disease, and we could not have made that leap without the genetics. We’re far from having a treatment based on this, but it’s exciting to think that one day we might be able to turn down the pruning process in some individuals and decrease their risk.”

“This study marks a crucial turning point in the fight against mental illness,” added acting NIMH director Bruce Cuthbert, Ph.D. “Because the molecular origins of psychiatric diseases are little-understood, efforts by pharmaceutical companies to pursue new therapeutics are few and far between. This study changes the game. Thanks to this genetic breakthrough, we can finally see the potential for clinical tests, early detection, new treatments, and even prevention.”

Connecting cause and effect through neuroscience

But how exactly does C4—a protein known to mark infectious microbes for destruction by immune cells—affect the risk of schizophrenia?

Answering this question required synthesizing genetics and neurobiology.

Stevens, a recent recipient of a MacArthur Foundation “genius grant,” had found that other complement proteins in the immune system also played a role in brain development. These results came from studying an experimental model of synaptic pruning in the mouse visual system.

“This discovery enriches our understanding of the complement system in brain development and in disease, and we could not have made that leap without the genetics.”

Carroll had long studied C4 for its role in immune disease, and developed mice with different numbers of copies of C4.

The three labs set out to study the role of C4 in the brain.

They found that C4 played a key role in pruning synapses during maturation of the brain. In particular, they found that C4 was necessary for another protein—a complement component called C3—to be deposited onto synapses as a signal that the synapses should be pruned. The data also suggested that the more C4 activity an animal had, the more synapses were eliminated in its brain at a key time in development.

The findings may help explain the longstanding mystery of why the brains of people with schizophrenia tend to have a thinner cerebral cortex (the brain’s outer layer, responsible for many aspects of cognition) with fewer synapses than do brains of unaffected individuals. The work may also help explain why the onset of schizophrenia symptoms tends to occur in late adolescence.

The human brain normally undergoes widespread synapse pruning during adolescence, especially in the cerebral cortex. Excessive synaptic pruning during adolescence and early adulthood, due to increased complement (C4) activity, could lead to the cognitive symptoms seen in schizophrenia.

Opening a path toward early detection and potential therapies

Beyond providing the first insights into the biological origins of schizophrenia, the work raises the possibility that therapies might someday be developed that could turn down the level of synaptic pruning in people who show early symptoms of schizophrenia.

This would be a dramatically different approach from current medical therapies, which address only a specific symptom of schizophrenia—psychosis—rather than the disorder’s root causes, and which do not stop cognitive decline or other symptoms of the illness.

The researchers emphasize that therapies based on these findings are still years down the road. Still, the fact that much is already known about the role of complement proteins in the immune system means that researchers can tap into a wealth of existing knowledge to identify possible therapeutic approaches. For example, anticomplement drugs are already under development for treating other diseases.

“In this area of science, our dream has been to find disease mechanisms that lead to new kinds of treatments,” said McCarroll. “These results show that it is possible to go from genetic data to a new way of thinking about how a disease develops—something that has been greatly needed.”

This work was supported by the Broad Institute’s Stanley Center for Psychiatric Research and by the National Institutes of Health (grants U01MH105641, R01MH077139 and T32GM007753).

Adapted from a Broad Institute news release.


https://img.washingtonpost.com/wp-apps/imrs.php?src=https://img.washingtonpost.com/rf/image_908w/2010-2019/WashingtonPost/2011/09/27/Production/Sunday/SunBiz/Images/mental2b.jpg&w=1484

For the first time, scientists have pinned down a molecular process in the brain that helps to trigger schizophrenia. The researchers involved in the landmark study, which was published Wednesday in the journal Nature, say the discovery of this new genetic pathway probably reveals what goes wrong neurologically in a young person diagnosed with the devastating disorder.

The study marks a watershed moment, with the potential for early detection and new treatments that were unthinkable just a year ago, according to Steven Hyman, director of the Stanley Center for Psychiatric Research at the Broad Institute at MIT. Hyman, a former director of the National Institute of Mental Health, calls it “the most significant mechanistic study about schizophrenia ever.”

“I’m a crusty, old, curmudgeonly skeptic,” he said. “But I’m almost giddy about these findings.”

The researchers, chiefly from the Broad Institute, Harvard Medical School and Boston Children’s Hospital, found that a person’s risk of schizophrenia is dramatically increased if they inherit variants of a gene important to “synaptic pruning” — the healthy reduction during adolescence of brain cell connections that are no longer needed.

In patients with schizophrenia, a variation in a single position in the DNA sequence marks too many synapses for removal, and pruning goes out of control. The result is an abnormal loss of gray matter.

The genes involved coat the neurons with “eat-me signals,” said study co-author Beth Stevens, a neuroscientist at Children’s Hospital and Broad. “They are tagging too many synapses. And they’re gobbled up.”

The Institute’s founding director, Eric Lander, believes the research represents an astonishing breakthrough. “It’s taking what has been a black box…and letting us peek inside for the first time. And that is amazingly consequential,” he said.

The timeline for this discovery has been relatively fast. In July 2014, Broad researchers published the results of the largest genomic study on the disorder and found more than 100 genetic locations linked to schizophrenia. Based on that research, Harvard and Broad geneticist Steven McCarroll analyzed data from about 29,000 schizophrenia cases, 36,000 controls and 700 post mortem brains. The information was drawn from dozens of studies performed in 22 countries, all of which contribute to the worldwide database called the Psychiatric Genomics Consortium.

One area in particular, when graphed, showed the strongest association. The graph was dubbed a “Manhattan plot” for its resemblance to New York City’s towering skyline. The highest peak sat on chromosome 6, where McCarroll’s team discovered the gene variant. C4 was “a dark corner of the human genome,” he said, an area difficult to decipher because of its “astonishing level” of diversity.
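The “Manhattan plot” idea can be illustrated in a few lines: each variant’s association p-value becomes a −log10(p) tower arranged by genomic position, and the tallest tower marks the strongest association. All SNP positions and p-values below are invented for illustration.

```python
import math

# Each SNP's p-value becomes a -log10(p) "tower" plotted by position.
# All SNP positions and p-values here are invented for illustration.
snps = [
    ("chr1",   1_500_000, 3e-4),
    ("chr6",  32_000_000, 5e-12),  # MHC region: the tallest tower
    ("chr6",  32_100_000, 2e-9),
    ("chr11",  9_000_000, 1e-5),
]

heights = [(chrom, pos, -math.log10(p)) for chrom, pos, p in snps]
peak = max(heights, key=lambda t: t[2])  # the strongest association
```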

C4 and numerous other genes reside in a region of chromosome 6 involved in the immune system, which clears out pathogens and similar cellular debris from the brain. The study’s researchers found that one of C4’s variants, C4A, was most associated with a risk for schizophrenia.

More than 25 million people around the globe are affected by schizophrenia, according to the World Health Organization, including 2 million to 3 million Americans. Highly heritable, it is one of the most severe mental illnesses, with an annual economic burden in the United States of tens of billions of dollars.

“This paper is really exciting,” said Jacqueline Feldman, associate medical director of the National Alliance on Mental Illness. “We as scientists and physicians have to temper our enthusiasm because we’ve gone down this path before. But this is profoundly interesting.”

There have been hundreds of theories about schizophrenia over the years, but one of the enduring mysteries has been how three prominent findings related to each other: the apparent involvement of immune molecules, the disorder’s typical onset in late adolescence and early adulthood, and the thinning of gray matter seen in autopsies of patients.

“The thing about this result,” said McCarroll, the lead author, “is that it makes a lot of other things understandable. To have a result that connects these observations, and to have a molecule and a strong level of genetic evidence from tens of thousands of research participants, I think that combination sets [this study] apart.”

The authors stressed that their findings, which combine basic science with large-scale analysis of genetic studies, depended on an unusual level of cooperation among experts in genetics, molecular biology, developmental neurobiology and immunology.

“This could not have been done five years ago,” said Hyman. “This required the ability to reference a very large dataset. … When I was [NIMH] director, people really resisted collaborating. They were still in the Pharaoh era. They wanted to be buried with their data.”

The study offers a new approach to schizophrenia research, which has been largely stagnant for decades.  Most psychiatric drugs seek to interrupt psychotic thinking, but experts agree that psychosis is just a single symptom — and a late-occurring one at that. One of the chief difficulties for psychiatric researchers, setting them apart from most other medical investigators, is that they can’t cut schizophrenia out of the brain and look at it under a microscope. Nor are there any good animal models.

All that now has changed, according to Stevens. “We now have a strong molecular handle, a pathway and a gene, to develop better models,” she said.

Which isn’t to say a cure is right around the corner.

“This is the first exciting clue, maybe even the most important we’ll ever have, but it will be decades before a true cure is found,” Hyman said. “Hope is a wonderful thing. False promise is not.”

Insight Pharma Report

This report focuses heavily on three neurodegenerative disorders: Alzheimer’s Disease/Mild Cognitive Impairment, Parkinson’s Disease, and Amyotrophic Lateral Sclerosis. Part II of the report covers all three, highlighting the background, history, and development of each disease. Deeper into the chapters, the report presents biomarkers under investigation, genetic targets, and an analysis of multiple studies investigating these elements.

Experts interviewed in these chapters include:

  • Dr. Jens Wendland, Head of Neuroscience Genetics, Precision Medicine, Clinical Research, Pfizer Worldwide R&D
  • Dr. Howard J. Federoff, Executive Vice President for Health Sciences, Georgetown University
  • Dr. Andrew West, Associate Professor of Neurology and Neurobiology and Co-Director, Center for Neurodegeneration and Experimental Therapeutics
  • Dr. Merit Ester Cudkowicz, Chief of Neurology at Massachusetts General Hospital

Part III of the report makes a shift from neurobiomarkers to neurodiagnostics. This section highlights several diagnostics in play and in the making from a number of companies, identifying company strategies, research underway, hypotheses, and institution goals. Elite researchers and companies highlighted in this part include:

  • Dr. Xuemei Huang, Professor and Vice Chair, Department of Neurology; Professor of Neurosurgery, Radiology,  Pharmacology, and Kinesiology Director; Hershey Brain Analysis Research Laboratory for Neurodegenerative Disorders, Penn State University-Milton, S. Hershey Medical Center Department of Neurology
  • Dr. Andreas Jeromin, CSO and President of Atlantic Biomarkers
  • Julien Bradley, Senior Director, Sales & Marketing, Quanterix
  • Dr. Scott Marshall, Head of Bioanalytics, and Dr. Jared Kohler, Head of Biomarker Statistics, BioStat Solutions, Inc.

Further analysis appears in Part IV. This section includes a survey conducted exclusively for this report. With over 30 figures and graphics and an in-depth analysis, this part offers insight into targets under investigation, challenges, advantages, and desired features of future diagnostic applications. The survey also covers more than the neurodegenerative disorders featured in this report, expanding to Multiple Sclerosis and Huntington’s Disease.

Finally, Insight Pharma Reports concludes this report with clinical trial and pipeline data featuring targets and products from over 300 companies working in Alzheimer’s Disease, Parkinson’s Disease and Amyotrophic Lateral Sclerosis.

Epigenome Tapped to Understand Rise of Subtype of Brain Medulloblastoma

http://www.genengnews.com/gen-news-highlights/epigenome-tapped-to-understand-rise-of-subtype-of-brain-medulloblastoma/81252294/

Scientists have identified the cells that likely give rise to the brain tumor subtype Group 4 medulloblastoma. [V. Yakobchuk/ Fotolia]

http://www.genengnews.com/Media/images/GENHighlight/thumb_Jan_28_2016_Fotolia_6761569_ColorfulBrain_4412824411.jpg

An international team of scientists say they have identified the cells that likely give rise to the brain tumor subtype Group 4 medulloblastoma. They believe their study (“Active medulloblastoma enhancers reveal subgroup-specific cellular origins”), published in Nature, removes a barrier to developing more effective targeted therapies against the brain tumor’s most common subtype.

Medulloblastoma occurs in infants, children, and adults, but it is the most common malignant pediatric brain tumor. The disease includes four biologically and clinically distinct subtypes, of which Group 4 is the most common. In children, about half of medulloblastoma patients are of the Group 4 subtype. Efforts to improve patient outcomes, particularly for those with high-risk Group 4 medulloblastoma, have been hampered by the lack of accurate animal models.

Evidence from this study suggests Group 4 tumors begin in neural stem cells that are born in a region of the developing cerebellum called the upper rhombic lip (uRL), according to the researchers.

“Pinpointing the cell(s) of origin for Group 4 medulloblastoma will help us to better understand normal cerebellar development and dramatically improve our chances of developing genetically faithful preclinical mouse models. These models are desperately needed for learning more about Group 4 medulloblastoma biology and evaluating rational, molecularly targeted therapies to improve patient outcomes,” said Paul Northcott, Ph.D., an assistant member of the St. Jude department of developmental neurobiology. Dr. Northcott, Stefan Pfister, M.D., of the German Cancer Research Center (DKFZ), and James Bradner, M.D., of Dana-Farber Cancer Institute, are the corresponding authors.

The discovery and other findings about the missteps fueling tumor growth came from studying the epigenome. Researchers used the analytic tool ChIP-seq to identify and track medulloblastoma subtype differences based on the activity of epigenetic regulators, including proteins known as master regulator transcription factors, which bind to DNA enhancers and super-enhancers. The master regulator transcription factors and super-enhancers work together to regulate the expression of critical genes, such as those responsible for cell identity.

Those and other tools helped investigators identify more than 3,000 super-enhancers in 28 medulloblastoma tumors as well as evidence that the activity of super-enhancers varied by subtype. The super-enhancers switched on known cancer genes, including genes like ALK, MYC, SMO, and OTX2 that are associated with medulloblastoma, the researchers reported.

Knowledge of the subtype super-enhancers led to identification of the transcription factors that regulate their activity. Using computational methods, researchers applied that information to reconstruct the transcription factor networks responsible for medulloblastoma subtype diversity and identity, providing previously unknown insights into the regulatory landscape and transcriptional output of the different medulloblastoma subtypes.

The approach helped to discover and nominate Lmx1A as a master regulator transcription factor of Group 4 tumors, which led to the identification of the likely Group 4 tumor cells of origin. Lmx1A was known to play an important role in normal development of cells in the uRL and cerebellum. Additional studies performed in mice with and without Lmx1A in this study supported uRL cells as the likely source of Group 4 tumors.

“By studying the epigenome, we also identified new pathways and molecular dependencies not apparent in previous gene expression and mutational studies,” explained Dr. Northcott. “The findings open new therapeutic avenues, particularly for the Group 3 and 4 subtypes where patient outcomes are inferior for the majority of affected children.”

For example, researchers identified increased enhancer activity targeting the TGF-β pathway. The finding adds to evidence that the pathway may drive Group 3 medulloblastoma, currently the subtype with the worst prognosis. The pathway regulates cell growth, cell death, and other functions that are often disrupted in cancer, but its role in medulloblastoma is poorly understood.

The analysis included samples from 28 medulloblastoma tumors representing the four subtypes. Researchers believe it is the largest epigenetic study yet for any single cancer type and, importantly, the first to use a large cohort of primary patient tumor tissues instead of cell lines grown in the laboratory. Previous studies have suggested that cell lines may be of limited use for studying the tumor epigenome. The three Group 3 medulloblastoma cell lines used in this study reinforced the observation, highlighting significant differences in epigenetic regulators at work in medulloblastoma cell lines versus tumor samples.

Read Full Post »

at #JPM16 – Moderna Therapeutics turns away an extra $200 million: with AstraZeneca (collaboration) & with Merck ($100 million investment)

Reporter: Aviva Lev-Ari, PhD, RN


per SOURCES quoted:


AstraZeneca, for one, has sent an important validating signal to outsiders by continuing to invest in 29 Moderna drug candidates at last count. The financial community can’t get enough. As ambitious as Moderna has been with a dream to disrupt conventional small molecule drugs and protein therapies, it recognizes it can only do so much. Moderna turned away an extra $200 million of investment that would have made its $500 million round a $700 million round. The company didn’t need that much. “It’s bizarre,” Bancel said. “I used to spend my time begging for money. Now I had to go to my board and say ‘We’re going to turn down $200 million.’”

READ MORE @SOURCE

http://www.forbes.com/sites/luketimmerman/2015/01/13/merck-joins-messenger-rna-frenzy-betting-100m-on-moderna-therapeutics/#2715e4857a0b3c845e263de9


AstraZeneca and Moderna Therapeutics announce new collaboration to co-develop and co-commercialise immuno-oncology mRNA therapeutics™

PUBLISHED 11 January 2016

Moderna to lead preclinical development; AstraZeneca to lead clinical development; Moderna to co-commercialise and share profits on resulting products in the US

11 January 2016

AstraZeneca, along with its global biologics research and development arm, MedImmune, and Moderna Therapeutics today announced a new collaboration to discover, co-develop and co-commercialise messenger RNA (mRNA) therapeutic candidates for the treatment of a range of cancers. The collaboration is in addition to the agreement announced by the companies in 2013 to develop mRNA Therapeutics™ for the treatment of cardiovascular, metabolic and renal diseases as well as selected targets in oncology.

The collaboration will combine MedImmune’s protein engineering and cancer biology expertise with Moderna’s mRNA platform. mRNA-based therapies are an innovative treatment approach that enables the body to produce therapeutic protein in vivo, opening up new treatment options for a wide range of diseases that cannot be addressed today using existing technologies.

READ MORE @SOURCE

https://www.astrazeneca.com/our-company/media-centre/press-releases/2016/AstraZeneca-and-Moderna-Therapeutics-announce-new-collaboration-to-co-develop-and-co-commercialise-immuno-oncology-mRNA-therapeutics-11012016.html


Merck Joins Messenger RNA Frenzy, Betting $100M On Moderna Therapeutics

— Luke Timmerman, founder and editor of Timmerman Report

Read Full Post »

Blocking miRNAs in Triple Negative Breast Cancer

Larry H. Bernstein, MD, FCAP, Curator

LPBI


Why Blocking microRNAs in Triple Negative Breast Cancer Is so Difficult

http://www.genengnews.com/gen-news-highlights/why-blocking-micrornas-in-triple-negative-breast-cancer-is-so-difficult/81252048/

New research uncovers why attempts at blocking microRNAs for triple negative breast cancer often fail. [National Breast Cancer Foundation]

http://www.genengnews.com/Media/images/GENHighlight/thumb_Dec2_2015_NationalBreastCancerFoundation_MicroRNAs4198253108.jpg


While a triple-negative score is hardly ever a good thing, for breast cancer it is especially troubling. Triple-negative breast cancer (TNBC) refers to a disease scenario in which the cancer cells express none of the genes for the estrogen, progesterone, and HER2/neu receptors, making this cancer particularly aggressive and difficult to treat, as most targeted therapies act through one of these receptors.

Over the past several years, researchers have discovered that various microRNAs (miRNAs) underlie the expression of certain genes that can enable cancer cells to proliferate faster. However, attempts to block these miRNAs in TNBC have been met with failure. Now, scientists from Thomas Jefferson University in Philadelphia believe they have discovered why conventional methods of blocking these miRNAs have thus far been unsuccessful.

“Triple-negative breast cancer is one of the most aggressive forms of breast cancer, and there’s been a lot of excitement in blocking the microRNAs that appear to make this type of cancer grow faster and resist conventional treatment,” explained senior author Eric Wickstrom, Ph.D., professor in the department of biochemistry and molecular biology at TJU. “However, blocking microRNAs hasn’t met with great success, and this paper offers one explanation for why that might be the case.”

The findings from this study were published online today in PLOS ONE through an article entitled “Non-specific blocking of miR-17-5p guide strand in triple negative breast cancer cells by amplifying passenger strand activity.” Insight from this study may enable new and more effective design of blockers against previously intractable miRNAs.

“Triple negative breast cancer strikes younger women, tragically killing them in as little as two years,” noted lead author Yuan-Yuan Jin, a doctoral candidate in the department of biochemistry and molecular biology at TJU. “Only chemotherapy and radiation are approved therapies for triple negative breast cancer. We want to treat a genetic target that will keep patients alive with a good quality of life.”

The investigators targeted the miRNA molecule miR-17, which has been shown previously to cause a surge in TNBC growth by silencing genes that would normally signal a diseased or early cancerous cell to die: specifically, the tumor suppressor genes PDCD4 and PTEN.

However, when the TJU researchers tried to reduce the levels of miR-17 in TNBC cells, rather than the increase in the tumor suppressor genes they had anticipated, they saw an even larger decrease in these genes than in unmodified controls.

The TJU team was working under the current assumption that miRNAs, which are double stranded, silence genes using only one of their two strands, the guide strand, which is complementary to part of the messenger RNA coding sequence. The matching, so-called passenger strand was thought to be discarded and degraded by the cell.

Using a method to block miRNAs that involves flooding the cell with modified RNA sequences that mimic the passenger strand and bind to the single-stranded guide strand before it reaches its target, the TJU team saw even more silencing of the PDCD4 and PTEN genes. After bioinformatic and folding-energy calculations, the authors realized that both strands of miR-17 were active in downregulating the tumor suppressor genes.
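The finding that both strands were active can be pictured with basic miRNA seed matching: a strand can silence a transcript that carries a site complementary to the strand’s seed region (nucleotides 2–8). The sketch below uses invented placeholder sequences, not the real miR-17-5p/3p strands.

```python
# A strand silences a transcript bearing a site complementary to its
# seed (miRNA nucleotides 2-8). Sequences are invented placeholders.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_site(mirna: str) -> str:
    """Return the target-site sequence (5'->3') complementary to the
    miRNA's seed region, nucleotides 2-8."""
    seed = mirna[1:8]
    return "".join(COMPLEMENT[base] for base in reversed(seed))

guide     = "CAAAGUGCUUACAGUGCAGGG"  # hypothetical guide strand
passenger = "ACUGCAGUGAAGGCACUUGUG"  # hypothetical passenger strand

# A transcript carrying sites for BOTH strands is silenced by either:
utr = "GGG" + seed_site(guide) + "AAA" + seed_site(passenger) + "CCC"
both_active = seed_site(guide) in utr and seed_site(passenger) in utr
```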

“Rather than blocking miR-17, we were inadvertently boosting its levels, and, therefore, boosting the cell’s cancerous potential,” noted Jin.

The results of the current study should help to open a pathway to designing specific blockers of one microRNA strand without imitating the opposite strand. Dr. Wickstrom added that “we are now testing new miR-17 blocker designs made possible by these results.”

Read Full Post »

Gene-Silencing and Gene-Disabling in Pharmaceutical Development

Larry H. Bernstein, MD, FCAP, Curator

LPBI

2.1.2.8

Gene-Silencing and Gene-Disabling in Pharmaceutical Development, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 2: CRISPR for Gene Editing and DNA Repair

Down and Out with RNAi and CRISPR

http://www.genengnews.com/gen-articles/down-and-out-with-rnai-and-crispr/5619/

RNA interference (RNAi) silences, or knocks down, the translation of a gene by inducing degradation of a gene target’s transcript. To advance RNAi applications, Thermo Fisher Scientific has developed two types of small RNA molecules: short interfering RNAs and microRNAs. The company also offers products for RNAi analysis in vitro and in vivo, including libraries for high-throughput applications.

Genes can be knocked down with RNA interference (RNAi) or knocked out with CRISPR-Cas9. RNAi, the screening workhorse, knocks down the translation of genes by inducing rapid degradation of a gene target’s transcript.

CRISPR-Cas9, the new but already celebrated genome-editing technology, cleaves specific DNA sequences to render genes inoperative. Although mechanistically different, the two techniques complement one another, and when used together facilitate discovery and validation of scientific findings.

RNAi technologies along with other developments in functional genomics screening were discussed by industry leaders at the recent Discovery on Target conference. The conference, which emphasized the identification and validation of novel drug targets and the exploration of unknown cellular pathways, included a symposium on the development of CRISPR-based therapies.

RNAi screening can be performed in either pooled or arrayed formats. Pooled screening provides an affordable benchtop option, but requires back-end deconvolution and deep sequencing to identify the shRNA causing the specific phenotype. Targets are much easier to identify using the arrayed format since each shRNA clone is in an individual well.

“CRISPR complements RNAi screens,” commented Ryan Raver, Ph.D., global product manager of functional genomics at Sigma-Aldrich. “You can do a whole genome screen with either small interfering RNA (siRNA) or small hairpin RNA (shRNA), then validate with individual CRISPRs to ensure it is a true result.”

A powerful and useful validation method for knockdown or knockout studies is to use lentiviral open reading frames (ORFs) for gene re-expression, for rescue experiments, or to detect whether the wild-type phenotype is restored. However, the ORF randomly integrates into the genome. Also, with this validation technique, gene expression is not acting under the endogenous promoter. Accordingly, physiologically relevant levels of the gene may not be expressed unless controlled for via an inducible system.

In the future, CRISPR activators may provide more efficient ways to express not only wild-type but also mutant forms of genes under the endogenous promoter.

Choice of screening technique depends on the researcher and the research question. Whole gene knockout may be necessary to observe a phenotype, while partial knockdown might be required to investigate functions of essential or lethal genes. Use of both techniques is recommended to identify all potential targets.

For example, a recent whole genome siRNA screen on a human glioblastoma cell line identified a gene, known as FAT1, as a negative regulator of apoptosis. A CRISPR-mediated knockout of the gene also conferred sensitivity to death receptor–induced apoptosis, with an even stronger phenotype, thereby validating FAT1’s new role and its link to extrinsic apoptosis in a new model system.

Dr. Raver indicated that next-generation RNAi libraries that are microRNA-adapted might have a more robust knockdown of gene expression, up to 90–95% in some cases. Ultracomplex shRNA libraries help to minimize both false-negative and false-positive rates by targeting each gene with ~25 independent shRNAs and by including thousands of negative-control shRNAs.

Recently, a relevant paper emerged from the laboratory of Jonathan Weissman, Ph.D., a professor of cellular and molecular pharmacology at the University of California, San Francisco. The paper described how a new ultracomplex pooled shRNA library was optimized by means of a microRNA-adapted system. This system, which was able to achieve high specificity in the detection of hit genes, produced robust results. In fact, they were comparable to results obtained with a CRISPR pooled screen. Members of the Weissman group systematically optimized the promoter and microRNA contexts for shRNA expression along with a selection of guide strands.

Using a sublibrary of proteostasis genes (targeting 2,933 genes), the investigators compared CRISPR and RNAi pooled screens. Data showed 48 hits unique to RNAi, 40 unique to CRISPR, and an overlap of 21 hits (with a 5% false discovery rate cut-off). Together, the technologies provided a more complete research story.
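A comparison of this kind, calling hits in each screen at a 5% false discovery rate and then intersecting the hit lists, can be sketched with the Benjamini–Hochberg procedure. The gene names and p-values below are invented for illustration.

```python
# Call hits in each screen at a 5% FDR (Benjamini-Hochberg), then
# intersect the hit sets. Gene names and p-values are invented.
def bh_hits(pvals: dict, fdr: float = 0.05) -> set:
    """Return genes passing the Benjamini-Hochberg threshold."""
    ranked = sorted(pvals.items(), key=lambda kv: kv[1])
    m = len(ranked)
    last_pass = 0
    for i, (_, p) in enumerate(ranked, start=1):
        if p <= fdr * i / m:   # BH step-up criterion
            last_pass = i
    return {gene for gene, _ in ranked[:last_pass]}

rnai_p   = {"GENE_A": 0.001, "GENE_B": 0.004, "GENE_C": 0.30, "GENE_D": 0.80}
crispr_p = {"GENE_A": 0.002, "GENE_B": 0.40,  "GENE_C": 0.01, "GENE_D": 0.90}

overlap = bh_hits(rnai_p) & bh_hits(crispr_p)  # hits found by both
```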

“RNA screens are well accepted and will continue to be used, but it is important biologically that researchers step away from the RNA mechanism to further study and validate their hits to eliminate potential bias,” explained Louise Baskin, senior product manager, Dharmacon, part of GE Healthcare. “The natural progression is to adopt CRISPR in the later stages.”

RNAi uses the cell’s endogenous mechanism. All of the components required for gene knockdown are already within the cell, and the delivery of the siRNA starts the process. With the CRISPR gene-editing system, which is derived from a bacterial immune defense system, delivery of both the guide RNA and the Cas9 nuclease, often the rate limiter in terms of knockout efficiency, are required.

Arrayed CRISPR Screens

Synthetic crRNA:tracrRNA reagents can be used in a similar manner to siRNA reagents for assessment of phenotypes in a cell population. Top row: A reporter cell line stably expressing Cas9 nuclease was transfected with GE Dharmacon’s Edit-R synthetic crRNA:tracrRNA system, which was used to target three positive control genes (PSMD7, PSMD14, and VCP) and a negative control gene (PPIB). Green cells indicate EGFP signaling occurring as a result of proteasome pathway disruption. Bottom row: A siGENOME siRNA pool targeting the same genes was used in the same reporter cell line.

In pooled approaches, the cell has to either drop out or overexpress so that it is sortable, limiting the types of addressable biological questions. A CRISPR-arrayed approach opens up the door for use of other analytical tools, such as high-content imaging, to identify hits of interest.

To facilitate use of CRISPR, GE recently introduced Dharmacon Edit-R synthetic CRISPR RNA (crRNA) libraries that can be used to carry out high-throughput arrayed analysis of multiple genes. Rather than a vector- or plasmid-based gRNA to guide the targeting of the Cas9 cleavage, a synthetic crRNA and tracrRNA are used. These algorithm-designed crRNA reagents can be delivered into the cells very much like siRNA, opening up the capability to screen multiple target regions for many different genes simultaneously.
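Part of what such design algorithms do can be illustrated with the simplest step: scanning a sequence for 20-nt protospacers followed by an NGG PAM, which Cas9 requires for cleavage. The sequence below is invented, and real design tools (like the algorithm-designed reagents mentioned above) also score specificity and predicted off-targets.

```python
import re

# Scan a DNA sequence for 20-nt protospacers followed by an NGG PAM,
# the motif Cas9 requires next to its cut site. The sequence is
# invented; real design tools also score specificity and off-targets.
def candidate_sites(seq: str) -> list:
    """Return (protospacer, pam, start) for each 20-nt site followed
    by NGG on the given strand (overlaps included via lookahead)."""
    return [
        (m.group(1), m.group(2), m.start())
        for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", seq)
    ]

seq = "ATGCTGACCTTGGACTACAAAGACGATGACGACAAGTGGCTAA"
sites = candidate_sites(seq)
```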

Using this arrayed approach to validate RNAi screen hits with synthetic crRNA, GE demonstrated a very strong overlap between CRISPR and RNAi results. The data showed that CRISPR can be used for medium- or high-throughput validation of knockdown studies.

“We will continue to see a lot of cooperation between RNAi and gene editing,” declared Baskin. “Using the CRISPR mechanism to knock in genes could introduce mutations to help understand gene function at a much deeper level, including a more thorough functional analysis of noncoding genes.

“These regulatory RNAs often act in the nucleus to control translation and transcription, so to knockdown these genes with RNAi would require export to the cytoplasm. Precision gene editing, which takes place in the nucleus, will help us understand the noncoding transcriptome and dive deeper into how those genes regulate disease progression, cellular development and other aspects of human health and biology.”

Tool Selection

The functional genomics tool should fit the specific biology; the biology should not be forced to fit the tool. Points to consider include the regulation of expression, the cell line or model system, as well as assay scale and design. For example, there may be a need for regulatable expression. There may be limitations around the cell line or model system. And assay scale and design may include delivery conditions and timing to optimally complete perturbation and reporting.

“Both RNAi- and CRISPR-based gene modulation strategies have pros and cons that should be considered based on the biology of the genes being studied,” commented Gwen Fewell, Ph.D., chief commercial officer, Transomic Technologies.

RNAi reagents, which can produce hypomorphic or transient gene-suppression states, are well known for their use in probing drug targets. In addition, these reagents are enriching studies of gene function. CRISPR-Cas9 reagents, which produce clean heterozygous and null mutations, are important for studying tumor suppressors and other genes where complete loss of function is desired.

Schematic of a pooled shRNA screening workflow developed by Transomic Technologies. Cells are transduced, and positive or negative selection screens are performed. PCR amplification and sequencing of the shRNA integrated into the target cell genome allows the determination of shRNA representation in the population.
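The workflow above ends in a counting problem: after PCR amplification and sequencing of the integrated shRNA cassettes, each hairpin’s representation in the selected population is compared with its representation in the starting pool. A minimal sketch of that comparison is below; the counts and shRNA names are invented for illustration and are not Transomic data.

```python
# Sketch: flag depleted/enriched shRNAs in a pooled screen by comparing
# reads-per-million-normalized counts before and after selection.
import math

def shrna_log2_fold_changes(initial_counts, final_counts, pseudocount=1):
    """Normalize each sample to reads per million, then compute log2(final/initial)."""
    init_total = sum(initial_counts.values())
    final_total = sum(final_counts.values())
    lfc = {}
    for shrna, n0 in initial_counts.items():
        n1 = final_counts.get(shrna, 0)
        rpm0 = (n0 + pseudocount) / init_total * 1e6
        rpm1 = (n1 + pseudocount) / final_total * 1e6
        lfc[shrna] = math.log2(rpm1 / rpm0)
    return lfc

# Hypothetical counts: shRNA_B drops out (a negative-selection hit),
# shRNA_C is enriched, shRNA_A is unchanged.
initial = {"shRNA_A": 5000, "shRNA_B": 5000, "shRNA_C": 5000}
final   = {"shRNA_A": 5000, "shRNA_B":  500, "shRNA_C": 9500}
lfc = shrna_log2_fold_changes(initial, final)
```

In a real screen these fold changes would be computed per replicate and tested statistically; the pseudocount simply guards against zero counts.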

Timing to readout the effects of gene perturbation must be considered to ensure that the biological assay is feasible. RNAi gene knockdown effects can be seen in as little as 24–72 hours, and inducible and reversible gene knockdown can be realized. CRISPR-based gene knockout effects may become complete and permanent only after 10 days.

Both RNAi and CRISPR reagents work well for pooled positive selection screens; however, for negative selection screens, RNAi is the more mature tool. Current versions of CRISPR pooled reagents can produce mixed populations containing a fraction of non-null mutations, which can reduce the overall accuracy of the readout.

To meet the needs of varied and complex biological questions, Transomic Technologies has developed both RNAi and CRISPR tools with options for optimal expression, selection, and assay scale. For example, the company’s shERWOOD-UltramiR shRNA reagents incorporate advances in design and small RNA processing to produce increased potency and specificity of knockdown, particularly important for pooled screens.

Sequence-verified pooled shRNA screening libraries provide flexibility in promoter choice, in vitro formats, in vivo formats, and a choice of viral vectors for optimal delivery and expression in biologically relevant cell lines, primary cells or in vivo.

The company’s line of lentiviral-based CRISPR-Cas9 reagents has variable selectable markers such that guide RNA- and Cas9-expressing vectors, including inducible Cas9, can be co-delivered and selected for in the same cell to increase editing efficiency. Promoter options are available to ensure expression across a range of cell types.

“Researchers are using RNAi and CRISPR reagents individually and in combination as cross-validation tools, or to engineer CRISPR-based models to perform RNAi-based assays,” informs Dr. Fewell. “Most exciting are parallel CRISPR and RNAi screens that have tremendous potential to uncover novel biology.”

Converging Technologies

The convergence of RNAi technology with genome-editing tools, such as CRISPR-Cas9 and transcription activator-like effector nucleases, combined with next-generation sequencing will allow researchers to dissect biological systems in a way not previously possible.

“From a purely technical standpoint, the challenges for traditional RNAi screens come down to efficient delivery of the RNAi reagents and having those reagents provide significant, consistent, and lasting knockdown of the target mRNAs,” states Ross Whittaker, Ph.D., a product manager for genome editing products at Thermo Fisher Scientific. “We have approached these challenges with a series of reagents and siRNA libraries designed to increase the success of RNAi screens.”

Thermo Fisher provides lipid-based RNAiMAX transfection reagents, which effectively deliver siRNA. In addition, the company’s Silencer and Silencer Select siRNA libraries provide consistent and significant knockdown of the target mRNAs. These siRNA libraries utilize highly stringent bioinformatic designs that ensure accurate and potent targeting for gene-silencing studies. The Silencer Select technology adds a higher level of efficacy and specificity due to chemical modifications with locked nucleic acid (LNA) chemistry.

The libraries alleviate concerns for false-positive or false-negative data. The high potency allows less reagent use; thus, more screens or validations can be conducted per library.

Dr. Whittaker believes that researchers will migrate regularly between RNAi and CRISPR-Cas9 technology in the future. CRISPR-Cas9 will be used to create engineered cell lines not only to validate RNAi hits but also to follow up on the underlying mechanisms. Cell lines engineered with CRISPR-Cas9 will be utilized in RNAi screens. In the long term, CRISPR-Cas9 screening will likely replace RNAi screening in many cases, especially with the introduction of arrayed CRISPR libraries.

Validating Antibodies with RNAi

Unreliable antibody specificity is a widespread problem for researchers, but RNAi is assuaging scientists’ concerns as a validation method.

The procedure introduces short hairpin RNAs (shRNAs) to reduce expression levels of a targeted protein; the antibody is then tested against the knockdown sample. With its target protein knocked down, a truly specific antibody shows dramatically reduced or no signal on a Western blot. Short of knockout animal models, RNAi is arguably the most effective method of validating research antibodies.

The method is not common among antibody suppliers—time and cost being the chief barriers to its adoption, although some companies are beginning to embrace RNAi validation.

“In the interest of fostering better science, Proteintech felt it was necessary to implement this practice,” said Jason Li, Ph.D., founder and CEO of Proteintech Group, which made RNAi standard protocol in February 2015. “When researchers can depend on reproducibility, they execute more thorough experiments and advance the treatment of human diseases and conditions.”

Down and Out with RNAi and CRISPR

Genes can be knocked down with RNA interference (RNAi) or knocked out with CRISPR-Cas9. RNAi, the screening workhorse, knocks down the translation of genes by inducing rapid degradation of a gene target’s transcript.

RNA-Based Therapeutics and Vaccines

RNA-based biopharmaceuticals, which include therapeutics and vaccines, are a relatively new class of treatments and prophylactics for a number of chronic and rare diseases, including cancer, diabetes, tuberculosis, and certain cardiovascular conditions. The field holds great promise in the prevention and treatment of these diseases as demonstrated by early-phase clinical trials as well as significant investment by the drug development community.

Ready, Aim, CRISPR (or RNAi)

Recent progress in probing gene function via the RNAi and CRISPR methods was a strong theme of the Discovery On Target conference. Both methods enable researchers to impair the function of a targeted gene.

Masked RNAi Drug Slips through Membrane, Sheds Guise within Cell

For small interfering RNA, approaching a cell is like walking up to the door of an old speakeasy. Such doors were heavily reinforced and had a narrow, built-in sliding panel at eye level, and if the eyes peering out through the open panel didn’t like the look of you, well, you were not getting inside. Failing to gain entry is something that happens all too frequently to small interfering RNAs, which admittedly are anything but “life of the party” types.

Read Full Post »

Variability of Gene Expression and Drug Resistance, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

Variability of Gene Expression and Drug Resistance

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

New Data Suggest Extreme Genetic Diversity of Tumors May Impart Drug Resistance

NEW YORK (GenomeWeb) – Researchers from the University of Chicago and the Beijing Institute of Genomics have undertaken one of the most extensive analyses of the genome of a single tumor and found far greater genetic diversity than anticipated. Such variation, they said, may enable even small tumors to resist treatment.

“With 100 million mutations, each capable of altering a protein in some way, there is a high probability that a significant minority of tumor cells will survive, even after aggressive treatment,” Chung-I Wu, a University of Chicago researcher and senior author of the study, said in a statement. “In a setting with so much diversity, those cells could multiply to form new tumors, which would be resistant to standard treatments.”

 

Extremely high genetic diversity in a single tumor points to prevalence of non-Darwinian cell evolution

Shaoping Ling, Zheng Hu, Zuyu Yang, Fang Yang, Yawei Li, Pei Lin, Ke Chen, Lili Dong, Lihua Cao, Yong Tao, Lingtong Hao, Qingjian Chen, Qiang Gong, et al.

PNAS   http://dx.doi.org/10.1073/pnas.1519556112      http://www.pnas.org/content/early/2015/11/11/1519556112

A tumor comprising many cells can be compared to a natural population with many individuals. The amount of genetic diversity reflects how it has evolved and can influence its future evolution. We evaluated a single tumor by sequencing or genotyping nearly 300 regions from the tumor. When the data were analyzed by modern population genetic theory, we estimated more than 100 million coding region mutations in this unexceptional tumor. The extreme genetic diversity implies evolution under the non-Darwinian mode. In contrast, under the prevailing view of Darwinian selection, the genetic diversity would be orders of magnitude lower. Because genetic diversity accrues rapidly, a high probability of drug resistance should be heeded, even in the treatment of microscopic tumors.

The prevailing view that the evolution of cells in a tumor is driven by Darwinian selection has never been rigorously tested. Because selection greatly affects the level of intratumor genetic diversity, it is important to assess whether intratumor evolution follows the Darwinian or the non-Darwinian mode of evolution. To provide the statistical power, many regions in a single tumor need to be sampled and analyzed much more extensively than has been attempted in previous intratumor studies. Here, from a hepatocellular carcinoma (HCC) tumor, we evaluated multiregional samples from the tumor, using either whole-exome sequencing (WES) (n = 23 samples) or genotyping (n = 286) under both the infinite-site and infinite-allele models of population genetics. In addition to the many single-nucleotide variations (SNVs) present in all samples, there were 35 “polymorphic” SNVs among samples. High genetic diversity was evident as the 23 WES samples defined 20 unique cell clones. With all 286 samples genotyped, clonal diversity agreed well with the non-Darwinian model with no evidence of positive Darwinian selection. Under the non-Darwinian model, MALL (the number of coding region mutations in the entire tumor) was estimated to be greater than 100 million in this tumor. DNA sequences reveal local diversities in small patches of cells and validate the estimation. In contrast, the genetic diversity under a Darwinian model would generally be orders of magnitude smaller. Because the level of genetic diversity will have implications on therapeutic resistance, non-Darwinian evolution should be heeded in cancer treatments even for microscopic tumors.
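The abstract’s estimate rests on infinite-site population genetics, in which diversity is inferred from the number of segregating (polymorphic) sites observed in a sample. As a hedged illustration of this style of calculation only (not the authors’ actual pipeline, which scales the estimate to the whole tumor), Watterson’s classic estimator relates segregating sites to the diversity parameter theta:

```python
# Watterson's estimator: theta_W = S / a_n, where S is the number of
# segregating sites and a_n is the (n-1)th harmonic number for n samples.
def watterson_theta(num_segregating_sites, num_samples):
    harmonic = sum(1.0 / i for i in range(1, num_samples))  # a_n = sum_{i=1}^{n-1} 1/i
    return num_segregating_sites / harmonic

# Plugging in the figures quoted in the abstract above for illustration:
# 35 polymorphic SNVs among 23 whole-exome-sequenced samples.
theta = watterson_theta(35, 23)
```

Under the neutral (non-Darwinian) model, theta summarizes per-locus diversity; extrapolating such estimates across the coding genome and the full cell population is what yields tumor-wide mutation counts of the magnitude the authors report.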


The findings, which appeared in the Proceedings of the National Academy of Sciences this week, also call into question the widely held view that evolution at the cellular level is driven by Darwinian selection, revealing a level of rapid and extensive genetic diversity beyond what would be expected under this model.

In the study, the researchers focused on a single hepatocellular carcinoma tumor, roughly the size of a ping pong ball. They sampled 286 regions from a single slice of the tumor, studying each one with either whole-exome sequencing or genotyping under both the infinite-site and infinite-allele models of population genetics.

Based on their analyses, the team estimated more than 100 million coding region mutations in what they called an “unexceptional” tumor — more mutations than would ordinarily be expected by orders of magnitude, according to Wu.

This extreme genetic diversity, the study’s authors wrote, implies evolution under the non-Darwinian mode, which is driven by random mutations largely unaffected by natural selection. It also raises the question of why there is so little apparent Darwinian selection in the tumor.

The scientists speculated that in solid tumors, cells remain together and do not migrate, “so that when an advantageous mutation indeed emerges, cells carrying it are competing mostly with themselves. These mutations may confer advantages in fighting for space or extracting nutrients, but they are stifled by their own advantages,” they wrote.

Beneficial mutations may emerge on occasion, but in solid tumors the cell populations are “so structured that selection may often be blunted,” they stated. “The physiological effect has to be very strong to overcome those constraints.” Cancer drugs could remove those constraints, loosening up a cell population and allowing competition to occur, the investigators added.

Wu and his colleagues see the presence of so many mutations in a tumor as creating problems when it comes to treatment. “It almost guarantees that some cells will be resistant,” study co-author and University of Chicago oncologist Daniel Catenacci said in the statement. “But it also suggests that aggressive treatment could push tumor cells into a more Darwinian mode.”

Overall, the findings highlight the need to consider non-Darwinian evolution and the vast genetic diversity it can confer as factors when developing treatment strategies, even for small tumors, the researchers concluded.

Read Full Post »

GEN Tech Focus: Rethinking Gene Expression Analysis, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

GEN Tech Focus: Rethinking Gene Expression Analysis

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Quantitating gene expression is essential for researchers to answer important biological questions about basic cellular functions, as well as disease states. In the following articles you will discover the multitude of advances investigators have made to accurately measure and quantitate genetic transcripts within the cell.

Diverse Pathways to Drug Targets


 

The Gene-Expression Undergrowth Has Been Well Trodden, but RNA Paths Want Wear, Too

  • Hepatitis C virus depends on a functional interaction between its genome and miR-122 for viral stability and replication. Researchers recently used an antisense oligonucleotide that targets the liver-specific microRNA miR-122, blocking its function. [Bluebay2014/Fotolia]

    A great deal of research on pathway analysis is currently focusing on RNA rather than proteins, and the complex RNA networks that regulate gene expression.

    With the realization that more than 90% of the genome that is transcribed into RNA is not translated into protein, and the growing numbers of naturally occurring microRNAs (miRNAs) and long noncoding RNAs (lncRNAs) being identified and characterized, the important role these RNAs play in normal biological processes and across human diseases is becoming increasingly clear.

    This knowledge—combined with the available technology and strategies to decipher RNA pathways and link alterations in the levels or activity of miRNAs or lncRNAs to gene expression, epigenetic mechanisms, and protein activity in normal and disease phenotypes—is driving the development and clinical testing of novel drug targets and therapeutics that target regulatory RNAs.

    For example, a microRNA was targeted in a Phase II clinical study that assessed the effect of miravirsen, an antisense oligonucleotide, in patients with hepatitis C. The study, which was described in 2013 in the New England Journal of Medicine, indicated that miravirsen sequesters the liver-specific microRNA miR-122 in a highly stable heteroduplex, thereby inhibiting its function.

    Hepatitis C virus (HCV) depends on a functional interaction between its genome and miR-122 for viral stability and replication. According to the study, inhibition of miR-122 in HCV-infected patients was associated with decreased levels of HCV RNA that continued beyond the treatment period, without evidence of viral resistance.

    The therapeutic potential of regulatory RNAs is also being assessed in other conditions such as cancer. Specifically, miRNAs and other ncRNAs in cancer initiation, progression, and metastasis are being studied by George Calin, M.D., Ph.D., a professor of experimental therapeutics, MD Anderson Cancer Center, University of Texas. Dr. Calin’s group is scouring the “microRNAome” to identify miRNAs of about 21–22 nucleotides that can serve as reliable biomarkers for cancer diagnosis and to guide decision-making in patient management, including as predictors of survival and response to drug therapy.

    miRNAs are involved in every aspect of tumorigenesis, cancer progression, and dissemination. Not only are they expressed in tumor cells, they are also stably expressed in exosomes and are present in various bodily fluids, where they can act like hormones and signaling molecules. Comparative profiling of these fluids for differences in miRNA levels between patients with and without cancer could identify relevant biomarkers.

     

    Analyzing RNA Pathways

  • Using Qiagen’s Ingenuity Pathway Analysis, researchers can analyze relationships between molecules and diseases of interest by modeling how gene expression patterns affect functional outcomes or disease processes.


    Dr. Calin and colleagues have described the significance of miRNA signatures obtained in recent studies involving miRNA profiling of human tumors. An overview appeared in 2014 in CA: A Cancer Journal for Clinicians (“MicroRNAome genome: a treasure for cancer diagnosis and therapy”). Also, last February, Dr. Calin gave an account of his group’s work at the Molecular Med Tri Conference in San Francisco.

    Technology is not holding back advances in the field of RNA pathway analysis according to Dr. Calin. The main bottleneck at present is in the design of prospective studies needed to confirm the predictive value of miRNA-based biomarkers.

    Dr. Calin points to two other key challenges that scientists currently face in translating research findings into diagnostic, prognostic, and therapeutic tools. One is the difficulty in selecting an miRNA target, mainly because an individual miRNA could have a role in regulating tens, hundreds, or even thousands of protein-coding genes. For drug discovery, the aim is to identify miRNAs that affect a single pathway of interest to help limit off-target effects. The need for novel delivery systems for RNA-targeted drugs is another key challenge.

    At the Molecular Med Tri Conference, Jean-Noel Billaud, Ph.D., principal scientist at Qiagen Bioinformatics, presented a case study demonstrating how the company’s Ingenuity Pathway Analysis technology can be used to conduct a systems biology analysis to identify the pathways, potential upstream regulators, and downstream outcomes involved in the host response to West Nile Virus (WNV) infection. Dr. Billaud also discussed how to interpret the results from a biological perspective.

    In his presentation, Dr. Billaud described the first step in this analytical process as the acquisition of RNA sequence data using next-generation sequencing techniques for the purpose of characterizing and quantifying differential gene expression between an infected and uninfected cell. The CLC Cancer Research Workbench tool is used to process the sequence data, and the results are imported directly into the IPA system.

    Analysis of differential gene expression aims to answer a series of key questions, including the following: What metabolic and/or signaling pathway(s) is activated or inhibited? Is there an overlap of the genes or pathways that are activated or inhibited? What are the potential upstream, downstream, functional, and phenotypic implications of this pathway activation or inhibition?
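One of the overlap questions above (is a pathway over-represented among the differentially expressed genes?) is conventionally answered with a hypergeometric test. IPA’s own scoring is proprietary, so the following is only a generic sketch with invented gene-set sizes:

```python
# Sketch: hypergeometric tail probability for pathway enrichment.
# Given n_degs differentially expressed genes drawn from a genome of
# n_genome genes, of which n_pathway belong to the pathway of interest,
# compute P(overlap >= n_overlap) under random sampling.
from math import comb

def hypergeom_enrichment_p(n_genome, n_pathway, n_degs, n_overlap):
    total = comb(n_genome, n_degs)
    p = 0.0
    for k in range(n_overlap, min(n_pathway, n_degs) + 1):
        p += comb(n_pathway, k) * comb(n_genome - n_pathway, n_degs - k) / total
    return p

# Toy numbers: 20,000 genes, a 50-gene interferon-signaling set,
# 200 DEGs after infection, 12 of which fall in the set.
p = hypergeom_enrichment_p(20000, 50, 200, 12)
```

With these toy numbers the expected overlap by chance is only 0.5 genes, so observing 12 gives a vanishingly small p-value, which is the statistical signal behind a call such as “interferon signaling is activated.”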

    Dr. Billaud described other questions researchers might attempt to answer through the use of IPA: What are the underlying transcriptional programs? Which biological processes are involved and in what way? Are there splice variants of interest? What type of regulation is involved?

    In the WNV case study, IPA predicted activation of the interferon signaling pathway and added statistically and functionally relevant biological processes to the WNV-related biochemical network the system developed. IPA is able to simulate the effects of interferon pathway activation on neighboring molecules and processes, which enables broader modeling of antiviral responses, prediction of the effects on viral replication, and identification of upstream transcriptional regulators of antiviral and related anti-inflammatory processes, for example.

    These data and analytical capabilities may allow researchers to propose new hypotheses that connect molecules in regulatory networks to disease-related pathways in a predictive way, leading to the identification of a “master regulator” that could serve as a disease-specific drug target, according to Dr. Billaud.

    In the WNV example, he described the use of the Molecule Activity Predictor (MAP) function in IPA to test the hypothesis that CLEC7A is a host susceptibility factor required by WNV to stimulate an immune response in the brains of infected patients, contributing to the development of life-threatening encephalitis. The MAP function simulates the inhibition or downregulation of CLEC7A, showing how it would likely reduce the risk of WNV-associated encephalitis. These types of hypotheses would then need to be tested and validated.

    Pathways Driving B-Cell Differentiation

    • Robert C. Rickert, Ph.D., professor and director of the Tumor Microenvironment and Metastasis Program at Sanford-Burnham Medical Research Institute, is using conditional gene targeting to identify the genes and biochemical pathways that play a role at specific stages of B-cell differentiation. With this approach, it is possible to knock out targeted genes in a mouse at different stages of B-cell development, and to do so in an inducible fashion, allowing you “to look at how it affects different signal transduction pathways in a context-specific manner,” says Dr. Rickert.

      When applied to a relevant mouse model of disease—such as a B-cell lymphoma—this inducible genetic system should yield effects similar to those that could be obtained with a drug capable of blocking the activity of the targeted gene product. Dr. Rickert and colleagues are exploring the similarity between the effects achieved with conditional gene targeting and those of drugs recently approved to treat chronic lymphocytic leukemia (CLL) and some forms of lymphoma, such as idelalisib and ibrutinib, which inhibit the B-cell receptor pathway by blocking PI3K and Bruton’s tyrosine kinase (BTK), respectively.

      Dr. Rickert presented his group’s latest research at a Keystone Symposium Conference, PI 3-Kinase Signaling Pathways in Disease, which took place last January in Vancouver. In his talk, Dr. Rickert emphasized that the phosphatidyl inositol-3 kinase (PI3K) pathway is a major regulator of B lymphocyte differentiation and function.

      Dr. Rickert has also applied conditional gene targeting to compare the roles of the NFκB and PI3K pathways in B-cell maturation. He has shown that while both pathways are essential at some stages of B-cell differentiation, only one pathway may be necessary for B-cell maintenance and survival.

      “Ultimately we want to gain more insight at the biochemical level into single cells and the heterogeneity of the cell populations we’re interested in,” says Dr. Rickert. Tumors and cancer cell populations are quite heterogeneous, and better biochemical tools are needed to be able to sort through these populations of cells and “look at some of the more interesting, rogue cells, such as cancer stem cells,” he adds.

    An Evolutionary Approach

    In his laboratory at Hebrew University of Jerusalem, researcher Yuval Tabach, Ph.D., is using computational tools to analyze and compare the genomes and proteins of hundreds of species to identify evolutionary patterns of conservation and loss that point to connections between molecular pathways and disease.

    “The main power of this phylogenetic profiling approach is that if you look at proteins across evolution, some are lost at certain points in certain species,” says Dr. Tabach. For example, proteins involved in the tricarboxylic acid (TCA) cycle have been highly conserved across some species, but have disappeared in others because those species have lost their mitochondria.

    Dr. Tabach and colleagues have shown that sets of genes associated with particular diseases have similar phylogenetic profiles. They are also using this approach to identify genes associated with longevity, cancer resistance, and various extreme environmental conditions.

    Phylogenetic profiling to connect patterns of conservation and loss across millions of years of evolution can be applied to entire proteins, protein domains, and RNA molecules such as microRNAs. The potential applicability of this approach to drug discovery and development is multifaceted.

    For example, given a gene known to be related to a certain disease, the ability to identify other genes with a similar phylogenetic profile might reveal genetic factors that could explain incomplete penetrance or the variability of disease severity in different affected individuals. Alternatively, identification of a candidate gene in one patient could serve as the basis for identifying other key factors in other patients with the same disease using the phylogenetic profile.
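A toy version of this matching step is sketched below, assuming each gene is reduced to a presence(1)/absence(0) profile across species and candidates are ranked by a simple agreement score. The profiles and gene names are invented; Dr. Tabach’s actual method uses more sophisticated, normalized conservation profiles.

```python
# Sketch of phylogenetic profiling: rank candidate genes by how closely
# their presence/absence pattern across species matches a disease gene's.
def hamming_similarity(p, q):
    """Fraction of species in which two presence/absence profiles agree."""
    assert len(p) == len(q)
    return sum(a == b for a, b in zip(p, q)) / len(p)

def rank_by_profile(query, candidates):
    return sorted(candidates,
                  key=lambda g: hamming_similarity(query, candidates[g]),
                  reverse=True)

# Species order (illustrative): human, mouse, fly, worm, yeast, plant.
disease_gene = (1, 1, 1, 1, 0, 0)
candidates = {
    "geneX": (1, 1, 1, 1, 0, 0),  # identical profile: likely same pathway
    "geneY": (1, 1, 0, 1, 0, 1),
    "geneZ": (0, 0, 1, 0, 1, 1),  # largely anti-correlated profile
}
ranking = rank_by_profile(disease_gene, candidates)
```

Here geneX, whose losses and retentions mirror the disease gene’s across evolution, ranks first and would be the top candidate for co-pathway membership.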

    Compared to strategies such as gene expression analysis or protein-protein interaction mapping for identifying disease-related genes, phylogenetic profiling “is much faster” and will become an increasingly powerful tool as the genome sequences of more species become available, explains Dr. Tabach.

    The Israeli start-up company ReThink Pharmaceuticals is using the molecular networks generated through this phylogenetic profiling work for the purpose of drug repositioning. “If you know that a certain drug targets a gene, we can build a network to find other genes/proteins that interact with the drug target,” asserts Dr. Tabach, citing preliminary results that demonstrate the ability to predict additional effects of a drug candidate.

     

 

Measuring siRNA-mediated Knockdown of the IL-8 gene Using the QuantiGene Singleplex Assay

A critical component of RNA interference (RNAi) studies is the validation of gene expression inhibition. RNAi experiments have many sources of variation that make accurate quantitation of target mRNA difficult when qPCR is used. Variation in the potency and stability of short interfering RNA (siRNA), coupled with differences in transfection efficiency and protein turnover, results in varying gene knockdown efficiency.
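Whatever the readout platform, knockdown efficiency is typically computed by normalizing the target signal to a housekeeping gene within each sample and then comparing the treated ratio to the negative control. A minimal sketch with invented signal values (not QuantiGene data) follows:

```python
# Sketch: percent knockdown from normalized assay signals. The same
# ratio-of-ratios normalization underlies qPCR-based quantitation.
def percent_knockdown(target_treated, housekeeping_treated,
                      target_control, housekeeping_control):
    """Normalize the target to a housekeeping gene in each sample, then
    express treated relative to control as percent knockdown."""
    treated_ratio = target_treated / housekeeping_treated
    control_ratio = target_control / housekeeping_control
    remaining = treated_ratio / control_ratio  # fraction of mRNA left
    return (1.0 - remaining) * 100.0

# Hypothetical IL-8 signals: target drops after siRNA treatment while the
# housekeeping signal (e.g., PPIB) stays flat.
kd = percent_knockdown(target_treated=150, housekeeping_treated=1000,
                       target_control=1200, housekeeping_control=1000)
```

The housekeeping normalization is what buffers the calculation against the transfection-efficiency and sample-input variation described above.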

 

The RNA World Expands

Long noncoding RNAs mean more than HOTAIR.

Long noncoding RNA (lncRNAs) can regulate gene expression at epigenetic, transcriptional, and post-transcriptional levels. [© Alila Medicinal Media – Fotolia.com]

  • Over the past 10 years, scientists say new methods, including deep sequencing and DNA tiling arrays, have enabled the identification and characterization of the human transcriptome. These techniques completely changed our understanding of genome organization and content and revealed that a much larger part of the human genome is transcribed into RNA than was previously assumed—about 70%.

    Last year researchers, including Tim Mercer, Ph.D., at the Institute for Molecular Bioscience-University of Queensland, Roche Nimblegen, and John Rinn, Ph.D., and his team in the department of stem cell and regenerative biology at Harvard, reported that “transcriptomic analyses have revealed an ‘unexpected complexity’ to the human transcriptome, the depth and breadth of which exceeds current RNA sequencing capability.”

    These scientists used these techniques to identify and characterize unannotated transcripts whose rare or transient expression is below the detection limits of conventional sequencing approaches. The data also show that intermittent sequenced reads observed in conventional RNA sequencing datasets, previously dismissed as noise, are indicative of unassembled rare transcripts. Collectively, they say these results reveal the range, depth, and complexity of a human transcriptome that is far from fully characterized.

    Noncoding transcripts are RNA molecules that include classical “housekeeping” RNAs such as transfer RNAs (tRNAs), ribosomal RNAs (rRNAs), small nuclear RNAs (snRNAs), and small nucleolar RNAs (snoRNAs), which are constitutively expressed and play critical roles in protein biosynthesis.

    Among these noncoding RNAs are numerous long noncoding RNAs (lncRNAs), which are defined as endogenous cellular RNAs of more than 200 nucleotides in length that lack an open reading frame of significant length (fewer than 100 amino acids). These RNA molecules constitute a heterogeneous group, allowing them, scientists point out, to cover a broad spectrum of molecular and cellular functions by implementing different modes of action. lncRNAs are roughly classified based on their position relative to protein-coding genes as intergenic (between genes), intragenic/intronic (within genes), and antisense. Initial efforts to characterize these molecules demonstrated that they function in cis, regulating their immediate genomic neighbors.
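The positional classification described above can be sketched as a simple interval comparison. The coordinates, gene list, and overlap rule here are illustrative only; real annotation pipelines also consider exon structure and partial overlaps.

```python
# Sketch: label a lncRNA as intergenic, intragenic/intronic, or antisense
# from its coordinates and strand relative to protein-coding loci.
def classify_lncrna(lnc_start, lnc_end, lnc_strand, genes):
    """genes: list of (start, end, strand) tuples for protein-coding loci."""
    for g_start, g_end, g_strand in genes:
        overlaps = lnc_start < g_end and g_start < lnc_end
        if overlaps:
            # Same strand within a gene -> intragenic/intronic;
            # opposite strand -> antisense.
            return "intragenic/intronic" if lnc_strand == g_strand else "antisense"
    return "intergenic"

# Invented coordinates for two protein-coding genes on opposite strands.
coding = [(1000, 5000, "+"), (8000, 12000, "-")]
labels = [classify_lncrna(2000, 3000, "+", coding),
          classify_lncrna(9000, 9500, "+", coding),
          classify_lncrna(6000, 7000, "+", coding)]
```

The three test transcripts fall inside a same-strand gene, inside an opposite-strand gene, and between genes, yielding the three classes in order.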

    Regulatory Levels

  • lncRNAs can regulate gene expression at epigenetic, transcriptional, and post-transcriptional levels and take part in various physiological and pathological processes, such as cell development, immunity, oncogenesis, clinical disease processes, and more. A classic lncRNA, HOTAIR, was originally identified through work done by Howard Chang, M.D., Ph.D., at Stanford, and Dr. Rinn. Their research eventually led to the discovery of this 2.2 kilobase spliced RNA transcript that interacts with Polycomb group proteins to modify chromatin and repress transcription of the human HOX genes, which regulate development. It remains unclear exactly how this is accomplished.

    HOTAIR, it was found, originates from the HOXC locus and represses transcription across 40 kb of that locus by altering the chromatin trimethylation state. Hox genes, a highly conserved subgroup of the homeobox superfamily, regulate numerous processes including apoptosis, receptor signaling, differentiation, motility, and angiogenesis. Aberrations in Hox gene expression have been reported in abnormal development and malignancy.

    HOTAIR works to repress Hox gene expression by directing the action of Polycomb chromatin remodeling complexes in trans to govern the cells’ epigenetic state and subsequent gene expression. HOTAIR expression is increased in primary breast tumors and metastases and its expression level in primary tumors can predict eventual metastasis and death. The recent discovery that lncRNA HOTAIR can link chromatin changes to cancer metastasis furthers the relevance of lncRNAs to human disease.

    Dr. Chang and his colleagues say that the finding that several lncRNAs can control transcriptional alteration implies that the difference in lncRNA profiling between normal and cancer cells is not merely the secondary effect of cancer transformation, and that lncRNAs are strongly associated with cancer progression. The researchers showed that lncRNAs in the HOX loci become systematically dysregulated during breast cancer progression.

    They further demonstrated that enforced expression of HOTAIR in epithelial cancer cells induced genome-wide retargeting of polycomb repressive complex 2 (PRC2) to an occupancy pattern more resembling embryonic fibroblasts, leading to altered histone H3 lysine 27 methylation, gene expression, and increased cancer invasiveness and metastasis in a manner dependent on PRC2.

    On the other hand, they noted that loss of HOTAIR can inhibit cancer invasiveness, particularly in cells that possess excessive PRC2 activity. These findings indicate that lncRNAs have active roles in modulating the cancer epigenome and may be important targets for cancer diagnosis and therapy. Thus, the investigators say, differential expression of lncRNAs may be profiled to aid in cancer diagnosis and prognosis and in the selection of potential therapeutics.

    Two years ago the GENCODE consortium, within the framework of the ENCODE project, presented and analyzed the most complete human lncRNA annotation to date. The data comprise 9,277 manually annotated genes producing 14,880 transcripts. The identification and annotation of this wealth of lncRNAs leaves scientists with a lot of research to do to fully characterize the varied functions of these unusual RNAs. Their identification also challenges technology developers to produce the tools necessary for these analyses.

     

Transcript Regulation of 18 ADME Genes by Prototypical Inducers in Human Hepatocytes

Drug-drug interactions (DDIs) are of particular concern for regulatory agencies and the pharmaceutical industry for drug safety. Induction of drug metabolizing enzymes by pharmaceuticals, nutraceuticals, and lifestyle influences is one type of DDI in which the influence of a perpetrator molecule increases the enzyme capacity that can metabolize a victim molecule, rendering it ineffective as a therapy. To evaluate this potential, screening assays have been developed, such as the use…

 

Biomarkers Reshape Drug Development

  • Imanova takes a structured approach to the development of imaging biomarkers, or i-biomarkers.

    Biomarkers defining specific phenotypes are becoming increasingly important for developing new drugs for specific patient subpopulations. The value of a new biomarker is measured by its ability to reduce risk.

    Ideally, the biomarker should be developed in parallel with the new drug, as nearly 50% of the projected development costs can be saved by shutting down a development program before it enters Phase II. A meaningful risk-benefit analysis of a biomarker requires estimates of its cost and accuracy, as well as the consequences of decisions that it will enable.
    For the biomarker to be of value, the cost of its development has to be less than the projected costs of development from Phase II onwards, discounted to present time. While multiple competing business considerations affect a pharmaceutical company’s decision to proceed with a biomarker program, the skyrocketing market for biomarker discovery underscores the pharmaceutical industry’s hope that biomarkers will bolster the success rates of pipeline products.
    “Imaging biomarkers have been largely underutilized in drug development,” says Kevin Cox, Ph.D., CEO of London-based Imanova. “But we believe that molecular imaging has the power to assist in successful translation of molecules by reducing the risk of several specific causes of failure in Phase II clinical studies. Imaging biomarkers, or i-biomarkers, are especially valuable in giving confidence of tissue delivery, determination of target engagement, and the evaluation of a drug’s pharmacodynamic effects.”

    While imaging is routinely used in clinical diagnostics for cancer, its acceptance in drug development has been slow. “This is a highly specialized area of knowledge,” Dr. Cox observes. “Designing imaging experiments to answer the right questions is not trivial. Combined with the perceived high costs and dearth of well-equipped facilities, this has slowed down the adoption of imaging as an integral step in drug development.”

    Imanova presents an innovative and highly integrated solution for reducing the barriers to the use of molecular imaging. Located in the former GlaxoSmithKline imaging center, Imanova’s staff applies the knowledge needed for translational application of imaging science.

    “Another historical barrier for use of molecular imaging has been the lack of versatile PET tracers for key therapeutic targets,” remarks Dr. Cox. Together with its pharmaceutical clients, Imanova develops proprietary tracers that can answer critical questions about target engagement directly after drug administration. A structured approach for i-biomarker development takes the novel tracer from the candidate pool to clinical validation.

    Uniquely, Imanova utilizes in silico biomathematical modeling to predict a candidate with ideal physicochemical characteristics. “The i-biomarker development pipeline adheres to a strict quality system,” continues Dr. Cox. “We not only provide candidate selection and labeling, but also rigorous preclinical evaluation in several species, combined with blood chemistry or other physiological measurements.”

    The resulting biomarker provides quantitative information to make informed go/no-go decisions. Imanova hopes to develop an open innovation approach to i-biomarker research, and to encourage pharmaceutical companies to collaborate on tracer development.

    “By collaborating in this pre-competitive space, a pharma-academic consortium can de-risk i-biomarker development programs and generate new tools to eliminate costs associated with futile activities downstream,” concludes Dr. Cox. “Most tracers need to be utilized early in the drug development process. Used at the right time, imaging biomarkers are able to inform the design of Phase II studies, including dose ranging and possibly patient selection, saving many months in development and millions of dollars in costs.”

    Answers from Big Data

  • “Clinical bioinformatics is the application of a data-driven, high-tech approach in a clinical setting,” says Jerome Wojcik, Ph.D., CEO of Quartz Bio, a clinical bioinformatics service provider located in Plan-Les-Ouates, Switzerland. “We use clinical bioinformatics to adapt treatment to patients, that is, to identify cohorts that respond to the drug in a predictable manner,” says Dr. Wojcik.

    Pharmaceutical partners supply Quartz Bio with data collected in the course of clinical trials. The data (which may include information from protein and RNA expression, genotyping, molecular diagnostics, and flow cytometry studies) often exist in silos within a pharma company. To make sense of the data, Quartz Bio integrates heterogeneously formatted data, analyzes it for consistency, and identifies gaps and outliers.

    Dr. Wojcik’s team dedicates over 40% of the overall analysis time to biomarker data management. This key step is crucial for the quality of the overall analysis. According to Quartz Bio, all the data-management processes are documented, auditable, and reproducible.

    Once the “Big Data” hoard is adequately cleaned up, the team applies adaptive statistical methods to generate multiple hypotheses linking the drug action with subpopulations of patients. “Our challenge is to generate reliable hypotheses on a fairly small statistical patient sample, for example, a thousand patients, but using millions of biomarker datapoints,” continues Dr. Wojcik. “We do not rely on statistics alone. Graphical visualization adapted to the objectives of the study is necessary for interpretation of results.”
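    The article does not name Quartz Bio’s actual statistical methods, but one standard ingredient when testing millions of biomarkers against a small patient cohort is false-discovery-rate control. The following is a minimal sketch of the Benjamini-Hochberg procedure, offered purely as an illustration of the problem described above.

```python
# Hedged illustration (not Quartz Bio's method): the Benjamini-Hochberg
# procedure keeps the expected fraction of false discoveries below alpha
# when many biomarker hypotheses are tested at once.

def benjamini_hochberg(pvalues, alpha=0.05):
    """Return indices of hypotheses rejected at FDR level alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])   # ranks by p-value
    k_max = 0
    for rank, i in enumerate(order, start=1):
        # Largest rank whose p-value clears the stepped threshold
        if pvalues[i] <= rank * alpha / m:
            k_max = rank
    return sorted(order[:k_max])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(pvals, alpha=0.05))  # [0, 1]
```

    With eight tests, only the two smallest p-values clear their stepped thresholds (1/8 and 2/8 of alpha), so only those two biomarkers survive the screen.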

    In a recent project, Quartz Bio analyzed multiple oncology biomarkers, such as gene expression, circulating tumor cells, and immunohistochemistry, to identify patient cohorts that would most likely benefit from a novel treatment. Biomarker analysis revealed a subpopulation whose survival rate increased significantly over the population average, bringing a potential application of personalized medicine closer to reality.

     

 

Read Full Post »

Pathology Insights

Larry H Bernstein, MD, FCAP, Curator

LPBI

 

Predicting the Prognosis of Lung Cancer: The Evolution of Tumor, Node and Metastasis in the Molecular Age—Challenges and Opportunities

Ramón Rami-Porta; Hisao Asamura; Peter Goldstraw

Transl Lung Cancer Res. 2015;4(4):415-423.

http://www.medscape.com/viewarticle/852315

 

 

The tumor, node and metastasis (TNM) classification of malignant tumors was proposed by Pierre Denoix in the mid-20th century to code the anatomic extent of tumors. Soon after, it was accepted by the Union for International Cancer Control and by the American Joint Committee on Cancer, and published in their respective staging manuals. Until 2002, the revisions of the TNM classification were based on the analyses of a database that included over 5,000 patients, and that was managed by Clifton Mountain. These patients originated from North America and almost all of them had undergone surgical treatment. To overcome these limitations, the International Association for the Study of Lung Cancer proposed the creation of an international database of lung cancer patients treated with a wider range of therapeutic modalities. The changes introduced in the 7th edition of the TNM classification of lung cancer, published in 2009, derived from the analysis of an international retrospective database of 81,495 patients. The revisions for the 8th edition, to be published in 2016, will be based on a new retrospective and prospective international database of 77,156 patients, and will mainly concern tumor size, extrathoracic metastatic disease, and stage grouping. These revisions will improve our capacity to indicate prognosis and will make the TNM classification more robust. In the future the TNM classification will be combined with non-anatomic parameters to define prognostic groups to further refine personalized prognosis.

Introduction

Obvious as it may seem, it is important that the readers of this article keep in mind that the tumor, node and metastasis (TNM) classification of lung cancer is no more and no less than a system to code the anatomic extent of the disease. Therefore, by definition, the TNM classification does not include other elements that, while they can help improve our capacity to prognosticate the disease for a given patient, are unrelated to the anatomy of the tumor, i.e., parameters from blood analysis, tumor markers, genetic signatures, comorbidity index, environmental factors, etc. Prognostic indexes combining the TNM classification and other non-anatomic parameters are called, by consensus between the Union for International Cancer Control (UICC) and the American Joint Committee on Cancer (AJCC), prognostic groups to differentiate them from the anatomic stage groupings.

The TNM classification of lung cancer is applied to all histopathological subtypes of non-small cell carcinoma, to small cell carcinoma and to typical and atypical carcinoids. It is governed by general rules[1–3] (Table 1) that apply to all malignancies classified with this system, and by site-specific rules applicable to lung cancer exclusively.[4] There also are recommendations and requirements issued with the objective to classify tumors in a uniform way when their particular characteristics do not fit in the basic rules.[4]

The three components of the classification have several categories that are defined by different descriptors. For lung cancer, those for the T component are based on tumor size, tumor location and involved structures; those for the N, on the absence, presence and location of lymph node metastasis; and those for the M, on the absence, presence and location of distant metastasis. There are optional descriptors that add information on the local aggressiveness of the tumor (differentiation grade, perineural invasion, vascular invasion and lymphatic permeation) all of which have prognostic relevance;[5–8] assess the intensity of the investigation to determine the stage (certainty factor); and assess the residual tumor after therapy (residual tumor).
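Conceptually, stage grouping is a lookup from the (T, N, M) triple to a stage. The sketch below is illustrative only: it encodes a handful of widely cited 7th-edition lung cancer groupings, not the authoritative table, which resides in the UICC and AJCC staging manuals.

```python
# Illustrative sketch only: a partial (T, N, M) -> stage lookup.
# The few entries follow widely cited 7th-edition lung cancer groupings;
# the complete, authoritative table is in the UICC/AJCC manuals.

STAGE = {
    ("T1a", "N0", "M0"): "IA",
    ("T1b", "N0", "M0"): "IA",
    ("T2a", "N0", "M0"): "IB",
    ("T2b", "N0", "M0"): "IIA",
    ("T3",  "N0", "M0"): "IIB",
}

def stage_group(t, n, m):
    if m.startswith("M1"):
        return "IV"            # any distant metastasis is stage IV
    return STAGE.get((t, n, m), "unmapped (consult the full TNM table)")

print(stage_group("T1a", "N0", "M0"))   # IA
print(stage_group("T2a", "N3", "M1a"))  # IV
```

    Even this toy version shows why the classification is purely anatomic: nothing in the lookup key refers to blood analyses, tumor markers, or patient factors.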

The TNM classification was developed by Pierre Denoix in a series of articles published from 1943 to 1952. It was soon adopted by the UICC, which published brochures covering several anatomical sites, the lung being included in 1966. Two years later, the UICC published the first edition of the TNM Classification of Malignant Tumors and agreements were reached with the AJCC, created in 1959 as the American Joint Committee for Cancer Staging and End Results Reporting, to consult each other to avoid publication of differing classifications. Since then, the UICC and the AJCC have been responsible for updating and revising the TNM classifications of malignant tumors with the participation of national TNM committees of several countries and taking into account the published reports on the topic. The second to sixth editions of the UICC manual on the TNM Classification of Malignant Tumors and the first to sixth editions of the AJCC Staging Manual included classifications for lung cancer that had been informed by a progressively enlarging database initially collected by Mountain, Carr and Anderson, and subsequently managed by Mountain. Their database originally contained a little over 2,000 patients, but it had grown to more than 5,000 by the time the fifth edition of the TNM classification for lung cancer was published in 1997. The sixth edition was published in 2002 with no modifications.[9]

While the fifth edition of the classification was being printed, the International Workshop on Intrathoracic Staging took place in London, United Kingdom, in October 1996, sponsored by the International Association for the Study of Lung Cancer (IASLC).[10] At that meeting, in the presence of Dr. Mountain, the limitations of the database that had been used to revise the TNM classification for lung cancer were openly discussed. In essence, it was considered that, while the database consisted of a relatively large number of patients, all of them originated from the United States of America, and, therefore, the staging system could not really be called ‘international’, as it was called at that time; and, although all tumors had clinical and pathological classifications, the majority had been treated surgically. So, the database was thought not to be representative of the international community, as there were no patients from other countries; or of the current clinical practice, as there were no patients treated with other therapies. Therefore, an agreement was reached to issue a worldwide call to build a really international database of lung cancer patients treated by all therapeutic modalities. This required the constitution of an International Staging Committee that was approved, and given a small amount of pump-priming funding, by the IASLC Board in 1998. Subsequently substantial financial support was secured by an unrestricted grant from Eli Lilly. Cancer Research And Biostatistics (CRAB), a not-for-profit biosciences statistical center in Seattle, was appointed to collect, manage and analyze the new database. The proprietors and managers of known databases were subsequently summoned to attend a series of preparatory meetings to identify potential contributors to the IASLC international database for the purpose of revising the TNM classification of lung cancer.

The Future of the TNM Classification

The TNM classification of lung cancer is the most consistent and solid prognosticator of the disease, but it does not explain the whole prognosis because prognosis is multifactorial. In addition to the anatomic extent of the tumor, patient and environmental factors also count. Prognosis also is dynamic, as it may be different at the time of diagnosis, after treatment or at recurrence.[71] In the TNM classification, tumor resection plays an important role as it defines pathological staging and may modify the prognostic assessment based on clinical staging. Other than that, the TNM classification does not include blood analyses, tumor markers, genetic characteristic of the tumor or environmental factors that may account for the differences in survival among similar tumors in different geographic areas.

In order to make progress to indicate a more personalized prognosis, instead of a prognosis based on cohorts of patients with tumors of similar anatomic extent, the IASLC Staging and Prognosis Factors Committee decided to expand its activities to the study of non-anatomic prognostic factors. Therefore, in the third phase of the IASLC Lung Cancer Staging Project, the activities of the committee will be directed to further refine the TNM classification and to find available factors that can be combined with tumor staging to define prognostic groups. To some extent, this already was done with the analyses of the database used for the 7th edition. Prognostic groups with statistically significant differences were defined by combining anatomic tumor extent and very simple clinical variables, such as performance status, gender, and age. These prognostic groups were defined for clinically and pathologically staged tumors, and for small-cell and non-small cell lung cancers.[22,23]

The database used for the 8th edition includes several non-anatomical elements related to the patient, the tumor and the environment that may help refine prognosis at clinical and pathological staging.[69] Due to the limitations of the previous databases, future revisions of the TNM classification will need to be more balanced in terms of therapeutic modalities, and better populated with patients from underrepresented geographical areas, such as Africa, India, Indonesia, North, Central and South America, and South East Asia. The data contributed in the future will have to be complete regarding the TNM descriptors, and preferably prospective. The more robust the TNM, the more important its contribution to the prognostic groups.

To achieve all of the above, international collaboration is essential. Those interested in participating in this project should send an email expressing their interest to information@crab.org, stating ‘IASLC staging project’ in the subject of the email. The IASLC Staging and Prognostic Factors Committee has been very touched by the overwhelming generosity of colleagues around the world who have contributed cases to inform the 7th and the 8th editions of the TNM classification of lung cancer. We continue to count on their collaboration to further revise future editions and to define prognostic groups that will eventually allow a more personalized indication of prognosis.

MicroRNAs in the Pathobiology of Sarcomas

Anne E Sarver; Subbaya Subramanian

Lab Invest. 2015;95(9):987-994.

http://www.medscape.com/viewarticle/852145

 

Sarcomas are a rare and heterogeneous group of tumors. The last decade has witnessed extensive efforts to understand the pathobiology of many aggressive sarcoma types. In parallel, we have also begun to unravel the complex gene regulation processes mediated by microRNAs (miRNAs) in sarcomas and other cancers, discovering that microRNAs have critical roles in the majority of both oncogenic and tumor suppressor signaling networks. Expression profiles and a greater understanding of the biologic roles of microRNAs and other noncoding RNAs have considerably expanded our current knowledge and provided key pathobiological insights into many sarcomas, and helped identify novel therapeutic targets. The limited number of sarcoma patients in each sarcoma type and their heterogeneity pose distinct challenges in translating this knowledge into the clinic. It will be critical to prioritize these novel targets and choose those that have a broad applicability. A small group of microRNAs have conserved roles across many types of sarcomas and other cancers. Therapies that target these key microRNA-gene signaling and regulatory networks, in combination with standard of care treatment, may be the pivotal component in significantly improving treatment outcomes in patients with sarcoma or other cancers.

Sarcomas are a heterogeneous group of tumors that account for ~200 000 cancers worldwide each year (~1% of all human malignant tumors); however, they represent a disproportionately high 15% of all pediatric malignant tumors.[1,2] Sarcomas comprise over 50 subtypes that can broadly be classified into bone and soft-tissue sarcomas that are generally based on the cell and/or tissue type.[3] The vast majority of sarcomas fall into the soft-tissue group, primarily affecting connective tissues such as muscle (smooth and skeletal), fat, and blood vessels. Bone sarcomas are relatively rare, representing only ~20% of all diagnosed sarcomas (~0.2% of all cancers). Even within a specific subtype, sarcomas are highly heterogeneous, making them diagnostically and therapeutically challenging. Several sarcoma types are genetically characterized by chromosomal translocations or DNA copy number alterations, both of which are used as diagnostic markers.[2,4,5]

The four main types of bone sarcomas are defined by their histology, cell of origin (when known), clinical features, and site distribution—osteosarcoma, Ewing’s sarcoma, chondrosarcoma, and chordoma. The most common primary bone malignancy, osteosarcoma, predominantly affects children and young adults and is characterized by undifferentiated bone-forming proliferating cells.[6] Ewing’s sarcoma, another aggressive pediatric malignancy, usually arises in growing bone and is genetically characterized by a fusion of EWS–FLI1 oncoproteins that act as gain-of-function transcriptional regulators.[7] Chondrosarcoma is itself a heterogeneous group of malignant bone tumors arising from the malignant transformation of cartilage-producing cells, frequently with mutations in IDH1/2 and COL2A1.[8,9] Chordoma is an aggressive, locally invasive cancer that typically arises from bones in the base of the skull and along the spine. It is characterized, in part, by its abnormal expression of transcription factor T, which is normally only expressed during embryonic development or in the testes.[10]

Soft-tissue sarcomas are also primarily defined by their histology, cell of origin, and, in some cases, by characteristic genetic translocation events. Rhabdomyosarcoma, a malignant skeletal-muscle-derived tumor comprising two main histological subtypes, embryonal and alveolar, is one of the most common childhood soft-tissue sarcomas, accounting for 6–8% of all pediatric tumors.[11] Liposarcoma is the most common soft-tissue cancer overall, accounting for 20% of adult sarcoma cases. It originates in deep-tissue fat cells and is characterized primarily by amplification of the 12q chromosomal region.[12] Other common soft-tissue sarcomas include angiosarcomas, fibrosarcomas, gastrointestinal stromal tumors, and synovial sarcomas, each with their own unique genetic signature.

Ever since the discovery of oncogenes, the primary emphasis in cancer research has been on understanding the role of proteins and protein-coding genes. However, the percent of the genome dedicated to coding genes is small compared with noncoding regions. The last decade has seen a surge of interest in these noncoding regions with small noncoding RNAs such as microRNAs (miRNAs) gaining particular prominence. These small RNAs have critical roles in tumor formation and progression. Understanding their roles in sarcoma will lead to new therapeutic targets and diagnostic biomarkers, opening the door to a greater understanding of the molecular mechanisms of all cancers.

miRNAs are evolutionarily conserved, small, noncoding RNA molecules of 18–24 nucleotides in length at maturity that can control gene function through mRNA degradation, translational inhibition, or chromatin-based silencing mechanisms.[13] Each miRNA can potentially regulate hundreds of targets via a ‘seed’ sequence of ~5–8 nucleotides at the 5′ end of the mature miRNA. miRNAs bind to complementary sequences in the 3′-untranslated regions (3′-UTRs) of target mRNA molecules, leading to either translational repression or transcriptional degradation.[14] The short seed sequence length and relatively low stringency requirement for these miRNA–3′-UTR interactions allow a single miRNA to potentially regulate hundreds of genes.[15] Small changes in the expression level of a few miRNAs can therefore have a dramatic biological impact, particularly when dysregulated. miRNA expression profiles can be used to distinguish between closely related soft-tissue sarcoma subtypes and may provide a more consistent diagnosis than histological inspection.[16–18]
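The seed-based targeting mechanism described above is the basis of most computational target prediction. The sketch below (with hypothetical sequences, not a real miRNA or 3′-UTR) scans a 3′-UTR for the reverse complement of a 7-nucleotide seed, here taken as positions 2–8 of the mature miRNA, one common "7mer" definition.

```python
# Minimal sketch (hypothetical sequences): candidate miRNA target sites are
# positions in a 3'-UTR matching the reverse complement of the seed
# (nucleotides 2-8 of the mature miRNA, a common "7mer" definition).

COMP = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_sites(mirna, utr):
    seed = mirna[1:8]                                   # positions 2-8
    match = "".join(COMP[b] for b in reversed(seed))    # reverse complement
    return [i for i in range(len(utr) - len(match) + 1)
            if utr[i:i + len(match)] == match]

mirna = "UGGAAUGUAAAGAAGUAUGUAU"    # hypothetical mature miRNA
utr   = "AAAACAUUCCAGGGACAUUCCAUU"  # hypothetical 3'-UTR fragment
print(seed_sites(mirna, utr))       # [3, 14]
```

    The short, low-stringency match is exactly why a single miRNA can have hundreds of candidate targets: a 7-nucleotide complement recurs often across a transcriptome, which real predictors then filter by conservation and site context.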

miRNAs have critical roles in the majority of canonical cellular signaling networks and their dysregulation is implicated in many cancers including breast cancer, colon cancer, gastric cancer, lung cancer, and sarcomas.[19,20] Dysregulation of miRNA expression may result from a variety of factors, including abnormal cellular stimuli, genetic mutations, epigenetic alterations, copy number variations, and chromosomal fusions. Because miRNAs act as critical regulator molecules in a variety of signaling pathways and regulatory networks, their dysregulation can be amplified across the entire signaling network.[21–24] Selected miRNAs and targets that have critical regulatory roles in sarcoma and other cancers are summarized in Table 1.

The p53 signaling pathway is one of the most highly studied cellular signaling networks. It actively induces apoptosis in response to DNA damage and oncogene activation and is therefore a key tumor suppressor pathway.[25] Germline mutations in TP53 are strongly associated with the development of soft-tissue sarcomas, osteosarcoma, and are the underlying cause of Li–Fraumeni Syndrome, a familial clustering of early-onset tumors including sarcomas.[26,27] It is estimated that over 50% of human tumors harbor a TP53 mutation but over 80% of tumors have dysfunctional p53 signaling.[28,29] It is only within the last 10 years that researchers have started uncovering the roles of miRNAs in mediating p53’s activity and resulting pro-apoptotic signals (Figure 1). miRNA dysregulation could be a key factor in the ~30% of tumors with dysfunctional p53 signaling that lack an apparent TP53 mutation.


Figure 1.

http://img.medscape.com/article/852/145/852145-fig1.jpg

p53–miRNA interaction network. p53 interacts with the Drosha complex and promotes the processing of pri-miRNA to pre-miRNA. Although p53 directly or indirectly regulates hundreds of miRNAs, for clarity, only selected cancer-relevant miRNAs are shown. miRNAs and proteins in red are upregulated by p53. miRNAs and proteins in green are downregulated by p53. miRNAs in gray are not known to be directly regulated by p53, they are included because they target p53 regulators MDM2 and/or MDM4. miRNA, microRNA.

Like other transcription factors, p53 exerts its function primarily through transcriptional regulation of target genes that contain p53 response elements in their promoters. p53 also regulates the post-transcriptional maturation of miRNAs by interacting with the Drosha processing complex, promoting the processing of primary miRNAs to precursor miRNAs.[30] In addition to protein-coding genes, many miRNA genes also contain p53 regulatory sites in their promoter regions. Large-scale screens have revealed many different miRNAs directly regulated by p53 including miR-22-3p, miR-34a, miR-125a/b, miR-182, and miR-199a-3p.[31] Some of these miRNAs, such as miR-34a and miR-199a-3p, themselves function as tumor suppressors via the regulation of genes involved in cell cycle and cell proliferation, and even of p53 itself.[32–34] Although some p53-targeted miRNAs form a feedback loop, translationally and transcriptionally inhibiting the TP53 gene (e.g., miR-22-3p, miR-34a, and miR-125b), others target, or are predicted to target, p53 repressors such as MDM2 and/or MDM4 (miR-199a-3p, miR-661).[31,33,35,36] It is impossible to fully understand the regulation of the p53 signaling network without considering the role of these miRNAs.

miR-34a has emerged as a critical and conserved member of the p53 signaling pathway. miR-34a is downregulated in osteosarcoma tumor samples and, in conjunction with other miRNAs, regulates p53-mediated apoptosis in human osteosarcoma cell lines.[32,33,37] The gene encoding miR-34a contains a conserved p53-binding site and is upregulated in response to cellular damage in a p53-dependent manner.[37,38] Protein-coding members of the p53 signaling pathway are popular targets for anticancer therapeutic development efforts and miRNAs may prove equally effective. In a preclinical model of lung cancer, therapeutic delivery of a miR-34a mimic specifically downregulated miR-34a-target genes and resulted in slower tumor growth. When combined with a siRNA targeting Kras, this small RNA combination therapy resulted in tumor regression.[39] miRNAs such as miR-34a, miR-125b, and miR-199a-3p also mediate p53’s regulation of other key signaling pathways such as the IGF-1/PI3K/AKT/mTOR signaling network. Activation of the AKT network due to downregulation of PTEN (a negative regulator of AKT) by miR-21 or miR-221 or by alternate activation of AKT is a common mechanism underlying many different types of cancer.[40–43] The induction of cell growth, migration, invasion, and metastasis resulting from the upregulation of either miR-21 or miR-221 is seen across different tumor types.[41,44–50] Dysregulation of these miRNAs is a common factor in sarcomas and other tumors. Understanding their mechanisms of action in sarcoma could lead to broadly useful cancer therapeutics.

In prospective analyses that could be models for other sarcoma studies with sufficient numbers of patient samples, Thayanithy et al[19] and Maire et al[23] each analyzed collections of osteosarcoma tissues and compared them with either normal bone or osteoblasts. They each found a set of consistently downregulated miRNAs localized to the 14q32 region.[19,23] Targeting predictions performed by Thayanithy et al[19] identified a subset of four miRNAs as potential regulators of cMYC. One of the many roles of cMYC is to promote the expression of the miR-17–92 family, a known oncogenic cluster that is highly expressed in many cancer types including osteosarcoma, leiomyosarcoma, and alveolar rhabdomyosarcoma.[51–57] Restoring the expression of the four 14q32 miRNAs increased apoptosis of SAOS-2 cells, an effect that was attenuated either by overexpression of a cMYC construct lacking the 3′UTR or by ectopic expression of the miR-17–92 cluster.[19] Although the 14q32 region is dysregulated across many different cancer types, this pattern of dysregulation appears to be a hallmark of osteosarcoma, which is particularly interesting given the heterogeneous nature of osteosarcomas and makes the region an extremely attractive common therapeutic target.

One particular challenge with these types of expression profiling studies is that the cell-of-origin for a particular sarcoma subtype may not be definitively established. Another challenge is the scarcity of patient samples, particularly for the rare sarcoma subtypes. As a result, only a limited number of studies have been designed to comprehensively profile miRNA expression in various sarcoma subtypes and to compare those expression profiles with the corresponding normal tissues or cell lines. These studies were reviewed recently in Drury et al[20] and Subramanian and Kartha.[58]

Owing to the scarcity of frozen sarcoma tissue samples, it is tempting to study sarcoma cells in vitro, using either primary or immortalized cell cultures. Studies performed in culture are less expensive and more accessible; however, the cell lines used must be chosen with care and may not truly represent the tumors. Any results derived from cultured cells must be interpreted with caution and validated in vivo when possible. A tumor cell’s microenvironment has a profound effect on gene expression and cell metabolism, and culturing for even short periods of time can result in large changes in gene/miRNA expression.[59] Three-dimensional cultures can provide more physiologically relevant in vitro models of individual tumors (eg, spheroid cultures) or multilayered epithelial tissues (eg, organotypic cultures using extracellular matrix proteins, fibroblasts, and/or artificial matrix components) than the standard two-dimensional culture model.[60,61]

Complicating the analysis of these miRNA expression changes is the fact that many miRNAs showing differential expression across studies do not have a consistent direction of change and/or a consistent role (tumor suppressor vs tumor promoter). This likely reflects both chance observational differences and genuine differences in tissue biology and the underlying regulatory networks. Elucidating the regulatory roles played by miRNAs in these networks, in their appropriate biological contexts, may provide suitable upstream targets for more effective treatment of sarcomas. Recent advances in sequencing and downstream bioinformatics techniques provide the tools to examine these questions efficiently.

For two decades, microarray gene chips containing synthetic oligonucleotides whose sequences are designed to be representative of thousands of genes have allowed researchers to perform simultaneous expression analysis of thousands of RNA transcripts in a single reaction.[62–65] Gene expression profiling has been used to characterize and classify a wide range of sarcomas, in some cases providing a diagnostic resolution more accurate than histological examination.[66–72] With the advent of high-throughput RNA-Seq, sarcoma researchers are now able to prospectively analyze the differential expression of small RNAs, such as miRNAs, without prior knowledge of their sequence.[73,74] RNA-Seq also allows for the prospective identification of novel genomic rearrangements resulting from gene fusions or premature truncations that may be of particular interest to cancer researchers.[75,76] These data are highly quantitative and digital in nature, allowing for a dynamic range that is theoretically limited only by the sequencing depth and approaches the estimated range within the cell itself.[77] Marguerat and Bähler[78] provide a basic overview of the different RNA-Seq technologies and their differences from array-based technologies.
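Because the dynamic range of RNA-Seq depends on sequencing depth, raw miRNA read counts are typically depth-normalized before samples are compared. A minimal counts-per-million (CPM) sketch in Python; the miRNA names are real but every count below is invented purely for illustration:

```python
# Hypothetical miRNA-Seq raw read counts for three tumor samples; the
# miRNA names are real but the numbers are invented for illustration.
raw_counts = {
    "miR-21":  [15200, 48100, 52900],
    "miR-34a": [3100, 410, 380],
    "miR-210": [880, 2600, 3100],
}

n_samples = 3
# Library size = total mapped reads per sample (here, over this toy set).
library_sizes = [sum(counts[i] for counts in raw_counts.values())
                 for i in range(n_samples)]

# Counts per million rescale each count by its sample's sequencing depth,
# so expression levels are comparable between samples of different depth.
cpm = {
    mir: [1e6 * c / lib for c, lib in zip(counts, library_sizes)]
    for mir, counts in raw_counts.items()
}
```

In real pipelines this step is usually handled by dedicated tools, but the arithmetic is exactly this rescaling.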

Several groups have taken advantage of these technologies to create miRNA expression profiles for a number of different sarcomas, in an effort both to find common sarcoma oncomirs and to discover unique miRNA signatures that could be used in diagnosis, prognosis, and novel therapeutic development. Renner et al[18] used a microarray-based miRNA screen, followed by qRT-PCR verification, to analyze the expression of 1146 known miRNAs across a collection of 76 primary soft-tissue sarcoma samples representing eight different subtypes and across a panel of 15 sarcoma cell lines. In addition to identifying miRNAs overrepresented in synovial sarcomas (miR-200 family) and liposarcomas (miR-9) compared with other sarcomas and adipose tissue, respectively, their results revealed a high degree of co-expression of 63 miRNAs clustering in the chromosomal region 14q32.[18] The most comprehensive sarcoma miRNA data set has been published by Sarver et al,[79] who profiled miRNA expression in over 300 primary sarcoma tissue samples representing 22 different sarcoma types. These data form the basis for the web-accessible Sarcoma microRNA Expression Database (SMED), which provides tools that allow users to query specific sarcoma types and/or specific miRNAs.[79]

Integrative miRNA–mRNA analysis using a tool such as Ingenuity Pathway Analysis (Qiagen) or GeneSpring (Agilent) allows for more biologically relevant results by highlighting miRNA–mRNA pairs that are linked not only by predicted targeting interactions but whose expression levels are inversely correlated (i.e., as miRNA expression increases one would expect the target mRNA levels to decrease). For example, out of 177 differentially expressed miRNAs in osteosarcoma cell lines vs normal bone, an integrated miRNA–mRNA analysis highlighted two particularly interesting miRNA/mRNA pairs (miR-9/TGFBR2 and miR-29/p85α regulatory subunit of PI3K) that were dysregulated.[44]

There is often no single ‘correct’ method to analyze miRNA expression data. Different experimental and bioinformatics techniques may reveal different aspects of the data that can be further investigated and experimentally validated. All of these experiments, whether performed at the bench or in silico, contribute to our greater understanding of sarcoma biology and of the central role of dysregulated miRNA–gene networks as drivers of tumor formation and progression.

miRNAs are part of a larger family of noncoding RNAs including long noncoding RNAs (lncRNAs) and competing endogenous RNAs (ceRNAs) that deserve to be evaluated for therapeutic potential in sarcomas with broader applicability to other cancer types. Just like miRNAs, lncRNAs are widely expressed in tissue-specific patterns that are highly disrupted in cancer.[80] As their name implies, ceRNAs compete for their common miRNA targets and influence their expression, which has an indirect effect on the protein-coding genes, such as PTEN, regulated by those miRNAs.[81,82] We have just begun to unravel the role of lncRNAs and ceRNAs in cancer development and progression but recent results hint at yet another layer of complexity and genetic control in tumor biology.

The lessons learned from carcinomas, leukemias, and lymphomas will be helpful in understanding the pathobiology of sarcomas, and the insights gained from sarcoma biology may form the foundation for therapeutics to treat a wide range of other cancers. Recent studies have shown that miRNAs are very stable in blood serum and plasma, and extensive efforts are underway to develop circulating miRNA-based diagnostic and prognostic markers. Major technical challenges still need to be addressed, including standardization of pre-analytical, analytical, and post-analytical methods for reproducibility. For example, miR-16, which is used to normalize miRNA expression in serum/plasma, is also found in red blood cells; thus, any hemolysis during sample collection could significantly affect the downstream expression analysis.

Cancers do not exist in isolation inside the body and extensive research has been performed on how tumor-derived proteins adapt their microenvironment to provide more favorable conditions for tumor growth and development. Recent studies have shown that miRNAs also have a major role in modulating tumor microenvironment. Although most miRNAs are found inside the cell, a significant number of miRNAs are encapsulated in exosomes that can be used as a delivery system to send miRNAs from one cell to another, allowing tumor cells to modulate gene expression in surrounding tissues.[83,84] Exosome and miRNA-mediated cross talk between sarcoma tumor cells and the surrounding stromal cells is a new and exciting avenue of research and the potential for novel therapeutics is high.

Sarcomas are a diverse collection of rare cancers with proportionally limited resources for research and development of novel treatments. It is therefore crucial that potential therapeutic targets are prioritized and novel therapeutic agents carefully selected for clinical trials to succeed. Extensive studies in preclinical models will be required; however, there are also challenges in the development of appropriate in vitro and in vivo model systems that accurately reflect the different sarcoma types. Sarcomas, such as osteosarcoma, leiomyosarcoma, and angiosarcoma are very heterogeneous in nature, making it unlikely that therapies targeting specific genomic mutations will be successful. Even if specific targets were to be identified it would still be a challenge to develop clinical trials based on the small number of patients harboring those specific mutations. Coordinated efforts such as the Cancer Genome Atlas (TCGA, http://cancergenome.nih.gov/) and its associated preclinical and clinical trial consortiums will help unravel novel miRNA–mRNA interactions and their significance as potential therapeutic targets.

Targeting common miRNA–gene oncogenic or tumor suppressor networks goes after the common denominator underlying many of these cancers. Key regulatory molecules in sarcoma are highly likely to have similar roles in leukemias and lymphomas, for instance, and vice versa. For example, oncogenic activation of STAT3 strongly promotes the expression of miR-135b in lymphoma, resulting in increased angiogenesis and tumor growth.[85] miR-135b is widely overexpressed in sarcomas, and STAT3 may play a similar transcriptional regulatory role, suggesting that STAT3 inhibitors could be an effective supplemental therapy in sarcomas.[86] Interestingly, p53 promotes the transcription of miR-125b, which can directly target both STAT3 and p53 transcripts. This finely balanced regulatory network is frequently dysregulated in osteosarcoma and Ewing’s sarcoma.[87,88] In retinoblastoma, STAT3 activation is associated with upregulation of the miR-17-92 cluster via a positive feedback loop, and inhibition of STAT3 suppressed retinoblastoma proliferation, providing further evidence that STAT3 may be an attractive therapeutic target in many cancers.[89] The dysregulation of key signaling molecules such as p53 and STAT3, along with their associated signaling networks, is a common feature across most cancer types, implying that advances in the understanding of sarcoma biology may be highly impactful in more frequently occurring solid tumors and lymphomas.

Certain miRNAs appear to be common players across many types of sarcomas and other cancers and their dysregulation contributes to the development of the hallmarks of cancer (Figure 2). miR-210, a key modulator of many downstream pathways involved in the hypoxic response, is upregulated under hypoxic conditions in most solid tumors, including soft-tissue sarcomas, osteosarcoma, renal cancer, and breast cancer.[90] A recent meta-analysis demonstrated that the elevated expression of miR-210 is a prognostic indicator for disease-free, progression-free, and relapse-free survival in a variety of cancer patients.[91] Perhaps the most consistently upregulated miRNA across all tumor types is the anti-apoptotic miR-21, which directly targets the tumor suppressor PDCD4.[92] Levels of miR-21 correlate with cancer progression and patient prognosis.[93]
Figure 2.

http://img.medscape.com/article/852/145/852145-thumb2.png

Conserved miRNA-tumor suppressor signaling networks in cancer. These miRNAs and tumor suppressors are involved in other network and signaling pathway interactions, such as the p53 signaling network; this figure highlights selected critical conserved pathways.

 

Human Papillomavirus Oncogenic mRNA Testing for Cervical Cancer Screening

Jennifer L. Reid, PhD; Thomas C. Wright Jr, MD; Mark H. Stoler, MD; Jack Cuzick, PhD; Philip E. Castle, PhD; Janel Dockter; Damon Getman, PhD; Cristina Giachetti, PhD

Am J Clin Pathol. 2015;144(3):473-483.

http://www.medscape.com/viewarticle/850740

 

Objectives: This study determined the longitudinal clinical performance of a high-risk human papillomavirus (HR-HPV) E6/E7 RNA assay (Aptima HPV [AHPV]; Hologic, San Diego, CA) compared with an HR-HPV DNA assay (Hybrid Capture 2 [HC2]; Qiagen, Gaithersburg, MD) as an adjunctive method for cervical cancer screening.

Methods: Women 30 years or older with a negative result for intraepithelial lesions or malignancy cytology (n = 10,860) positive by AHPV and/or HC2 assays and randomly selected women negative by both assays were referred to colposcopy at baseline. Women without baseline cervical intraepithelial neoplasia (CIN) grade 2 or higher (CIN2+) continued into the 3-year follow-up.

Results: The specificity of AHPV for CIN2 or lower was significantly greater at 96.3% compared with HC2 specificity of 94.8% (P < .001). Estimated sensitivities and risks for detection of CIN2+ were similar between the two assays. After 3 years of follow-up, women negative by either human papillomavirus test had a very low risk of CIN2+ (<0.3%) compared with CIN2+ risk in women with positive AHPV results (6.3%) or positive HC2 results (5.1%).

Conclusions: These results support the use of AHPV as a safe and effective adjunctive cervical cancer screening method.

Introduction

Cervical cancer is one of the most frequent cancers in women worldwide, accounting for approximately 530,000 new cases and 275,000 deaths annually.[1] Countries with well-organized screening programs using conventional Papanicolaou (Pap) stain cytology have experienced substantially reduced mortality from the disease in the past 5 decades.[2–4] Despite this advance, the relatively low sensitivity and reproducibility of both conventional Pap smear and liquid-based cytology screening methods have prompted investigation into identifying adjunctive methods with Pap cytology for improving detection of cervical neoplasia.[5–9]

Infection with 14 high-risk human papillomavirus (HR-HPV) genotypes (16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59, 66, and 68) is associated with almost all cases of cervical precancer, defined as cervical intraepithelial neoplasia (CIN) grade 2 (CIN2), grade 3 (CIN3), and cancer.[10] Addition of HR-HPV nucleic acid testing to a cervical cytology screening regimen offers higher sensitivity and negative predictive value (NPV) for detection of cervical precancer and cancer compared with cytology alone, especially in older women.[11–15] For this reason, HR-HPV nucleic acid testing is recommended as an adjunctive test to cytology to assess the presence of HR-HPV types in women 30 years of age or older.[16] In this context, HR-HPV testing guides patient management by identifying women at elevated risk for CIN2 or higher (CIN2+) but, importantly, also reassures women who are negative for HR-HPV of their extremely low cancer risk.[17–19]
First-generation HR-HPV molecular tests used for adjunctive cervical cancer screening function by detecting viral genomic DNA in cellular samples from the uterine cervix. However, because the presence of HR-HPV in the female genital tract is common and often transient in nature,[20,21] and most cervical HPV infections resolve without becoming cancerous,[22,23] HR-HPV DNA-based test methods yield only moderate specificity for detection of high-grade cervical disease.[12,24] This leads to unnecessary follow-up and referral of patients to colposcopy, increasing the physical and emotional burdens on patients and elevating health care costs.

A test approved by the US Food and Drug Administration (FDA) for detection of HR-HPV E6/E7 messenger RNA (mRNA) (Aptima HPV [AHPV]; Hologic, San Diego, CA) has shown higher specificity with similar sensitivity for detection of CIN2+ compared with HPV DNA-based tests in patients referred for colposcopy due to an abnormal Pap smear result as well as in a screening setting.[25–30] Expression of mRNA from viral E6 and E7 oncogenes is highly associated with the development of CIN,[31,32] and extensive investigation into the role of E6 and E7 oncoproteins in the human papillomavirus (HPV) life cycle has revealed that the expression of the corresponding oncogenes is necessary and sufficient for cell immortalization, neoplastic transformation, and the development of invasive cancer.[33–35]

To confirm and extend the previous evidence on the clinical utility of HR-HPV oncogenic mRNA testing in a US population-based setting, the clinical performance of AHPV was evaluated as an adjunctive method for cervical cancer screening in women aged 30 years or older with negative for intraepithelial lesions or malignancy (NILM) cytology results from routine Pap testing in a pivotal, prospective, multicenter US clinical study including 3 years of follow-up (the Clinical Evaluation of Aptima mRNA [CLEAR] study). We report herein the results from this study.


Figure 1.

Clinical evaluation of Aptima mRNA study participant disposition. aReasons for withdrawal: did not meet eligibility criteria (70); Pap volume insufficient for AHPV testing (117); specimen expired or unsuitable for testing (190); specimen lost (58); noncompliant site (320); other reasons (26). bReasons for withdrawal: collection site did not participate in follow-up (243); subject terminated participation (37); participant had hysterectomy (22); participant not eligible (17); participant treated prior to CIN2+ diagnosis (8); other reasons (4). AHPV, Aptima HPV (Hologic, San Diego, CA); ASC-US, atypical squamous cells of unknown significance; CIN2+, cervical intraepithelial neoplasia grade 2 or higher; HC2, Hybrid Capture 2 (Qiagen, Gaithersburg, MD); HPV, human papillomavirus; NILM, negative for intraepithelial lesions or malignancy; Pap, Papanicolaou test.

HPV Testing

Baseline PreservCyt specimens (1-mL aliquot) were tested with the AHPV (Hologic) on both the automated Tigris DTS System and Panther System. Results from the two systems were similar; Panther System results are presented here. AHPV is a target amplification assay that uses transcription-mediated amplification to detect the E6/E7 oncogene mRNA of 14 HR-HPV genotypes (16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59, 66, and 68).

 

HPV and Disease Prevalence

Cervical disease and HPV status are shown in Table 2 for the baseline evaluation and cumulatively after 3 years of follow-up. Of the 10,860 evaluable participants with NILM cytology at baseline, 512 were positive for AHPV, yielding a prevalence of 4.7% for HR-HPV E6/E7 oncogenic mRNA, whereas prevalence of HR-HPV DNA was 6.5% among 10,229 women with HC2 results. A total of 845 HPV RNA-positive or DNA-positive women and 556 randomly selected HPV-negative women were referred to colposcopy at baseline (Figure 1).

At baseline, the percentage of colposcopy attendance was similar between HPV-positive (62%, n = 526) and randomly selected HPV-negative (61%, n = 339) women with 29 cases of CIN1, nine cases of CIN2, eight cases of CIN3, and three cases of adenocarcinoma in situ (AIS) identified (Table 2). Four of the CIN2 cases and two of the AIS cases were identified based on an ECC biopsy specimen only.

In total, 6,271 women completed the 3-year follow-up with a known disease status (Table 2). Of these, 6,098 (97.2%) women had normal (negative) disease status, and 56 (0.9%) had low-grade lesions (CIN1). In addition to the 20 women with CIN2+ identified at baseline, 15 (0.2%) women had CIN2 and 12 (0.2%) women had CIN3 identified during follow-up, with two cases identified from an ECC biopsy specimen only.

Of the 27 women with CIN2+ identified during follow-up, two had CIN1 at baseline, with CIN3 identified during year 1. Ten women had no disease found at baseline, with five cases of CIN2+ identified during year 1, one case of CIN2+ identified during year 2, and four cases of CIN2+ identified during year 3. The remaining 15 women with CIN2+ identified during follow-up did not have a baseline colposcopy; among them, two cases of CIN2+ were identified during year 1, six cases of CIN2+ during year 2, and seven cases of CIN2+ during year 3.

AHPV Assay Performance

Baseline risk and prevalence estimates adjusted for verification bias are provided in Table 3. The prevalence of CIN2+ was 0.9% in the overall population. CIN2+ occurred in 4.5% (95% CI, 2.7%-7.4%) of women with positive AHPV results and in 0.6% (95% CI, 0.2%–1.9%) of women with negative AHPV results, yielding a relative risk of 7.5 (95% CI, 2.1–26.3). This indicates that women with a positive AHPV result are at significantly greater risk of CIN2+ than women with a negative AHPV result. The CIN2+ relative risk obtained for the HC2 test at baseline was similar (7.3; 95% CI, 1.6–33.5). For CIN3+ diagnosis, the overall prevalence was 0.4%. The AHPV relative risk was 24.9 (95% CI, 2.0–307.0), again with a similar relative risk for HC2 (21.0; 95% CI, 1.0–423.8).
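The relative risks quoted above follow directly from the two absolute risks. A small Python sketch of the standard point estimate with a Katz log-based confidence interval; the counts in the usage line are hypothetical, chosen only so the risks match the quoted 4.5% and 0.6%, and the study's published intervals were additionally adjusted for verification bias:

```python
import math

def relative_risk(a, n1, c, n0, z=1.96):
    """Relative risk of disease in test-positive (a/n1) vs test-negative
    (c/n0) groups, with a Katz log-based 95% CI. Callers here pass
    hypothetical counts, not the study's actual data."""
    p1, p0 = a / n1, c / n0
    rr = p1 / p0
    # Standard error of log(RR): sqrt(1/a - 1/n1 + 1/c - 1/n0)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts giving risks of 4.5% and 0.6%, hence RR = 7.5:
rr, lo, hi = relative_risk(45, 1000, 6, 1000)
```

The wide intervals reported in the text (e.g., 2.1–26.3) reflect the small number of CIN2+ cases, which this log-scale interval captures.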

Cumulative absolute and relative risks for AHPV and HC2 over the 3-year follow-up period for HPV-positive and HPV-negative women are shown in Table 4. Women with an HPV-negative result with either test had very low cervical disease risk after 3 years of follow-up (<0.3%). Comparatively, 5% to 6% of women with an HPV-positive result had CIN2+ and 3% to 4% had CIN3+, with overall cumulative absolute and relative risks slightly higher for the AHPV assay than for HC2. Younger women aged 30 to 39 years who were HPV positive had twice the prevalence of disease but a similar increase in relative risk of cervical disease compared with HPV-positive women 40 years and older (Table 4). Risk of cervical disease in HPV-negative women did not vary by age group.

Figure 2 and Figure 3 show the cumulative absolute risk of CIN2+ and CIN3+, respectively, by year according to AHPV or HC2 positivity status at baseline. Both assays show a similar trend, with consistent slightly higher risk for the AHPV assay each year.

Figure 2.

Cumulative absolute risk of cervical intraepithelial neoplasia grade 2 or higher (CIN2+) by year. AHPV, Aptima HPV (Hologic, San Diego, CA); HC2, Hybrid Capture 2 (Qiagen, Gaithersburg, MD).

Figure 3.

Cumulative absolute risk of cervical intraepithelial neoplasia grade 3 or higher (CIN3+) by year. AHPV, Aptima HPV (Hologic, San Diego, CA); HC2, Hybrid Capture 2 (Qiagen, Gaithersburg, MD).

After 3 years of follow-up, the specificity of AHPV for CIN2 or lower was 96.3% (95% CI, 95.8%–96.7%), significantly greater (P < .001) than the HC2 specificity of 94.8% (95% CI, 94.3%–95.4%) (Table 5). AHPV specificity for CIN3 or lower (96.2%; 95% CI, 95.5%–96.5%) was also significantly greater (P < .001) than HC2 specificity (94.7%; 95% CI, 94.1%–95.2%). Estimated sensitivities for detection of CIN2+ and CIN3+ were similar between the two assays (P = .219 and P = 1.0, respectively). For detection of CIN2+, AHPV sensitivity was 55.3% (95% CI, 41.2%–68.6%) and HC2 sensitivity was 63.6% (95% CI, 48.9%–76.2%). For CIN3+ detection, AHPV sensitivity was 78.3% (95% CI, 58.1%–90.3%) and HC2 sensitivity was 81.8% (95% CI, 61.5%–92.7%) (Table 5).
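Sensitivity, specificity, and the false-positive rate (1 − specificity, the quantity behind the 5.2% vs 3.7% comparison in the Discussion) all derive from a 2×2 table of test result vs disease status. A simple unadjusted sketch with normal-approximation intervals; the study's own estimates were verification-bias adjusted, and the counts below are hypothetical:

```python
import math

def diagnostic_metrics(tp, fn, tn, fp, z=1.96):
    """Sensitivity, specificity, and false-positive rate from 2x2 counts,
    with simple normal-approximation 95% CIs. This is the unadjusted
    textbook calculation, not the study's bias-adjusted estimator."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)

    def ci(p, n):
        half = z * math.sqrt(p * (1 - p) / n)
        return max(0.0, p - half), min(1.0, p + half)

    return {
        "sensitivity": (sens, ci(sens, tp + fn)),
        "specificity": (spec, ci(spec, tn + fp)),
        # e.g. a specificity of 96.3% corresponds to a 3.7% FPR
        "false_positive_rate": 1 - spec,
    }

metrics = diagnostic_metrics(tp=80, fn=20, tn=90, fp=10)  # hypothetical
```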

Discussion

This study presents the results of a 3-year longitudinal evaluation of the AHPV assay as an adjunctive method for screening women 30 years and older who have NILM Pap cytology results. Consistent with previously published data,[28,29] these results demonstrate that HR-HPV oncogenic E6/E7 mRNA testing has a sensitivity similar to an HR-HPV DNA-based test for detection of CIN2+ and CIN3+ and slightly, but significantly, improved specificity compared with HR-HPV DNA testing for both end points. We found that use of AHPV as an adjunctive method for HPV-induced cervical disease screening provided disease detection capability similar to HC2 while reducing the false-positive rate (from 5.2% to 3.7%) relative to the HPV DNA-based test. Reduction of HPV detection in women without cervical disease minimizes the anxiety and burden associated with spurious positive HPV molecular test results in women with NILM cytology, decreases health care costs, and reduces unnecessary follow-up procedures, thereby improving the safety of cervical cancer screening (unnecessary colposcopy is considered a significant “harm” in the recent American Cancer Society guidelines[16]).

Importantly, we show that women with a NILM cytology result who also had a positive AHPV result are approximately 24 times more likely to have CIN2+ disease after 3 years than women with a negative AHPV result. This risk increased to approximately 68-fold for detection of CIN3+ disease. Similar but slightly lower risk estimates were obtained with HC2, demonstrating comparable accuracy of the AHPV and HC2 for identifying participants with CIN2+ and CIN3+ in this respect.

After 3 years of follow-up, women in this study who were HPV negative at baseline using any test method had very low risk for CIN2+ (<0.3%), a result similar to previously published studies with HC2.[42,43] These findings reinforce evidence from previous studies showing that HR-HPV nucleic acid testing should be performed as an adjunctive test to routine Pap for cervical cancer screening of women 30 years or older to increase sensitivity of disease detection.[28] Correspondingly, compared with annual cytology-only screening, this study supports longer screening intervals for women negative for both abnormal cytology and HPV E6/E7 mRNA, due to the high NPV and low risk of disease afforded by this screening algorithm for 3 years following a test-negative baseline visit. Extension of cervical cancer screening intervals following negative HPV and cytology test results in women 30 years or older is a key recommendation of current US screening guidelines from both the American Cancer Society and the US Preventive Services Task Force.[16]

Conversely, since the positive predictive value of any HPV test in women with NILM cytology is low, additional AHPV testing to detect persistent HR-HPV infection during follow-up care in women with an initial AHPV-positive result is likely a better option than direct referral to colposcopy. Alternatively, genotyping with referral for HPV 16– or HPV 18–positive women can optimize referral and minimize loss to follow-up.[44]

Read Full Post »

pathway and network analysis of complex ‘omics data

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

While blood tests can be used to detect some cancers, the FDA said a San Diego company has no proof its blood test works in patients who have not already been diagnosed with some form of the disease.

WASHINGTON, Sept. 25 (UPI) — A San Diego company selling an early cancer detection test has been notified by the U.S. Food and Drug Administration that the agency can find no evidence the test actually works and is concerned it could prove harmful for some people.

Pathway Genomics debuted its CancerIntercept test in early September with claims it can detect cancer cell DNA in the blood, picking up mutations linked to as many as 10 different cancers. The goal is to catch cancer early in people who are “otherwise healthy” and not showing symptoms of the disease.

“Based on our review of your promotional materials and the research publication cited above, we believe you are offering a high risk test that has not received adequate clinical validation and may harm the public health,” said FDA Deputy Director James L. Woods in a letter to the company.

CancerIntercept is billed by the company as a blood test looking for DNA fragments in the bloodstream and testing them for 96 genomic markers it says are found in several specific tumor types.

The direct-to-consumer test can be purchased through the Pathway Genomics website, with programs ranging from a one-time test to a quarterly “subscription” for people who want regular testing.

The company states, in several sections of its website, “the presence of one or more of these genomic markers in a patient’s bloodstream may indicate that the patient has a previously undetected cancer. However, the test is not diagnostic, and thus, follow-up screening and clinical testing would be required to confirm the presence or absence of a specific cancer in the patient.”

The FDA is concerned that people may seek treatment for tumors that do not require medical attention, or spend money and possibly seek out treatment they do not need at all — in either case, unnecessary treatment for cancer is potentially harmful to people, the agency said.

CancerIntercept has not been approved by the FDA for use as a medical device, nor has it been subjected to peer review as most tests of its type would be. The company published a white paper on its website which outlines how the test works, supporting its efficacy with references to several clinical trials on detection of mutated DNA in the bloodstream.

Glenn Braunstein, Chief Medical Officer at Pathway Genomics, told The Verge that Pathway had validated its tests with “hundreds” of patients, though those patients had well-defined, often advanced cancers.

In the letter from the FDA, Woods requests the company provide a timeline for meeting with the agency to review plans for future longitudinal studies on the product and specific details on studies that have been conducted before it was made available to consumers.

http://www.upi.com/Health_News/2015/09/25/FDA-Start-ups-cancer-blood-test-may-be-harmful/4191443181676/

The clinical laboratory is an essential player in the treatment of cancer, providing tools for diagnosis, potentially for prognosis, and for follow-up of treatment. Over the last half century, laboratory diagnostics has grown into a highly accurate, well-regulated industry with highly automated and point-of-care technologies. Tests must be validated before they are put on the market.

How are they validated?

The most common approach is for the test to be used concomitantly with treatment in a clinical trial. Measurements may be made prior to surgical biopsy and treatment, and at a month or 6 months to a year later.  The pharmaceutical and diagnostics industries are independent, even though a large company may have both pharmaceutical and diagnostic divisions.  Consequently, the integration of diagnostics and therapeutics occurs on the front lines of patient care.

How this discrepancy between the FDA and the manufacturer could occur is not clear, because before introduction the test would have to be rigorously reviewed — for example, by the American Association for Clinical Chemistry, the largest and most competent organization covering this scientific work through its industry-based committees. The complication is that companies may hold patented products with competing claims or interests, a problem perhaps most likely to arise in the competitive environment of genomics testing.

The company reported on here is Pathway Genomics, which offers Ingenuity for pathway and variant analysis. There is no concern about the analysis methods, which are well studied. The concern is the validation of such a method for screening patients without a prior diagnosis.

Model, analyze, and understand the complex biological and chemical systems at the core of life science research with IPA

QIAGEN’s Ingenuity Pathway Analysis (IPA) has been broadly adopted by the life science research community and is cited in thousands of peer-reviewed journal articles.

https://youtu.be/_HDkjuxYRcY

For the analysis and interpretation of ’omics data:

Market-Leading Pathway Analysis: Unlock the insights buried in experimental data by quickly identifying relationships, mechanisms, functions, and pathways of relevance.

Predictive Causal Analytics: Powerful causal analytics at your fingertips help you to build a more complete regulatory picture and a better understanding of the biology underlying a given gene expression study.

NGS/RNA-Seq Data Analysis: Get a better understanding of the isoform-specific biology resulting from RNA-Seq experiments, and identify causal variants from human sequencing data.

http://www.ingenuity.com/wp-content/uploads/2014/01/variant-analyisis-interpretation.png

Rapidly Identify and Prioritize Variants

Ingenuity Variant Analysis combines analytical tools and integrated content to help you rapidly identify and prioritize variants by drilling down to a small, targeted subset of compelling variants based both upon published biological evidence and your own knowledge of disease biology. With Variant Analysis, you can interrogate your variants from multiple biological perspectives, explore different biological hypotheses, and identify the most promising variants for follow-up.

Variant Analysis used in NCI-60 Interpretation of Genomic Variants

The NCI-60 Data Set offers tremendous promise in the development and prescription of cancer drugs

97% of surveyed researchers are satisfied with the ease of use of Ingenuity Variant Analysis and we are honored that they chose to share the data through our Publish tool.


“Being a bioinformatician, I appreciated the speed and the complexity of analysis. Without Variant Analysis, I couldn’t have completed the analysis of 700 exomes in such a short time …. I found Variant Analysis very intuitive and easy to use.”

Francesco Lescai, Senior Research Associate in Genome Analysis, University College London.

This appears to be the new, rocky road to validation for diagnostic and treatment applications.


Turning genetic information into working proteins

Larry H Bernstein, MD, FCAP, Curator

Leaders in Pharmaceutical Intelligence

Series 2; 3.3

James E. Darnell Jr. (1930— )
Vincent Astor Professor Emeritus
2002 Albert Lasker Award for Special Achievement in Medical Science

Responsible for the various tasks required in turning genetic information into working proteins, ribonucleic acids are one of the most essential players in the life of a cell. First discovered in 1868, RNA today remains the subject of intense scientific scrutiny. Over the course of a career dedicated to understanding the intricate workings of gene transcription, Rockefeller University scientist James E. Darnell Jr. has revealed some of RNA’s most secretive and surprising mechanisms. For his half-century of illuminating research, Dr. Darnell received the 2002 Albert Lasker Award for Special Achievement in Medical Science.

In 1963, Dr. Darnell described a phenomenon he termed “RNA processing,” a step in the process of gene transcription, which had only recently been elucidated in bacterial systems. Working with mammalian cells — which differ from bacterial cells in that they contain a nucleus, where RNA is created — Dr. Darnell observed that very long strings of RNA disappear from the cell nucleus and that subsequently, shorter RNAs resembling the absent longer ones appear in the cytoplasm. Mammalian cells, he concluded, must distill their massive, immature nuclear RNA into shorter, mature forms that are individually coded for specific purposes by specific segments of the genome.

Dr. Darnell carried the principles of his finding — which he made in ribosomal RNA, part of the construction crew that builds cellular proteins — to other long nuclear RNA, including the longest one, which he named heterogeneous nuclear RNA (hnRNA). His hypothesis, that hnRNA is the precursor of the better known messenger RNA — which carries the genetic blueprint for protein building — soon bore fruit when he found a structural correlation between the two. Certain hnRNAs and nearly all messenger RNAs have a “tail” of adenine nucleotides at one end. Dr. Darnell followed this discovery with the observation that when an hnRNA string with an adenine tail disappears from the nucleus, a messenger RNA with the same tail then appears in the cytoplasm, suggesting a causal link between the two. When he found a second similarity — a cap at the end of the string opposite the adenine tail — he faced a conundrum. Scientific dogma had it that the order of nucleotides in any RNA mirrors that of DNA, whether the RNA is modeled from somewhere in the middle of the DNA or from one of the ends. The matching of a nuclear RNA to its cytoplasmic product by two end pieces glued together was surprising, but the concept was soon proven by colleagues at other institutions and called RNA splicing.

After a brief sojourn in Paris to work in François Jacob’s lab, Darnell worked at MIT, the Albert Einstein College of Medicine, and Rockefeller University on the relationship between mRNA and hnRNA. hnRNA was believed to be the precursor to mRNA, and despite making some key discoveries, Darnell admits that he could not free his imagination from the idea of colinearity and envision an hnRNA spliced to produce a smaller mRNA.

At this time, Darnell turned his attention to the question he had pondered since Paris: how were genes regulated in animal cells? This led to the discovery of the STAT and the Jak-STAT pathway of transcription control.

With the knowledge of RNA processing and splicing, Dr. Darnell next examined how cells begin the process of transcription and how they activate particular segments of DNA. Having moved to Rockefeller University in 1974, he found in the early 1980s that cells retain their specificity only in the context of their natural environment. Away from other liver cells, for example, a single liver cell stops producing liver-specific RNA, though it continues to make RNA for more generic cellular tasks. To pinpoint the signals responsible, which he believed must be coming from outside the cell, Dr. Darnell took a closer look at interferons (IFN), proteins that warn a cell when it’s time to raise its genetic defenses against harmful microbes.

Dr. Darnell’s laboratory studies how signals from the cell surface affect transcription of genes in the nucleus. Originally using interferon as a model cytokine, the Darnell group discovered that cell transcription was quickly changed by binding of cytokines to the cell surface. Introducing IFNβ into cell cultures, he watched as a particular type of mRNA accumulated in the cytoplasm, unaccompanied by any new protein synthesis. Analyzing the mRNA led him to the segment of DNA that had been activated, and the lack of new proteins told him that the cell contained its own, usually dormant, IFN-responsive transcription factor. By isolating a particular stretch of DNA from IFN-treated cells, he was able to call out of hiding the proteins that make up that factor, which, partly because they respond to signals very quickly, he called “STATs.” Dr. Darnell then traced the chemical relay that activates the STATs after IFN contact, called the Jak-STAT pathway.

The bound interferon led to the tyrosine phosphorylation of latent cytoplasmic proteins now called STATs (signal transducers and activators of transcription) that dimerize by reciprocal phosphotyrosine-SH2 interchange. They accumulate in the nucleus, bind DNA and drive transcription. This pathway has proved to be of wide importance, with seven STATs now known in mammals that take part in a wide variety of developmental and homeostatic events in all multicellular animals. Crystallographic analysis defined functional domains in the STATs, and current attention is focused on two areas: how the STATs complete their cycle of activation and inactivation, which requires regulated tyrosine dephosphorylation; and how persistent activation of STAT3 that occurs in a high proportion of many human cancers contributes to blocking apoptosis in cancer cells. Current efforts are devoted to inhibiting STAT3 with modified peptides that can enter cells.
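The activation cycle described above follows a fixed order of steps: cytokine binding at the cell surface, tyrosine phosphorylation of latent cytoplasmic STATs, dimerization through reciprocal phosphotyrosine-SH2 contacts, nuclear accumulation, and transcription of target genes. As a minimal illustrative sketch of that ordering (all names here, such as `Stat` and `activate_pathway`, are hypothetical and not from the source), the cascade can be written as a small Python state machine:

```python
# Illustrative sketch of the Jak-STAT activation order described in the
# text. The class and function names are hypothetical, invented for this
# example; the biology is reduced to an ordered list of events.
from dataclasses import dataclass


@dataclass
class Stat:
    name: str
    tyrosine_phosphorylated: bool = False
    partner: "Stat | None" = None      # set on phosphotyrosine-SH2 dimerization
    location: str = "cytoplasm"        # latent STATs start in the cytoplasm


def activate_pathway(stat_a: Stat, stat_b: Stat, cytokine_bound: bool) -> list:
    """Walk two latent STATs through the canonical activation steps."""
    events = []
    if not cytokine_bound:
        return events                  # no cytokine: the pathway stays dormant
    # 1. Receptor-associated JAK kinases phosphorylate the latent STATs.
    stat_a.tyrosine_phosphorylated = True
    stat_b.tyrosine_phosphorylated = True
    events.append("tyrosine phosphorylation")
    # 2. Reciprocal phosphotyrosine-SH2 interaction drives dimerization.
    stat_a.partner, stat_b.partner = stat_b, stat_a
    events.append("dimerization")
    # 3. The dimer accumulates in the nucleus, binds DNA, drives transcription.
    stat_a.location = stat_b.location = "nucleus"
    events.append("nuclear accumulation")
    events.append("transcription of target genes")
    return events


events = activate_pathway(Stat("STAT1"), Stat("STAT2"), cytokine_bound=True)
print(events)
```

The only point of the sketch is the ordering: no downstream step fires unless the cytokine-binding precondition is met, mirroring the latency of the pathway in unstimulated cells.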

 

Dr. Darnell received his M.D. in 1955 from the Washington University School of Medicine. His career has included poliovirus research with Harry Eagle at the National Institute of Allergy and Infectious Diseases, research with François Jacob at the Pasteur Institute in Paris and academic appointments at the Massachusetts Institute of Technology, the Albert Einstein College of Medicine and Columbia University. In 1974 Dr. Darnell joined Rockefeller as Vincent Astor Professor, and from 1990 to 1991 he was vice president for academic affairs.

A member of the National Academy of Sciences since 1973, he has received numerous awards, including the 2012 Albany Medical Center Prize in Medicine and Biomedical Research, the 2003 National Medal of Science, the 2002 Albert Lasker Award for Special Achievement in Medical Science, the 1997 Passano Award, the 1994 Paul Janssen Prize in Advanced Biotechnology and Medicine and the 1986 Gairdner Foundation International Award.

He is the coauthor with S.E. Luria of General Virology and the founding author with Harvey Lodish and David Baltimore of Molecular Cell Biology, now in its seventh edition. His book RNA, Life’s Indispensable Molecule was published in July 2011 by Cold Spring Harbor Laboratory Press. He is a member of the American Academy of Arts and Sciences and a foreign member of The Royal Society and The Royal Swedish Academy of Sciences.

 


RNA polymerase – molecular basis for DNA transcription

Larry H. Bernstein, MD, FCAP, Curator

Leaders in Pharmaceutical Intelligence

Series E: 2; 3.1

Roger Kornberg, MD
Nobel Prize in Chemistry
Stanford University

Son of Arthur Kornberg, who received the Nobel Prize for the discovery of DNA polymerase, Roger Kornberg spent decades on the problem of transcription of the genetic code in eukaryotic cells. He made several contributions to the understanding of the transcription model, including recognition of the nucleosomal structure of DNA, characterization of chromatin-modifying factors, and discovery of the bridging factor that mediates transcriptional activation (called Mediator). The three major types of RNA are termed mRNA, tRNA, and rRNA. Kornberg recognized that chromatin consists of nucleosomes arranged along DNA like beads on a string. He exploited the lateral diffusion of molecules tethered to a lipid bilayer to pack them into two-dimensional crystals suitable for electron crystallography. Using yeast, Kornberg identified the role of RNA polymerase II and other proteins in transcribing DNA, and he created three-dimensional images of the protein cluster using X-ray crystallography. Polymerase II is used by all organisms with nuclei, including humans, to transcribe DNA.

While a graduate student working with Harden McConnell at Stanford in the late 1960s, he discovered the “flip-flop” and lateral diffusion of phospholipids in bilayer membranes. While a postdoctoral fellow working with Aaron Klug and Francis Crick at the MRC in the 1970s, Kornberg discovered the nucleosome as the basic protein complex packaging chromosomal DNA in the nucleus of eukaryotic cells (chromosomal DNA is often termed “Chromatin” when it is bound to proteins in this manner, reflecting Walther Flemming‘s discovery that certain structures within the cell nucleus would absorb dyes and become visible under a microscope).[10] Within the nucleosome, Kornberg found that roughly 200 bp of DNA are wrapped around an octamer of histone proteins.

Kornberg’s research group at Stanford later succeeded in the development of a faithful transcription system from baker’s yeast, a simple unicellular eukaryote, which they then used to isolate in a purified form all of the several dozen proteins required for the transcription process. Through the work of Kornberg and others, it has become clear that these protein components are remarkably conserved across the full spectrum of eukaryotes, from yeast to human cells.

Using this system, Kornberg made the major discovery that transmission of gene regulatory signals to the RNA polymerase machinery is accomplished by an additional protein complex that they dubbed Mediator.[11] As noted by the Nobel Prize committee, “the great complexity of eukaryotic organisms is actually enabled by the fine interplay between tissue-specific substances, enhancers in the DNA and Mediator. The discovery of Mediator is therefore a true milestone in the understanding of the transcription process.”[12]

Kornberg took advantage of expertise with lipid membranes gained from his graduate studies to devise a technique for the formation of two-dimensional protein crystals on lipid bilayers. These 2D crystals could then be analyzed using electron microscopy to derive low-resolution images of the protein’s structure. Eventually, Kornberg was able to use X-ray crystallography to solve the 3-dimensional structure of RNA polymerase at atomic resolution.[13][14] He extended these studies to obtain structural images of RNA polymerase associated with accessory proteins.[15] Through these studies, Kornberg created an actual picture of how transcription works at a molecular level.

“I measured the molecular weight of the purified H3/H4 preparation by equilibrium ultracentrifugation, while Jean Thomas offered to analyze the material by chemical cross-linking. Both methods showed unequivocally that H3 and H4 were in the form of a double dimer, an (H3)2(H4)2 tetramer (Kornberg and Thomas, 1974). I pondered this result for days, and came to the following conclusions (Kornberg, 1974). First, the exact equivalence of H3 and H4 in the tetramer implied that the differences in relative amounts of the histones from various sources measured in the past must be due to experimental error. This and the stoichiometry of the tetramer implied a unit of structure in chromatin based on two each of the four histones, or an (H2A)2(H2B)2(H3)2(H4)2 octamer. Second, since chromatin from all sources contains roughly one of each histone for every 100 bp of DNA, a histone octamer would be associated with 200 bp of DNA. Finally, the (H3)2(H4)2 tetramer was reminiscent of hemoglobin, an α2β2 tetramer. The X-ray structures of hemoglobin and other oligomeric proteins available at the time were compact, with no holes through which a molecule the size of DNA might pass. Rather, the DNA in chromatin must be wrapped on the outside of the histone octamer.

As I turned these ideas over in my mind, it struck me how I might explain the results of Hewish and Burgoyne. What if their sedimentation coefficient of unit length DNA fragments was measured under neutral rather than alkaline conditions? Then the DNA would have been double stranded and about 250 bp in length. Allowing for the approximate nature of the result, the correspondence with my prediction of 200 bp was electrifying. Then I recalled a reference near the end of the Hewish and Burgoyne paper to a report of a similar pattern of DNA fragments by Williamson. I rushed to the library and found that Williamson had obtained a ladder of DNA fragments from the cytoplasm of necrotic cells and measured the unit size by sedimentation under neutral conditions: the result was 205 bp! … with colleagues in Cambridge, I proved the existence of the histone octamer and the equivalence of the 200 bp unit with the particle seen in the electron microscope (Kornberg, 1977). This chapter of the chromatin story concluded with the X-ray crystal structure determination of the particle, now known as the nucleosome, showing a histone octamer surrounded by DNA, in near atomic detail (Luger et al., 1997).

I had decided to pursue the function rather than the structure of the nucleosome, and was joined in this by Yahli Lorch, who became my lifelong partner in chromatin research, and also my partner in life. We investigated the consequences of the nucleosome for transcription. It was believed that histones are generally inhibitory to transcription. We found, to the contrary, that RNA polymerases are capable of reading right through a nucleosome. Coiling of promoter DNA in a nucleosome, however, abolished initiation by RNA polymerase II (pol II) (Lorch et al., 1987). This finding, together with genetic studies of Michael Grunstein and colleagues, identified a regulatory role of the nucleosome in transcription. It has since emerged that nucleosomes play regulatory roles in a wide range of chromosomal transactions. A whole new field has emerged, one of the most active in bioscience today. It involves a bewildering variety of posttranslational modifications of the histones, and a protein machinery of great complexity for applying, recognizing, and removing these modifications.”
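The stoichiometric argument quoted above — one copy of each core histone per roughly 100 bp of DNA, combined with the (H3)2(H4)2 tetramer result implying two copies of each histone per repeating unit — can be checked with a few lines of arithmetic. This is an illustrative sketch only; the variable names are my own, not from the source:

```python
# Illustrative arithmetic for the histone octamer argument. One of each
# of the four core histones per ~100 bp of DNA, and two copies of each
# histone per repeating unit (from the tetramer result), together imply
# an eight-histone octamer associated with ~200 bp of DNA.
core_histones = ["H2A", "H2B", "H3", "H4"]
bp_per_single_histone_set = 100   # ~1 of each histone per 100 bp of DNA
copies_per_unit = 2               # tetramer result: 2 x H3 and 2 x H4

octamer_size = copies_per_unit * len(core_histones)
bp_per_octamer = copies_per_unit * bp_per_single_histone_set

print(octamer_size)    # histones in the octamer
print(bp_per_octamer)  # bp of DNA per nucleosome repeat
```

The ~200 bp figure is what made the Hewish and Burgoyne fragment ladder, and Williamson’s 205 bp measurement, so striking a match.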

 

