
Disease related changes in proteomics, protein folding, protein-protein interaction

Curator: Larry H. Bernstein, MD, FCAP



Frankenstein Proteins Stitched Together by Scientists



The Frankenstein monster, stitched together from disparate body parts, proved to be an abomination, but stitched together proteins may fare better. They may, for example, serve specific purposes in medicine, research, and industry. At least, that’s the ambition of scientists based at the University of North Carolina. They have developed a computational protocol called SEWING that builds new proteins from connected or disconnected pieces of existing structures. [Wikipedia]

Unlike Victor Frankenstein, who betrayed Promethean ambition when he sewed together his infamous creature, today’s biochemists are relatively modest. Rather than defy nature, they emulate it. For example, at the University of North Carolina (UNC), researchers have taken inspiration from natural evolutionary mechanisms to develop a technique called SEWING—Structure Extension With Native-substructure Graphs. SEWING is a computational protocol that describes how to stitch together new proteins from connected or disconnected pieces of existing structures.

“We can now begin to think about engineering proteins to do things that nothing else is capable of doing,” said UNC’s Brian Kuhlman, Ph.D. “The structure of a protein determines its function, so if we are going to learn how to design new functions, we have to learn how to design new structures. Our study is a critical step in that direction and provides tools for creating proteins that haven’t been seen before in nature.”

Traditionally, researchers have used computational protein design to recreate in the laboratory what already exists in the natural world. In recent years, their focus has shifted toward inventing novel proteins with new functionality. These design projects all start with a specific structural “blueprint” in mind, and as a result are limited. Dr. Kuhlman and his colleagues, however, believe that by removing the limitations of a predetermined blueprint and taking cues from evolution they can more easily create functional proteins.

Dr. Kuhlman’s UNC team developed a protein design approach that emulates natural mechanisms for shuffling tertiary structures such as pleats, coils, and furrows. Putting the approach into action, the UNC team mapped 50,000 stitched-together proteins on the computer and then produced 21 promising structures in the laboratory. Details of this work appeared May 6 in the journal Science, in an article entitled, “Design of Structurally Distinct Proteins Using Strategies Inspired by Evolution.”
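The stitching idea behind SEWING can be caricatured as a graph traversal: substructures taken from existing proteins become nodes, and an edge connects two substructures whose backbone segments overlap closely enough to be joined. The sketch below is a toy illustration of that idea only; the substructure names, the overlap test, and the assembly routine are all invented for this example and are not the published SEWING code.

```python
# Toy sketch of the SEWING concept (not the authors' implementation):
# substructures from existing proteins are graph nodes; an edge connects
# two substructures whose ends "superimpose" (here, a trivial label match).
from itertools import product

# Hypothetical substructures: tuples of helix labels standing in for
# short stretches of tertiary structure.
substructures = {
    "A": ("h1", "h2"),
    "B": ("h2", "h3"),
    "C": ("h3", "h4"),
    "D": ("h1", "h4"),
}

def compatible(s1, s2):
    """Two substructures can be stitched if the last segment of s1
    overlaps the first segment of s2 (toy overlap criterion)."""
    return substructures[s1][-1] == substructures[s2][0]

# Build the substructure graph.
edges = {(a, b) for a, b in product(substructures, repeat=2)
         if a != b and compatible(a, b)}

def assemble(start, length):
    """Enumerate chimeric backbones of `length` nodes by walking the graph."""
    paths = [[start]]
    for _ in range(length - 1):
        paths = [p + [n] for p in paths for m, n in edges if m == p[-1]]
    return paths

print(assemble("A", 3))  # [['A', 'B', 'C']]
```

In the real protocol the compatibility test is structural superposition of backbone fragments, and tens of thousands of such assemblies were screened computationally before 21 designs were taken to the bench.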

“Helical proteins designed with SEWING contain structural features absent from other de novo designed proteins and, in some cases, remain folded at more than 100°C,” wrote the authors. “High-resolution structures of the designed proteins CA01 and DA05R1 were solved by x-ray crystallography (2.2 angstrom resolution) and nuclear magnetic resonance, respectively, and there was excellent agreement with the design models.”

Essentially, the UNC scientists confirmed that the proteins they had synthesized contained the unique structural varieties that had been designed on the computer. The UNC scientists also determined that the structures they had created had new surface and pocket features. Such features, they noted, provide potential binding sites for ligands or macromolecules.

“We were excited that some had clefts or grooves on the surface, regions that naturally occurring proteins use for binding other proteins,” said the Science article’s first author, Tim M. Jacobs, Ph.D., a former graduate student in Dr. Kuhlman’s laboratory. “That’s important because if we wanted to create a protein that can act as a biosensor to detect a certain metabolite in the body, either for diagnostic or research purposes, it would need to have these grooves. Likewise, if we wanted to develop novel therapeutics, they would also need to attach to specific proteins.”

Currently, the UNC researchers are using SEWING to create proteins that can bind to several other proteins at a time. Many of the most important proteins are such multitaskers, including the blood protein hemoglobin.


Histone Mutation Deranges DNA Methylation to Cause Cancer



In some cancers, including chondroblastoma and a rare form of childhood sarcoma, a mutation in histone H3 reduces global levels of methylation (dark areas) in tumor cells but not in normal cells (arrowhead). The mutation locks the cells in a proliferative state to promote tumor development. [Laboratory of Chromatin Biology and Epigenetics at The Rockefeller University]

They have been called oncohistones, the mutated histones that are known to accompany certain pediatric cancers. Despite their suggestive moniker, oncohistones have kept their oncogenic secrets. For example, it has been unclear whether oncohistones are able to cause cancer on their own, or whether they need to act in concert with additional DNA mutations, that is, mutations other than those affecting histone structures.

While oncohistone mechanisms remain poorly understood, this particular question—the oncogenicity of lone oncohistones—has been resolved, at least in part. According to researchers based at The Rockefeller University, a change to the structure of a histone can trigger a tumor on its own.

This finding appeared May 13 in the journal Science, in an article entitled, “Histone H3K36 Mutations Promote Sarcomagenesis Through Altered Histone Methylation Landscape.” The article describes the Rockefeller team’s study of a histone protein called H3, which has been found in about 95% of samples of chondroblastoma, a benign tumor that arises in cartilage, typically during adolescence.

The Rockefeller scientists found that the H3 lysine 36–to–methionine (H3K36M) mutation impairs the differentiation of mesenchymal progenitor cells and generates undifferentiated sarcoma in vivo.

After the scientists inserted the H3 histone mutation into mouse mesenchymal progenitor cells (MPCs)—which generate cartilage, bone, and fat—they watched these cells lose the ability to differentiate in the lab. Next, the scientists injected the mutant cells into living mice, and the animals developed tumors rich in MPCs, known as undifferentiated sarcomas. Finally, the researchers tried to understand how the mutation causes the tumors to develop.

The scientists determined that H3K36M mutant nucleosomes inhibit the enzymatic activities of several H3K36 methyltransferases.

“Depleting H3K36 methyltransferases, or expressing an H3K36I mutant that similarly inhibits H3K36 methylation, is sufficient to phenocopy the H3K36M mutation,” the authors of the Science study wrote. “After the loss of H3K36 methylation, a genome-wide gain in H3K27 methylation leads to a redistribution of polycomb repressive complex 1 and de-repression of its target genes known to block mesenchymal differentiation.”

Essentially, when the H3K36M mutation occurs, the cell becomes locked in a proliferative state—meaning it divides constantly, leading to tumors. Specifically, the mutation inhibits enzymes that normally tag the histone with chemical groups known as methyls, a modification that allows genes to be expressed normally.

In response to this lack of modification, another part of the histone becomes overmodified, or tagged with too many methyl groups. “This leads to an overall resetting of the landscape of chromatin, the complex of DNA and its associated factors, including histones,” explained co-author Peter Lewis, Ph.D., a professor at the University of Wisconsin-Madison and a former postdoctoral fellow in the laboratory of C. David Allis, Ph.D., a professor at Rockefeller.

The finding—that a “resetting” of the chromatin landscape can lock the cell into a proliferative state—suggests that researchers should be on the hunt for more mutations in histones that might be driving tumors. For their part, the Rockefeller researchers are trying to learn more about how this specific mutation in histone H3 causes tumors to develop.

“We want to know which pathways cause the mesenchymal progenitor cells that carry the mutation to continue to divide, and not differentiate into the bone, fat, and cartilage cells they are destined to become,” said co-author Chao Lu, Ph.D., a postdoctoral fellow in the Allis lab.

Once researchers understand more about these pathways, added Dr. Lewis, they can consider ways of blocking them with drugs, particularly in tumors such as MPC-rich sarcomas—which, unlike chondroblastoma, can be deadly. In fact, drugs that block these pathways may already exist and may even be in use for other types of cancers.

“One long-term goal of our collaborative team is to better understand fundamental mechanisms that drive these processes, with the hope of providing new therapeutic approaches,” concluded Dr. Allis.


Histone H3K36 mutations promote sarcomagenesis through altered histone methylation landscape

Chao Lu, Siddhant U. Jain, Dominik Hoelper, …, C. David Allis, Nada Jabado, Peter W. Lewis
Science 13 May 2016; 352(6287):844–849. http://dx.doi.org/10.1126/science.aac7272  http://science.sciencemag.org/content/352/6287/844

An oncohistone deranges inhibitory chromatin

Missense mutations (that change one amino acid for another) in histone H3 can produce a so-called oncohistone and are found in a number of pediatric cancers. For example, the lysine-36–to-methionine (K36M) mutation is seen in almost all chondroblastomas. Lu et al. show that K36M mutant histones are oncogenic, and they inhibit the normal methylation of this same residue in wild-type H3 histones. The mutant histones also interfere with the normal development of bone-related cells and the deposition of inhibitory chromatin marks.

Science, this issue p. 844

Several types of pediatric cancers reportedly contain high-frequency missense mutations in histone H3, yet the underlying oncogenic mechanism remains poorly characterized. Here we report that the H3 lysine 36–to–methionine (H3K36M) mutation impairs the differentiation of mesenchymal progenitor cells and generates undifferentiated sarcoma in vivo. H3K36M mutant nucleosomes inhibit the enzymatic activities of several H3K36 methyltransferases. Depleting H3K36 methyltransferases, or expressing an H3K36I mutant that similarly inhibits H3K36 methylation, is sufficient to phenocopy the H3K36M mutation. After the loss of H3K36 methylation, a genome-wide gain in H3K27 methylation leads to a redistribution of polycomb repressive complex 1 and de-repression of its target genes known to block mesenchymal differentiation. Our findings are mirrored in human undifferentiated sarcomas in which novel K36M/I mutations in H3.1 are identified.


Mitochondria? We Don’t Need No Stinking Mitochondria!


Diagram comparing typical eukaryotic cell to the newly discovered mitochondria-free organism. [Karnkowska et al., 2016, Current Biology 26, 1–11]
  • The organelle that produces a significant portion of energy for eukaryotic cells would seemingly be indispensable, yet over the years, a number of organisms have been discovered that challenge that biological premise. However, these so-called amitochondrial species may lack a defined organelle, but they still retain some residual functions of their mitochondria-containing brethren. Even the intestinal eukaryotic parasite Giardia intestinalis, which was for many years considered to be mitochondria-free, was proven recently to contain a considerably shriveled version of the organelle.
  • Now, an international group of scientists has released results from a new study that challenges the notion that mitochondria are essential for eukaryotes—discovering an organism that resides in the gut of chinchillas that contains absolutely no trace of mitochondria at all.
  • “In low-oxygen environments, eukaryotes often possess a reduced form of the mitochondrion, but it was believed that some of the mitochondrial functions are so essential that these organelles are indispensable for their life,” explained lead study author Anna Karnkowska, Ph.D., visiting scientist at the University of British Columbia in Vancouver. “We have characterized a eukaryotic microbe which indeed possesses no mitochondrion at all.”


Mysterious Eukaryote Missing Mitochondria

Researchers uncover the first example of a eukaryotic organism that lacks the organelles.

By Anna Azvolinsky | May 12, 2016




Scientists have long thought that mitochondria—organelles responsible for energy generation—are an essential and defining feature of a eukaryotic cell. Now, researchers from Charles University in Prague and their colleagues are challenging this notion with their discovery of a eukaryotic organism, Monocercomonoides species PA203, which lacks mitochondria. The team’s phylogenetic analysis, published today (May 12) in Current Biology, suggests that Monocercomonoides, which belong to the Oxymonadida group of protozoa and live in low-oxygen environments, did have mitochondria at one point, but eventually lost the organelles.

“This is quite a groundbreaking discovery,” said Thijs Ettema, who studies microbial genome evolution at Uppsala University in Sweden and was not involved in the work.

“This study shows that mitochondria are not so central for all lineages of living eukaryotes,” Toni Gabaldón of the Center for Genomic Regulation in Barcelona, Spain, who also was not involved in the work, wrote in an email to The Scientist. “Yet, this mitochondrial-devoid, single-cell eukaryote is as complex as other eukaryotic cells in almost any other aspect of cellular complexity.”

Charles University’s Vladimir Hampl studies the evolution of protists. Along with Anna Karnkowska and colleagues, Hampl decided to sequence the genome of Monocercomonoides, a little-studied protist that lives in the digestive tracts of vertebrates. The 75-megabase genome—the first of an oxymonad—did not contain any conserved genes found on mitochondrial genomes of other eukaryotes, the researchers found. It also did not contain any nuclear genes associated with mitochondrial functions.

“It was surprising and for a long time, we didn’t believe that the [mitochondria-associated genes were really not there]. We thought we were missing something,” Hampl told The Scientist. “But when the data kept accumulating, we switched to the hypothesis that this organism really didn’t have mitochondria.”

Because researchers had previously found no examples of eukaryotes without some form of mitochondria, the current theory of the origin of eukaryotes posits that the appearance of mitochondria was crucial to the identity of these organisms.

Researchers now view these mitochondria-like organelles as a continuum, ranging from full mitochondria to highly reduced forms. Some anaerobic protists, for example, have only pared-down versions of mitochondria, such as hydrogenosomes and mitosomes, which lack a mitochondrial genome. But these mitochondrion-like organelles perform essential functions of the iron-sulfur cluster assembly pathway, which is known to be conserved in virtually all eukaryotic organisms studied to date.

Yet, in their analysis, the researchers found no evidence of the presence of any components of this mitochondrial pathway.

Just as some organisms scaled their mitochondria down into mitosomes, the ancestors of modern Monocercomonoides once had mitochondria, which they later lost entirely. “Because this organism is phylogenetically nested among relatives that had conventional mitochondria, this is most likely a secondary adaptation,” said Michael Gray, a biochemist who studies mitochondria at Dalhousie University in Nova Scotia and was not involved in the study. According to Gray, the finding of a mitochondria-deficient eukaryote does not mean that the organelles did not play a major role in the evolution of eukaryotic cells.

To be sure they were not missing mitochondrial proteins, Hampl’s team also searched for potential mitochondrial protein homologs of other anaerobic species, and for signature sequences of a range of known mitochondrial proteins. While similar searches with other species uncovered a few mitochondrial proteins, the team’s analysis of Monocercomonoides came up empty.
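The logic of such a presence/absence survey can be sketched in a few lines. The snippet below is purely illustrative (the hallmark family names are placeholders I chose, not the study's curated marker set, and real pipelines use sequence-similarity searches rather than exact name matching): a proteome's annotated families are intersected with a reference set of hallmark mitochondrial families, and an empty intersection is what supports a "no mitochondrion" call.

```python
# Illustrative sketch (not the authors' pipeline) of a presence/absence
# check: compare a predicted proteome's annotated protein families against
# a reference set of hallmark mitochondrial families.

# Hypothetical hallmark families (placeholder names, not a curated list).
MITO_HALLMARKS = {"ISCU", "NFS1", "FXN", "TOM40", "TIM23", "HSP60_mt"}

def mito_evidence(annotated_families):
    """Return the hallmark mitochondrial families found in a proteome."""
    return MITO_HALLMARKS & set(annotated_families)

# A Giardia-like proteome retains mitosome ISC components...
giardia_like = {"ISCU", "NFS1", "tubulin", "actin"}
# ...whereas a Monocercomonoides-like proteome carries a bacterial SUF
# system instead and no hallmark mitochondrial families at all.
monocercomonoides_like = {"SufB", "SufC", "SufS", "tubulin", "actin"}

print(sorted(mito_evidence(giardia_like)))            # ['ISCU', 'NFS1']
print(sorted(mito_evidence(monocercomonoides_like)))  # []
```

As the article notes, proving absence is harder than proving presence, which is why the team additionally searched for homologs of anaerobic species' mitochondrial proteins and for signature sequences before accepting the empty result.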

“The data is very complete,” said Ettema. “It is difficult to prove the absence of something but [these authors] do a convincing job.”

The team discovered that, to form the essential iron-sulfur clusters, Monocercomonoides uses a sulfur mobilization system found in the cytosol, and that an ancestor of the organism acquired this system by lateral gene transfer from bacteria. This cytosolic, compensating system allowed Monocercomonoides to lose the otherwise essential iron-sulfur cluster-forming pathway in the mitochondrion, the team proposed.

“This work shows the great evolutionary plasticity of the eukaryotic cell,” said Karnkowska, who participated in the study while she was a postdoc at Charles University. Karnkowska, who is now a visiting researcher at the University of British Columbia in Canada, added: “This is a striking example of how far the evolution of a eukaryotic cell can go that was beyond our expectations.”

“The results highlight how many surprises may await us in the poorly studied eukaryotic phyla that live in under-explored environments,” Gabaldon said.

Ettema agreed. “Now that we’ve found one, we need to look at the bigger picture and see if there are other examples of eukaryotes that have lost their mitochondria, to understand how adaptable eukaryotes are.”

  1. Karnkowska et al., “A eukaryote without a mitochondrial organelle,” Current Biology, doi:10.1016/j.cub.2016.03.053, 2016.



A Eukaryote without a Mitochondrial Organelle

Anna Karnkowska, Vojtěch Vacek, Zuzana Zubáčová, …, Čestmír Vlček, Vladimír Hampl
DOI: http://dx.doi.org/10.1016/j.cub.2016.03.053



  • Monocercomonoides sp. is a eukaryotic microorganism with no mitochondria
  • The complete absence of mitochondria is a secondary loss, not an ancestral feature
  • The essential mitochondrial ISC pathway was replaced by a bacterial SUF system

The presence of mitochondria and related organelles in every studied eukaryote supports the view that mitochondria are essential cellular components. Here, we report the genome sequence of a microbial eukaryote, the oxymonad Monocercomonoides sp., which revealed that this organism lacks all hallmark mitochondrial proteins. Crucially, the mitochondrial iron-sulfur cluster assembly pathway, thought to be conserved in virtually all eukaryotic cells, has been replaced by a cytosolic sulfur mobilization system (SUF) acquired by lateral gene transfer from bacteria. In the context of eukaryotic phylogeny, our data suggest that Monocercomonoides is not primitively amitochondrial but has lost the mitochondrion secondarily. This is the first example of a eukaryote lacking any form of a mitochondrion, demonstrating that this organelle is not absolutely essential for the viability of a eukaryotic cell.



HIV Particles Used to Trap Intact Mammalian Protein Complexes

Belgian scientists from VIB and UGent developed Virotrap, a viral particle sorting approach for purifying protein complexes under native conditions.


This method catches a bait protein together with its associated protein partners in virus-like particles that are budded from human cells. In this way, cell lysis is not needed and protein complexes are preserved during purification.

With his feet in both a proteomics lab and an interactomics lab, VIB/UGent professor Sven Eyckerman is well aware of the shortcomings of conventional approaches to analyze protein complexes. The lysis conditions required in mass spectrometry–based strategies to break open cell membranes often affect protein-protein interactions. “The first step in a classical study on protein complexes essentially turns the highly organized cellular structure into a big messy soup”, Eyckerman explains.

Inspired by virus biology, Eyckerman came up with a creative solution. “We used the natural process of HIV particle formation to our benefit by hacking a completely safe form of the virus to abduct intact protein machines from the cell.” It is well known that the HIV virus captures a number of host proteins during its particle formation. By fusing a bait protein to the HIV-1 GAG protein, interaction partners become trapped within virus-like particles that bud from mammalian cells. Standard proteomic approaches are used next to reveal the content of these particles. Fittingly, the team named the method ‘Virotrap’.

The Virotrap approach is exceptional as protein networks can be characterized under natural conditions. By trapping protein complexes in the protective environment of a virus-like shell, the intact complexes are preserved during the purification process. The researchers showed the method was suitable for detection of known binary interactions as well as mass spectrometry-based identification of novel protein partners.

Virotrap is a textbook example of bringing research teams with complementary expertise together. Cross-pollination with the labs of Jan Tavernier (VIB/UGent) and Kris Gevaert (VIB/UGent) enabled the development of this platform.

Jan Tavernier: “Virotrap represents a new concept in co-complex analysis wherein complex stability is physically guaranteed by a protective, physical structure. It is complementary to the arsenal of existing interactomics methods, but also holds potential for other fields, like drug target characterization. We also developed a small molecule-variant of Virotrap that could successfully trap protein partners for small molecule baits.”

Kris Gevaert: “Virotrap can also impact our understanding of disease pathways. We were actually surprised to see that this virus-based system could be used to study antiviral pathways, like Toll-like receptor signaling. Understanding these protein machines in their natural environment is essential if we want to modulate their activity in pathology.”


Trapping mammalian protein complexes in viral particles

Sven Eyckerman, Kevin Titeca, …, Kris Gevaert, Jan Tavernier
Nature Communications Apr 2016; 7(11416). http://dx.doi.org/10.1038/ncomms11416

Cell lysis is an inevitable step in classical mass spectrometry–based strategies to analyse protein complexes. Complementary lysis conditions, in situ cross-linking strategies and proximal labelling techniques are currently used to reduce lysis effects on the protein complex. We have developed Virotrap, a viral particle sorting approach that obviates the need for cell homogenization and preserves the protein complexes during purification. By fusing a bait protein to the HIV-1 GAG protein, we show that interaction partners become trapped within virus-like particles (VLPs) that bud from mammalian cells. Using an efficient VLP enrichment protocol, Virotrap allows the detection of known binary interactions and MS-based identification of novel protein partners as well. In addition, we show the identification of stimulus-dependent interactions and demonstrate trapping of protein partners for small molecules. Virotrap constitutes an elegant complementary approach to the arsenal of methods to study protein complexes.

Proteins mostly exert their function within supramolecular complexes. Strategies for detecting protein–protein interactions (PPIs) can be roughly divided into genetic systems1 and co-purification strategies combined with mass spectrometry (MS) analysis (for example, AP–MS)2. The latter approaches typically require cell or tissue homogenization using detergents, followed by capture of the protein complex using affinity tags3 or specific antibodies4. The protein complexes extracted from this ‘soup’ of constituents are then subjected to several washing steps before actual analysis by trypsin digestion and liquid chromatography–MS/MS analysis. Such lysis and purification protocols are typically empirical and have mostly been optimized using model interactions in single labs. In fact, lysis conditions can profoundly affect the number of both specific and nonspecific proteins that are identified in a typical AP–MS set-up. Indeed, recent studies using the nuclear pore complex as a model protein complex describe optimization of purifications for the different proteins in the complex by examining 96 different conditions5. Nevertheless, for new purifications, it remains hard to correctly estimate the loss of factors in a standard AP–MS experiment due to washing and dilution effects during treatments (that is, false negatives). These considerations have pushed the concept of stabilizing PPIs before the actual homogenization step. A classical approach involves cross-linking with simple reagents (for example, formaldehyde) or with more advanced isotope-labelled cross-linkers (reviewed in ref. 2). However, experimental challenges such as cell permeability and reactivity still preclude the widespread use of cross-linking agents. Moreover, MS-generated spectra of cross-linked peptides are notoriously difficult to identify correctly. 
A recent lysis-independent solution involves the expression of a bait protein fused to a promiscuous biotin ligase, which results in labelling of proteins proximal to the activity of the enzyme-tagged bait protein [6]. When compared with AP–MS, this BioID approach delivers a complementary set of candidate proteins, including novel interaction partners [7,8]. Such studies clearly underscore the need for complementary approaches in co-complex strategies.

The evolutionary stress on viruses promoted highly condensed coding of information and maximal functionality for small genomes. Accordingly, for HIV-1 it is sufficient to express a single protein, the p55 GAG protein, for efficient production of virus-like particles (VLPs) from cells [9,10]. This protein is highly mobile before its accumulation in cholesterol-rich regions of the membrane, where multimerization initiates the budding process [11]. A total of 4,000–5,000 GAG molecules is required to form a single particle of about 145 nm (ref. 12). Both VLPs and mature viruses contain a number of host proteins that are recruited by binding to viral proteins. These proteins can either contribute to the infectivity (for example, Cyclophilin/FKBPA [13]) or act as antiviral proteins preventing the spreading of the virus (for example, APOBEC proteins [14]).
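A quick back-of-the-envelope calculation (mine, not the paper's) makes those packing numbers concrete: if roughly 4,500 GAG molecules tile the surface of a sphere about 145 nm in diameter, each molecule occupies on the order of 15 nm².

```python
# Back-of-the-envelope estimate of GAG packing density in a VLP.
# The diameter and molecule count come from the text above; the per-molecule
# area is a simple geometric estimate, not a measured value.
import math

diameter_nm = 145.0
n_gag = 4500  # midpoint of the 4,000-5,000 range quoted above

surface_nm2 = 4.0 * math.pi * (diameter_nm / 2.0) ** 2  # sphere surface area
area_per_gag = surface_nm2 / n_gag

print(round(surface_nm2))        # ~66,000 nm^2 total surface
print(round(area_per_gag, 1))    # ~15 nm^2 per GAG molecule
```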

We here describe the development and application of Virotrap, an elegant co-purification strategy based on the trapping of a bait protein together with its associated protein partners in VLPs that are budded from the cell. After enrichment, these particles can be analysed by targeted (for example, western blotting) or unbiased approaches (MS-based proteomics). Virotrap allows detection of known binary PPIs, analysis of protein complexes and their dynamics, and readily detects protein binders for small molecules.

Concept of the Virotrap system

Classical AP–MS approaches rely on cell homogenization to access protein complexes, a step that can vary significantly with the lysis conditions (detergents, salt concentrations, pH conditions and so on) [5]. To eliminate the homogenization step in AP–MS, we reasoned that incorporation of a protein complex inside a secreted VLP traps the interaction partners under native conditions and protects them during further purification. We thus explored the possibility of protein complex packaging by the expression of GAG-bait protein chimeras (Fig. 1) as expression of GAG results in the release of VLPs from the cells [9,10]. As a first PPI pair to evaluate this concept, we selected the HRAS protein as a bait combined with the RAF1 prey protein. We were able to specifically detect the HRAS–RAF1 interaction following enrichment of VLPs via ultracentrifugation (Supplementary Fig. 1a). To prevent tedious ultracentrifugation steps, we designed a novel single-step protocol wherein we co-express the vesicular stomatitis virus glycoprotein (VSV-G) together with a tagged version of this glycoprotein in addition to the GAG bait and prey. Both tagged and untagged VSV-G proteins are probably presented as trimers on the surface of the VLPs, allowing efficient antibody-based recovery from large volumes. The HRAS–RAF1 interaction was confirmed using this single-step protocol (Supplementary Fig. 1b). No associations with unrelated bait or prey proteins were observed for both protocols.

Figure 1: Schematic representation of the Virotrap strategy.



Expression of a GAG-bait fusion protein (1) results in submembrane multimerization (2) and subsequent budding of VLPs from cells (3). Interaction partners of the bait protein are also trapped within these VLPs and can be identified after purification by western blotting or MS analysis (4).

Virotrap for the detection of binary interactions

We next explored the reciprocal detection of a set of PPI pairs, which were selected based on published evidence and cytosolic localization [15]. After single-step purification and western blot analysis, we could readily detect reciprocal interactions between CDK2 and CKS1B, LCP2 and GRAP2, and S100A1 and S100B (Fig. 2a). Only for the LCP2 prey did we observe nonspecific association with an irrelevant bait construct. However, the particle levels of the GRAP2 bait were substantially lower as compared with those of the GAG control construct (GAG protein levels in VLPs; Fig. 2a, second panel of the LCP2 prey). After quantification of the intensities of bait and prey proteins and normalization of prey levels using bait levels, we observed a strong enrichment for the GAG-GRAP2 bait (Supplementary Fig. 2).
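The normalization step described above is simple enough to sketch. The snippet below is an illustration with invented intensity values, not the study's data: each prey signal is divided by the bait signal in the same particle preparation, so a bait that buds poorly but still recovers abundant prey shows up as enriched rather than being penalized for its low particle levels.

```python
# Minimal sketch of bait-normalized prey quantification (illustrative
# numbers, not the paper's measurements).

def normalized_prey(prey_intensity, bait_intensity):
    """Prey signal per unit of bait (GAG fusion) signal in a VLP prep."""
    return prey_intensity / bait_intensity

# Hypothetical western blot quantifications (arbitrary units):
# the GAG control buds well but carries little prey, while the GAG-bait
# fusion buds poorly yet recovers substantial prey.
gag_control = normalized_prey(prey_intensity=30.0, bait_intensity=100.0)
gag_bait = normalized_prey(prey_intensity=24.0, bait_intensity=10.0)

enrichment = gag_bait / gag_control
print(round(enrichment, 1))  # 8.0-fold enrichment over the control
```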


Virotrap for unbiased discovery of novel interactions

For the detection of novel interaction partners, we scaled up VLP production and purification protocols (Supplementary Fig. 5 and Supplementary Note 1 for an overview of the protocol) and investigated protein partners trapped using the following bait proteins: Fas-associated via death domain (FADD), A20 (TNFAIP3), nuclear factor-κB (NF-κB) essential modifier (IKBKG), TRAF family member-associated NF-κB activator (TANK), MYD88 and ring finger protein 41 (RNF41). To obtain specific interactors from the lists of identified proteins, we challenged the data with a combined protein list of 19 unrelated Virotrap experiments (Supplementary Table 1 for an overview). Figure 3 shows the design and the list of candidate interactors obtained after removal of all proteins that were found in the 19 control samples (including removal of proteins from the control list identified with a single peptide). The remaining list of confident protein identifications (identified with at least two peptides in at least two biological repeats) reveals both known and novel candidate interaction partners. All candidate interactors including single peptide protein identifications are given in Supplementary Data 2 and also include recurrent protein identifications of known interactors based on a single peptide; for example, CASP8 for FADD and TANK for NEMO. Using alternative methods, we confirmed the interaction between A20 and FADD, and the associations with transmembrane proteins (insulin receptor and insulin-like growth factor receptor 1) that were captured using RNF41 as a bait (Supplementary Fig. 6). To address the use of Virotrap for the detection of dynamic interactions, we activated the NF-κB pathway via the tumour necrosis factor (TNF) receptor (TNFRSF1A) using TNFα (TNF) and performed Virotrap analysis using A20 as bait (Fig. 3). This resulted in the additional enrichment of receptor-interacting kinase (RIPK1), TNFR1-associated via death domain (TRADD), TNFRSF1A and TNF itself, confirming the expected activated complex [20].
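The two-stage filter described above can be sketched directly. The example below is illustrative only (protein names, peptide counts, and the control set are invented; the study's actual thresholds are as stated in the text): first drop every protein seen in any unrelated control run, even with a single peptide there, then keep only proteins identified with at least two peptides in at least two biological repeats.

```python
# Sketch of the candidate-interactor filtering logic (illustrative data,
# not the study's identifications).

def confident_interactors(repeats, control_proteins):
    """repeats: one {protein: peptide_count} dict per biological repeat of
    the bait experiment; control_proteins: every protein seen in the
    unrelated control Virotrap runs (single-peptide hits included)."""
    # Stage 1: discard anything that ever appeared in a control run.
    candidates = set().union(*repeats) - set(control_proteins)
    # Stage 2: require >= 2 peptides in >= 2 biological repeats.
    confident = set()
    for protein in candidates:
        strong_repeats = sum(1 for rep in repeats if rep.get(protein, 0) >= 2)
        if strong_repeats >= 2:
            confident.add(protein)
    return confident

# Hypothetical repeats for one bait: two partners vs sticky background.
repeats = [
    {"TAX1BP1": 5, "ITCH": 3, "HSPA8": 1, "TUBB": 4},
    {"TAX1BP1": 4, "ITCH": 2, "HSPA8": 2},
]
controls = {"TUBB", "ACTB", "GAPDH"}  # recurrent background in control runs

print(sorted(confident_interactors(repeats, controls)))
# ['ITCH', 'TAX1BP1']  (HSPA8 fails the 2-peptide/2-repeat rule,
#                       TUBB is removed as control background)
```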

Figure 3: Use of Virotrap for unbiased interactome analysis


Figure 4: Use of Virotrap for detection of protein partners of small molecules.



Lysis conditions used in AP–MS strategies are critical for the preservation of protein complexes. A multitude of lysis conditions have been described, culminating in a recent report in which protein complex stability was assessed under 96 lysis/purification protocols5. Moreover, the authors suggest optimizing the conditions for every complex, implying a substantial workload for researchers embarking on protein complex analysis using classical AP–MS. As lysis profoundly changes the subcellular context and significantly alters protein concentrations, loss of complex integrity during a classical AP–MS protocol can be expected. A clear evolution towards ‘lysis-independent’ approaches in the co-complex analysis field is evident with the introduction of BioID6 and APEX25, where proximal proteins, including proteins residing in the complex, are labelled with biotin by an enzymatic activity fused to a bait protein. A side-by-side comparison between classical AP–MS and BioID showed overlapping and unique candidate binding proteins for both approaches7,8, supporting the notion that complementary methods are needed to provide a comprehensive view of protein complexes. This has also been clearly demonstrated for binary approaches15 and is a logical consequence of the heterogeneous nature of PPIs (binding mechanism, requirement for post-translational modifications, location, affinity and so on).

In this report, we explore an alternative yet complementary method to isolate protein complexes without interfering with cellular integrity. By trapping protein complexes in the protective environment of a virus-like shell, the intact complexes are preserved during the purification process. This constitutes a new concept in co-complex analysis, wherein complex stability is guaranteed by a protective physical structure. A comparison of our Virotrap approach with AP–MS shows complementary data, with specific false positives and false negatives for both methods (Supplementary Fig. 7).

The current implementation of the Virotrap platform relies on a GAG-bait construct, resulting in considerable expression of the bait protein. Different strategies are currently being pursued to reduce bait expression, including co-expression of a native GAG protein together with the GAG-bait protein, which not only reduces bait expression but also creates more ‘space’ in the particles, potentially accommodating larger bait protein complexes. Nevertheless, the presence of the bait on the forming GAG scaffold creates an intracellular affinity matrix (comparable to the early in vitro affinity columns for purification of interaction partners from lysates26) that has the potential to compete with endogenous complexes through avidity effects. This avidity effect is a powerful mechanism that aids in the recruitment of cyclophilin to GAG27, a well-known weak interaction (Kd=16 μM (ref. 28)) detectable as a background association in the Virotrap system. Although background binding may be increased by elevated bait expression, weaker associations are readily detectable (for example, the MAL–MYD88 binding study; Fig. 2c).
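The weak cyclophilin–GAG association mentioned above illustrates why avidity matters: for a simple 1:1 binding site, occupancy at equilibrium is f = [L]/(Kd + [L]), so at concentrations well below a Kd of 16 μM only a small fraction is bound. A minimal sketch (the concentrations are assumed, illustrative values, not from the paper):

```python
# Illustrative single-site binding: fractional occupancy f = [L] / (Kd + [L]).
# The Kd of 16 uM is the cyclophilin-GAG value quoted in the text (ref. 28);
# the ligand concentrations below are assumed values chosen only to show why
# a weak interaction needs avidity (many bait copies on the GAG shell) to be trapped.

KD_UM = 16.0  # dissociation constant, micromolar

def fraction_bound(ligand_um: float, kd_um: float = KD_UM) -> float:
    """Equilibrium fractional occupancy for a 1:1 binding site."""
    return ligand_um / (kd_um + ligand_um)

for conc in (0.1, 1.0, 16.0, 160.0):
    print(f"[L] = {conc:7.1f} uM -> fraction bound = {fraction_bound(conc):.3f}")
```

At [L] = Kd the site is half-occupied by definition; at 1 μM (a plausible intracellular concentration) under 6% is bound, which is why an avidity effect is needed to pull such partners into the particle.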

The size of Virotrap particles (around 145 nm) suggests limitations in the size of the protein complex that can be accommodated in the particles. Further experimentation is required to define the maximum size of proteins or the number of protein complexes that can be trapped inside the particles.


In conclusion, Virotrap captures significant parts of known interactomes and reveals new interactions. This cell lysis-free approach purifies protein complexes under native conditions and thus provides a powerful method to complement AP–MS or other PPI data. Future improvements of the system include strategies to reduce bait expression to more physiological levels and application of advanced data analysis options to filter out background. These developments can further aid in the deployment of Virotrap as a powerful extension of the current co-complex technology arsenal.


New Autism Blood Biomarker Identified

Researchers at UT Southwestern Medical Center have identified a blood biomarker that may aid in earlier diagnosis of children with autism spectrum disorder, or ASD.



In a recent edition of Scientific Reports, UT Southwestern researchers reported on the identification of a blood biomarker that could distinguish the majority of ASD study participants from a control group of similar age range. In addition, the biomarker was significantly correlated with the level of communication impairment, suggesting that the blood test may give insight into ASD severity.

“Numerous investigators have long sought a biomarker for ASD,” said Dr. Dwight German, study senior author and Professor of Psychiatry at UT Southwestern. “The blood biomarker reported here along with others we are testing can represent a useful test with over 80 percent accuracy in identifying ASD.”

The ASD1 peptoid alone was 66 percent accurate in diagnosing ASD. When combined with thyroid-stimulating hormone level measurements, the ASD1-binding biomarker was 73 percent accurate at diagnosis.


A Search for Blood Biomarkers for Autism: Peptoids

Sayed Zaman, Umar Yazdani, …, Laura Hewitson & Dwight C. German
Scientific Reports 2016; 6(19164). http://dx.doi.org/10.1038/srep19164

Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by impairments in social interaction and communication, and restricted, repetitive patterns of behavior. In order to identify individuals with ASD and initiate interventions at the earliest possible age, biomarkers for the disorder are desirable. Research findings have identified widespread changes in the immune system in children with autism, at both systemic and cellular levels. In an attempt to find candidate antibody biomarkers for ASD, highly complex libraries of peptoids (oligo-N-substituted glycines) were screened for compounds that preferentially bind IgG from boys with ASD over typically developing (TD) boys. Unexpectedly, many peptoids were identified that preferentially bound IgG from TD boys. One of these peptoids was studied further and found to bind significantly higher levels (>2-fold) of the IgG1 subtype in serum from TD boys (n = 60) compared to ASD boys (n = 74), as well as compared to older adult males (n = 53). Together these data suggest that ASD boys have reduced levels (>50%) of an IgG1 antibody, which resembles the level found normally with advanced age. In this discovery study, the ASD1 peptoid was 66% accurate in predicting ASD.


Peptoid libraries have been used previously to search for autoantibodies for neurodegenerative diseases19 and for systemic lupus erythematosus (SLE)21. In the case of SLE, peptoids were identified that could identify subjects with the disease and related syndromes with moderate sensitivity (70%) and excellent specificity (97.5%). Peptoids were used to measure IgG levels from both healthy subjects and SLE patients. Binding to the SLE-peptoid was significantly higher in SLE patients vs. healthy controls. The IgG bound to the SLE-peptoid was found to react with several autoantigens, suggesting that the peptoids are capable of interacting with multiple, structurally similar molecules. These data indicate that IgG binding to peptoids can identify subjects with high levels of pathogenic autoantibodies vs. a single antibody.

In the present study, the ASD1 peptoid binds significantly lower levels of IgG1 in ASD males vs. TD males. This finding suggests that the ASD1 peptoid recognizes antibody(-ies) of an IgG1 subtype that is (are) significantly lower in abundance in the ASD males vs. TD males. Although a previous study14 has demonstrated lower levels of plasma IgG in ASD vs. TD children, here, we additionally quantified serum IgG levels in our individuals and found no difference in IgG between the two groups (data not shown). Furthermore, our IgG levels did not correlate with ASD1 binding levels, indicating that ASD1 does not bind IgG generically, and that the peptoid’s ability to differentiate between ASD and TD males is related to a specific antibody(-ies).

ASD subjects underwent a diagnostic evaluation using the ADOS and ADI-R, and application of the DSM-IV criteria prior to study inclusion. Only those subjects with a diagnosis of Autistic Disorder were included in the study. The ADOS is a semi-structured observation of a child’s behavior that allows examiners to observe the three core domains of ASD symptoms: reciprocal social interaction, communication, and restricted and repetitive behaviors1. When ADOS subdomain scores were compared with peptoid binding, the only significant relationship was with Social Interaction. However, the positive correlation would suggest that lower peptoid binding is associated with better social interaction, not poorer social interaction as anticipated.

The ADI-R is a structured parental interview that measures the core features of ASD symptoms in the areas of reciprocal social interaction, communication and language, and patterns of behavior. Of the three ADI-R subdomains, only the Communication domain was related to ASD1 peptoid binding, and this correlation was negative suggesting that low peptoid binding is associated with greater communication problems. These latter data are similar to the findings of Heuer et al.14 who found that children with autism with low levels of plasma IgG have high scores on the Aberrant Behavior Checklist (p < 0.0001). Thus, peptoid binding to IgG1 may be useful as a severity marker for ASD allowing for further characterization of individuals, but further research is needed.

It is interesting that in serum samples from older men, the ASD1 binding is similar to that in the ASD boys. This is consistent with the observation that with aging there is a reduction in the strength of the immune system, and that the changes are gender-specific25. Recent studies using parabiosis26, in which blood from young mice reverses age-related impairments in cognitive function and synaptic plasticity in old mice, reveal that blood constituents from young subjects may contain important substances for maintaining neuronal function. Work is in progress to identify the antibody/antibodies that differentially bind to the ASD1 peptoid, which appear as a single band on the electrophoresis gel (Fig. 4).




  • (A) Titration of IgG binding to ASD1 using serum pooled from 10 TD males and 10 ASD males demonstrates ASD1’s ability to differentiate between the two groups. (B) Detecting the IgG1 subclass instead of total IgG amplifies this differentiation. (C) IgG1 binding of individual ASD (n=74) and TD (n=60) male serum samples (1:100 dilution) to ASD1 significantly differs, with TD>ASD. In addition, IgG1 binding of older adult male (AM) serum samples (n=53) to ASD1 is significantly lower than in TD males, and not different from ASD males. The three groups were compared with a Kruskal-Wallis ANOVA, H = 10.1781, p<0.006. **p<0.005. Error bars show SEM. (D) Receiver-operating characteristic curve for ASD1’s ability to discriminate between ASD and TD males.
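The accuracy figures quoted earlier and the ROC curve in panel (D) come from this kind of threshold analysis: slide a cutoff over the binding values and ask how well it separates the groups. The sketch below uses invented binding values (not the study’s data) to show how AUC, via the pairwise Mann-Whitney identity, and accuracy at a fixed cutoff are computed:

```python
# Sketch of a ROC analysis like the one in panel (D), on small synthetic
# binding values (NOT the study's data). AUC is computed with the rank-sum
# (Mann-Whitney) identity: the probability that a randomly chosen TD sample
# shows higher ASD1 binding than a randomly chosen ASD sample.

def roc_auc(positives, negatives):
    """AUC = P(pos > neg) + 0.5 * P(pos == neg), by exhaustive pairwise comparison."""
    wins = ties = 0
    for p in positives:
        for n in negatives:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(positives) * len(negatives))

def accuracy_at(threshold, positives, negatives):
    """Fraction correctly classified when 'binding >= threshold' calls TD."""
    tp = sum(1 for p in positives if p >= threshold)
    tn = sum(1 for n in negatives if n < threshold)
    return (tp + tn) / (len(positives) + len(negatives))

td_binding = [0.9, 0.8, 0.7, 0.6, 0.5]   # hypothetical TD samples (higher binding)
asd_binding = [0.6, 0.4, 0.3, 0.3, 0.2]  # hypothetical ASD samples (lower binding)

print("AUC:", roc_auc(td_binding, asd_binding))
print("Accuracy at cutoff 0.5:", accuracy_at(0.5, td_binding, asd_binding))
```

The study’s 66% accuracy corresponds to the best such cutoff on the real ASD1 binding distributions; overlapping distributions (as in panel C) are exactly what keeps accuracy well below 100% even when the group difference is statistically significant.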



Association between peptoid binding and ADOS and ADI-R subdomains

Higher scores in any domain on the ADOS and ADI-R are indicative of more abnormal behaviors and/or symptoms. Among ADOS subdomains, there was no significant relationship between Communication and peptoid binding (z = 0.04, p = 0.966), Communication + Social interaction (z = 1.53, p = 0.127), or Stereotyped Behaviors and Restrictive Interests (SBRI) (z = 0.46, p = 0.647). Higher scores on the Social Interaction domain were significantly associated with higher peptoid binding (z = 2.04, p = 0.041).

Among ADI-R subdomains, higher scores on the Communication domain were associated with lower levels of peptoid binding (z = −2.28, p = 0.023). There was not a significant relationship between Social Interaction (z = 0.07, p = 0.941) or Restrictive/Repetitive Stereotyped Behaviors (z = −1.40, p = 0.162) and peptoid binding.



Computational Model Finds New Protein-Protein Interactions

Researchers at the University of Pittsburgh have discovered more than 500 new protein-protein interactions (PPIs) associated with genes linked to schizophrenia.


Using a computational model they developed, researchers at the University of Pittsburgh School of Medicine have discovered more than 500 new protein-protein interactions (PPIs) associated with genes linked to schizophrenia. The findings, published online in npj Schizophrenia, a Nature Publishing Group journal, could lead to greater understanding of the biological underpinnings of this mental illness, as well as point the way to treatments.

There have been many genome-wide association studies (GWAS) that have identified gene variants associated with an increased risk for schizophrenia, but in most cases little is known about the proteins that these genes make, what they do and how they interact, said senior investigator Madhavi Ganapathiraju, Ph.D., assistant professor of biomedical informatics, Pitt School of Medicine.

“GWAS studies and other research efforts have shown us what genes might be relevant in schizophrenia,” she said. “What we have done is the next step. We are trying to understand how these genes relate to each other, which could show us the biological pathways that are important in the disease.”

Each gene makes proteins and proteins typically interact with each other in a biological process. Information about interacting partners can shed light on the role of a gene that has not been studied, revealing pathways and biological processes associated with the disease and also its relation to other complex diseases.

Dr. Ganapathiraju’s team developed a computational model called High-Precision Protein Interaction Prediction (HiPPIP) and applied it to discover PPIs of schizophrenia-linked genes identified through GWAS, as well as historically known risk genes. They found 504 never-before known PPIs, and noted also that while schizophrenia-linked genes identified historically and through GWAS had little overlap, the model showed they shared more than 100 common interactors.

“We can infer what the protein might do by checking out the company it keeps,” Dr. Ganapathiraju explained. “For example, if I know you have many friends who play hockey, it could mean that you are involved in hockey, too. Similarly, if we see that an unknown protein interacts with multiple proteins involved in neural signaling, for example, there is a high likelihood that the unknown entity also is involved in the same.”

Dr. Ganapathiraju and colleagues have drawn such inferences about protein function based on PPIs, and made their findings available on a website, Schizo-Pi. This information can be used by biologists to explore the schizophrenia interactome with the aim of understanding more about the disease or developing new treatment drugs.

Schizophrenia interactome with 504 novel protein–protein interactions

MK Ganapathiraju, M Thahir, …, CE Loscher, EM Bauer & S Chaparala
npj Schizophrenia 2016; 2(16012). http://dx.doi.org/10.1038/npjschz.2016.12

Genome-wide association studies (GWAS) have revealed the role of rare and common genetic variants, but the functional effects of the risk variants remain to be understood. Protein interactome-based studies can facilitate the study of molecular mechanisms by which the risk genes relate to schizophrenia (SZ) genesis, but protein–protein interactions (PPIs) are unknown for many of the liability genes. We developed a computational model to discover PPIs, which is found to be highly accurate according to computational evaluations and experimental validations of selected PPIs. We present here 365 novel PPIs of liability genes identified by the SZ Working Group of the Psychiatric Genomics Consortium (PGC). Seventeen genes that had no previously known interactions have 57 novel interactions by our method. Among the new interactors are 19 drug targets that are targeted by 130 drugs. In addition, we computed 147 novel PPIs of 25 candidate genes investigated in the pre-GWAS era. While there is little overlap between the GWAS genes and the pre-GWAS genes, the interactomes reveal that they largely belong to the same pathways, thus reconciling the apparent disparities between the GWAS and prior gene association studies. The interactome, including 504 novel PPIs overall, could motivate other systems biology studies and trials with repurposed drugs. The PPIs are made available on a webserver, called Schizo-Pi, at http://severus.dbmi.pitt.edu/schizo-pi with advanced search capabilities.

Schizophrenia (SZ) is a common, potentially severe psychiatric disorder that afflicts all populations.1 Gene mapping studies suggest that SZ is a complex disorder, with a cumulative impact of variable genetic effects coupled with environmental factors.2 As many as 38 genome-wide association studies (GWAS) have been reported on SZ out of a total of 1,750 GWAS publications on 1,087 traits or diseases reported in the GWAS catalog maintained by the National Human Genome Research Institute of USA3 (as of April 2015), revealing the common variants associated with SZ.4 The SZ Working Group of the Psychiatric Genomics Consortium (PGC) identified 108 genetic loci that likely confer risk for SZ.5 While the role of genetics has been clearly validated by this study, the functional impact of the risk variants is not well-understood.6,7 Several of the genes implicated by the GWAS have unknown functions and could participate in possibly hitherto unknown pathways.8 Further, there is little or no overlap between the genes identified through GWAS and ‘candidate genes’ proposed in the pre-GWAS era.9

Interactome-based studies can be useful in discovering the functional associations of genes. For example, disrupted in schizophrenia 1 (DISC1), an SZ-related candidate gene, originally had no known homolog in humans. Although it had well-characterized protein domains such as coiled-coil domains and leucine-zipper domains, its function was unknown.10,11 Once its protein–protein interactions (PPIs) were determined using yeast 2-hybrid technology,12 investigators successfully linked DISC1 to cAMP signaling, axon elongation, and neuronal migration, and accelerated the research pertaining to SZ in general, and DISC1 in particular.13 Typically such studies are carried out on known protein–protein interaction (PPI) networks, or, as in the case of DISC1, when there is a specific gene of interest, its PPIs are determined by methods such as yeast 2-hybrid technology.

Knowledge of human PPI networks is thus valuable for accelerating discovery of protein function, and indeed, biomedical research in general. However, of the hundreds of thousands of biophysical PPIs thought to exist in the human interactome,14,15 fewer than 100,000 are known today (Human Protein Reference Database, HPRD16 and BioGRID17 databases). Gold-standard experimental methods for the determination of all the PPIs in the human interactome are time-consuming, expensive and may not even be feasible, as about 250 million pairs of proteins would need to be tested overall; high-throughput methods such as yeast 2-hybrid have important limitations for whole-interactome determination, as they have a low recall of 23% (i.e., the remaining 77% of true interactions need to be determined by other means) and a low precision (i.e., the screens have to be repeated multiple times to achieve high selectivity).18,19 Computational methods are therefore necessary to complete the interactome expeditiously. Algorithms have begun emerging to predict PPIs using statistical machine learning on the characteristics of the proteins, but these algorithms are employed predominantly to study yeast. Two significant computational predictions have been reported for the human interactome; although they have had high false-positive rates, these methods have laid the foundation for computational prediction of human PPIs.20,21

We have created a new PPI prediction model called High-Confidence Protein–Protein Interaction Prediction (HiPPIP). Novel interactions predicted with this model are making translational impact. For example, we discovered a PPI between OASL and DDX58, which on validation showed that an increased expression of OASL could boost innate immunity to combat influenza by activating the RIG-I pathway.22 Also, the interactome of the genes associated with congenital heart disease showed that the disease morphogenesis has a close connection with the structure and function of cilia.23 Here, we describe the HiPPIP model and its application to SZ genes to construct the SZ interactome. After computational evaluations and experimental validations of selected novel PPIs, we present here 504 highly confident novel PPIs in the SZ interactome, shedding new light onto several uncharacterized genes that are associated with SZ.

We developed a computational model called HiPPIP to predict PPIs (see Methods and Supplementary File 1). The model has been evaluated by computational methods and experimental validations and is found to be highly accurate. Evaluations on held-out test data showed a precision of 97.5% and a recall of 5%. A 5% recall out of the 150,000 to 600,000 interactions estimated for the human interactome corresponds to 7,500–30,000 novel PPIs in the whole interactome. Note that the real precision is likely to be higher than 97.5%, because in this test data randomly paired proteins are treated as non-interacting protein pairs, whereas some of them may actually be interacting pairs with a small probability; thus, some of the pairs that are treated as false positives in the test set are likely to be true but hitherto unknown interactions. In Figure 1a, we show the precision versus recall of our method on ‘hub proteins’, where we considered all pairs that received a score >0.5 by HiPPIP to be novel interactions. In Figure 1b, we show the number of true positives versus false positives observed in hub proteins. Both these figures also show our method to be superior in comparison to the prediction of the membrane-receptor interactome by Qi et al.24 True positives versus false positives are also shown for individual hub proteins by our method in Figure 1c and by Qi et al.23 in Figure 1d. These evaluations showed that our predictions contain mostly true positives. Unlike in other domains where ranked lists are commonly used, such as information retrieval, in PPI prediction the ‘false positives’ may actually be unlabeled instances that are indeed true interactions not yet discovered. In fact, such unlabeled pairs predicted as interactors of the hub gene HMGB1 (namely, the pairs HMGB1-KL and HMGB1-FLT1) were validated by experimental methods and found to be true PPIs (see Figures e–g in Supplementary File 3).
Thus, we concluded that the protein pairs that received a score of ⩾0.5 are highly confident to be true interactions. The pairs that receive a score less than but close to 0.5 (i.e., in the range of 0.4–0.5) may also contain several true PPIs; however, we cannot confidently say that all in this range are true PPIs. Only the PPIs predicted with a score >0.5 are included in the interactome.
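The arithmetic behind the quoted 7,500–30,000 estimate, and the 0.5 score cutoff used to build the interactome, can be checked in a few lines. In the sketch below, only the HMGB1-KL and HMGB1-FLT1 pairs are taken from the text; the third pair and all scores are invented for illustration:

```python
# Back-of-the-envelope check of the numbers quoted above: a 5% recall
# against an interactome estimated at 150,000-600,000 true interactions,
# and the >=0.5 score cutoff for calling a predicted pair high-confidence.

RECALL = 0.05
INTERACTOME_LOW, INTERACTOME_HIGH = 150_000, 600_000

low_estimate = RECALL * INTERACTOME_LOW    # lower bound of recovered PPIs
high_estimate = RECALL * INTERACTOME_HIGH  # upper bound of recovered PPIs
print(f"Expected novel PPIs recovered: {low_estimate:,.0f}-{high_estimate:,.0f}")

# Applying the confidence cutoff used for the SZ interactome.
# HMGB1-KL and HMGB1-FLT1 are pairs named in the text; ("A", "B") and
# all score values are hypothetical, for illustration only.
SCORE_CUTOFF = 0.5
predicted = {("HMGB1", "KL"): 0.62, ("HMGB1", "FLT1"): 0.55, ("A", "B"): 0.41}
confident = {pair for pair, score in predicted.items() if score >= SCORE_CUTOFF}
print("High-confidence pairs:", sorted(confident))
```

Pairs scoring just under the cutoff (like the 0.41 example) fall into the 0.4–0.5 band the authors describe: plausibly true, but excluded from the published interactome.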

Figure 1


Computational evaluation of predicted protein–protein interactions on hub proteins: (a) precision–recall curve. (b) True positives versus false positives in ranked lists of hub-type membrane receptors for our method and that by Qi et al. True positives versus false positives are shown for individual membrane receptors by our method in (c) and by Qi et al. in (d). Thick line is the average, which is also the same as shown in (b). Note: x-axis is recall in (a), whereas it is the number of false positives in (b–d). The range of the y-axis is obtained by varying the threshold from 1.0–0 in (a), and to 0.5 in (b–d).

SZ interactome

By applying HiPPIP to the GWAS genes and Historic (pre-GWAS) genes, we predicted over 500 high-confidence new PPIs, adding to about 1,400 previously known PPIs.

Schizophrenia interactome: a network view of the schizophrenia interactome is shown as a graph, where genes are shown as nodes and PPIs as edges connecting the nodes. Schizophrenia-associated genes are shown as dark blue nodes, novel interactors as red nodes and known interactors as blue nodes. The source of the schizophrenia genes is indicated by its label font: Historic genes are shown italicized, GWAS genes are shown in bold, and the one gene that is common to both is shown italicized and bold. For clarity, the source is also indicated by the shape of the node (triangular for GWAS, square for Historic, and hexagonal for both). Symbols are shown only for the schizophrenia-associated genes; actual interactions may be accessed on the web. Red edges are novel interactions, whereas blue edges are known interactions. GWAS, genome-wide association studies of schizophrenia; PPI, protein–protein interaction.



Webserver of SZ interactome

We have made the known and novel interactions of all SZ-associated genes available on a webserver called Schizo-Pi, at the address http://severus.dbmi.pitt.edu/schizo-pi. This webserver is similar to Wiki-Pi33, which presents comprehensive annotations of both participating proteins of a PPI side by side. The difference between Wiki-Pi, which we developed earlier, and Schizo-Pi is the inclusion of novel predicted interactions of the SZ genes into the latter.

Despite the many advances in biomedical research, identifying the molecular mechanisms underlying disease is still challenging. Studies based on protein interactions have proven valuable in identifying novel gene associations that could shed new light on disease pathology.35 The interactome, including more than 500 novel PPIs, will help to identify pathways and biological processes associated with the disease, as well as its relation to other complex diseases. It also helps identify potential drugs that could be repurposed for SZ treatment.

Functional and pathway enrichment in SZ interactome

When a gene of interest has little known information, functions of its interacting partners serve as a starting point to hypothesize its own function. We computed statistically significant enrichment of GO biological process terms among the interacting partners of each of the genes using BinGO36 (see online at http://severus.dbmi.pitt.edu/schizo-pi).
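A term-enrichment test of the kind BinGO performs typically reduces to a hypergeometric tail probability: how surprising is it to see k interactors annotated with a GO term, given the term’s background frequency? The sketch below shows the idea with made-up counts; BinGO’s actual implementation details (e.g., multiple-testing correction across terms) are not reproduced here:

```python
# Minimal sketch of a GO-term enrichment test: P(X >= k) under a
# hypergeometric null, computed exactly with math.comb. All counts below
# are hypothetical, for illustration only.

from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """P(X >= k) where X ~ Hypergeometric(N population, K annotated, n drawn)."""
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1)
    ) / comb(N, n)

# Hypothetical numbers: 20,000 background genes, 200 of them carry the GO
# term, 25 interacting partners of the gene of interest, 6 of which carry it.
p = hypergeom_enrichment_p(N=20_000, K=200, n=25, k=6)
print(f"enrichment p-value = {p:.2e}")
```

With the term present in only 1% of the background, the expected count among 25 partners is 0.25, so observing 6 annotated partners yields a very small p-value; in practice this is then corrected for the number of GO terms tested.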


Protein aggregation and aggregate toxicity: new insights into protein folding, misfolding diseases and biological evolution

Massimo Stefani · Christopher M. Dobson

Abstract The deposition of proteins in the form of amyloid fibrils and plaques is the characteristic feature of more than 20 degenerative conditions affecting either the central nervous system or a variety of peripheral tissues. As these conditions include Alzheimer’s, Parkinson’s and the prion diseases, several forms of fatal systemic amyloidosis, and at least one condition associated with medical intervention (haemodialysis), they are of enormous importance in the context of present-day human health and welfare. Much remains to be learned about the mechanism by which the proteins associated with these diseases aggregate and form amyloid structures, and how the latter affect the functions of the organs with which they are associated. A great deal of information concerning these diseases has emerged, however, during the past 5 years, much of it causing a number of fundamental assumptions about the amyloid diseases to be reexamined. For example, it is now apparent that the ability to form amyloid structures is not an unusual feature of the small number of proteins associated with these diseases but is instead a general property of polypeptide chains. It has also been found recently that aggregates of proteins not associated with amyloid diseases can impair the ability of cells to function to a similar extent as aggregates of proteins linked with specific neurodegenerative conditions. Moreover, the mature amyloid fibrils or plaques appear to be substantially less toxic than the prefibrillar aggregates that are their precursors. The toxicity of these early aggregates appears to result from an intrinsic ability to impair fundamental cellular processes by interacting with cellular membranes, causing oxidative stress and increases in free Ca2+ that eventually lead to apoptotic or necrotic cell death. The ‘new view’ of these diseases also suggests that other degenerative conditions could have similar underlying origins to those of the amyloidoses. 
In addition, cellular protection mechanisms, such as molecular chaperones and the protein degradation machinery, appear to be crucial in the prevention of disease in normally functioning living organisms. It also suggests some intriguing new factors that could be of great significance in the evolution of biological molecules and the mechanisms that regulate their behaviour.

The genetic information within a cell encodes not only the specific structures and functions of proteins but also the way these structures are attained through the process known as protein folding. In recent years many of the underlying features of the fundamental mechanism of this complex process and the manner in which it is regulated in living systems have emerged from a combination of experimental and theoretical studies [1]. The knowledge gained from these studies has also raised a host of interesting issues. It has become apparent, for example, that the folding and unfolding of proteins is associated with a whole range of cellular processes from the trafficking of molecules to specific organelles to the regulation of the cell cycle and the immune response. Such observations led to the inevitable conclusion that the failure to fold correctly, or to remain correctly folded, gives rise to many different types of biological malfunctions and hence to many different forms of disease [2]. In addition, it has been recognised recently that a large number of eukaryotic genes code for proteins that appear to be ‘natively unfolded’, and that proteins can adopt, under certain circumstances, highly organised multi-molecular assemblies whose structures are not specifically encoded in the amino acid sequence. Both these observations have raised challenging questions about one of the most fundamental principles of biology: the close relationship between the sequence, structure and function of proteins, as we discuss below [3].

It is well established that proteins that are ‘misfolded’, i.e. that are not in their functionally relevant conformation, are devoid of normal biological activity. In addition, they often aggregate and/or interact inappropriately with other cellular components, leading to impairment of cell viability and eventually to cell death. Many diseases, often known as misfolding or conformational diseases, ultimately result from the presence in a living system of protein molecules with structures that are ‘incorrect’, i.e. that differ from those in normally functioning organisms [4]. Such diseases include conditions in which a specific protein, or protein complex, fails to fold correctly (e.g. cystic fibrosis, Marfan syndrome, amyotrophic lateral sclerosis) or is not sufficiently stable to perform its normal function (e.g. many forms of cancer). They also include conditions in which aberrant folding behaviour results in the failure of a protein to be correctly trafficked (e.g. familial hypercholesterolaemia, α1-antitrypsin deficiency, and some forms of retinitis pigmentosa) [4]. The tendency of proteins to aggregate, often to give species extremely intractable to dissolution and refolding, is of course also well known in other circumstances. Examples include the formation of inclusion bodies during overexpression of heterologous proteins in bacteria and the precipitation of proteins during laboratory purification procedures. Indeed, protein aggregation is well established as one of the major difficulties associated with the production and handling of proteins in the biotechnology and pharmaceutical industries [5].

Considerable attention is presently focused on a group of protein folding diseases known as amyloidoses. In these diseases specific peptides or proteins fail to fold or to remain correctly folded and then aggregate (often with other components) so as to give rise to ‘amyloid’ deposits in tissue. Amyloid structures can be recognised because they possess a series of specific tinctorial and biophysical characteristics that reflect a common core structure based on the presence of highly organised β-sheets [6]. The deposits in strictly defined amyloidoses are extracellular and can often be observed as thread-like fibrillar structures, sometimes assembled further into larger aggregates or plaques. These diseases include a range of sporadic, familial or transmissible degenerative diseases, some of which affect the brain and the central nervous system (e.g. Alzheimer’s and Creutzfeldt-Jakob diseases), while others involve peripheral tissues and organs such as the liver, heart and spleen (e.g. systemic amyloidoses and type II diabetes) [7, 8]. In other forms of amyloidosis, such as primary or secondary systemic amyloidoses, proteinaceous deposits are found in skeletal tissue and joints (e.g. haemodialysis-related amyloidosis) as well as in several organs (e.g. heart and kidney). Yet other components such as collagen, glycosaminoglycans and proteins (e.g. serum amyloid protein) are often present in the deposits protecting them against degradation [9, 10, 11]. Similar deposits to those in the amyloidoses are, however, found intracellularly in other diseases; these can be localised either in the cytoplasm, in the form of specialised aggregates known as aggresomes or as Lewy or Russell bodies, or in the nucleus (see below).

The presence in tissue of proteinaceous deposits is a hallmark of all these diseases, suggesting a causative link between aggregate formation and pathological symptoms (often known as the amyloid hypothesis) [7, 8, 12]. At the present time the link between amyloid formation and disease is widely accepted on the basis of a large number of biochemical and genetic studies. The specific nature of the pathogenic species, and the molecular basis of their ability to damage cells, are, however, the subject of intense debate [13, 14, 15, 16, 17, 18, 19, 20]. In neurodegenerative disorders it is very likely that the impairment of cellular function follows directly from the interactions of the aggregated proteins with cellular components [21, 22]. In the systemic non-neurological diseases, however, it is widely believed that the accumulation in vital organs of large amounts of amyloid deposits can by itself cause at least some of the clinical symptoms [23]. It is quite possible, however, that there are other more specific effects of aggregates on biochemical processes even in these diseases. The presence of extracellular or intracellular aggregates of a specific polypeptide molecule is a characteristic of all the 20 or so recognised amyloid diseases. The polypeptides involved include full length proteins (e.g. lysozyme or immunoglobulin light chains), biological peptides (amylin, atrial natriuretic factor) and fragments of larger proteins produced as a result of specific processing (e.g. the Alzheimer β-peptide) or of more general degradation [e.g. poly(Q) stretches cleaved from proteins with poly(Q) extensions such as huntingtin, ataxins and the androgen receptor]. The peptides and proteins associated with known amyloid diseases are listed in Table 1. In some cases the proteins involved have wild type sequences, as in sporadic forms of the diseases, but in other cases these are variants resulting from genetic mutations associated with familial forms of the diseases.
In some cases both sporadic and familial diseases are associated with a given protein; in this case the mutational variants are usually associated with early-onset forms of the disease. In the case of the neurodegenerative diseases associated with the prion protein some forms of the diseases are transmissible. The existence of familial forms of a number of amyloid diseases has provided significant clues to the origins of the pathologies. For example, there are increasingly strong links between the age at onset of familial forms of disease and the effects of the mutations involved on the propensity of the affected proteins to aggregate in vitro. Such findings also support the link between the process of aggregation and the clinical manifestations of disease [24, 25].

The presence in cells of misfolded or aggregated proteins triggers a complex biological response. In the cytosol, this is referred to as the ‘heat shock response’ and in the endoplasmic reticulum (ER) it is known as the ‘unfolded protein response’. These responses lead to the expression of, among others, the genes for heat shock proteins (Hsp, or molecular chaperone proteins) and proteins involved in the ubiquitin-proteasome pathway [26]. The evolution of such complex biochemical machinery testifies to the fact that it is necessary for cells to isolate and clear rapidly and efficiently any unfolded or incorrectly folded protein as soon as it appears. In itself this fact suggests that these species could have a generally adverse effect on cellular components and cell viability. Indeed, it was a major step forward in understanding many aspects of cell biology when it was recognised that proteins previously associated only with stress, such as heat shock, are in fact crucial in the normal functioning of living systems. This advance, for example, led to the discovery of the role of molecular chaperones in protein folding and in the normal ‘housekeeping’ processes that are inherent in healthy cells [27, 28]. More recently a number of degenerative diseases, both neurological and systemic, have been linked to, or shown to be affected by, impairment of the ubiquitin-proteasome pathway (Table 2). The diseases are primarily associated with a reduction in either the expression or the biological activity of Hsps, ubiquitin, ubiquitinating or deubiquitinating enzymes and the proteasome itself, as we show below [29, 30, 31, 32], or even with the failure of the quality control mechanisms that ensure proper maturation of proteins in the ER. The latter normally leads, through retrograde translocation to the cytosol, to the degradation of a significant proportion of polypeptide chains before they have attained their native conformations [33, 34].


It is now well established that the molecular basis of protein aggregation into amyloid structures involves the existence of ‘misfolded’ forms of proteins, i.e. proteins that are not in the structures in which they normally function in vivo, or of fragments of proteins resulting from degradation processes that are inherently unable to fold [4, 7, 8, 36]. Aggregation is one of the common consequences of a polypeptide chain failing to reach or maintain its functional three-dimensional structure. Such events can be associated with specific mutations, misprocessing phenomena, aberrant interactions with metal ions, changes in environmental conditions such as pH or temperature, or chemical modification (oxidation, proteolysis). Perturbations in the conformational properties of the polypeptide chain resulting from such phenomena may affect equilibrium ① in Fig. 1, increasing the population of partially unfolded, or misfolded, species that are much more aggregation-prone than the native state.

Fig. 1 Overview of the possible fates of a newly synthesised polypeptide chain. The equilibrium ① between the partially folded molecules and the natively folded ones is usually strongly in favour of the latter except as a result of specific mutations, chemical modifications or partially destabilising solution conditions. The increased equilibrium populations of molecules in the partially or completely unfolded ensemble of structures are usually degraded by the proteasome; when this clearance mechanism is impaired, such species often form disordered aggregates or shift equilibrium ② towards the nucleation of pre-fibrillar assemblies that eventually grow into mature fibrils (equilibrium ③). DANGER! indicates that pre-fibrillar aggregates in most cases display much higher toxicity than mature fibrils. Heat shock proteins (Hsp) can suppress the appearance of pre-fibrillar assemblies by assisting in the correct folding of the nascent chain, thereby minimising the population of partially folded molecules, while the unfolded protein response targets incorrectly folded proteins for degradation.
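The impact of such perturbations on equilibrium ① of Fig. 1 can be illustrated with a simple two-state calculation: because the non-native population depends exponentially on the folding free energy, even a modestly destabilising mutation multiplies the pool of aggregation-prone molecules many-fold. The sketch below uses illustrative stability values, not measurements for any particular protein.

```python
import math

R = 8.314e-3  # gas constant, kJ/(mol*K)
T = 310.0     # physiological temperature, K

def fraction_unfolded(dG_fold_kJ):
    """Two-state model N <-> U with folding stability dG_fold = G_U - G_N.
    Returns the equilibrium fraction of non-native (unfolded/partially
    folded) molecules."""
    K = math.exp(-dG_fold_kJ / (R * T))  # equilibrium constant [U]/[N]
    return K / (1.0 + K)

wild_type = fraction_unfolded(25.0)  # a typical folding stability, ~25 kJ/mol
mutant = fraction_unfolded(15.0)     # destabilised by 10 kJ/mol

print(f"wild type: {wild_type:.2e}  mutant: {mutant:.2e}  "
      f"fold increase: {mutant / wild_type:.0f}x")
```

A 10 kJ/mol loss of stability, well within the range of single point mutations, raises the non-native population by roughly fifty-fold in this toy model, which is why mutational variants so often dominate the familial, early-onset forms of these diseases.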


Little is known at present about the detailed arrangement of the polypeptide chains themselves within amyloid fibrils, either in the parts that form the core β-strands or in the regions that connect the various β-strands. Recent data suggest that the sheets are relatively untwisted and may, at least in some cases, exist in quite specific supersecondary structure motifs such as β-helices [6, 40] or the recently proposed µ-helix [41]. It seems possible that there may be significant differences in the way the strands are assembled depending on characteristics of the polypeptide chain involved [6, 42]. Factors including length, sequence (and in some cases the presence of disulphide bonds or post-translational modifications such as glycosylation) may be important in determining details of the structures. Several recent papers report structural models for amyloid fibrils containing different polypeptide chains, including the Aβ40 peptide, insulin and fragments of the prion protein, based on data from such techniques as cryo-electron microscopy and solid-state magnetic resonance spectroscopy [43, 44]. These models have much in common and do indeed appear to reflect the fact that the structures of different fibrils are likely to be variations on a common theme [40]. It is also emerging that there may be some common and highly organised assemblies of amyloid protofilaments that are not simply extended threads or ribbons. It is clear, for example, that in some cases large closed loops can be formed [45, 46, 47], and there may be specific types of relatively small spherical or ‘doughnut’ shaped structures that can result in at least some circumstances (see below).


The similarity of some early amyloid aggregates with the pores resulting from oligomerisation of bacterial toxins and pore-forming eukaryotic proteins (see below) also suggests that the basic mechanism of protein aggregation into amyloid structures may not only be associated with diseases but in some cases could result in species with functional significance. Recent evidence indicates that a variety of micro-organisms may exploit the controlled aggregation of specific proteins (or their precursors) to generate functional structures. Examples include bacterial curli [52] and proteins of the interior fibre cells of mammalian ocular lenses, whose β-sheet arrays seem to be organised in an amyloid-like supramolecular order [53]. In this case the inherent stability of amyloid-like protein structure may contribute to the long-term structural integrity and transparency of the lens. Recently it has been hypothesised that amyloid-like aggregates of serum amyloid A found in secondary amyloidoses following chronic inflammatory diseases protect the host against bacterial infections by inducing lysis of bacterial cells [54]. One particularly interesting example is a ‘misfolded’ form of the milk protein α-lactalbumin that is formed at low pH and trapped by the presence of specific lipid molecules [55]. This form of the protein has been reported to trigger apoptosis selectively in tumour cells providing evidence for its importance in protecting infants from certain types of cancer [55]. ….

Amyloid formation is a generic property of polypeptide chains ….

It is clear that the presence of different side chains can influence the details of amyloid structures, particularly the assembly of protofibrils, and that they give rise to the variations on the common structural theme discussed above. More fundamentally, the composition and sequence of a peptide or protein profoundly affect its propensity to form amyloid structures under given conditions (see below).

Because the formation of stable protein aggregates of amyloid type does not normally occur in vivo under physiological conditions, it is likely that the proteins encoded in the genomes of living organisms are endowed with structural adaptations that protect against aggregation under these conditions. A recent survey involving a large number of structures of β-proteins highlights several strategies through which natural proteins avoid intermolecular association of β-strands in their native states [65]. Other surveys of protein databases indicate that nature disfavours sequences of alternating polar and nonpolar residues, as well as clusters of several consecutive hydrophobic residues, both of which enhance the tendency of a protein to aggregate prior to becoming completely folded [66, 67].


Precursors of amyloid fibrils can be toxic to cells

It was generally assumed until recently that the proteinaceous aggregates most toxic to cells are likely to be mature amyloid fibrils, the form of aggregates that have been commonly detected in pathological deposits. It therefore appeared probable that the pathogenic features underlying amyloid diseases are a consequence of the interaction with cells of extracellular deposits of aggregated material. As well as forming the basis for understanding the fundamental causes of these diseases, this scenario stimulated the exploration of therapeutic approaches to amyloidoses that focused mainly on the search for molecules able to impair the growth and deposition of fibrillar forms of aggregated proteins. ….

Structural basis and molecular features of amyloid toxicity

The presence of toxic aggregates inside or outside cells can impair a number of cell functions that ultimately lead to cell death by an apoptotic mechanism [95, 96]. Recent research suggests, however, that in most cases initial perturbations to fundamental cellular processes underlie the impairment of cell function induced by aggregates of disease-associated polypeptides. Many pieces of data point to a central role of modifications to the intracellular redox status and free Ca2+ levels in cells exposed to toxic aggregates [45, 89, 97, 98, 99, 100, 101]. A modification of the intracellular redox status in such cells is associated with a sharp increase in the quantity of reactive oxygen species (ROS) that is reminiscent of the oxidative burst by which leukocytes destroy invading foreign cells after phagocytosis. In addition, changes have been observed in reactive nitrogen species, lipid peroxidation, deregulation of NO metabolism [97], protein nitrosylation [102] and upregulation of heme oxygenase-1, a specific marker of oxidative stress [103]. ….

Results have recently been reported concerning the toxicity towards cultured cells of aggregates of poly(Q) peptides that argue against a disease mechanism based on specific toxic features of the aggregates. These results indicate that there is a close relationship between the toxicity of proteins with poly(Q) extensions and their nuclear localisation. In addition they support the hypotheses that the toxicity of poly(Q) aggregates can be a consequence of altered interactions with nuclear coactivator or corepressor molecules including p53, CBP, Sp1 and TAF130, or of interactions with transcription factors and nuclear coactivators, such as CBP, that themselves carry short poly(Q) stretches ([95] and references therein)…..

Concluding remarks
The data reported in the past few years strongly suggest that the conversion of normally soluble proteins into amyloid fibrils and the toxicity of small aggregates appearing during the early stages of the formation of the latter are common or generic features of polypeptide chains. Moreover, the molecular basis of this toxicity also appears to display common features between the different systems that have so far been studied. The ability of many, perhaps all, natural polypeptides to ‘misfold’ and convert into toxic aggregates under suitable conditions suggests that one of the most important driving forces in the evolution of proteins must have been the negative selection against sequence changes that increase the tendency of a polypeptide chain to aggregate. Nevertheless, as protein folding is a stochastic process, and no such process can be completely infallible, misfolded proteins or protein folding intermediates in equilibrium with the natively folded molecules must continuously form within cells. Thus mechanisms to deal with such species must have co-evolved with proteins. Indeed, it is clear that misfolding, and the associated tendency to aggregate, is kept under control by molecular chaperones, which render the resulting species harmless by assisting in their refolding or by triggering their degradation by the cellular clearance machinery [166, 167, 168, 169, 170, 171, 172, 173, 175, 177, 178].

Misfolded and aggregated species are likely to owe their toxicity to the exposure on their surfaces of regions of proteins that are buried in the interior of the structures of the correctly folded native states. The exposure of large patches of hydrophobic groups is likely to be particularly significant as such patches favour the interaction of the misfolded species with cell membranes [44, 83, 89, 90, 91, 93]. Interactions of this type are likely to lead to the impairment of the function and integrity of the membranes involved, giving rise to a loss of regulation of the intracellular ion balance and redox status and eventually to cell death. In addition, misfolded proteins undoubtedly interact inappropriately with other cellular components, potentially giving rise to the impairment of a range of other biological processes. Under some conditions the intracellular content of aggregated species may increase directly, due to an enhanced propensity of incompletely folded or misfolded species to aggregate within the cell itself. This could occur as the result of the expression of mutational variants of proteins with decreased stability or cooperativity or with an intrinsically higher propensity to aggregate. It could also occur as a result of the overproduction of some types of protein, for example, because of other genetic factors or other disease conditions, or because of perturbations to the cellular environment that generate conditions favouring aggregation, such as heat shock or oxidative stress. Finally, the accumulation of misfolded or aggregated proteins could arise from the chaperone and clearance mechanisms becoming overwhelmed as a result of specific mutant phenotypes or of the general effects of ageing [173, 174].

The topics discussed in this review not only provide a great deal of evidence for the ‘new view’ that proteins have an intrinsic capability of misfolding and forming structures such as amyloid fibrils but also suggest that the role of molecular chaperones is even more important than was thought in the past. The role of these ubiquitous proteins in enhancing the efficiency of protein folding is well established [185]. It could well be that they are at least as important in controlling the harmful effects of misfolded or aggregated proteins as in enhancing the yield of functional molecules.


Nutritional Status is Associated with Faster Cognitive Decline and Worse Functional Impairment in the Progression of Dementia: The Cache County Dementia Progression Study

Sanders, Chelsea | Behrens, Stephanie | Schwartz, Sarah | Wengreen, Heidi | Corcoran, Chris D. | Lyketsos, Constantine G. | Tschanz, JoAnn T.
Journal of Alzheimer’s Disease 2016; 52(1):33-42. http://content.iospress.com/articles/journal-of-alzheimers-disease/jad150528   http://dx.doi.org/10.3233/JAD-150528

Nutritional status may be a modifiable factor in the progression of dementia. We examined the association of nutritional status and rate of cognitive and functional decline in a U.S. population-based sample. Study design was an observational longitudinal study with annual follow-ups up to 6 years of 292 persons with dementia (72% Alzheimer’s disease, 56% female) in Cache County, UT using the Mini-Mental State Exam (MMSE), Clinical Dementia Rating Sum of Boxes (CDR-sb), and modified Mini Nutritional Assessment (mMNA). mMNA scores declined by approximately 0.50 points/year, suggesting increasing risk for malnutrition. Lower mMNA score predicted faster rate of decline on the MMSE at earlier follow-up times, but slower decline at later follow-up times, whereas higher mMNA scores had the opposite pattern (mMNA × time β = 0.22, p = 0.017; mMNA × time² β = –0.04, p = 0.04). Lower mMNA score was associated with greater impairment on the CDR-sb over the course of dementia (β = 0.35, p < 0.001). Assessment of malnutrition may be useful in predicting rates of progression in dementia and may provide a target for clinical intervention.
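The quadratic interaction reported above (mMNA × time and mMNA × time²) can be made concrete with a small simulation. The sketch below uses ordinary least squares on synthetic data rather than the study's mixed model, and the coefficients are invented, merely shaped like the published pattern:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented coefficients loosely shaped like the reported pattern:
# MMSE = b0 + b1*t + b2*t^2 + b3*m + b4*(m*t) + b5*(m*t^2) + noise
true = np.array([24.0, -1.5, -0.10, 0.30, 0.22, -0.04])

n = 2000
t = rng.uniform(0, 6, n)   # years of follow-up
m = rng.normal(0, 2, n)    # centred nutrition (mMNA-like) score
X = np.column_stack([np.ones(n), t, t**2, m, m * t, m * t**2])
y = X @ true + rng.normal(0, 1.0, n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated coefficients:", np.round(beta, 2))
```

The positive m × t term and negative m × t² term together produce trajectories that diverge early in follow-up and converge later, which is exactly the crossover pattern the abstract describes.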


Shared Genetic Risk Factors for Late-Life Depression and Alzheimer’s Disease

Ye, Qing | Bai, Feng* | Zhang, Zhijun
Journal of Alzheimer’s Disease 2016; 52(1): 1-15. http://dx.doi.org/10.3233/JAD-151129

Background: Considerable evidence has been reported for the comorbidity between late-life depression (LLD) and Alzheimer’s disease (AD), both of which are very common in the general elderly population and represent a large burden on the health of the elderly. The pathophysiological mechanisms underlying the link between LLD and AD are poorly understood. Because both LLD and AD can be heritable and are influenced by multiple risk genes, shared genetic risk factors between LLD and AD may exist. Objective: The objective is to review the existing evidence for genetic risk factors that are common to LLD and AD and to outline the biological substrates proposed to mediate this association. Methods: A literature review was performed. Results: Genetic polymorphisms of brain-derived neurotrophic factor, apolipoprotein E, interleukin 1-beta, and methylenetetrahydrofolate reductase have been demonstrated to confer increased risk to both LLD and AD by studies examining either LLD or AD patients. These results contribute to the understanding of pathophysiological mechanisms that are common to both of these disorders, including deficits in nerve growth factors, inflammatory changes, and dysregulation mechanisms involving lipoprotein and folate. Other conflicting results have also been reviewed, and few studies have investigated the effects of the described polymorphisms on both LLD and AD. Conclusion: The findings suggest that common genetic pathways may underlie LLD and AD comorbidity. Studies to evaluate the genetic relationship between LLD and AD may provide insights into the molecular mechanisms that trigger disease progression as the population ages.


Association of Vitamin B12, Folate, and Sulfur Amino Acids With Brain Magnetic Resonance Imaging Measures in Older Adults: A Longitudinal Population-Based Study

B. Hooshmand, F. Mangialasche, G. Kalpouzos, et al.
JAMA Psychiatry. Published online April 27, 2016. http://dx.doi.org/10.1001/jamapsychiatry.2016.0274

Importance  Vitamin B12, folate, and sulfur amino acids may be modifiable risk factors for structural brain changes that precede clinical dementia.

Objective  To investigate the association of circulating levels of vitamin B12, red blood cell folate, and sulfur amino acids with the rate of total brain volume loss and the change in white matter hyperintensity volume as measured by fluid-attenuated inversion recovery in older adults.

Design, Setting, and Participants  The magnetic resonance imaging subsample of the Swedish National Study on Aging and Care in Kungsholmen, a population-based longitudinal study in Stockholm, Sweden, was conducted in 501 participants aged 60 years or older who were free of dementia at baseline. A total of 299 participants underwent repeated structural brain magnetic resonance imaging scans from September 17, 2001, to December 17, 2009.

Main Outcomes and Measures  The rate of brain tissue volume loss and the progression of total white matter hyperintensity volume.

Results  In the multi-adjusted linear mixed models, among 501 participants (300 women [59.9%]; mean [SD] age, 70.9 [9.1] years), higher baseline vitamin B12 and holotranscobalamin levels were associated with a decreased rate of total brain volume loss during the study period: for each increase of 1 SD, β (SE) was 0.048 (0.013) for vitamin B12 (P < .001) and 0.040 (0.013) for holotranscobalamin (P = .002). Increased total homocysteine levels were associated with faster rates of total brain volume loss in the whole sample (β [SE] per 1-SD increase, –0.035 [0.015]; P = .02) and with the progression of white matter hyperintensity among participants with systolic blood pressure greater than 140 mm Hg (β [SE] per 1-SD increase, 0.000019 [0.00001]; P = .047). No longitudinal associations were found for red blood cell folate and other sulfur amino acids.
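The “per 1-SD increase” effect sizes quoted above come from standardising each predictor before fitting, so that the coefficient equals the raw slope multiplied by the predictor's standard deviation. A minimal sketch on synthetic data (the level and slope values are invented, not taken from the SNAC-K cohort):

```python
import numpy as np

rng = np.random.default_rng(1)

b12 = rng.normal(350, 120, 500)   # synthetic baseline vitamin levels
slope_raw = 0.0004                # invented effect per raw unit of B12
outcome = 1.0 + slope_raw * b12 + rng.normal(0, 0.05, 500)

# Standardise the predictor (z-score), then fit a simple linear model
z = (b12 - b12.mean()) / b12.std()
beta_sd, intercept = np.polyfit(z, outcome, 1)

# The standardised coefficient equals raw slope * SD of the predictor
print(round(beta_sd, 4), round(slope_raw * b12.std(), 4))
```

Standardisation makes coefficients comparable across predictors measured on very different scales (B12 in pmol/L, homocysteine in µmol/L), which is why the paper reports everything per 1-SD increase.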

Conclusions and Relevance  This study suggests that both vitamin B12 and total homocysteine concentrations may be related to accelerated aging of the brain. Randomized clinical trials are needed to determine the importance of vitamin B12 supplementation on slowing brain aging in older adults.



Notes from Kurzweil

This vitamin stops the aging process in organs, say Swiss researchers

A potential breakthrough for regenerative medicine, pending further studies


Improved muscle stem cell numbers and muscle function in NR-treated aged mice: Newly regenerated muscle fibers 7 days after muscle damage in aged mice (left: control group; right: fed NR). (Scale bar = 50 μm). (credit: Hongbo Zhang et al./Science) http://www.kurzweilai.net/images/improved-muscle-fibers.png

EPFL researchers have restored the ability of mice organs to regenerate and extend life by simply administering nicotinamide riboside (NR) to them.

NR has been shown in previous studies to be effective in boosting metabolism and treating a number of degenerative diseases. Now, an article by PhD student Hongbo Zhang published in Science also describes the restorative effects of NR on the functioning of stem cells for regenerating organs.

As in all mammals, as mice age, the regenerative capacity of certain organs (such as the liver and kidneys) and muscles (including the heart) diminishes. Their ability to repair these tissues following an injury is also affected. This leads to many of the disorders typical of aging.

Mitochondria —> stem cells —> organs

To understand how the regeneration process deteriorates with age, Zhang teamed up with colleagues from ETH Zurich, the University of Zurich, and universities in Canada and Brazil. By using several biomarkers, they were able to identify the molecular chain that regulates how mitochondria — the “powerhouse” of the cell — function and how they change with age. “We were able to show for the first time that their ability to function properly was important for stem cells,” said EPFL professor Johan Auwerx, the study’s senior author.

Under normal conditions, these stem cells, reacting to signals sent by the body, regenerate damaged organs by producing new specific cells. At least in young bodies. “We demonstrated that fatigue in stem cells was one of the main causes of poor regeneration or even degeneration in certain tissues or organs,” said Zhang.

How to revitalize stem cells

Which is why the researchers wanted to “revitalize” stem cells in the muscles of elderly mice. And they did so by precisely targeting the molecules that help the mitochondria to function properly. “We gave nicotinamide riboside to 2-year-old mice, which is an advanced age for them,” said Zhang.

“This substance, which is close to vitamin B3, is a precursor of NAD+, a molecule that plays a key role in mitochondrial activity. And our results are extremely promising: muscular regeneration is much better in mice that received NR, and they lived longer than the mice that didn’t get it.”

Parallel studies have revealed a comparable effect on stem cells of the brain and skin. “This work could have very important implications in the field of regenerative medicine,” said Auwerx. This work on the aging process also has potential for treating diseases that can affect young people and prove fatal, like muscular dystrophy (myopathy).

So far, no negative side effects have been observed following the use of NR, even at high doses. But because it appears to boost the functioning of all cells, potentially including pathological ones, further in-depth studies are required.

Abstract of NAD+ repletion improves mitochondrial and stem cell function and enhances life span in mice

Adult stem cells (SCs) are essential for tissue maintenance and regeneration yet are susceptible to senescence during aging. We demonstrate the importance of the amount of the oxidized form of cellular nicotinamide adenine dinucleotide (NAD+) and its impact on mitochondrial activity as a pivotal switch to modulate muscle SC (MuSC) senescence. Treatment with the NAD+ precursor nicotinamide riboside (NR) induced the mitochondrial unfolded protein response (UPRmt) and synthesis of prohibitin proteins, and this rejuvenated MuSCs in aged mice. NR also prevented MuSC senescence in the Mdx mouse model of muscular dystrophy. We furthermore demonstrate that NR delays senescence of neural SCs (NSCs) and melanocyte SCs (McSCs), and increases mouse life span. Strategies that conserve cellular NAD+ may reprogram dysfunctional SCs and improve life span in mammals.


Hongbo Zhang, Dongryeol Ryu, Yibo Wu, Karim Gariani, Xu Wang, Peiling Luan, Davide D’Amico, Eduardo R. Ropelle, Matthias P. Lutolf, Ruedi Aebersold, Kristina Schoonjans, Keir J. Menzies, Johan Auwerx. NAD+ repletion improves mitochondrial and stem cell function and enhances life span in mice. Science, 2016. DOI: 10.1126/science.aaf2693


Enhancer–promoter interactions are encoded by complex genomic signatures on looping chromatin

Sean Whalen, Rebecca M. Truty & Katherine S. Pollard
Nature Genetics 2016; 48:488–496

Discriminating the gene target of a distal regulatory element from other nearby transcribed genes is a challenging problem with the potential to illuminate the causal underpinnings of complex diseases. We present TargetFinder, a computational method that reconstructs regulatory landscapes from diverse features along the genome. The resulting models accurately predict individual enhancer–promoter interactions across multiple cell lines with a false discovery rate up to 15 times smaller than that obtained using the closest gene. By evaluating the genomic features driving this accuracy, we uncover interactions between structural proteins, transcription factors, epigenetic modifications, and transcription that together distinguish interacting from non-interacting enhancer–promoter pairs. Most of this signature is not proximal to the enhancers and promoters but instead decorates the looping DNA. We conclude that complex but consistent combinations of marks on the one-dimensional genome encode the three-dimensional structure of fine-scale regulatory interactions.
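TargetFinder casts the problem as supervised classification: each candidate enhancer–promoter pair becomes a feature vector of genomic marks (at the enhancer, the promoter, and the looping “window” between them) labelled as interacting or not, and boosted trees learn to separate the two classes. The toy sketch below reproduces only the shape of that setup, with synthetic features and scikit-learn’s gradient boosting standing in for the published pipeline:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for (enhancer, promoter, window) feature vectors;
# true interactions are rare, hence the heavy class imbalance.
X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = GradientBoostingClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)

# Feature importances play the role of the paper's analysis of which
# marks distinguish interacting from non-interacting pairs.
top = np.argsort(clf.feature_importances_)[::-1][:5]
print("test accuracy:", round(clf.score(X_te, y_te), 3))
print("most informative features:", top)
```

Inspecting the importances is what allowed the authors to discover that much of the predictive signal decorates the looping DNA between the elements rather than the elements themselves.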



Crystal Resolution in Raman Spectroscopy for Pharmaceutical Analysis

Curator: Larry H. Bernstein, MD, FCAP


Investigating Crystallinity Using Low Frequency Raman Spectroscopy: Applications in Pharmaceutical Analysis


Figure 1: Illustration of an exemplar low-frequency Raman setup with a 785-nm laser.

The second system is based on a pre-built SureBlock XLF-CLM THz-Raman system from Ondax Inc. The laser (830 nm, 200 mW), cleanup filters, and laser line filters are all self-contained inside the instrument but operate on the same principles as the 785-nm system. The sample is arranged in a 180° backscattering geometry relative to a 10× microscope lens. This system is then coupled via a fiber-optic cable to a Princeton Instruments SP2150i spectrograph and PIXIS 100 CCD camera. The 0.15-m spectrograph is used in conjunction with either a 1200- or 1800-groove/mm blazed diffraction grating to adjust the resolution and spectral range.
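The trade-off behind swapping the 1200- and 1800-groove/mm gratings follows from the standard reciprocal-linear-dispersion relation for a grating spectrograph, dλ/dx = d·cos β/(m·f): a denser grating (smaller groove spacing d) spreads the spectrum over more of the detector, improving resolution while reducing spectral range. The sketch below uses the 0.15-m focal length mentioned above but assumes an illustrative diffraction angle β; it is a back-of-the-envelope estimate, not the instrument's specification.

```python
import math

def reciprocal_dispersion_nm_per_mm(groove_mm, focal_mm, order=1, beta_deg=10.0):
    """Reciprocal linear dispersion dlambda/dx = d * cos(beta) / (m * f),
    with d the groove spacing, m the diffraction order, f the focal length."""
    d_nm = 1e6 / groove_mm  # groove spacing in nm
    return d_nm * math.cos(math.radians(beta_deg)) / (order * focal_mm)

for grooves in (1200, 1800):
    disp = reciprocal_dispersion_nm_per_mm(grooves, focal_mm=150.0)
    print(f"{grooves} g/mm: {disp:.2f} nm/mm at the focal plane")
```

With these assumed numbers the 1800-g/mm grating yields roughly two-thirds the dispersion (nm per mm of detector) of the 1200-g/mm grating, i.e. finer spectral sampling over a narrower range, which is exactly why the authors switch gratings to adjust resolution versus coverage.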

Crystalline Versus Amorphous Samples

The Raman spectra of crystalline and amorphous solids differ greatly in the low-frequency region (see Figure 2) because of the highly ordered and highly disordered molecular environments, respectively, of these solids. However, the mid-frequency region can also be noticeably altered by the changing environment (Figure 3).




Figure 3: Raman spectra of griseofulvin

Ensuring Accuracy

A potential issue is optical artifacts, which may be identified by analyzing both the Stokes and anti-Stokes spectra. One advantage of the experimental setups described is that the signal from a sample can be measured within minutes and the measurement is nondestructive, allowing Raman spectra to be collected from a single sample with both techniques at virtually the same time. This approach permits the examination of low-frequency Raman data with 785-nm and 830-nm excitation and comparison with Fourier transform (FT)-Raman spectra, for which meaningful data can be collected down to a Raman shift of 50 cm-1. The benefits are demonstrated in Figure 4: each technique produces consistent bands with similar Raman shifts and relative intensities. Although Raman data were not collected below 50 cm-1 with the 1064-nm system, the bands at 69 and 96 cm-1 are consistent with the 785- and 830-nm data, and the latter two methods additionally show consistent bands around 32 and 46 cm-1.
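The Stokes/anti-Stokes check rests on a known physical relationship: the anti-Stokes to Stokes intensity ratio of a genuine Raman band follows the Boltzmann population of the vibrational levels, so a band whose measured ratio departs strongly from the prediction can be flagged as an artifact. A minimal sketch of the predicted ratio, assuming 785-nm excitation and room temperature:

```python
# Predicted anti-Stokes/Stokes intensity ratio from the Boltzmann
# distribution (with the usual fourth-power frequency factor). Bands whose
# measured ratio deviates strongly from this prediction are candidates for
# optical artifacts. Excitation wavelength and temperature are assumptions.
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e10    # speed of light in cm/s, so wavenumbers stay in cm^-1
KB = 1.380649e-23    # Boltzmann constant, J/K

def anti_stokes_ratio(shift_cm1, laser_nm=785.0, temp_k=295.0):
    """Predicted I(anti-Stokes)/I(Stokes) for a band at shift_cm1."""
    nu0 = 1e7 / laser_nm                              # laser wavenumber, cm^-1
    nu4 = ((nu0 + shift_cm1) / (nu0 - shift_cm1)) ** 4
    boltzmann = math.exp(-H * C * shift_cm1 / (KB * temp_k))
    return nu4 * boltzmann

# Low-frequency bands retain most of their anti-Stokes intensity, which is
# why this check is practical in the region discussed here.
for shift in (32, 96, 500):
    print(f"{shift:4d} cm-1: predicted ratio = {anti_stokes_ratio(shift):.3f}")
```

At 32 cm-1 the predicted ratio is close to unity, so a genuine low-frequency band should appear with comparable intensity on both sides of the laser line.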


Figure 4: Comparison of the low-frequency region of three Raman spectroscopic techniques.

Case Studies

So far, few studies have utilized low-frequency Raman spectroscopy in the analysis of pharmaceutical crystallinity. Nevertheless, the literature contains articles that demonstrate the promising applicability of the technique.

Mah and colleagues (38) studied the level of crystallinity of griseofulvin using low-frequency Raman spectroscopy with PLS analysis. In this study, a batch of amorphous griseofulvin (confirmed using X-ray powder diffractometry) was prepared by melting griseofulvin and rapidly cooling it with liquid nitrogen. Condensed water was removed by placing the sample over phosphorus pentoxide, and the glassy sample was then ground using a mortar and pestle. Calibration samples of 2%, 4%, 6%, 8%, and 10% crystallinity were then created through geometric mixing of the amorphous and crystalline samples; following this mixing, the samples were pressed into tablets. The tablets were then stored at different temperatures (30 °C, 35 °C, and 40 °C) at 0% humidity. Low-frequency 785-nm, mid-frequency 785-nm, and FT-Raman spectroscopies were performed simultaneously on each sample. After PLS analysis, limits of detection (LOD) and limits of quantification (LOQ) were calculated. The results showed that each of the three techniques was capable of quantifying crystallinity, and that FT-Raman and low-frequency Raman were able to detect and quantify crystallinity earlier than the mid-frequency 785-nm Raman technique. The respective LOD and LOQ values for FT-Raman, low-frequency Raman, and mid-frequency Raman were as follows: LOD, 0.6%, 1.1%, and 1.5%; LOQ, 1.8%, 3.4%, and 4.6%. The root mean squared errors of prediction (RMSEP) were also calculated and, like the LOD and LOQ values, indicated that the FT-Raman data had the lowest error, followed by low-frequency Raman, with mid-frequency Raman showing the largest errors of the three techniques. The recrystallization tests indicated that higher temperatures produced a distinct increase in the rate of recrystallization and that each technique gave similar results (within experimental error).
It is also important to note that each technique gave similar spectra (where applicable), which provides supporting evidence that the data are meaningful. Overall, the conclusion of this research was that low-frequency predictions of crystallinity are at least as accurate as those made using mid-frequency Raman techniques. It is arguable that low-frequency Raman is better because of its stronger spectral features, which are intrinsically linked with crystallinity.

Hédoux and colleagues (36) investigated the crystallinity of indomethacin using low-frequency Raman spectroscopy and compared the results with high-frequency data. The regions of interest were the 5–250 cm-1 and 1500–1750 cm-1 ranges. Samples of indomethacin were milled using a cryogenic mill to avoid mechanical heating of the sample, with fully amorphous samples obtained after 25 min of milling. The methods used in this study included Raman spectroscopy, isothermal differential scanning calorimetry (DSC), and X-ray diffractometry (XRD), as well as the milling technique. The primary objective was to use all of these techniques to monitor the crystallization of amorphous indomethacin to the more stable γ-state while the sample was at room temperature, well below the glass transition temperature (Tg = 43 °C). The results showed that low-frequency Raman spectroscopy is a very sensitive technique for identifying very small amounts of crystallinity within mostly amorphous samples. The data were supported by the well-established methods for monitoring crystallinity, XRD and DSC. The paper particularly noted the short acquisition times of low-frequency Raman spectroscopy compared with the other techniques used.

Low-frequency Raman spectroscopy was also used to monitor two polymorphic forms of caffeine after grinding and pressurization of the samples (39). Pressurization was performed hydrostatically using a gasketed membrane diamond anvil cell (MDAC), while ball milling was used to grind the sample. The analysis methods were low-frequency Raman spectroscopy and X-ray diffraction. Low-frequency Raman spectra revealed that, upon slight pressurization, caffeine form I transforms into a metastable state slightly different from form II, and that a disordered (amorphous) state is reached in both forms when pressurized above 2 GPa. In contrast, grinding transforms each form into the other, depending on the grinding time, and also generates an intermediate form that was observable only by low-frequency Raman spectroscopy. The caffeine data, as well as the low-frequency data obtained for indomethacin, were further discussed by Hédoux and colleagues (40).

Larkin and colleagues (41) used low-frequency Raman in conjunction with other techniques to characterize several different APIs and their various forms. The other techniques include FT-Raman spectroscopy, X-ray powder diffraction (XRPD), and single-crystal X-ray diffractometry. The APIs studied include carbamazepine, apixaban diacid co-crystals, theophylline, and caffeine and were prepared in various ways that are not detailed here. During this research, low-frequency Raman spectroscopy played an important role in understanding the structures while in their various forms. However, more importantly, low-frequency Raman spectroscopy produced information-rich regions below 200 cm-1 for each of the crystalline samples and noticeably broad features when the APIs were in solution.

Wang and colleagues (42) investigated the applicability of low-frequency Raman spectroscopy to the analysis of respirable dosage forms of various pharmaceuticals. The analyzed pharmaceuticals are used in the treatment of asthma or chronic obstructive pulmonary disease (COPD) and include salmeterol xinafoate, formoterol fumarate, glycopyrronium bromide, fluticasone propionate, mometasone furoate, and salbutamol sulfate. Various formulations of amino acid excipients were also analyzed. The results indicated that low-frequency Raman analysis was beneficial because of the large features found in this region, allowing reliable identification of each of the dosage forms. It also allowed unambiguous discrimination of two similar formulations of the same bronchodilator, salbutamol (marketed as Ventolin and Airomir).

Heyler and colleagues (43) collected both the low-frequency and fingerprint regions of Raman spectra from several polymorphs of carbamazepine, an anticonvulsant and mood stabilizer, and found that the polymorphs of this API could be distinguished effectively using these two regions. Similarly, Al-Dulaimi and colleagues (44) demonstrated that polymorphic forms of paracetamol, flufenamic acid, and imipramine hydrochloride could be screened using low-frequency Raman spectroscopy and only milligram quantities of each drug. In that study, paracetamol and flufenamic acid served as model compounds for comparison with a previously unstudied system (imipramine hydrochloride). Features in the low-frequency regions of the spectra differed significantly between the forms of each drug, indicating that the polymorphs are highly distinguishable by the technique. Like the other case studies described above, these investigations further demonstrate the utility of low-frequency Raman spectroscopy as a fast and effective method for screening pharmaceuticals for crystallinity.


Low-frequency Raman spectroscopy is a relatively new technique in the pharmaceutical field, as well as in studies of crystallinity generally, despite previous studies indicating its innate ability to identify crystalline materials and, in some cases, to quantify crystallinity. Arguably one of its most beneficial aspects is the comparatively short time needed to prepare and analyze samples relative to XRD or DSC. This should ensure the growing use of low-frequency Raman spectroscopy not only in pharmaceutical crystallinity studies but also in crystallinity studies of other substances.


  1. J.R. Ferraro and K. Nakamoto, Introductory Raman Spectroscopy, 1st Edition (Academic Press, San Diego, 1994).
  2. K.C. Gordon and C.M. McGoverin, Int. J. Pharm. 417, 151–162 (2011).
  3. D. Law et al., J. Pharm. Sci. 90, 1015–1025 (2001).
  4. G.H. Ward and R.K. Schultz, Pharm. Res. 12, 773–779 (1995).
  5. M.D. Ticehurst et al., Int. J. Pharm. 193, 247–259 (2000).
  6. M. Rani, R. Govindarajan, R. Surana, and R. Suryanarayanan, Pharm. Res. 23, 2356–2367 (2006).
  7. M.J. Pikal, in Polymorphs of Pharmaceutical Solids, H.G. Brittain, Ed. (Marcel Dekker, New York, 1999), pp. 395–419.
  8. M. Ohta and G. Buckton, Int. J. Pharm. 289, 31–38 (2005).
  9. J. Han and R. Suryanarayanan, Pharm. Dev. Technol. 3, 587–596 (1998).
  10. S. Debnath and R. Suryanarayanan, AAPS PharmSciTech. 5, 1–11 (2004).
  11. C.J. Strachan, T. Rades, D.A. Newnham, K.C. Gordon, M. Pepper, and P.F. Taday, Chem. Phys. Lett. 390, 20–24 (2004).
  12. Y.C. Shen, Int. J. Pharm. 417, 48–60 (2011).
  13. G.W. Chantry, in Submillimeter Spectroscopy: A Guide to the Theoretical and Experimental Physics of the Far Infrared, 1st Edition (Academic Press Inc. Ltd., Waltam, 1971).
  14. D. Tuschel, Spectroscopy 30(9), 18–31 (2015).
  15. P.M.A. Sherwood, Vibrational Spectroscopy of Solids (Cambridge University Press, Cambridge, 1972).
  16. L. Ho et al., J. Control. Release. 119, 253–261 (2007).
  17. V.P. Wallace et al., Faraday Discuss. 126, 255–263 (2004).
  18. F.S. Vieira and C. Pasquini, Anal. Chem. 84, 3780–3786 (2014).
  19. J. Darkwah, G. Smith, I. Ermolina, and M. Mueller-Holtz, Int. J. Pharm. 455, 357–364 (2013).
  20. S. Kojima, T. Shibata, H. Igawa, and T. Mori, IOP Conf. Ser. Mater. Sci. Eng. 54, 1–6 (2014).
  21. T. Shibata, T. Mori, and S. Kojima, Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 150, 207–211 (2015).
  22. S.P. Delaney, D. Pan, M. Galella, S.X. Yin, and T.M. Korter, Cryst. Growth Des. 12, 5017–5024 (2012).
  23. M.D. King, W.D. Buchanan, and T.M. Korter, Anal. Chem. 83, 3786–3792 (2011).
  24. C.J. Strachan et al., J. Pharm. Sci. 94, 837–846 (2005).
  25. C.M. McGoverin, T. Rades, and K.C. Gordon, J. Pharm. Sci. 97, 4598–4621 (2008).
  26. A. Heinz, C.J. Strachan, K.C. Gordon, and T. Rades, J. Pharm. Pharmacol. 971–988 (2009).
  27. H.G. Brittain, J. Pharm. Sci. 86, 405–412 (1997).
  28. L. Yu, S.M. Reutzel, and G.A. Stephenson, Pharm. Sci. Technol. Today 1, 118–127 (1998).
  29. M. Dracínský, E. Procházková, J. Kessler, J. Šebestík, P. Matejka, and P. Bour, J. Phys. Chem. B. 117, 7297–7307 (2013).
  30. P. Sharma et al., J. Raman Spectrosc. (2015), DOI: 10.1002/jrs.4834.
  31. A.P. Ayala, Vib. Spectrosc. 45, 112–116 (2007).
  32. J.F. Scott, Spex Speak. 17, 1–12 (1972).
  33. D.P. Strommen and K. Nakamoto, in Laboratory Raman Spectroscopy, 1st Edition (John Wiley & Sons Inc., New York, 1984).
  34. A.L. Glebov, O. Mokhun, A. Rapaport, S. Vergnole, V. Smirnov, and L.B. Glebov, Proc. SPIE. 8428, 84280C1–84280C11 (2012).
  35. E.P.J. Parrott and J.A. Zeitler, Appl. Spectrosc. 69, 1–25 (2015).
  36. A. Hédoux, L. Paccou, Y. Guinet, J.F. Willart, and M. Descamps, Eur. J. Pharm. Sci. 38, 156–164 (2009).
  37. R.L. McCreery, in Raman Spectroscopy for Chemical Analysis, 1st Edition (John Wiley & Sons Inc., New York, 2000).
  38. P.T. Mah, S.J. Fraser, M.E. Reish, T. Rades, K.C. Gordon, and C.J. Strachan, Vib. Spectrosc. 77, 10–16 (2015).
  39. A. Hédoux, A.A. Decroix, Y. Guinet, L. Paccou, P. Derollez, and M. Descamps, J. Phys. Chem. B. 115, 5746–5753 (2011).
  40. A. Hédoux, Y. Guinet, and M. Descamps, Int. J. Pharm. 417, 17–31 (2011).
  41. P.J. Larkin, M. Dabros, B. Sarsfield, E. Chan, J.T. Carriere, and B.C. Smith, Appl. Spectrosc. 68, 758–776 (2014).
  42. H. Wang, M. A. Boraey, L. Williams, D. Lechuga-Ballesteros, and R. Vehring, Int. J. Pharm. 469, 197–205 (2014).
  43. R. Heyler, J. Carriere, and B. Smith, in “Raman Technology for Today’s Spectroscopists,” supplement to Spectroscopy (June), 44–50 (2013).
  44. S. Al-Dulaimi, A. Aina, and J. Burley, CrystEngComm. 12, 1038–1040 (2010).


The drawing in Figure 1 is that of a six-membered ring, or hexagon. A carbon atom is located at each vertex of the hexagon and a hydrogen atom is attached to each carbon, although the hydrogens are not drawn in. The circle inside the ring indicates that the electrons are delocalized, as illustrated in Figure 2.


Figure 2: Top: The P orbitals on each of the six carbon atoms in benzene that contribute an electron to the ring. Bottom: the collection of delocalized P orbital electrons forming a cloud of electron density above and below the benzene ring.

Each of the carbon atoms in a benzene ring has an unhybridized p orbital containing a lone electron, oriented perpendicular to the plane of the ring, as seen at the top of Figure 2. There is enough orbital overlap that these electrons, rather than being confined between two carbon atoms as might be expected, instead delocalize and form clouds of electron density above and below the plane of the ring. This type of bonding is called aromatic bonding (2), and a ring that has aromatic bonding is called an aromatic ring. It is aromatic bonding that gives aromatic rings their unique structures, chemistry, and IR spectra. Benzene is simply a commonly found aromatic ring. Other types of aromatic molecules include polycyclic aromatic hydrocarbons (PAHs), such as naphthalene, which contain two or more fused benzene rings (fused means that adjacent rings share two carbon atoms), and heterocyclic aromatic rings, which contain a noncarbon atom such as nitrogen; pyridine is an example. The interpretation of the IR spectra of these latter aromatic molecules will be discussed in future articles.

The IR Spectrum of Benzene

The IR spectrum of benzene is shown in Figure 3.




Super-Resolution Fluorescence Microscopy: Where To Go Now?
Bernd Rieger, Quantitative Imaging Group Leader, Delft University of Technology


Keynote Presentation

From Molecules To Whole Organs
Francesco Pavone, Principal Investigator, LENS, University of Florence

Some examples of correlative microscopies combining linear and nonlinear techniques will be described. Particular attention will be devoted to Alzheimer's disease and to neural plasticity after damage as neurobiological applications.


Super-Resolution Imaging by dSTORM
Markus Sauer, Professor, Julius-Maximilians-Universität Würzburg


Coffee and Networking in Exhibition Hall


Correlated Fluorescence And X-Ray Tomography: Finding Molecules In Cellular CT Scans
Carolyn Larabell, Professor, University of California San Francisco


Integrating Advanced Fluorescence Microscopy Techniques Reveals Nanoscale Architecture And Mesoscale Dynamics Of Cytoskeletal Structures Promoting Cell Migration And Invasion
Alessandra Cambi, Assistant Professor, University of Nijmegen

This lecture will describe our efforts to exploit and integrate a variety of advanced microscopy techniques to unravel the nanoscale structural and dynamic complexity of individual podosomes as well as formation, architecture and function of mesoscale podosome clusters.


Multi-Photon-Like Fluorescence Microscopy Using Two-Step Imaging Probes
George Patterson, Investigator, National Institutes of Health


Lunch & Networking in Exhibition Hall


Technology Spotlight


3D Single Particle Tracking: Following Mitochondria in Zebrafish Embryos
Don Lamb, Professor, Ludwig-Maximilians-University


Visualizing Mechano-Biology: Quantitative Bioimaging Tools To Study The Impact Of Mechanical Stress On Cell Adhesion And Signalling
Bernhard Wehrle-Haller, Group Leader, University of Geneva


Superresolution Imaging Of Clathrin-Mediated Endocytosis In Yeast
Jonas Ries, Group Leader, EMBL Heidelberg

We use single-molecule localization microscopy to investigate the dynamic structural organization of the yeast endocytic machinery. We discovered a striking ring-shaped pre-patterning of the actin nucleation zone, which is key for efficient force generation and membrane invagination.


Coffee and Networking in Exhibition Hall


Optical Imaging of Molecular Mechanisms of Disease
Clemens Kaminski, Professor, University of Cambridge


3-D Optical Tomography For Ex Vivo And In Vivo Imaging
James McGinty, Professor, Imperial College London


End Of Day One

Wednesday, 15 June 2016


Imaging Gene Regulation in Living Cells at the Single Molecule Level
James Zhe Liu, Group Leader, Janelia Research Campus, Howard Hughes Medical Institute


Keynote Presentation

Super-Resolution Microscopy With DNA Molecules
Ralf Jungmann, Group Leader, Max Planck Institute of Biochemistry


A Revolutionary Miniaturised Instrument For Single-Molecule Localization Microscopy And FRET
Achillefs Kapanidis, Professor, University of Oxford


Coffee and Networking in Exhibition Hall


Democratising Live-Cell High-Speed Super-Resolution Microscopy
Ricardo Henriques, Group Leader, University College London




Information In Localisation Microscopy
Susan Cox, Professor, Kings College London


Lunch & Networking in Exhibition Hall


Technology Spotlight


High-Content Imaging Approaches For Drug Discovery For Neglected Tropical Diseases
Manu De Rycker, Team Leader, University of Dundee

The development of new drugs for intracellular parasitic diseases is hampered by difficulties in developing relevant high-throughput cell-based assays. Here we present how we have used image-based high-content screening approaches to address some of these issues.


High Resolution In Vivo Histology: Clinical in vivo Subcellular Imaging using Femtosecond Laser Multiphoton/CARS Tomography
Karsten König, Professor, Saarland University

We report on a certified, transportable, multipurpose medical nonlinear microscopic imaging system based on a femtosecond excitation source and a photonic crystal fiber, with multiple miniaturized time-correlated single-photon counting detectors.


Coffee and Networking in Exhibition Hall


Lateral Organization Of Plasma Membrane Constituents At The Nanoscale
Gerhard Schutz, Professor, Vienna University of Technology

It is of interest how proteins are spatially distributed over the membrane, and whether they conjoin and move as part of multi-molecular complexes. In my lecture, I will discuss methods for approaching the two questions, and provide biological examples.


Correlative Light And Electron Microscopy In Structural Cell Biology
Wanda Kukulski, Group Leader, University of Cambridge


Close of Conference


Read Full Post »

Imaging of Cancer Cells

Larry H. Bernstein, MD, FCAP, Curator



Microscope uses nanosecond-speed laser and deep learning to detect cancer cells more efficiently

April 13, 2016

Scientists at the California NanoSystems Institute at UCLA have developed a new technique for identifying cancer cells in blood samples faster and more accurately than the current standard methods.

In one common approach to testing for cancer, doctors add biochemicals to blood samples. Those biochemicals attach biological “labels” to the cancer cells, and those labels enable instruments to detect and identify them. However, the biochemicals can damage the cells and render the samples unusable for future analyses. There are other current techniques that don’t use labeling but can be inaccurate because they identify cancer cells based only on one physical characteristic.

Time-stretch quantitative phase imaging (TS-QPI) and analytics system

The new technique images cells without destroying them and can identify 16 physical characteristics — including size, granularity and biomass — instead of just one.

The new technique combines two components that were invented at UCLA:

A “photonic time stretch” microscope, which is capable of quickly imaging cells in blood samples. Invented by Bahram Jalali, professor and Northrop-Grumman Optoelectronics Chair in electrical engineering, it works by taking pictures of flowing blood cells using laser bursts (similar to how a camera uses a flash). Each flash lasts only nanoseconds (billionths of a second) to avoid damaging the cells, but that normally means the images are both too weak to be detected and too fast to be digitized by normal instrumentation. The new microscope overcomes those challenges with specially designed optics that amplify and boost the clarity of the images, and simultaneously slow them down enough to be detected and digitized at a rate of 36 million images per second.

A deep learning computer program, which identifies cancer cells with more than 95 percent accuracy. Deep learning is a form of artificial intelligence that uses complex algorithms to extract patterns and knowledge from rich multidimensional datasets, with the goal of achieving accurate decision making.

The study was published in the open-access journal Nature Scientific Reports. The researchers write in the paper that the system could lead to data-driven diagnoses by cells’ physical characteristics, which could allow quicker and earlier diagnoses of cancer, for example, and better understanding of the tumor-specific gene expression in cells, which could facilitate new treatments for disease.

The research was supported by NantWorks, LLC.


Abstract of Deep Learning in Label-free Cell Classification

Label-free cell analysis is essential to personalized genomics, cancer diagnostics, and drug development as it avoids adverse effects of staining reagents on cellular viability and cell signaling. However, currently available label-free cell assays mostly rely only on a single feature and lack sufficient differentiation. Also, the sample size analyzed by these assays is limited due to their low throughput. Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification. Our system captures quantitative optical phase and intensity images and extracts multiple biophysical features of individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is performed for cell classification. We compare various learning algorithms including artificial neural network, support vector machine, logistic regression, and a novel deep learning pipeline, which adopts global optimization of receiver operating characteristics. As a validation of the enhanced sensitivity and specificity of our system, we show classification of white blood T-cells against colon cancer cells, as well as lipid accumulating algal strains for biofuel production. This system opens up a new path to data-driven phenotypic diagnosis and better understanding of the heterogeneous gene expressions in cells.
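The learner comparison described in the abstract can be sketched with off-the-shelf models; the features here are synthetic stand-ins for the extracted biophysical features (size, granularity, biomass, and so on), and the specific models and settings are illustrative, not those of the authors' pipeline:

```python
# Sketch of comparing supervised learners (logistic regression, SVM, and a
# small neural network) on a hyperdimensional set of biophysical features.
# Data are synthetic stand-ins for the 16 measured features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=16, n_informative=8,
                           random_state=0)
models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                    random_state=0),
}
results = {}
for name, model in models.items():
    # 5-fold cross-validated accuracy for each candidate learner.
    results[name] = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {results[name]:.3f} mean CV accuracy")
```

The paper's pipeline goes further by tuning the decision rule against the ROC curve rather than plain accuracy, which is what balances sensitivity against specificity.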


Claire Lifan Chen, Ata Mahjoubfar, Li-Chia Tai, Ian K. Blaby, Allen Huang, Kayvan Reza Niazi & Bahram Jalali. Deep Learning in Label-free Cell Classification. Scientific Reports 6, Article number: 21471 (2016); doi:10.1038/srep21471 (open access)

Supplementary Information


Deep Learning in Label-free Cell Classification

Claire Lifan Chen, Ata Mahjoubfar, Li-Chia Tai, Ian K. Blaby, Allen Huang, Kayvan Reza Niazi & Bahram Jalali

Scientific Reports 6, Article number: 21471 (2016)    http://dx.doi.org/10.1038/srep21471

Deep learning extracts patterns and knowledge from rich multidimensional datasets. While it is extensively used for image recognition and speech processing, its application to label-free classification of cells has not been exploited. Flow cytometry is a powerful tool for large-scale cell analysis due to its ability to measure anisotropic elastic light scattering of millions of individual cells as well as emission of fluorescent labels conjugated to cells1,2. However, each cell is represented by a single value per detection channel (forward scatter, side scatter, and emission bands) and often requires labeling with specific biomarkers for acceptable classification accuracy1,3. Imaging flow cytometry4,5, on the other hand, captures images of cells, revealing significantly more information about the cells. For example, it can distinguish clusters and debris that would otherwise result in false positive identification in a conventional flow cytometer based on light scattering6.

In addition to classification accuracy, the throughput is another critical specification of a flow cytometer. Indeed high throughput, typically 100,000 cells per second, is needed to screen a large enough cell population to find rare abnormal cells that are indicative of early stage diseases. However there is a fundamental trade-off between throughput and accuracy in any measurement system7,8. For example, imaging flow cytometers face a throughput limit imposed by the speed of the CCD or the CMOS cameras, a number that is approximately 2000 cells/s for present systems9. Higher flow rates lead to blurred cell images due to the finite camera shutter speed. Many applications of flow analyzers such as cancer diagnostics, drug discovery, biofuel development, and emulsion characterization require classification of large sample sizes with a high-degree of statistical accuracy10. This has fueled research into alternative optical diagnostic techniques for characterization of cells and particles in flow.

Recently, our group has developed a label-free imaging flow-cytometry technique based on coherent optical implementation of the photonic time stretch concept11. This instrument overcomes the trade-off between sensitivity and speed by using Amplified Time-stretch Dispersive Fourier Transform12,13,14,15. In time stretch imaging16, the object’s spatial information is encoded in the spectrum of laser pulses within a pulse duration of sub-nanoseconds (Fig. 1). Each pulse representing one frame of the camera is then stretched in time so that it can be digitized in real-time by an electronic analog-to-digital converter (ADC). The ultra-fast pulse illumination freezes the motion of high-speed cells or particles in flow to achieve blur-free imaging. Detection sensitivity is challenged by the low number of photons collected during the ultra-short shutter time (optical pulse width) and the drop in the peak optical power resulting from the time stretch. These issues are solved in time stretch imaging by implementing a low noise-figure Raman amplifier within the dispersive device that performs time stretching8,11,16. Moreover, warped stretch transform17,18 can be used in time stretch imaging to achieve optical image compression and nonuniform spatial resolution over the field-of-view19. In the coherent version of the instrument, the time stretch imaging is combined with spectral interferometry to measure quantitative phase and intensity images in real-time and at high throughput20. Integrated with a microfluidic channel, the coherent time stretch imaging system in this work measures both the quantitative optical phase shift and loss of individual cells as a high-speed imaging flow cytometer, capturing 36 million images per second at flow rates as high as 10 meters per second, reaching up to 100,000 cells per second throughput.
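The core wavelength-to-time mapping behind time stretch can be illustrated with a back-of-the-envelope calculation; the dispersion and fiber-length values below are illustrative assumptions, not the parameters of the instrument described:

```python
# Illustrative sketch of time-stretch dispersive Fourier transform:
# group-velocity dispersion maps each wavelength in a broadband pulse to a
# distinct arrival time, t = D * L * (lambda - lambda0), so a single-pixel
# photodetector plus ADC can read the spectrum (and hence the line image)
# out serially. The dispersion and fiber length are assumed values.
import numpy as np

D = -100e-12 / (1e-9 * 1e3)   # dispersion: -100 ps/(nm km), in s/m^2
L = 10e3                      # dispersive fiber length: 10 km
lambda0 = 1550e-9             # center wavelength, m
wavelengths = lambda0 + np.linspace(-10e-9, 10e-9, 5)  # +/-10 nm band

# Relative arrival time of each spectral slice after the fiber.
arrival_times = D * L * (wavelengths - lambda0)
for lam, t in zip(wavelengths, arrival_times):
    print(f"{lam * 1e9:7.1f} nm -> {t * 1e9:+6.1f} ns")
```

A sub-nanosecond pulse spanning 20 nm is thus stretched over roughly 20 ns in this toy configuration, slow enough for a real-time ADC to digitize, which is the point made in the paragraph above.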

Figure 1: Time stretch quantitative phase imaging (TS-QPI) and analytics system; A mode-locked laser followed by a nonlinear fiber, an erbium doped fiber amplifier (EDFA), and a wavelength-division multiplexing (WDM) filter generate and shape a train of broadband optical pulses. http://www.nature.com/article-assets/npg/srep/2016/160315/srep21471/images_hires/m685/srep21471-f1.jpg


Box 1: The pulse train is spatially dispersed into a train of rainbow flashes illuminating the target as line scans. The spatial features of the target are encoded into the spectrum of the broadband optical pulses, each representing a one-dimensional frame. The ultra-short optical pulse illumination freezes the motion of cells during high speed flow to achieve blur-free imaging with a throughput of 100,000 cells/s. The phase shift and intensity loss at each location within the field of view are embedded into the spectral interference patterns using a Michelson interferometer. Box 2: The interferogram pulses were then stretched in time so that spatial information could be mapped into time through time-stretch dispersive Fourier transform (TS-DFT), and then captured by a single pixel photodetector and an analog-to-digital converter (ADC). The loss of sensitivity at high shutter speed is compensated by stimulated Raman amplification during time stretch. Box 3: (a) Pulse synchronization; the time-domain signal carrying serially captured rainbow pulses is transformed into a series of one-dimensional spatial maps, which are used for forming line images. (b) The biomass density of a cell leads to a spatially varying optical phase shift. When a rainbow flash passes through the cells, the changes in refractive index at different locations will cause phase walk-off at interrogation wavelengths. Hilbert transformation and phase unwrapping are used to extract the spatial phase shift. (c) Decoding the phase shift in each pulse at each wavelength and remapping it into a pixel reveals the protein concentration distribution within cells. The optical loss induced by the cells, embedded in the pulse intensity variations, is obtained from the amplitude of the slowly varying envelope of the spectral interferograms. Thus, quantitative optical phase shift and intensity loss images are captured simultaneously. Both images are calibrated based on the regions where the cells are absent. 
Cell features describing morphology, granularity, biomass, and so on are extracted from the images. (d) These biophysical features are used in a machine learning algorithm for high-accuracy label-free classification of the cells.
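The Hilbert-transform phase recovery in Box 3(b) can be sketched on a simulated interferogram; the carrier frequency and the cell's phase profile below are illustrative assumptions:

```python
# Sketch of the phase-recovery step: take the analytic signal of a
# simulated interferogram via the Hilbert transform, unwrap its phase, and
# remove the known fringe carrier; what remains is the phase shift the
# "cell" imprinted. All signal parameters are illustrative.
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 1e-9, 4000)                    # one stretched pulse (s)
carrier = 2 * np.pi * 20e9 * t                    # 20-GHz fringe carrier
cell_phase = 1.5 * np.exp(-((t - 0.5e-9) / 0.1e-9) ** 2)  # cell phase bump, rad
interferogram = np.cos(carrier + cell_phase)

analytic = hilbert(interferogram)                 # analytic signal
phase = np.unwrap(np.angle(analytic))             # unwrapped total phase
recovered = phase - carrier                       # remove the known carrier
recovered -= np.median(recovered[:200])           # reference to cell-free region
print(f"peak recovered phase shift: {recovered.max():.2f} rad (injected peak 1.50)")
```

In the real instrument the phase map is then converted, pixel by pixel, into the protein-concentration image described in Box 3(c).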

On another note, surface markers used to label cells, such as EpCAM21, are unavailable in some applications; for example, melanoma or pancreatic circulating tumor cells (CTCs) as well as some cancer stem cells are EpCAM-negative and will escape EpCAM-based detection platforms22. Furthermore, large-population cell sorting opens the doors to downstream operations, where the negative impacts of labels on cellular behavior and viability are often unacceptable23. Cell labels may cause activating/inhibitory signal transduction, altering the behavior of the desired cellular subtypes, potentially leading to errors in downstream analysis, such as DNA sequencing and subpopulation regrowth. In this way, quantitative phase imaging (QPI) methods24,25,26,27 that categorize unlabeled living cells with high accuracy are needed. Coherent time stretch imaging is a method that enables quantitative phase imaging at ultrahigh throughput for non-invasive label-free screening of large number of cells.

In this work, information from the quantitative optical loss and phase images is fused into expert-designed features, leading to record label-free classification accuracy when combined with deep learning. Image mining techniques are applied, for the first time, to time stretch quantitative phase imaging to measure biophysical attributes including protein concentration, optical loss, and morphological features of single cells at an ultrahigh flow rate and in a label-free fashion. These attributes differ widely28,29,30,31 among cells, and their variations reflect important information about genotypes and physiological stimuli32. The multiplexed biophysical features thus lead to an information-rich hyperdimensional representation of the cells for label-free classification with high statistical precision.

We further improved the accuracy, repeatability, and the balance between sensitivity and specificity of our label-free cell classification with a novel machine learning pipeline, which harnesses the advantages of multivariate supervised learning as well as unique training by evolutionary global optimization of receiver operating characteristics (ROC). To demonstrate the sensitivity, specificity, and accuracy of multi-feature label-free flow cytometry using our technique, we classified (1) OT-II hybridoma T-lymphocytes and SW-480 colon cancer epithelial cells, and (2) Chlamydomonas reinhardtii algal cells (herein referred to as Chlamydomonas) based on their lipid content, which is related to the yield in biofuel production. Our preliminary results show that, compared to classification by individual biophysical parameters, our label-free hyperdimensional technique improves the detection accuracy from 77.8% to 95.5%; in other words, it reduces the classification inaccuracy by about five times.
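The ROC-based objective used to train and evaluate the classifier above is usually summarized by the area under the ROC curve (AUC). A minimal sketch (plain Python, not the authors' pipeline) makes the metric concrete: AUC equals the probability that a randomly chosen positive sample outscores a randomly chosen negative one.

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney view: the probability that a randomly
    chosen positive sample outscores a randomly chosen negative one
    (ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # → 1.0 (perfect separation)
print(roc_auc([0.9, 0.2, 0.8, 0.3], [1, 1, 0, 0]))  # → 0.5 (chance level)
```

An AUC of 1.0 means the classifier ranks every positive above every negative; 0.5 is no better than chance, which is why pushing AUC toward 1 improves sensitivity and specificity jointly.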


Feature Extraction

The decomposed components of sequential line scans form pairs of spatial maps, namely, optical phase and loss images as shown in Fig. 2 (see Section Methods: Image Reconstruction). These images are used to obtain biophysical fingerprints of the cells8,36. With domain expertise, raw images are fused and transformed into a suitable set of biophysical features, listed in Table 1, which the deep learning model further converts into learned features for improved classification.
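To make the morphological side of such features concrete, here is a small illustrative sketch (hypothetical code, not the authors' implementation) computing area, perimeter, and circularity from a binary cell mask, in the style of the Table 1 feature list:

```python
import math

def cell_features(mask):
    """Area, perimeter, and circularity from a binary cell mask: three
    morphological features in the style of Table 1 (illustrative only)."""
    h, w = len(mask), len(mask[0])
    area = sum(sum(row) for row in mask)
    perimeter = 0  # count exposed edges of foreground pixels
    for i in range(h):
        for j in range(w):
            if mask[i][j]:
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if not (0 <= ni < h and 0 <= nj < w and mask[ni][nj]):
                        perimeter += 1
    # Circularity = 4*pi*area / perimeter^2; 1.0 for a perfect circle.
    circ = 4 * math.pi * area / perimeter ** 2 if perimeter else 0.0
    return {"area": area, "perimeter": perimeter, "circularity": circ}

print(cell_features([[1, 1], [1, 1]]))  # area 4, perimeter 8, circularity ≈ 0.785
```

In the actual pipeline such handcrafted features are computed from the reconstructed phase and loss images, then passed to the deep network as inputs.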

The new technique combines two components that were invented at UCLA:

A “photonic time stretch” microscope, which is capable of quickly imaging cells in blood samples. Invented by Bahram Jalali, professor and Northrop-Grumman Optoelectronics Chair in electrical engineering, it works by taking pictures of flowing blood cells using laser bursts (similar to how a camera uses a flash). Each flash only lasts nanoseconds (billionths of a second) to avoid damage to cells, but that normally means the images are both too weak to be detected and too fast to be digitized by normal instrumentation. The new microscope overcomes those challenges by using specially designed optics that amplify and boost the clarity of the images, and simultaneously slow them down enough to be detected and digitized at a rate of 36 million images per second.

A deep learning computer program, which identifies cancer cells with more than 95 percent accuracy. Deep learning is a form of artificial intelligence that uses complex algorithms to extract patterns and knowledge from rich multidimensional datasets, with the goal of achieving accurate decision making.

The study was published in the open-access journal Nature Scientific Reports. The researchers write in the paper that the system could lead to data-driven diagnoses by cells’ physical characteristics, which could allow quicker and earlier diagnoses of cancer, for example, and better understanding of the tumor-specific gene expression in cells, which could facilitate new treatments for disease.

The research was supported by NantWorks, LLC.



The optical loss images of the cells are affected by the attenuation of multiplexed wavelength components passing through the cells. The attenuation itself is governed by the absorption of the light in cells as well as the scattering from the surface of the cells and from the internal cell organelles. The optical loss image is derived from the low frequency component of the pulse interferograms. The optical phase image is extracted from the analytic form of the high frequency component of the pulse interferograms using Hilbert Transformation, followed by a phase unwrapping algorithm. Details of these derivations can be found in Section Methods. Also, supplementary Videos 1 and 2 show measurements of cell-induced optical path length difference by TS-QPI at four different points along the rainbow for OT-II and SW-480, respectively.
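The Hilbert-transform-plus-unwrapping step described above can be sketched with a toy example: an FFT-based Hilbert transform yields the analytic signal of the fringe pattern, and the unwrapped angle of that signal recovers the slowly varying phase. This is a NumPy sketch under simplifying assumptions (a clean single-carrier fringe, no noise, a hypothetical Gaussian "cell" phase bump), not the paper's reconstruction code:

```python
import numpy as np

def extract_phase(fringe):
    """Analytic signal via an FFT-based Hilbert transform, followed by
    phase unwrapping: the two steps named in the text (toy version)."""
    n = len(fringe)
    spectrum = np.fft.fft(fringe)
    # Build the Hilbert filter: keep DC, double positive frequencies.
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(spectrum * h)
    return np.unwrap(np.angle(analytic))

# Synthetic interferogram: a fringe carrier plus a slow, cell-like phase bump.
x = np.arange(1024) / 1024
true_phase = 1.5 * np.exp(-((x - 0.5) / 0.1) ** 2)   # hypothetical cell shift
fringe = np.cos(2 * np.pi * 80 * x + true_phase)
recovered = extract_phase(fringe) - 2 * np.pi * 80 * x  # subtract the carrier
print(round(float(recovered.max()), 1))  # recovers approximately the injected 1.5 rad peak
```

The recovered profile matches the injected phase bump to within edge effects, illustrating why the method can read out optical path length differences quantitatively.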

Table 1: List of extracted features.

(Table columns: Feature Name, Description, Category.)


Figure 3: Biophysical features formed by image fusion.

(a) Pairwise correlation matrix visualized as a heat map. The map depicts the correlation between all 16 major features extracted from the quantitative images. Diagonal elements of the matrix represent the correlation of each parameter with itself, i.e., the autocorrelation. The subsets in box 1, box 2, and box 3 show high correlation because they are mainly related to morphological, optical phase, and optical loss feature categories, respectively. (b) Ranking of biophysical features based on their AUCs in single-feature classification. Blue bars show the performance of the morphological parameters, which include diameter along the interrogation rainbow, diameter along the flow direction, tight cell area, loose cell area, perimeter, circularity, major axis length, orientation, and median radius. As expected, morphology contains the most information, but other biophysical features can contribute to improved performance of label-free cell classification. Orange bars show optical phase shift features, i.e., optical path length differences and refractive index difference. Green bars show optical loss features representing scattering and absorption by the cell. The best-performing feature in each of these three categories is marked in red.

Figure 4: Machine learning pipeline. Information of quantitative optical phase and loss images are fused to extract multivariate biophysical features of each cell, which are fed into a fully-connected neural network.

The neural network maps input features by a chain of weighted sum and nonlinear activation functions into learned feature space, convenient for classification. This deep neural network is globally trained via area under the curve (AUC) of the receiver operating characteristics (ROC). Each ROC curve corresponds to a set of weights for connections to an output node, generated by scanning the weight of the bias node. The training process maximizes AUC, pushing the ROC curve toward the upper left corner, which means improved sensitivity and specificity in classification.
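Scanning the bias-node weight to trace out an ROC curve is equivalent to sweeping a decision threshold over the output scores and recording the resulting (false-positive rate, true-positive rate) pairs. A minimal sketch (illustrative, not the paper's training code):

```python
def roc_curve(scores, labels):
    """Sweep a decision threshold (equivalent to scanning the bias-node
    weight) and record (false-positive rate, true-positive rate) pairs."""
    thresholds = sorted(set(scores), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    points = [(0.0, 0.0)]
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    points.append((1.0, 1.0))
    return points

pts = roc_curve([0.9, 0.7, 0.4, 0.2], [1, 1, 0, 0])
print(pts)  # reaches (0.0, 1.0) before any false positive: a perfect ROC curve
```

Training that maximizes the area under this curve pushes it toward the upper-left corner, i.e., toward high sensitivity at low false-positive rates.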

How to cite this article: Chen, C. L. et al. Deep Learning in Label-free Cell Classification. Sci. Rep. 6, 21471; http://dx.doi.org/10.1038/srep21471


Computer Algorithm Helps Characterize Cancerous Genomic Variations


To better characterize the functional context of genomic variations in cancer, researchers developed a new computer algorithm called REVEALER. [UC San Diego Health]

Scientists at the University of California San Diego School of Medicine and the Broad Institute say they have developed a new computer algorithm—REVEALER—to better characterize the functional context of genomic variations in cancer. The tool, described in a paper (“Characterizing Genomic Alterations in Cancer by Complementary Functional Associations”) published in Nature Biotechnology, is designed to help researchers identify groups of genetic variations that together associate with a particular way cancer cells get activated, or how they respond to certain treatments.

REVEALER is available for free to the global scientific community via the bioinformatics software portal GenePattern.org.

“This computational analysis method effectively uncovers the functional context of genomic alterations, such as gene mutations, amplifications, or deletions, that drive tumor formation,” said senior author Pablo Tamayo, Ph.D., professor and co-director of the UC San Diego Moores Cancer Center Genomics and Computational Biology Shared Resource.

Dr. Tamayo and team tested REVEALER using The Cancer Genome Atlas (TCGA), the NIH’s database of genomic information from more than 500 human tumors representing many cancer types. REVEALER revealed gene alterations associated with the activation of several cellular processes known to play a role in tumor development and response to certain drugs. Some of these gene mutations were already known, but others were new.

For example, the researchers discovered new activating genomic abnormalities for beta-catenin, a cancer-promoting protein, and for the oxidative stress response that some cancers hijack to increase their viability.

REVEALER requires as input high-quality genomic data and a significant number of cancer samples, which can be a challenge, according to Dr. Tamayo. But REVEALER is more sensitive at detecting similarities between different types of genomic features and less dependent on simplifying statistical assumptions, compared to other methods, he adds.

“This study demonstrates the potential of combining functional profiling of cells with the characterizations of cancer genomes via next-generation sequencing,” said co-senior author Jill P. Mesirov, Ph.D., professor and associate vice chancellor for computational health sciences at UC San Diego School of Medicine.


Characterizing genomic alterations in cancer by complementary functional associations

Jong Wook Kim, Olga B Botvinnik, Omar Abudayyeh, Chet Birger, et al.

Nature Biotechnology (2016)    http://dx.doi.org/10.1038/nbt.3527

Systematic efforts to sequence the cancer genome have identified large numbers of mutations and copy number alterations in human cancers. However, elucidating the functional consequences of these variants, and their interactions to drive or maintain oncogenic states, remains a challenge in cancer research. We developed REVEALER, a computational method that identifies combinations of mutually exclusive genomic alterations correlated with functional phenotypes, such as the activation or gene dependency of oncogenic pathways or sensitivity to a drug treatment. We used REVEALER to uncover complementary genomic alterations associated with the transcriptional activation of β-catenin and NRF2, MEK-inhibitor sensitivity, and KRAS dependency. REVEALER successfully identified both known and new associations, demonstrating the power of combining functional profiles with extensive characterization of genomic alterations in cancer genomes.
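The core idea, greedily accumulating complementary, mutually exclusive binary alterations whose combination best matches a target phenotype profile, can be caricatured in a few lines. This sketch is purely illustrative: it substitutes Pearson correlation for REVEALER's information coefficient, and all sample data and alteration names are hypothetical.

```python
def greedy_complementary_features(target, features, k=2):
    """Toy greedy search in the spirit of REVEALER: repeatedly add the
    binary alteration whose OR-combination with the already-chosen set
    best correlates with the target phenotype profile. Pearson correlation
    stands in for REVEALER's information coefficient."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a) ** 0.5
        vb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (va * vb) if va and vb else 0.0

    chosen, summary = [], [0] * len(target)
    remaining = dict(features)
    for _ in range(k):
        name = max(remaining, key=lambda f: corr(
            target, [a | b for a, b in zip(summary, remaining[f])]))
        chosen.append(name)
        summary = [a | b for a, b in zip(summary, remaining.pop(name))]
    return chosen

# Toy data: two mutually exclusive alterations jointly explain the target.
target = [1, 1, 1, 1, 0, 0, 0, 0]
alts = {
    "mutA": [1, 1, 0, 0, 0, 0, 0, 0],   # explains samples 1-2
    "ampB": [0, 0, 1, 1, 0, 0, 0, 0],   # complements mutA on samples 3-4
    "delC": [0, 0, 0, 0, 1, 0, 0, 0],   # anti-correlated distractor
}
print(greedy_complementary_features(target, alts))  # → ['mutA', 'ampB']
```

Neither alteration alone matches the target well, but their union does, which is exactly the kind of complementary association the method is designed to surface.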


Figure 2: REVEALER results for transcriptional activation of β-catenin in cancer.

(a) This heatmap illustrates the use of the REVEALER approach to find complementary genomic alterations that match the transcriptional activation of β-catenin in cancer. The target profile is a TCF4 reporter that provides an estimate of…


An imaging-based platform for high-content, quantitative evaluation of therapeutic response in 3D tumour models

Jonathan P. Celli, Imran Rizvi, Adam R. Blanden, Iqbal Massodi, Michael D. Glidden, Brian W. Pogue & Tayyaba Hasan

Scientific Reports 4, 3751 (2014)    http://dx.doi.org/10.1038/srep03751

While it is increasingly recognized that three-dimensional (3D) cell culture models recapitulate drug responses of human cancers with more fidelity than monolayer cultures, a lack of quantitative analysis methods limits their implementation for reliable and routine assessment of emerging therapies. Here, we introduce an approach based on computational analysis of fluorescence image data to provide high-content readouts of dose-dependent cytotoxicity, growth inhibition, treatment-induced architectural changes and size-dependent response in 3D tumour models. We demonstrate this approach in adherent 3D ovarian and pancreatic multiwell extracellular matrix tumour overlays subjected to a panel of clinically relevant cytotoxic modalities and appropriately designed controls for reliable quantification of fluorescence signal. This streamlined methodology reads out the high density of information embedded in 3D culture systems, while maintaining a level of speed and efficiency traditionally achieved with global colorimetric reporters in order to facilitate broader implementation of 3D tumour models in therapeutic screening.

The attrition rates for preclinical development of oncology therapeutics are particularly dismal due to a complex set of factors that includes 1) the failure of pre-clinical models to recapitulate determinants of in vivo treatment response, and 2) the limited ability of available assays to extract treatment-specific data integral to the complexities of therapeutic responses1,2,3. Three-dimensional (3D) tumour models have been shown to restore crucial stromal interactions that are missing in the more commonly used 2D cell culture and that influence tumour organization and architecture4,5,6,7,8, as well as therapeutic response9,10, multicellular resistance (MCR)11,12, drug penetration13,14, hypoxia15,16, and anti-apoptotic signaling17. However, such sophisticated models can only have an impact on therapeutic guidance if they are accompanied by robust quantitative assays, not only for cell viability but also for providing mechanistic insights related to the outcomes. While numerous assays for drug discovery exist18, they are generally not developed for use in 3D systems and are often inherently unsuitable. For example, colorimetric conversion products have been noted to bind to extracellular matrix (ECM)19, and traditional colorimetric cytotoxicity assays reduce treatment response to a single number reflecting a biochemical event that has been equated to cell viability (e.g. tetrazolium salt conversion20). Such approaches fail to provide insight into the spatial patterns of response within colonies, morphological or structural effects of drug response, or how overall culture viability may be obscuring the status of sub-populations that are resistant or partially responsive. Hence, the full benefit of implementing 3D tumour models in therapeutic development has yet to be realized for lack of analytical methods that describe the very aspects of treatment outcome that these systems restore.

Motivated by these factors, we introduce a new platform for quantitative in situ treatment assessment (qVISTA) in 3D tumour models based on computational analysis of information-dense biological image datasets (bioimage-informatics)21,22. This methodology provides software end-users with multiple levels of complexity in output content, from rapidly-interpreted dose response relationships to higher content quantitative insights into treatment-dependent architectural changes, spatial patterns of cytotoxicity within fields of multicellular structures, and statistical analysis of nodule-by-nodule size-dependent viability. The approach introduced here is cognizant of tradeoffs between optical resolution, data sampling (statistics), depth of field, and widespread usability (instrumentation requirement). Specifically, it is optimized for interpretation of fluorescent signals for disease-specific 3D tumour micronodules that are sufficiently small that thousands can be imaged simultaneously with little or no optical bias from widefield integration of signal along the optical axis of each object. At the core of our methodology is the premise that the copious numerical readouts gleaned from segmentation and interpretation of fluorescence signals in these image datasets can be converted into usable information to classify treatment effects comprehensively, without sacrificing the throughput of traditional screening approaches. It is hoped that this comprehensive treatment-assessment methodology will have significant impact in facilitating more sophisticated implementation of 3D cell culture models in preclinical screening by providing a level of content and biological relevance impossible with existing assays in monolayer cell culture in order to focus therapeutic targets and strategies before costly and tedious testing in animal models.
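The segmentation underlying such nodule-by-nodule readouts can be illustrated with a toy connected-components pass over a thresholded image (a minimal stand-in, not the qVISTA code; the mask values are hypothetical):

```python
def label_nodules(mask):
    """4-connected component labeling (iterative flood fill) of a binary
    image: a minimal stand-in for the nodule-segmentation step."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not labels[i][j]:
                count += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y][x] and not labels[y][x]:
                        labels[y][x] = count
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, count

# Two multicellular nodules and one stray cell in a toy field of view.
mask = [[1, 1, 0, 0, 1],
        [1, 1, 0, 0, 1],
        [0, 0, 0, 0, 0],
        [0, 0, 1, 0, 0]]
labels, n = label_nodules(mask)
sizes = {}
for row in labels:
    for v in row:
        if v:
            sizes[v] = sizes.get(v, 0) + 1
print(n, sizes)  # → 3 {1: 4, 2: 2, 3: 1}
```

Once each nodule carries a label and a size, per-nodule fluorescence ratios can be tabulated to give the size-dependent response statistics described in the text.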

Using two different cell lines and as depicted in Figure 1, we adopt an ECM overlay method originally pioneered for 3D breast cancer models23, and developed in our previous studies to model micrometastatic ovarian cancer19,24. This system leads to the formation of adherent multicellular 3D acini in approximately the same focal plane atop a laminin-rich ECM bed, implemented here in glass-bottom multiwell imaging plates for automated microscopy. The 3D nodules resulting from restoration of ECM signaling5,8 are heterogeneous in size24, in contrast to other 3D spheroid methods, such as rotary or hanging drop cultures10, in which cells are driven to aggregate into uniformly sized spheroids due to lack of an appropriate substrate to adhere to. Although the latter processes are also biologically relevant, it is the adherent tumour populations characteristic of advanced metastatic disease that are more likely to be managed with medical oncology, and these are the focus of therapeutic evaluation herein. The heterogeneity in 3D structures formed via ECM overlay is validated here by endoscopic imaging of in vivo tumours in orthotopic xenografts derived from the same cells (OVCAR-5).


Figure 1: A simplified schematic flow chart of imaging-based quantitative in situ treatment assessment (qVISTA) in 3D cell culture.

(This figure was prepared in Adobe Illustrator® software by MD Glidden, JP Celli and I Rizvi). A detailed breakdown of the image processing (Step 4) is provided in Supplemental Figure 1.

A critical component of the imaging-based strategy introduced here is the rational tradeoff of image-acquisition parameters for field of view, depth of field, and optical resolution, together with the development of image processing routines for appropriate removal of background, scaling of fluorescence signals from more than one channel, and reliable segmentation of nodules. Obtaining depth-resolved 3D structures for each nodule at sub-micron lateral resolution with a laser-scanning confocal system would require ~40 hours (approximately 100 fields per well with a 20× objective, times 1 minute/field for a coarse z-stack, times 24 wells) to image a single plate with the same coverage achieved in this study. Even if the resources were available to devote to such time-intensive image acquisition, not to mention the processing, the optical properties of the fluorophores would change during the required time frame, even with environmental controls to maintain culture viability during such extended imaging. The approach developed here, with a mind toward adaptation into high-throughput screening, provides a rational balance of speed, requiring less than 30 minutes/plate, and statistical rigour, providing images of thousands of nodules in this time, as required for the high-content analysis developed in this study. These parameters can be further optimized for specific scenarios. For example, we obtain the same number of images in a 96-well plate as for a 24-well plate by acquiring only a single field from each well, rather than 4 stitched fields. This quadruples the number of conditions assayed in a single run, at the expense of the number of nodules per condition, and therefore the ability to obtain statistical data sets for size-dependent response, Dfrac, and other segmentation-dependent numerical readouts.
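The ~40-hour confocal estimate follows directly from the acquisition parameters quoted above:

```python
# Confocal acquisition-time estimate from the text.
fields_per_well = 100   # fields per well with a 20x objective
minutes_per_field = 1   # coarse z-stack per field
wells = 24              # wells per plate
confocal_minutes = fields_per_well * minutes_per_field * wells
print(confocal_minutes / 60)  # → 40.0 hours, versus <0.5 h for the widefield approach
```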


We envision that the system for high-content interrogation of therapeutic response in 3D cell culture could have widespread impact in multiple arenas from basic research to large scale drug development campaigns. As such, the treatment assessment methodology presented here does not require extraordinary optical instrumentation or computational resources, making it widely accessible to any research laboratory with an inverted fluorescence microscope and modestly equipped personal computer. And although we have focused here on cancer models, the methodology is broadly applicable to quantitative evaluation of other tissue models in regenerative medicine and tissue engineering. While this analysis toolbox could have impact in facilitating the implementation of in vitro 3D models in preclinical treatment evaluation in smaller academic laboratories, it could also be adopted as part of the screening pipeline in large pharma settings. With the implementation of appropriate temperature controls to handle basement membranes in current robotic liquid handling systems, our analyses could be used in ultra high-throughput screening. In addition to removing non-efficacious potential candidate drugs earlier in the pipeline, this approach could also yield the additional economic advantage of minimizing the use of costly time-intensive animal models through better estimates of dose range, sequence and schedule for combination regimens.


Microscope Uses AI to Find Cancer Cells More Efficiently

Thu, 04/14/2016 – by Shaun Mason


Scientists at the California NanoSystems Institute at UCLA have developed a new technique for identifying cancer cells in blood samples faster and more accurately than the current standard methods.

In one common approach to testing for cancer, doctors add biochemicals to blood samples. Those biochemicals attach biological “labels” to the cancer cells, and those labels enable instruments to detect and identify them. However, the biochemicals can damage the cells and render the samples unusable for future analyses.

There are other current techniques that don’t use labeling but can be inaccurate because they identify cancer cells based only on one physical characteristic.

The new technique images cells without destroying them and can identify 16 physical characteristics — including size, granularity and biomass — instead of just one. It combines two components that were invented at UCLA: a photonic time stretch microscope, which is capable of quickly imaging cells in blood samples, and a deep learning computer program that identifies cancer cells with over 95 percent accuracy.

Deep learning is a form of artificial intelligence that uses complex algorithms to extract meaning from data with the goal of achieving accurate decision making.

The study, which was published in the journal Nature Scientific Reports, was led by Bahram Jalali, professor and Northrop-Grumman Optoelectronics Chair in electrical engineering; Claire Lifan Chen, a UCLA doctoral student; and Ata Mahjoubfar, a UCLA postdoctoral fellow.

Photonic time stretch was invented by Jalali, and he holds a patent for the technology. The new microscope is just one of many possible applications; it works by taking pictures of flowing blood cells using laser bursts in the way that a camera uses a flash. This process happens so quickly — in nanoseconds, or billionths of a second — that the images would be too weak to be detected and too fast to be digitized by normal instrumentation.

The new microscope overcomes those challenges using specially designed optics that boost the clarity of the images and simultaneously slow them enough to be detected and digitized at a rate of 36 million images per second. It then uses deep learning to distinguish cancer cells from healthy white blood cells.

“Each frame is slowed down in time and optically amplified so it can be digitized,” Mahjoubfar said. “This lets us perform fast cell imaging that the artificial intelligence component can distinguish.”

Normally, taking pictures in such minuscule periods of time would require intense illumination, which could destroy live cells. The UCLA approach also eliminates that problem.

“The photonic time stretch technique allows us to identify rogue cells in a short time with low-level illumination,” Chen said.

The researchers write in the paper that the system could lead to data-driven diagnoses by cells’ physical characteristics, which could allow quicker and earlier diagnoses of cancer, for example, and better understanding of the tumor-specific gene expression in cells, which could facilitate new treatments for disease. See also http://www.nature.com/article-assets/npg/srep/2016/160315/srep21471/images_hires/m685/srep21471-f1.jpg

Chen, C. L. et al. Deep Learning in Label-free Cell Classification. Sci. Rep. 6, 21471; http://dx.doi.org/10.1038/srep21471



Read Full Post »

Conduction, graphene, elements and light

Larry H. Bernstein, MD, FCAP, Curator



New 2D material could upstage graphene   Mar 25, 2016

Can function as a conductor or semiconductor, is extremely stable, and uses lightweight, inexpensive, earth-abundant elements
The atoms in the new structure are arranged in a hexagonal pattern as in graphene, but that is where the similarity ends. The three elements forming the new material all have different sizes; the bonds connecting the atoms are also different. As a result, the sides of the hexagons formed by these atoms are unequal, unlike in graphene. (credit: Madhu Menon)

A new one-atom-thick flat material made up of silicon, boron, and nitrogen can function as a conductor or semiconductor (unlike graphene) and could upstage graphene and advance digital technology, say scientists at the University of Kentucky, Daimler in Germany, and the Institute for Electronic Structure and Laser (IESL) in Greece.

Reported in Physical Review B, Rapid Communications, the new Si2BN material was discovered in theory (it has not yet been made in the lab). It uses lightweight, inexpensive, earth-abundant elements and is extremely stable, a property many other graphene alternatives lack, says University of Kentucky Center for Computational Sciences physicist Madhu Menon, PhD.

Limitations of other 2D semiconducting materials

A search for new 2D semiconducting materials has led researchers to a new class of three-layer materials called transition-metal dichalcogenides (TMDCs). TMDCs are mostly semiconductors and can be made into digital processors with greater efficiency than anything possible with silicon. However, these are much bulkier than graphene and made of materials that are not necessarily earth-abundant and inexpensive.

Other graphene-like materials have been proposed but lack the strengths of the new material. Silicene, for example, does not have a flat surface and eventually forms a 3D surface. Other materials are highly unstable, some remaining stable for only a few hours at most.

The new Si2BN material is metallic, but by attaching other elements on top of the silicon atoms, its band gap can be changed (from conductor to semiconductor, for example) — a key advantage over graphene for electronics applications and solar-energy conversion.

The presence of silicon also suggests possible seamless integration with current silicon-based technology, allowing the industry to slowly move away from silicon, rather than precipitously, notes Menon.


Abstract of Prediction of a new graphenelike Si2BN solid

While the possibility to create a single-atom-thick two-dimensional layer from any material remains, only a few such structures have been obtained other than graphene and a monolayer of boron nitride. Here, based upon ab initio theoretical simulations, we propose a new stable graphenelike single-atomic-layer Si2BN structure that has all of its atoms with sp2 bonding with no out-of-plane buckling. The structure is found to be metallic with a finite density of states at the Fermi level. This structure can be rolled into nanotubes in a manner similar to graphene. Combining first- and second-row elements in the Periodic Table to form a one-atom-thick material that is also flat opens up the possibility for studying new physics beyond graphene. The presence of Si will make the surface more reactive and therefore a promising candidate for hydrogen storage.


Nano-enhanced textiles clean themselves with light

Catalytic uses for industrial-scale chemical processes in agrochemicals, pharmaceuticals, and natural products also seen
Close-up of nanostructures grown on cotton textiles. Image magnified 150,000 times. (credit: RMIT University)

Researchers at at RMIT University in Australia have developed a cheap, efficient way to grow special copper- and silver-based nanostructures on textiles that can degrade organic matter when exposed to light.

Don’t throw out your washing machine yet, but the work paves the way toward nano-enhanced textiles that can spontaneously clean themselves of stains and grime simply by being put under a light or worn out in the sun.

The nanostructures absorb visible light (via localized surface plasmon resonance — collective electron-charge oscillations in metallic nanoparticles that are excited by light), generating high-energy (“hot”) electrons that cause the nanostructures to act as catalysts for chemical reactions that degrade organic matter.

Steps involved in fabricating copper- and silver-based cotton fabrics: 1. Sensitize the fabric with tin. 2. Form palladium seeds that act as nucleation (clustering) sites. 3. Grow metallic copper and silver nanoparticles on the surface of the cotton fabric. (credit: Samuel R. Anderson et al./Advanced Materials Interfaces)

The challenge for researchers has been to bring the concept out of the lab by working out how to build these nanostructures on an industrial scale and permanently attach them to textiles. The RMIT team’s novel approach was to grow the nanostructures directly onto the textiles by dipping them into specific solutions, resulting in development of stable nanostructures within 30 minutes.

When exposed to light, it took less than six minutes for some of the nano-enhanced textiles to spontaneously clean themselves.

The research was described in the journal Advanced Materials Interfaces.

Scaling up to industrial levels

Rajesh Ramanathan, an RMIT postdoctoral fellow and co-senior author, said the process also had a variety of applications for catalysis-based industries such as agrochemicals, pharmaceuticals, and natural products, and could easily be scaled up to industrial levels. “The advantage of textiles is they already have a 3D structure, so they are great at absorbing light, which in turn speeds up the process of degrading organic matter,” he said.

Cotton textile fabric with copper-based nanostructures. The image is magnified 200 times. (credit: RMIT University)

“Our next step will be to test our nano-enhanced textiles with organic compounds that could be more relevant to consumers, to see how quickly they can handle common stains like tomato sauce or wine,” Ramanathan said.

“There’s more work to do to before we can start throwing out our washing machines, but this advance lays a strong foundation for the future development of fully self-cleaning textiles.”

Abstract of Robust Nanostructured Silver and Copper Fabrics with Localized Surface Plasmon Resonance Property for Effective Visible Light Induced Reductive Catalysis

Inspired by high porosity, absorbency, wettability, and hierarchical ordering on the micrometer and nanometer scale of cotton fabrics, a facile strategy is developed to coat visible light active metal nanostructures of copper and silver on cotton fabric substrates. The fabrication of nanostructured Ag and Cu onto interwoven threads of a cotton fabric by electroless deposition creates metal nanostructures that show a localized surface plasmon resonance (LSPR) effect. The micro/nanoscale hierarchical ordering of the cotton fabrics allows access to catalytically active sites to participate in heterogeneous catalysis with high efficiency. The ability of metals to absorb visible light through LSPR further enhances the catalytic reaction rates under photoexcitation conditions. Understanding the modes of electron transfer during visible light illumination in Ag@Cotton and Cu@Cotton through electrochemical measurements provides mechanistic evidence on the influence of light in promoting electron transfer during heterogeneous catalysis for the first time. The outcomes presented in this work will be helpful in designing new multifunctional fabrics with the ability to absorb visible light and thereby enhance light-activated catalytic processes.


New type of molecular tag makes MRI 10,000 times more sensitive

Could detect biochemical processes in opaque tissue without requiring PET radiation or CT x-rays

Duke scientists have discovered a new class of inexpensive, long-lived molecular tags that enhance MRI signals by 10,000 times. To activate the tags, the researchers mix them with a newly developed catalyst (center) and a special form of hydrogen (gray), converting them into long-lived magnetic resonance “lightbulbs” that might be used to track disease metabolism in real time. (credit: Thomas Theis, Duke University)

Duke University researchers have discovered a new form of MRI that’s 10,000 times more sensitive and could record actual biochemical reactions, such as those involved in cancer and heart disease, and in real time.

Let’s review how MRI (magnetic resonance imaging) works: MRI takes advantage of a property called spin, which makes the nuclei in hydrogen atoms act like tiny magnets. By generating a strong magnetic field (such as 3 tesla) and a series of radio-frequency pulses, MRI induces these nuclear “magnets” to broadcast their locations. Since most of the hydrogen atoms in the body are bound up in water, the technique is used in clinical settings to create detailed images of soft tissues like organs (such as the brain), blood vessels, and tumors inside the body.
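As a rough numerical aside (my illustration, not part of the original report): the frequency at which these nuclear “magnets” broadcast scales linearly with the field strength, via the well-known gyromagnetic ratio of hydrogen. A short Python sketch:

```python
# Sketch: the radio frequency at which hydrogen nuclei precess (and hence
# "broadcast their locations") is proportional to the magnetic field.
GAMMA_H = 42.577  # gyromagnetic ratio of 1H, in MHz per tesla (standard constant)

def larmor_mhz(field_tesla):
    """Precession (Larmor) frequency in MHz for hydrogen at a given field."""
    return GAMMA_H * field_tesla

print(larmor_mhz(3.0))  # a clinical 3 T scanner operates near 127.7 MHz
```

This is why a 3 T scanner’s radio-frequency hardware is tuned near 128 MHz.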

MRI’s ability to track chemical transformations in the body has been limited by the low sensitivity of the technique. That makes it impossible to detect small numbers of molecules (without using unattainably more massive magnetic fields).

So to take MRI a giant step further in sensitivity, the Duke researchers created a new class of molecular “tags” that can track disease metabolism in real time, and can last for more than an hour, using a technique called hyperpolarization.* These tags are biocompatible, inexpensive to produce, and usable with existing MRI machines.

“This represents a completely new class of molecules that doesn’t look anything at all like what people thought could be made into MRI tags,” said Warren S. Warren, James B. Duke Professor and Chair of Physics at Duke, and senior author on the study. “We envision it could provide a whole new way to use MRI to learn about the biochemistry of disease.”

Sensitive tissue detection without radiation

The new molecular tags open up a new world for medicine and research by making it possible to detect what’s happening in optically opaque tissue. According to the researchers, this could avoid the need for expensive positron emission tomography (PET), which uses a radioactive tracer chemical to look at organs in the body and typically works for only about 20 minutes, or for CT x-rays.

This research was reported in the March 25 issue of Science Advances. It was supported by the National Science Foundation, the National Institutes of Health, the Department of Defense Congressionally Directed Medical Research Programs Breast Cancer grant, the Pratt School of Engineering Research Innovation Seed Fund, the Burroughs Wellcome Fellowship, and the Donors of the American Chemical Society Petroleum Research Fund.

* For the past decade, researchers have been developing methods to “hyperpolarize” biologically important molecules. “Hyperpolarization gives them 10,000 times more signal than they would normally have if they had just been magnetized in an ordinary magnetic field,” Warren said. But while promising, Warren says these hyperpolarization techniques face two fundamental problems: incredibly expensive equipment — around 3 million dollars for one machine — and most of these molecular “lightbulbs” burn out in a matter of seconds.

“It’s hard to take an image with an agent that is only visible for seconds, and there are a lot of biological processes you could never hope to see,” said Warren. “We wanted to try to figure out what molecules could give extremely long-lived signals so that you could look at slower processes.”

So the researchers synthesized a series of molecules containing diazirines — a chemical structure composed of two nitrogen atoms bound together in a ring. Diazirines were a promising target for screening because their geometry traps hyperpolarization in a “hidden state” where it cannot relax quickly. Using a simple and inexpensive approach to hyperpolarization called SABRE-SHEATH, in which the molecular tags are mixed with a spin-polarized form of hydrogen and a catalyst, the researchers were able to rapidly hyperpolarize one of the diazirine-containing molecules, greatly enhancing its magnetic resonance signals for over an hour.
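A back-of-envelope sketch (using only the figures quoted in this article — a ~10,000× enhancement and a ~23-minute singlet decay constant — and assuming simple exponential decay) shows why such tags remain detectable for well over an hour:

```python
import math

# Hedged estimate: how long a hyperpolarized tag stays detectable,
# assuming the enhanced signal decays as S(t) = S0 * exp(-t / T).
# Numbers from the article: ~10,000x enhancement, ~23 min decay constant.
ENHANCEMENT = 10_000
T_DECAY_MIN = 23.0  # signal decay time constant, in minutes

def minutes_until_baseline(enhancement, t_decay):
    """Time for an exponentially decaying enhanced signal to fall back
    to the ordinary (1x) thermal-polarization level."""
    return t_decay * math.log(enhancement)

print(round(minutes_until_baseline(ENHANCEMENT, T_DECAY_MIN)))  # ≈ 212 minutes
```

Even allowing for generous losses, that leaves a usable imaging window of well over an hour, versus seconds for conventional hyperpolarized agents.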

The scientists believe their SABRE-SHEATH catalyst could be used to hyperpolarize a wide variety of chemical structures at a fraction of the cost of other methods.

Abstract of Direct and cost-efficient hyperpolarization of long-lived nuclear spin states on universal 15N2-diazirine molecular tags


Conventional magnetic resonance (MR) faces serious sensitivity limitations, which can be overcome by hyperpolarization methods, but the most common method (dynamic nuclear polarization) is complex and expensive, and applications are limited by short spin lifetimes (typically seconds) of biologically relevant molecules. We use a recently developed method, SABRE-SHEATH, to directly hyperpolarize 15N2 magnetization and long-lived 15N2 singlet spin order, with signal decay time constants of 5.8 and 23 min, respectively. We find >10,000-fold enhancements generating detectable nuclear MR signals that last for more than an hour. 15N2-diazirines represent a class of particularly promising and versatile molecular tags, and can be incorporated into a wide range of biomolecules without significantly altering molecular function.


[Seems like they have a great idea, now all they need to do is confirm very specific uses or types of cancers/diseases or other processes they can track or target. Will be interesting to see if they can do more than just see things, maybe they can use this to target and destroy bad things in the body also. Keep up the good work….. this sounds like a game changer.]


Scientists time-reverse developed stem cells to make them ‘embryonic’ again

May help avoid ethically controversial use of human embryos for research and support other research goals
Researchers have reversed “primed” (developed) “epiblast” stem cells (top) from early mouse embryos using the drug MM-401, causing the treated cells (bottom) to revert to the original form of the stem cells. (credit: University of Michigan)

University of Michigan Medical School researchers have discovered a way to convert mouse stem cells (taken from an embryo) that have become “primed” (reached the stage where they can differentiate, or develop into every specialized cell in the body) to a “naïve” (unspecialized) state by simply adding a drug.

This breakthrough has the potential to one day allow researchers to avoid the ethically controversial use of human embryos left over from infertility treatments. To achieve this breakthrough, the researchers treated the primed embryonic stem cells (“EpiSC”) with a drug called MM-401* (a leukemia drug) for a short period of time.

Embryonic stem cells are able to develop into any type of cell, except those of the placenta (credit: Mike Jones/CC)


* The drug, MM-401, specifically targets epigenetic chemical markers on histones, the protein “spools” that DNA coils around to create structures called chromatin. These epigenetic changes signal the cell’s DNA-reading machinery and tell it where to start uncoiling the chromatin in order to read it.

A gene called Mll1 is responsible for the addition of these epigenetic changes, which are like small chemical tags called methyl groups. Mll1 plays a key role in the uncontrolled explosion of white blood cells in leukemia, which is why researchers developed the drug MM-401 to interfere with this process. But Mll1 also plays a role in cell development and the formation of blood cells and other cells in later-stage embryos.

Stem cells do not turn on the Mll1 gene until they are more developed. The MM-401 drug blocks Mll1’s normal activity in developing cells so the epigenetic chemical markers are missing. These cells are then unable to continue to develop into different types of specialized cells but are still able to revert to healthy naive pluripotent stem cells.

Abstract of MLL1 Inhibition Reprograms Epiblast Stem Cells to Naive Pluripotency

The interconversion between naive and primed pluripotent states is accompanied by drastic epigenetic rearrangements. However, it is unclear whether intrinsic epigenetic events can drive reprogramming to naive pluripotency or if distinct chromatin states are instead simply a reflection of discrete pluripotent states. Here, we show that blocking histone H3K4 methyltransferase MLL1 activity with the small-molecule inhibitor MM-401 reprograms mouse epiblast stem cells (EpiSCs) to naive pluripotency. This reversion is highly efficient and synchronized, with more than 50% of treated EpiSCs exhibiting features of naive embryonic stem cells (ESCs) within 3 days. Reverted ESCs reactivate the silenced X chromosome and contribute to embryos following blastocyst injection, generating germline-competent chimeras. Importantly, blocking MLL1 leads to global redistribution of H3K4me1 at enhancers and represses lineage determinant factors and EpiSC markers, which indirectly regulate ESC transcription circuitry. These findings show that discrete perturbation of H3K4 methylation is sufficient to drive reprogramming to naive pluripotency.

Abstract of Naive Pluripotent Stem Cells Derived Directly from Isolated Cells of the Human Inner Cell Mass

Conventional generation of stem cells from human blastocysts produces a developmentally advanced, or primed, stage of pluripotency. In vitro resetting to a more naive phenotype has been reported. However, whether the reset culture conditions of selective kinase inhibition can enable capture of naive epiblast cells directly from the embryo has not been determined. Here, we show that in these specific conditions individual inner cell mass cells grow into colonies that may then be expanded over multiple passages while retaining a diploid karyotype and naive properties. The cells express hallmark naive pluripotency factors and additionally display features of mitochondrial respiration, global gene expression, and genome-wide hypomethylation distinct from primed cells. They transition through primed pluripotency into somatic lineage differentiation. Collectively these attributes suggest classification as human naive embryonic stem cells. Human counterparts of canonical mouse embryonic stem cells would argue for conservation in the phased progression of pluripotency in mammals.



How to kill bacteria in seconds using gold nanoparticles and light

March 24, 2016


Could treat bacterial infections without using antibiotics, which could help reduce the risk of spreading antibiotic resistance

Researchers at the University of Houston have developed a new technique for killing bacteria in 5 to 25 seconds using highly porous gold nanodisks and light, according to a study published today in Optical Materials Express. The method could one day help hospitals treat some common infections without using antibiotics.

Read Full Post »

Nanophotonics Applications

Larry H. Bernstein, MD, FCAP, Curator



Copper Plasmonics Explored for Nanophotonics Applications


MOSCOW, March 22, 2016 — Experimental demonstration of copper components has expanded the list of potential materials suited to nanophotonic devices beyond gold and silver.

According to researchers from the Moscow Institute of Physics and Technology (MIPT), copper components are not only just as good as components based on noble metals such as gold and silver, they can also be easily implemented in integrated circuits using industry-standard fabrication processes. Gold and silver, being chemically inert noble metals, do not readily enter into the reactions needed to create nanostructures, and therefore require expensive, difficult processing steps.

Nanoscale copper plasmonic waveguides on a silicon chip in a scanning near-field optical microscope (left) and their image obtained using electron microscopy (right). Courtesy of MIPT.

In nanophotonics, the diffraction limit of light is overcome by using metal-dielectric structures. Light may be converted into surface plasmon polaritons, surface waves propagating along the surface of a metal, which make it possible to switch from conventional 3D photonics to 2D surface plasmon photonics, also known as plasmonics. This allows control of light at the 100-nm scale, far beyond the diffraction limit.

Now researchers from MIPT’s Laboratory of Nanooptics and Plasmonics have found a solution to the problems posed by noble metals. Based on a generalization of the theory for so-called plasmonic metals, in 2012 they found that copper as an optical material is not only able to compete with gold, but it can also be a better alternative. Unlike gold, copper can be easily structured using wet or dry etching. This gives a possibility to make nanoscale components that are easily integrated into silicon photonic or electronic integrated circuits.

Silicon chip with nanoscale copper plasmonic components. Courtesy of MIPT.

It took more than two years for the researchers to purchase the required equipment, develop the fabrication process, produce samples, conduct several independent measurements and confirm their hypothesis experimentally.

“As a result, we succeeded in fabricating copper chips with optical properties that are in no way inferior to gold-based chips,” says the research leader Dmitry Fedyanin. “Furthermore, we managed to do this in a fabrication process compatible with the CMOS technology, which is the basis for all modern integrated circuits, including microprocessors. It’s a kind of revolution in nanophotonics.”

The researchers said that the optical properties of thin polycrystalline copper films were determined by their internal structure, and that controlling this structure to achieve and consistently reproduce the required parameters in technological cycles was the most difficult task.

Having demonstrated copper’s suitable material characteristics, as well as nanoscale manufacturing capability, the researchers believe the devices could be integrated with both silicon nanoelectronics and silicon nanophotonics. Such technologies could enable LEDs, nanolasers, highly sensitive sensors and transducers for mobile devices, and high-performance optoelectronic processors with several tens of thousands of cores for graphics cards, personal computers and supercomputers.

“We conducted ellipsometry of the copper films and then confirmed these results using near-field scanning optical microscopy of the nanostructures. This proves that the properties of copper are not impaired during the whole process of manufacturing nanoscale plasmonic components,” says Dmitry Fedyanin.

The research was published in Nano Letters (doi: 10.1021/acs.nanolett.5b03942).


Ultralow-Loss CMOS Copper Plasmonic Waveguides

Surface plasmon polaritons can give a unique opportunity to manipulate light at a scale well below the diffraction limit reducing the size of optical components down to that of nanoelectronic circuits. At the same time, plasmonics is mostly based on noble metals, which are not compatible with microelectronics manufacturing technologies. This prevents plasmonic components from integration with both silicon photonics and silicon microelectronics. Here, we demonstrate ultralow-loss copper plasmonic waveguides fabricated in a simple complementary metal-oxide semiconductor (CMOS) compatible process, which can outperform gold plasmonic waveguides simultaneously providing long (>40 μm) propagation length and deep subwavelength (∼λ²/50, where λ is the free-space wavelength) mode confinement in the telecommunication spectral range. These results create the backbone for the development of a CMOS plasmonic platform and its integration in future electronic chips.
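To put the abstract’s ∼λ²/50 mode confinement in perspective, here is a quick illustrative calculation (the 1550 nm telecom wavelength is my representative choice, not a value from the paper) comparing it to a diffraction-limited spot:

```python
# Rough comparison: diffraction-limited spot area for free-space light
# versus the deep-subwavelength plasmonic mode area ~lambda^2/50 quoted
# in the abstract. Wavelength is an assumed representative telecom value.
WAVELENGTH_NM = 1550.0

diffraction_area = (WAVELENGTH_NM / 2) ** 2  # (lambda/2)^2, the classic limit
plasmonic_area = WAVELENGTH_NM ** 2 / 50     # mode area from the abstract

print(round(diffraction_area / plasmonic_area, 1))  # plasmonic mode ~12.5x smaller
```

That order-of-magnitude shrink in mode area is what lets plasmonic waveguides approach the footprint of nanoelectronic circuits.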

Read Full Post »

Brain Cancer Vaccine in Development and other considerations

Larry H. Bernstein, MD, FCAP, Curator



GEN News Highlights   Mar 3, 2016

Advanced Immunotherapeutic Method Shows Promise against Brain Cancer




The researchers induced a specific type of cell death in brain cancer cells from mice. The dying cancer cells were then incubated together with dendritic cells, which play a vital role in the immune system. The researchers discovered that this type of cancer cell killing releases “danger signals” that fully activate the dendritic cells. “We re-injected the activated dendritic cells into the mice as a therapeutic vaccine,” Professor Patrizia Agostinis explains. “That vaccine alerted the immune system to the presence of dangerous cancer cells in the body. As a result, the immune system could recognize them and start attacking the brain tumor.” [©KU Leuven Laboratory of Cell Death Research & Therapy, Dr. Abhishek D. Garg]


Scientists from KU Leuven in Belgium say they have shown that next-generation cell-based immunotherapy may offer new hope in the fight against brain cancer.

Cell-based immunotherapy involves the injection of a therapeutic anticancer vaccine that stimulates the patient’s immune system to attack the tumor. Thus far, the results of this type of immunotherapy have been mildly promising. However, Abhishek D. Garg and Professor Patrizia Agostinis from the KU Leuven department of cellular and molecular medicine believe they have found a novel way to produce more effective cell-based anticancer vaccines.

The researchers induced a specific type of cell death in brain cancer cells from mice. The dying cancer cells were then incubated together with dendritic cells, which play a vital role in the immune system. The investigators discovered that this type of cancer cell killing releases “danger signals” that fully activate the dendritic cells.

“We re-injected the activated dendritic cells into the mice as a therapeutic vaccine,” explains Prof. Agostinis. “That vaccine alerted the immune system to the presence of dangerous cancer cells in the body. As a result, the immune system could recognize them and start attacking the brain tumor.”

Combined with chemotherapy, this novel cell-based immunotherapy drastically increased the survival rates of mice afflicted with brain tumors. Almost 50% of the mice were completely cured. None of the mice treated with chemotherapy alone became long-term survivors.

“The major goal of any anticancer treatment is to kill all cancer cells and prevent any remaining malignant cells from growing or spreading again,” says Professor Agostinis. “This goal, however, is rarely achieved with current chemotherapies, and many patients relapse. That’s why the co-stimulation of the immune system is so important for cancer treatments. Scientists have to look for ways to kill cancer cells in a manner that stimulates the immune system. With an eye on clinical studies, our findings offer a feasible way to improve the production of vaccines against brain tumors.”

The team published its study (“Dendritic Cell Vaccines Based on Immunogenic Cell Death Elicit Danger Signals and T Cell–Driven Rejection of High-Grade Glioma”) in Science Translational Medicine.


Dendritic cell vaccines based on immunogenic cell death elicit danger signals and T cell–driven rejection of high-grade glioma


SLC7A11 expression is associated with seizures and predicts poor survival in patients with malignant glioma


Cortical GABAergic excitation contributes to epileptic activities around human glioma


Spherical Nucleic Acid Nanoparticle Conjugates as an RNAi-Based Therapy for Glioblastoma

Glioblastoma multiforme (GBM) is a neurologically debilitating disease that culminates in death 14 to 16 months after diagnosis. An incomplete understanding of how cataloged genetic aberrations promote therapy resistance, combined with ineffective drug delivery to the central nervous system, has rendered GBM incurable. Functional genomics efforts have implicated several oncogenes in GBM pathogenesis but have rarely led to the implementation of targeted therapies. This is partly because many “undruggable” oncogenes cannot be targeted by small molecules or antibodies. We preclinically evaluate an RNA interference (RNAi)–based nanomedicine platform, based on spherical nucleic acid (SNA) nanoparticle conjugates, to neutralize oncogene expression in GBM. SNAs consist of gold nanoparticles covalently functionalized with densely packed, highly oriented small interfering RNA duplexes. In the absence of auxiliary transfection strategies or chemical modifications, SNAs efficiently entered primary and transformed glial cells in vitro. In vivo, the SNAs penetrated the blood-brain barrier and blood-tumor barrier to disseminate throughout xenogeneic glioma explants. SNAs targeting the oncoprotein Bcl2Like12 (Bcl2L12)—an effector caspase and p53 inhibitor overexpressed in GBM relative to normal brain and low-grade astrocytomas—were effective in knocking down endogenous Bcl2L12 mRNA and protein levels, and sensitized glioma cells toward therapy-induced apoptosis by enhancing effector caspase and p53 activity. Further, systemically delivered SNAs reduced Bcl2L12 expression in intracerebral GBM, increased intratumoral apoptosis, and reduced tumor burden and progression in xenografted mice, without adverse side effects. Thus, silencing antiapoptotic signaling using SNAs represents a new approach for systemic RNAi therapy for GBM and possibly other lethal malignancies.


Rapid, Label-Free Detection of Brain Tumors with Stimulated Raman Scattering Microscopy

Surgery is an essential component in the treatment of brain tumors. However, delineating tumor from normal brain remains a major challenge. We describe the use of stimulated Raman scattering (SRS) microscopy for differentiating healthy human and mouse brain tissue from tumor-infiltrated brain based on histoarchitectural and biochemical differences. Unlike traditional histopathology, SRS is a label-free technique that can be rapidly performed in situ. SRS microscopy was able to differentiate tumor from nonneoplastic tissue in an infiltrative human glioblastoma xenograft mouse model based on their different Raman spectra. We further demonstrated a correlation between SRS and hematoxylin and eosin microscopy for detection of glioma infiltration (κ = 0.98). Finally, we applied SRS microscopy in vivo in mice during surgery to reveal tumor margins that were undetectable under standard operative conditions. By providing rapid intraoperative assessment of brain tissue, SRS microscopy may ultimately improve the safety and accuracy of surgeries where tumor boundaries are visually indistinct.
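The κ = 0.98 quoted above is Cohen’s kappa, a chance-corrected measure of agreement between two classification methods. A minimal sketch with hypothetical counts (not the study’s actual data):

```python
# Illustration (hypothetical counts): Cohen's kappa for agreement between
# two raters -- here imagined as SRS microscopy vs. H&E histology labeling
# tissue fields as "tumor" or "non-tumor".
def cohens_kappa(a, b, c, d):
    """Kappa from a 2x2 confusion table:
    a = both say tumor, d = both say non-tumor, b and c = disagreements."""
    n = a + b + c + d
    observed = (a + d) / n
    # agreement expected by chance, computed from the marginal totals
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (observed - expected) / (1 - expected)

# Near-perfect agreement: 49 tumor + 49 non-tumor matches, one miss each way.
print(round(cohens_kappa(49, 1, 1, 49), 2))  # → 0.96
```

Values near 1 indicate agreement far beyond chance, which is why a reported κ of 0.98 between SRS and conventional histology is so striking.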


Neural Stem Cell–Mediated Enzyme/Prodrug Therapy for Glioma: Preclinical Studies


Magnetic Resonance Metabolic Imaging of Glioma


Exploiting the Immunogenic Potential of Cancer Cells for Improved Dendritic Cell Vaccines

Cancer immunotherapy is currently the hottest topic in the oncology field, owing predominantly to the discovery of immune checkpoint blockers. These promising antibodies and their attractive combinatorial features have initiated the revival of other effective immunotherapies, such as dendritic cell (DC) vaccinations. Although DC-based immunotherapy can induce objective clinical and immunological responses in several tumor types, the immunogenic potential of this monotherapy is still considered suboptimal. Hence, focus should be directed on potentiating its immunogenicity by making step-by-step protocol innovations to obtain next-generation Th1-driving DC vaccines. We review some of the latest developments in the DC vaccination field, with a special emphasis on strategies that are applied to obtain a highly immunogenic tumor cell cargo to load and to activate the DCs. To this end, we discuss the effects of three immunogenic treatment modalities (ultraviolet light, oxidizing treatments, and heat shock) and five potent inducers of immunogenic cell death [radiotherapy, shikonin, high-hydrostatic pressure, oncolytic viruses, and (hypericin-based) photodynamic therapy] on DC biology and their application in DC-based immunotherapy in preclinical as well as clinical settings.

Cancer immunotherapy has gained considerable momentum over the past 5 years, owing predominantly to the discovery of immune checkpoint inhibitors. These inhibitors are designed to release the brakes of the immune system that under physiological conditions prevent auto-immunity by negatively regulating cytotoxic T lymphocyte (CTL) function. Following the FDA approval of the anti-cytotoxic T lymphocyte-associated antigen-4 (CTLA-4) monoclonal antibody (mAb) ipilimumab (Yervoy) in 2011 for the treatment of metastatic melanoma patients (1), two mAbs targeting programmed death (PD)-1 receptor signaling (nivolumab and pembrolizumab) have very recently joined the list of FDA-approved checkpoint blockers (respectively, for the treatment of metastatic squamous non-small cell lung cancer and relapsed/refractory melanoma patients) (2, 3).

However, the primary goal of cancer immunotherapy is to activate the immune system in cancer patients. This requires the induction of tumor-specific T-cell-mediated antitumor immunity. Checkpoint blockers are only able to abrogate the brakes of a functioning antitumoral immune response, implying that only patients who have pre-existing tumor-specific T cells will benefit most from checkpoint blockade. This is evidenced by the observation that ipilimumab may be more effective in patients who have pre-existing, albeit ineffective, antitumor immune responses (4). Hence, combining immune checkpoint blockade with immunotherapeutic strategies that prime tumor-specific T cell responses might be an attractive and even synergistic approach. This relatively new paradigm has led to the revival of existing, and to date disappointing (as monotherapies), active immunotherapeutic treatment modalities. One promising strategy to induce priming of tumor-specific T cells is dendritic cell (DC)-based immunotherapy.

Dendritic cells are positioned at the crucial interface between the innate and adaptive immune system as powerful antigen-presenting cells capable of inducing antigen-specific T cell responses (5). Therefore, they are the most frequently used cellular adjuvant in clinical trials. Since the publication of the first DC vaccination trial in melanoma patients in 1995, the promise of DC immunotherapy is underlined by numerous clinical trials, frequently showing survival benefit in comparison to non-DC control groups (6–8). Despite the fact that most DC vaccination trials differ in several vaccine parameters (i.e., site and frequency of injection, nature of the DCs, choice of antigen), DC vaccination as a monotherapy is considered safe and rarely associates with immune-related toxicity. This is in sharp contrast with the use of mAbs or cytokine therapies. Ipilimumab has, for instance, been shown to induce immune-related serious adverse events in up to one-third of treated melanoma patients (1). The FDA approval of Sipuleucel-T (Provenge), an autologous DC-enriched vaccine for hormone-resistant metastatic prostate cancer, in 2010 is really considered as a milestone in the vaccination community (9). After 15 years of extensive clinical research, Sipuleucel-T became the first cellular immunotherapy ever that received FDA approval, providing compelling evidence for the substantial socio-economic impact of DC-based immunotherapy. DC vaccinations have most often been applied in patients with melanoma, prostate cancer, high-grade glioma, and renal cell cancer. Although promising objective responses and tumor-specific T cell responses have been observed in all these cancer types (providing proof-of-principle for DC-based immunotherapy), the clinical success of this treatment is still considered suboptimal (6). This poor clinical efficacy can in part be attributed to the severe tumor-induced immune suppression and the selection of patients with advanced disease status and poor survival prognostics (6, 10–12).

There is a consensus in the field that step-by-step optimization and standardization of the production process of DC vaccines, to obtain a Th1-driven immune response, might enhance their clinical efficacy (13). In this review, we address some recent DC vaccine adaptations that impact DC biology. Combining these novel insights might bring us closer to an ideal DC vaccine product that can trigger potent CTL- and Th1-driven antitumor immunity.

One factor requiring more attention in this production process is the immunogenicity of the dying or dead cancer cells used to load the DCs. It has been shown in multiple preclinical cancer models that the methodology used to prepare the tumor cell cargo can influence the in vivo immunogenic potential of loaded DC vaccines (14–19). Different treatment modalities have been described to enhance the immunogenicity of cancer cells in the context of DC vaccines. These treatments can potentiate antitumor immunity by inducing immune responses against tumor neo-antigens and/or by selectively increasing the exposure/release of particular damage-associated molecular patterns (DAMPs) that can trigger the innate immune system (14, 17–19). The emergence of the concept of immunogenic cell death (ICD) might even further improve the immunogenic potential of DC vaccines. Cancer cells undergoing ICD have been shown to exhibit excellent immunostimulatory capacity owing to the spatiotemporally defined emission of a series of critical DAMPs acting as potent danger signals (20, 21). Thus far, three DAMPs have been attributed a crucial role in the immunogenic potential of nearly all ICD inducers: the surface-exposed “eat me” signal calreticulin (ecto-CRT), the “find me” signal ATP and passively released high-mobility group box 1 (HMGB1) (21). Moreover, ICD-experiencing cancer cells have been shown in various mouse models to act as very potent Th1-driving anticancer vaccines, already in the absence of any adjuvants (21, 22). The ability to reject tumors in syngeneic mice after vaccination with cancer cells (of the same type) undergoing ICD is a crucial hallmark of ICD, in addition to the molecular DAMP signature (21).

Here, we review the effects of three frequently used immunogenic modalities and four potent ICD inducers on DC biology and their application in DC vaccines in preclinical as well as clinical settings (Tables 1 and 2). Moreover, we discuss the rationale for combining different cell death-inducing regimens to enhance the immunogenic potential of DC vaccines and to ensure the clinical relevance of the vaccine product.

A list of prominent enhancers of immunogenicity and ICD inducers applied in DC vaccine setups and their associations with DAMPs and DC biology.
A list of preclinical tumor models and clinical studies for evaluation of the in vivo potency of DC vaccines loaded with immunogenically killed tumor cells.
The Impact of DC Biology on the Efficacy of DC Vaccines

Over the past years, different DC vaccine parameters have been shown to impact the clinical effectiveness of DC vaccinations. In the next section, we will elaborate on some promising adaptations of the DC preparation protocol.

Given the labor-intensive ex vivo culturing protocol of monocyte-derived DCs and inspired by the results of the Provenge study, several groups are currently exploiting the use of blood-isolated naturally circulating DCs (76–78). In this context, De Vries et al. evaluated the use of antigen-loaded purified plasmacytoid DCs for intranodal injection in melanoma patients (79). This strategy was feasible and induced only very mild side effects. In addition, the overall survival of vaccinated patients was greatly enhanced as compared to historical control patients. However, it still remains to be determined whether this strategy is more efficacious than monocyte-derived DC vaccine approaches (78). By contrast, experiments in the preclinical GL261 high-grade glioma model recently showed that vaccination with tumor antigen-loaded myeloid DCs resulted in more robust Th1 responses and a stronger survival benefit as compared to mice vaccinated with their plasmacytoid counterparts (80).

In view of their strong potential to stimulate cytotoxic T cell responses, several groups are currently exploring the use of Langerhans cell-like DCs as sources for DC vaccines (81–83). These so-called IL-15 DCs can be derived from CD14+ monocytes by culturing them with IL-15 (instead of the standard IL-4). Recently, it has been shown that in comparison to IL-4 DCs, these cells have an increased capacity to stimulate antitumor natural killer (NK) cell cytotoxicity in a contact- and IL-15-dependent manner (84). NK cells are increasingly being recognized as crucial contributors to antitumor immunity, especially in DC vaccination setups (85, 86). Three clinical trials are currently evaluating these Langerhans cell-type DCs in melanoma patients (NCT00700167, NCT01456104, and NCT01189383).

Targeting cancer stem cells is another promising development, particularly in the setting of glioma (87). Glioma stem cells can foster tumor growth, radio- and chemotherapy-resistance, and local immunosuppression in the tumor microenvironment (87, 88). Furthermore, glioma stem cells may express higher levels of tumor-associated antigens and MHC complex molecules as compared to non-stem cells (89, 90). A preclinical study in a rodent orthotopic glioblastoma model has shown that DC vaccines loaded with neurospheres enriched in cancer stem cells could induce more immunoreactivity and survival benefit as compared to DCs loaded with GL261 cells grown under standard conditions (91). Currently, four clinical trials are ongoing in high-grade glioma patients evaluating this approach (NCT00890032, NCT00846456, NCT01171469, and NCT01567202).

With regard to the DC maturation status of the vaccine product, a phase I/II clinical trial in metastatic melanoma patients has confirmed the superiority of mature antigen-loaded DCs to elicit immunological responses as compared to their immature counterparts (92). This finding was further substantiated in patients diagnosed with prostate cancer and recurrent high-grade glioma (93, 94). Hence, DCs need to express potent costimulatory molecules and lymph node homing receptors in order to generate a strong T cell response. In view of this finding, the route of administration is another vaccine parameter that can influence the homing of the injected DCs to the lymph nodes. In the context of prostate cancer and renal cell carcinoma it has been shown that vaccination routes with access to the draining lymph nodes (intradermal/intranodal/intralymphatic/subcutaneous) resulted in better clinical response rates as compared to intravenous injection (93). In melanoma patients, a direct comparison between intradermal vaccination and intranodal vaccination concluded that, although more DCs reached the lymph nodes after intranodal vaccination, the melanoma-specific T cells induced by intradermal vaccination were more functional (95). Furthermore, the frequency of vaccination can also influence the vaccine’s immunogenicity. Our group has shown in a cohort-comparison trial involving relapsed high-grade glioma patients that shortening the interval between the four inducer DC vaccines improved the progression-free survival curves (58, 96).

Another variable that has been systematically studied is the cytokine cocktail that is applied to mature the DCs. The current gold standard cocktail for DC maturation contains TNF-α, IL-1β, IL-6, and PGE2 (97, 98). Although this cocktail upregulates DC maturation markers and the lymph node homing receptor CCR7, IL-12 production by DCs could not be evoked (97, 98). Nevertheless, IL-12 is a critical Th1-driving cytokine and DC-derived IL-12 has been shown to associate with improved survival in DC vaccinated high-grade glioma and melanoma patients (99, 100). Recently, a novel cytokine cocktail, including TNF-α, IL-1β, poly-I:C, IFN-α, and IFN-γ, was introduced (101, 102). The type 1-polarized DCs obtained with this cocktail produced high levels of IL-12 and could induce strong tumor-antigen-specific CTL responses through enhanced induction of CXCL10 (99). In addition, CD40-ligand (CD40L) stimulation of DCs has been used to mature DCs in clinical trials (100, 103). Binding of CD40 on DCs to CD40L on CD4+ helper T cells licenses DCs and enables them to prime CD8+ effector T cells.

A final major determinant of the vaccine immunogenicity is the choice of antigen to load the DCs. Two main approaches can be applied: loading with selected tumor antigens (tumor-associated antigens or tumor-specific antigens) and loading with whole tumor cell preparations (13). The former strategy enables easier immune monitoring, has a lower risk of inducing auto-immunity, and can provide “off-the-shelf” availability of the antigenic cargo. Whole tumor cell-based DC vaccines, on the other hand, are not HLA-type dependent, have a reduced risk of inducing immune-escape variants, and can elicit immunity against multiple tumor antigens. Meta-analytical data provided by Neller et al. have demonstrated enhanced clinical efficacy of DCs loaded with whole tumor lysate, as compared to DCs pulsed with defined tumor antigens, in several tumor types (104). This finding was recently also substantiated in high-grade glioma patients, although this study was not set up to compare survival parameters (105).

Toward a More Immunogenic Tumor Cell Cargo

The majority of clinical trials that apply autologous whole tumor lysate to load DC vaccines report the straightforward use of multiple freeze–thaw cycles to induce primary necrosis of cancer cells (8, 93). Freeze–thaw induced necrosis is, however, considered non-immunogenic and has even been shown to inhibit toll-like receptor (TLR)-induced maturation and function of DCs (16). To this end, many research groups have focused on tackling this roadblock by applying immunogenic modalities to induce cell death.

Immunogenic Treatment Modalities

Tables 1 and 2 list some frequently applied treatment methods to enhance the immunogenic potential of the tumor cell cargo that is used to load DC vaccines in an ICD-independent manner (i.e., these treatments do not meet the molecular and/or cellular determinants of ICD). Immunogenic treatment modalities can positively impact DC biology by inducing particular DAMPs in the dying cancer cells (Table 1). Table 2 lists the preclinical and clinical studies that investigated their in vivo potential. Figure 1 schematically represents the application and the putative modes of action of these immunogenic enhancers in the setting of DC vaccines.

Figure 1


A schematic representation of immunogenic DC vaccines. Cancer cells show enhanced immunogenicity upon treatment with UV irradiation, oxidizing treatments, and heat shock, characterized by the release of particular danger signals and the (increased) production of tumor (neo-)antigens. Upon loading with these cancer cells, DCs undergo enhanced phagocytosis and antigen uptake and show phenotypic and partial functional maturation. Upon in vivo immunization, these DC vaccines elicit Th1- and cytotoxic T lymphocyte (CTL)-driven tumor rejection.

Ultraviolet Irradiation ….

Oxidation-Inducing Modalities

In recent years, an increasing body of data has been published concerning the ability of oxidative stress to induce oxidation-associated molecular patterns (OAMPs), such as reactive protein carbonyls and peroxidized phospholipids, which can act as DAMPs (28, 29) (Table 1). Protein carbonylation, a surrogate indicator of irreversible protein oxidation, has for instance been shown to improve cancer cell immunogenicity and to facilitate the formation of immunogenic neo-antigens (30, 31).

One prototypical enhancer of oxidation-based immunogenicity is radiotherapy (21–23). In certain tumor types, such as high-grade glioma and melanoma, clinical trials that apply autologous whole tumor lysate to load DC vaccines report the random use of freeze–thaw cycles (to induce necrosis of cancer cells) or a combination of freeze–thaw cycles and subsequent high-dose γ-irradiation (8, 18) (Table 2). However, from the available clinical evidence, it is unclear which of the two methodologies has superior immunogenic potential. In light of the oxidation-based immunogenicity that is associated with radiotherapy, we recently demonstrated the superiority of DC vaccines loaded with irradiated freeze–thaw lysate (in comparison to freeze–thaw lysate) in terms of survival advantage in a preclinical high-grade glioma model (18) (Table 2). …

Heat Shock Treatment

Heat shock is a term applied when a cell is subjected to a temperature higher than the ideal body temperature of the organism from which the cell is derived. Heat shock can induce apoptosis (41–43°C) or necrosis (>43°C) depending on the temperature that is applied (110). The immunogenicity of heat shock-treated cancer cells largely resides within their ability to produce HSPs, such as HSP60, HSP70, and HSP90 (17, 32) (Table 1). …



Figure 2

A schematic representation of immunogenic cell death (ICD)-based DC vaccines. ICD causes cancer cells to emit a spatiotemporally defined pattern of danger signals. Upon loading of these ICD-undergoing cancer cells onto DCs, they induce extensive phagocytosis and antigen uptake. Loaded DCs show enhanced phenotypic and functional maturation and immunization with these ICD-based DC vaccines instigates Th1-, Th17-, and cytotoxic T lymphocyte (CTL)-driven antitumor immunity in vivo.
Inducers of Immunogenic Cell Death

Immunogenic cell death is a cell death regimen associated with the spatiotemporally defined emission of immunogenic DAMPs that can trigger the immune system (20, 21, 113). ICD has been found to depend on the concomitant induction of reactive oxygen species (ROS) and activation of endoplasmic reticulum (ER) stress (111). Besides the three DAMPs that are most crucial for ICD (ecto-CRT, ATP, and HMGB1), other DAMPs such as surface-exposed or released HSPs (notably HSP70 and HSP90) have also been shown to contribute to the immunogenic capacity of ICD inducers (20, 21). The binding of these DAMPs to their respective immune receptors (CD91 for HSPs/CRT, P2RX7/P2RY2 for ATP, and TLR2/4 for HMGB1/HSP70) leads to the recruitment and/or activation of innate immune cells and facilitates the uptake of tumor antigens by antigen-presenting cells and their cross-presentation to T cells, eventually leading to IL-1β-, IL-17-, and IFN-γ-dependent tumor eradication (22). This in vivo tumor-rejecting capacity, induced by dying cancer cells in the absence of any adjuvant, is considered a prerequisite for an agent to be termed an ICD inducer. …

Although the list of ICD inducers is constantly growing (113), only a few of these immunogenic modalities have been tested in order to generate an immunogenic tumor cell cargo to load DC vaccines (Tables 1 and 2). Figure 2 schematically represents the preparation of ICD-based DC vaccines and their putative modes of action.


Radiotherapy

Ionizing X-ray or γ-ray irradiation exerts its anticancer effect predominantly via its capacity to induce DNA double-strand breaks, leading to intrinsic cancer cell apoptosis (114). The idea that radiotherapy could also impact the immune system was derived from the observation that radiotherapy could induce T-cell-mediated delay of tumor growth in a non-irradiated lesion (115). This abscopal (ab-scopus, away from the target) effect of radiotherapy was later explained by its ICD-inducing capacity (116). Together with anthracyclines, γ-irradiation was one of the first treatment modalities identified to induce ICD. …


Shikonin

The phytochemical shikonin, a major component of Chinese herbal medicine, is known to inhibit proteasome activity. It serves multiple biological roles and can be applied as an antibacterial, antiviral, anti-inflammatory, and anticancer treatment. …

High-hydrostatic pressure

High-hydrostatic pressure (HHP) is an established method to sterilize pharmaceuticals, human transplants, and food. HHP between 100 and 250 megapascals (MPa) has been shown to induce apoptosis of murine and human (cancer) cells (121–123). While DNA damage does not seem to be induced by HHP <1000 MPa, HHP can inhibit enzymatic functions and the synthesis of cellular proteins (122). Increased ROS production was detected in HHP-treated cancer cell lines and ER stress was evidenced by the rapid phosphorylation of eIF2α (42). …

Oncolytic Viruses

Oncolytic viruses are self-replicating, tumor-selective virus strains that can directly lyse tumor cells. Over the past few years, a new oncolytic paradigm has arisen, entailing that, rather than utilizing oncolytic viruses solely for direct tumor eradication, the cell death they induce should be accompanied by the elicitation of antitumor immune responses to maximize their therapeutic efficacy (128). One way in which these oncolytic viruses can fulfill this oncolytic paradigm is by inducing ICD (128).

Thus far, three oncolytic virus strains have been shown to meet the molecular requirements of ICD: coxsackievirus B3 (CVB3), oncolytic adenovirus, and Newcastle disease virus (NDV) (Table 1) (113). Infection of tumor cells with these viruses causes the production of viral envelope proteins that induce ER stress by overloading the ER. Hence, all three virus strains can be considered type II ICD inducers (113). …

Photodynamic therapy

Photodynamic therapy (PDT) is an established, minimally invasive anticancer treatment modality. It has a two-step mode of action involving the selective uptake of a photosensitizer by the tumor tissue, followed by its activation by light of a specific wavelength. This activation results in the photochemical production of ROS in the presence of oxygen (129–131). One attractive feature of PDT is that the ROS-based oxidative stress originates in the particular subcellular location where the photosensitizer tends to accumulate, ultimately leading to the destruction of the tumor cell (132). …

Combinatorial Regimens

In DC vaccine settings, cancer cells are often not killed by a single treatment strategy but rather by a combination of treatments. In some cases, the underlying rationale lies within the additive or even synergistic value of combining several moderately immunogenic modalities. The combination of radiotherapy and heat shock has, for instance, been shown to induce higher levels of HSP70 in B16 melanoma cells than either therapy alone (16). In addition, a combination therapy consisting of heat shock, γ-irradiation, and UV irradiation has been shown to induce higher levels of ecto-CRT, ecto-HSP90, HMGB1, and ATP in comparison to either therapy alone or doxorubicin, a well-recognized inducer of ICD (57). ….

Triggering antitumor immune responses is an absolute requirement to tackle metastatic and diffusely infiltrating cancer cells that are resistant to standard-of-care therapeutic regimens. ICD-inducing modalities, such as PDT and radiotherapy, have been shown to act as in situ vaccines capable of inducing immune responses that caused regression of distal untreated tumors. Exploiting these ICD inducers and other immunogenic modalities to obtain a highly immunogenic antigenic tumor cell cargo for loading DC vaccines is a highly promising application. In the case of the two prominent ICD inducers, Hyp-PDT and HHP, preclinical studies evaluating this relatively new approach are underway, and HHP-based DC vaccines are already undergoing clinical testing. In the preclinical testing phase, more attention should be paid to some clinically driven considerations. First, one should consider the requirement of 100% mortality of the tumor cells before in vivo application. A second consideration from clinical practice (especially in multi-center clinical trials) is the fact that most tumor specimens arrive in the lab in a frozen state. This implies that a significant number of cells have already undergone non-immunogenic necrosis before the experimental cell killing strategies are applied. …


Read Full Post »

Tunable light sources

Larry H. Bernstein, MD, FCAP, Curator



Putting Tunable Light Sources to the Test

Common goals of spectroscopy applications such as studying the chemical or biological properties of a material often dictate the requirements of the measurement system’s lamp, power supply and monochromator.

JEFF ENG AND JOHN PARK, PH.D., NEWPORT CORP.   http://www.photonics.com/Article.aspx?AID=58302

Many common spectroscopic measurements require the coordinated operation of a detection instrument and light source, as well as data acquisition and processing. Integration of individual components can be challenging and various applications may have different requirements. Conventional lamp-based tunable light sources are a popular choice for applications requiring a measurement system with this degree of capability.

Many types of tunable light sources are available, with differences in individual component performance translating to the performance of the system as a whole. Tunable light sources are proving especially well-suited to one application in particular: quantum efficiency and spectral responsivity characterization of photonic sensors, such as solar cells.

Xenon and mercury xenon lamps, two examples of DC arc lamps.



The tunable light source’s (TLS) versatility as both a broadband and high-resolution monochromatic light source makes the unit suitable for a variety of applications, such as the study of wavelength-dependent chemical or biological properties or wavelength-induced physical changes of materials. These light sources can also be used in color analysis and reflectivity measurements of materials for quality purposes.

Among their unique attributes, the TLS can produce monochromatic light from the UV to near-infrared (NIR). Lamp-based TLSs feature two major components: a light source and a monochromator. Common lamps used in TLSs are the DC arc lamp and quartz tungsten halogen (QTH) lamp. While both of these lamps have a broad emission spectrum, arc lamps show characteristic wavelength emission peaks, whereas QTH lamps have relatively smooth spectral output curves. A stable power supply for the lamp is a critical component, since most applications require high light output power stability [1].

Smooth spectral output vs. monochromator throughput

DC arc lamps are excellent sources of continuous wave, broadband light. They consist of two electrodes (an anode and a cathode) separated by a gas such as neon, argon, mercury or xenon. Light is generated by ionizing the gas between the electrodes. The bright broadband emission from this short arc between the anode and cathode makes these lamps high-intensity point sources, capable of being collimated with the proper lens configuration.

DC arc lamps also offer the advantages of long lifetime, superior monochromator throughput (particularly in the UV range) and a smaller divergence angle. They are particularly well-suited for fiber coupling applications [2] (see Figure 1).


Figure 1. A xenon arc lamp housed in an Oriel Research lamp housing. Photo courtesy of Newport Corp.

Xenon (Xe) arc lamps, in particular, have a relatively smooth emission curve across the UV to visible spectrum, with characteristic wavelengths emitted from 380 to 750 nm. Strong xenon peaks, however, are emitted between 750 and 1000 nm.

Their sunlike emission spectrum and color temperature of about 5800 K make them a popular choice for solar simulation applications. (See Figure 2.)

Arc lamps can have the following specialty characteristics:

Ozone-free: Wavelength emissions below about 260 nm create toxic ozone. Ozone-free lamps use an envelope that blocks these short wavelengths; otherwise, an arc lamp should be operated outdoors or in a room with adequate ventilation to protect the user from the ozone created.

UV-enhanced: For applications requiring additional UV light intensity, UV-enhanced lamps should be used. These lamps provide the same visible to NIR performance of an arc lamp while providing high-intensity UV output due to changes in the material of the lamp’s glass envelope.

High-stability: High-stability arc lamps are built with a higher-quality cathode than that typically used in arc lamp construction. As a result, no arc wander occurs, allowing the lamp to maintain consistent output intensity throughout its lifetime.


Figure 2. The spectral output of 3000-W Xe and 250-W QTH lamps used in Oriel’s Tunable Light Sources. Photo courtesy of Newport Corp.

QTH lamps produce light by heating a filament wire with an electric current. The hot filament wire is surrounded by a vacuum or inert gas to prevent oxidation. QTH lamps are not very efficient at converting electricity to light, but they offer very accurate color reproduction due to their continuous blackbody spectrum. These lamps are a popular alternative to arc lamps because of their higher output intensity stability and because they lack intense UV emission, spectral emission lines in their output curve, and toxic ozone production. These advantages over traditional DC arc lamps make QTH lamps preferable for radiometric and photometric applications as well as for excitation sources of visible to NIR light. QTH lamps are also easier to handle and install, and they produce a smooth output spectrum. Selecting the most appropriate lamp type is a matter of deciding which performance criteria are most important.
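The blackbody character of these two lamp families can be sketched numerically. The snippet below is a rough illustration, not vendor data: it uses Planck's law and Wien's displacement law to compare a sunlike ~5800 K source with a QTH filament at an assumed ~3200 K color temperature.

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of a blackbody (W * sr^-1 * m^-3), via Planck's law."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0
    return a / b

def wien_peak_nm(temp_k):
    """Peak emission wavelength (nm) via Wien's displacement law."""
    return 2.897771955e-3 / temp_k * 1e9

# Nominal color temperatures: ~5800 K (sunlike Xe arc), ~3200 K (assumed QTH filament)
for t in (5800, 3200):
    print(f"{t} K: peak at {wien_peak_nm(t):.0f} nm, "
          f"radiance at 550 nm = {planck_radiance(550e-9, t):.3e}")
```

The ~5800 K curve peaks near 500 nm, in the visible, while the cooler filament peaks in the NIR, which is consistent with the spectral output curves shown in Figure 2.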

Constant current vs. constant power

The power supply is a vital component for operating a DC arc or QTH lamp with minimum light ripple. The lamps are operated in either constant current or constant power mode and are used in applications such as radiometric measurements, where a stable light output is required for accurate measurement. Providing stable electrical power to the lamp is important since fluctuations in the wavelength and output intensity of the light source impact the accuracy of measurement.

There is very little difference in the short-term output stability when operating an arc lamp or QTH lamp in constant current or constant power mode. However, the differences appear as the lamp ages. For arc lamps, even with a stable power supply, deposits on the inside of the lamp envelope are visible as the electrodes degrade, which causes an unstable arc position, changing the electrical characteristics of the arc lamp. The distance between the cathode and anode of the arc lamp increases, raising the lamp’s operating voltage. For QTH lamps, deposits on the inside of the lamp envelope are visible as the lamp filament degrades, changing the electrical and spectral characteristics of the lamp.

In power mode, the lamp is operated at a constant power setting. As the lamp’s operating voltage drifts with age, the current is raised or lowered to maintain the power at the same level. As the lamp ages, the radiant output decreases; however, lamp lifetime is prolonged.

In current mode, the lamp is operated at a constant current setting. As the lamp’s operating voltage rises with age, the input power required for operation increases. This results in greater output power, which, to some extent, may help compensate for a darkening lamp envelope. However, the lamp’s lifetime is greatly reduced due to the increase in power.

Although power supplies are highly regulated, there are factors beyond the control of the power supply that may affect light output. Some of these factors include lamp aging, ambient temperature fluctuations and filament erosion. For applications in which high stability light output intensity is especially critical, optical feedback control of the power supply is suggested in order to compensate for such factors [3] (see Figure 3).
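As a rough illustration of such optical feedback, the sketch below runs a simple proportional control loop in which a simulated photodiode reading trims the lamp current as the envelope slowly darkens. All numbers and interfaces here are hypothetical placeholders, not a real power supply API; an actual supply implements this in hardware.

```python
# Minimal sketch of optical (intensity) feedback for a lamp power supply.
# The gain, current limits, and lamp model below are illustrative assumptions.

def feedback_step(setpoint, measured, current, gain=0.05, i_min=5.0, i_max=25.0):
    """One proportional-control update: nudge lamp current toward the
    intensity setpoint, clamped to the supply's safe current range (A)."""
    error = setpoint - measured           # positive -> lamp too dim
    new_current = current + gain * error  # proportional correction
    return min(i_max, max(i_min, new_current))

# Simulated lamp whose output per amp drops as the envelope darkens:
intensity_per_amp = 10.0
current = 10.0
for _ in range(50):
    intensity_per_amp *= 0.999            # slow envelope darkening
    measured = intensity_per_amp * current
    current = feedback_step(setpoint=100.0, measured=measured, current=current)
print(f"final current {current:.2f} A")
```

The loop drives the current above its initial 10 A to hold the intensity setpoint, mirroring how an intensity-mode supply compensates for lamp aging.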


Figure 3. Oriel’s OPS Series Power Supplies offer the option of operating a lamp in constant power, constant current or intensity operation modes. Photo courtesy of Newport Corp.

Diffraction gratings narrow the wavelength band

Monochromators use diffraction gratings to spatially isolate and select a narrow band of wavelengths from a wider wavelength emitting light source. They are valuable pieces of equipment because they can be used to create quasi-monochromatic light and to take high-precision spectral measurements. A high precision stepper motor is typically used to select the desired wavelength and switch between diffraction gratings quickly, without sacrificing instrument performance.

Determining which slit width to use is based on the trade-off between light throughput and the resolution required for measurement. A larger slit width allows for more light throughput; however, more light throughput results in poorer resolution. When choosing a slit width at which to operate the monochromator, both the input and output ports must be set to the same slit width. (See Figure 4.) Focused light enters the monochromator through the entrance slit and is redirected by the collimating mirror toward the grating. The grating directs the light toward the focusing mirror, which then redirects the chosen wavelength toward the exit slit, where quasi-monochromatic light is emitted [4].
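This throughput/resolution trade-off can be quantified with the usual first-order approximation: spectral bandpass is roughly the slit width multiplied by the grating's reciprocal linear dispersion. The dispersion figure used below is an assumed illustrative value, not a quoted specification for any particular instrument.

```python
def bandpass_nm(slit_width_mm, dispersion_nm_per_mm):
    """Approximate spectral bandpass of a monochromator: the product of the
    slit width and the grating's reciprocal linear dispersion."""
    return slit_width_mm * dispersion_nm_per_mm

# Assumed illustrative dispersion of 6.5 nm/mm:
for slit_mm in (0.05, 0.1, 0.2, 0.6):
    print(f"{slit_mm*1000:>4.0f} um slit -> {bandpass_nm(slit_mm, 6.5):.2f} nm bandpass")
```

Doubling the slit width doubles the bandpass (worse resolution) while admitting more light, which is exactly the trade-off described above.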


Figure 4. A fixed width slit being installed into an Oriel Cornerstone 130 monochromator. Photo courtesy of Newport Corp.

Measuring quantum efficiencies

Measuring a device’s quantum efficiency (QE) at each photon energy over a range of wavelengths is an ideal task for a tunable light source. The QE of a photoelectric material for photons with energy below the band gap is zero. The QE value of a light-sensing device such as a solar cell indicates the amount of current that the cell will produce when irradiated by photons of a particular wavelength. The principle of QE measurement is to count the ratio of carriers extracted from the material’s valence band to the number of photons impinging on the surface. To do this, it is necessary to shine a calibrated, tunable light on the cell while simultaneously measuring the output current. The key to accurate measurement of the QE/internal photon-to-current efficiency is to know precisely how much scanning light is incident on the device under test and how much current is generated. Thus, measurement of the light output with a NIST (National Institute of Standards and Technology) traceable calibrated detector is necessary prior to testing, since illumination at an absolute optical power is required.

External quantum efficiency (EQE) is the ratio of the number of charge carriers generated to the number of photons incident on a solar cell. Internal quantum efficiency (IQE) also considers the internal losses, that is, the losses associated with the photons absorbed by nonactive layers of the cell. By comparison, EQE is much more straightforward to measure, and it gives a direct parameter of how much output current will be contributed to the output circuit per incident photon at a given wavelength. IQE is a more in-depth parameter, taking into account the photoelectric efficiency of all composite layers of a material. In an IQE measurement, the losses from nonactive layers of the material are measured in order to calculate a net quantum efficiency, a much truer efficiency measurement.

Understanding the conversion efficiency as a function of the wavelength of light impingent on the cell makes QE measurement critical for materials research and solar cell design. With this data, the solar cell composition and topography can be modified to optimize conversion over the broadest possible range of wavelengths.

As a formula, it is given by IQE = EQE/(1 − R), where R is the reflectivity, direct and diffuse, of the solar cell. The IQE is an indication of the capacity of the active layers of the solar cell to make good use of the absorbed photons. It is always higher than the EQE, but should never exceed 100 percent, with the exception of multiple-exciton generation. Figure 5 illustrates how the tunable light source is used to illuminate the solar cell to perform an IQE measurement. The software controls all components of the measurement system, including the monochromator and data acquisition [5].
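Under these definitions, EQE follows directly from the measured photocurrent and the calibrated incident power, and IQE from the reflectivity correction given above. The sketch below uses illustrative numbers, not measured data.

```python
# EQE from measured photocurrent and calibrated incident optical power,
# and IQE via IQE = EQE / (1 - R). Example values below are illustrative.

Q = 1.602176634e-19        # elementary charge, C
H = 6.62607015e-34         # Planck constant, J*s
C_LIGHT = 2.99792458e8     # speed of light, m/s

def eqe(current_a, power_w, wavelength_m):
    """Electrons out per incident photon at a single wavelength."""
    photons_per_s = power_w * wavelength_m / (H * C_LIGHT)
    electrons_per_s = current_a / Q
    return electrons_per_s / photons_per_s

def iqe(eqe_value, reflectivity):
    """Correct EQE for light reflected off the cell (direct + diffuse)."""
    return eqe_value / (1.0 - reflectivity)

# Illustrative reading: 0.3 mA photocurrent, 1 mW incident at 550 nm, R = 0.10
e = eqe(current_a=0.3e-3, power_w=1e-3, wavelength_m=550e-9)
print(f"EQE = {e:.3f}, IQE = {iqe(e, 0.10):.3f}")
```

In a full measurement the same calculation is simply repeated at each wavelength step of the scan, giving the EQE and IQE curves versus wavelength.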


Figure 5. A sample QE measurement system using the components of a tunable light source. Photo courtesy of Newport Corp.

To measure quantum efficiency in 10-nm wavelength steps, the monochromator slit is typically hundreds of microns wide. The slit width is reduced by approximately half if 5-nm wavelength increments are desired. However, the output power of the monochromator is reduced by more than 50 percent if the slit width is halved. Lowering the optical power impacts the QE measurement, since a solar cell responds to this diminished optical power with a low output current. This can result in a poor signal-to-noise ratio, making a QE measurement error more likely. The detection of low current requires very sensitive equipment with the ability to measure current down to the picoampere level. To make for an easier signal measurement, the optical power is typically increased. A DC arc source is the better choice for QE measurements made in 5-nm increments or lower, because the lamp’s small arc size results in better monochromator throughput. However, a QTH lamp is the better choice if light stability better than 0.1 percent is required, with the trade-off of not being able to measure in wavelength increments as fine as with an arc lamp.
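The "more than 50 percent" figure is consistent with the common assumption that throughput scales with the product of slit width and bandpass, i.e., roughly with the square of the slit width when entrance and exit slits are scaled together. A minimal sketch under that assumption:

```python
def relative_throughput(slit_scale):
    """Relative optical power out of a monochromator when both entrance and
    exit slits are scaled by `slit_scale`, assuming the dispersed image
    overfills the slits: power ~ slit width x bandpass, i.e. ~ width^2."""
    return slit_scale ** 2

print(f"half-width slits pass ~{relative_throughput(0.5):.0%} of the light")
```

Halving the slits thus leaves only about a quarter of the light, which is why moving from 10-nm to 5-nm steps costs well over half the signal.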

Balance between optical power and resolution is an important consideration, as it impacts the quality of the QE measurement. The selection of lamp type and monochromator specifications are important considerations in TLS design. To be suitable for the majority of spectroscopic applications, a TLS requires high output power and stability, a long lamp lifetime, and broadband spectral emission with high-resolution capability.

Meet the authors

John Park, new product development manager at Newport Corp., has designed and developed numerous spectroscopy instruments for the photonics industry for over 10 years. He holds two granted patents and is a graduate from University of California, Irvine, with a Ph.D. in electrical engineering; email: john.park@newport.com. Jeff Eng is a product specialist for Oriel Spectroscopy Products at Newport Corp. His work experience includes application support, business-to-business sales and marketing activity of photonic light sources and detectors. He is a graduate of Rutgers University; email: jeff.eng@newport.com.


1. Newport Corp., Oriel Instruments TLS datasheet. Tunable Xe arc lamp sources. http://assets.newport.com/webDocuments-EN/images/39191.pdf.

2. Newport Corp., Oriel Instruments handbook: The Book of Photon Tools, light source section.

3. Newport Corp., Oriel Instruments OPS datasheet. OPS-A series arc lamp power supplies. http://assets.newport.com/webDocuments-EN/images/OPS-A%20Series%20Power%20Supply%20Datasheet.pdf.

4. J. M. Lerner and A. Thevenon (1988). The Optics of Spectroscopy. Edison, N.J.: Optical Systems/Instruments SA Inc.

5. K. Emery (2005). Handbook of Photovoltaic Science and Engineering, eds. A. Luque and S. Hegedus. Chapter 16: Measurement and characterization of solar cells and modules. Hoboken, N.J.: John Wiley & Sons Ltd.

Read Full Post »
