

Disease related changes in proteomics, protein folding, protein-protein interaction, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

Disease related changes in proteomics, protein folding, protein-protein interaction

Curator: Larry H. Bernstein, MD, FCAP

LPBI

 

Frankenstein Proteins Stitched Together by Scientists

http://www.genengnews.com/gen-news-highlights/frankenstein-proteins-stitched-together-by-scientists/81252715/

http://www.genengnews.com/Media/images/GENHighlight/thumb_May11_2016_Wikipedia_1831Frankenstein2192501426.jpg

The Frankenstein monster, stitched together from disparate body parts, proved to be an abomination, but stitched together proteins may fare better. They may, for example, serve specific purposes in medicine, research, and industry. At least, that’s the ambition of scientists based at the University of North Carolina. They have developed a computational protocol called SEWING that builds new proteins from connected or disconnected pieces of existing structures. [Wikipedia]

Unlike Victor Frankenstein, who betrayed Promethean ambition when he sewed together his infamous creature, today’s biochemists are relatively modest. Rather than defy nature, they emulate it. For example, at the University of North Carolina (UNC), researchers have taken inspiration from natural evolutionary mechanisms to develop a technique called SEWING—Structure Extension With Native-substructure Graphs. SEWING is a computational protocol that describes how to stitch together new proteins from connected or disconnected pieces of existing structures.

“We can now begin to think about engineering proteins to do things that nothing else is capable of doing,” said UNC’s Brian Kuhlman, Ph.D. “The structure of a protein determines its function, so if we are going to learn how to design new functions, we have to learn how to design new structures. Our study is a critical step in that direction and provides tools for creating proteins that haven’t been seen before in nature.”

Traditionally, researchers have used computational protein design to recreate in the laboratory what already exists in the natural world. In recent years, their focus has shifted toward inventing novel proteins with new functionality. These design projects all start with a specific structural “blueprint” in mind, and as a result are limited. Dr. Kuhlman and his colleagues, however, believe that by removing the limitations of a predetermined blueprint and taking cues from evolution they can more easily create functional proteins.

Dr. Kuhlman’s UNC team developed a protein design approach that emulates natural mechanisms for shuffling tertiary structures such as pleats, coils, and furrows. Putting the approach into action, the UNC team mapped 50,000 stitched-together proteins on the computer and then produced 21 promising structures in the laboratory. Details of this work appeared May 6 in the journal Science, in an article entitled, “Design of Structurally Distinct Proteins Using Strategies Inspired by Evolution.”
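The core idea of assembling new backbones from compatible pieces of existing structures can be illustrated with a toy graph search. This is only a minimal sketch of the stitching concept: the fragment names and overlap "signatures" below are invented, and the real SEWING protocol evaluates structural superposition of backbone segments rather than matching string tags.

```python
# Hypothetical substructure fragments (names invented for illustration):
# each fragment exposes an N-terminal and a C-terminal overlap signature;
# two fragments can be stitched when the C-end of one matches the N-end
# of the other.
fragments = {
    "helixA": ("s1", "s2"),
    "helixB": ("s2", "s3"),
    "loopC":  ("s3", "s1"),
    "helixD": ("s2", "s1"),
}

def compatible(a, b):
    """Fragment a can precede fragment b if a's C-end matches b's N-end."""
    return fragments[a][1] == fragments[b][0]

def enumerate_designs(max_len=3):
    """Depth-first enumeration of stitchable fragment chains."""
    designs = []
    def extend(chain):
        if len(chain) > 1:
            designs.append(tuple(chain))   # record every multi-fragment chain
        if len(chain) == max_len:
            return
        for nxt in fragments:
            if nxt not in chain and compatible(chain[-1], nxt):
                extend(chain + [nxt])
    for start in fragments:
        extend([start])
    return designs

print(enumerate_designs())
# e.g. ('helixA', 'helixB', 'loopC') is one stitchable three-fragment design
```

In the actual method, each candidate chain would then be scored and only the best designs (21 of 50,000 here) carried forward to the bench.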

“Helical proteins designed with SEWING contain structural features absent from other de novo designed proteins and, in some cases, remain folded at more than 100°C,” wrote the authors. “High-resolution structures of the designed proteins CA01 and DA05R1 were solved by x-ray crystallography (2.2 angstrom resolution) and nuclear magnetic resonance, respectively, and there was excellent agreement with the design models.”

Essentially, the UNC scientists confirmed that the proteins they had synthesized contained the unique structural varieties that had been designed on the computer. The UNC scientists also determined that the structures they had created had new surface and pocket features. Such features, they noted, provide potential binding sites for ligands or macromolecules.

“We were excited that some had clefts or grooves on the surface, regions that naturally occurring proteins use for binding other proteins,” said the Science article’s first author, Tim M. Jacobs, Ph.D., a former graduate student in Dr. Kuhlman’s laboratory. “That’s important because if we wanted to create a protein that can act as a biosensor to detect a certain metabolite in the body, either for diagnostic or research purposes, it would need to have these grooves. Likewise, if we wanted to develop novel therapeutics, they would also need to attach to specific proteins.”

Currently, the UNC researchers are using SEWING to create proteins that can bind to several other proteins at a time. Many of the most important proteins are such multitaskers, including the blood protein hemoglobin.

 

Histone Mutation Deranges DNA Methylation to Cause Cancer

http://www.genengnews.com/gen-news-highlights/histone-mutation-deranges-dna-methylation-to-cause-cancer/81252723/

http://www.genengnews.com/Media/images/GENHighlight/thumb_May13_2016_RockefellerUniv_ChildhoodSarcoma1293657114.jpg

In some cancers, including chondroblastoma and a rare form of childhood sarcoma, a mutation in histone H3 reduces global levels of methylation (dark areas) in tumor cells but not in normal cells (arrowhead). The mutation locks the cells in a proliferative state to promote tumor development. [Laboratory of Chromatin Biology and Epigenetics at The Rockefeller University]

They have been called oncohistones, the mutated histones that are known to accompany certain pediatric cancers. Despite their suggestive moniker, oncohistones have kept their oncogenic secrets. For example, it has been unclear whether oncohistones are able to cause cancer on their own, or whether they need to act in concert with additional DNA mutations, that is, mutations other than those affecting histone structures.

While oncohistone mechanisms remain poorly understood, this particular question—the oncogenicity of lone oncohistones—has been resolved, at least in part. According to researchers based at The Rockefeller University, a change to the structure of a histone can trigger a tumor on its own.

This finding appeared May 13 in the journal Science, in an article entitled, “Histone H3K36 Mutations Promote Sarcomagenesis Through Altered Histone Methylation Landscape.” The article describes the Rockefeller team’s study of a histone protein called H3, which has been found in about 95% of samples of chondroblastoma, a benign tumor that arises in cartilage, typically during adolescence.

The Rockefeller scientists found that the H3 lysine 36–to–methionine (H3K36M) mutation impairs the differentiation of mesenchymal progenitor cells and generates undifferentiated sarcoma in vivo.

After the scientists inserted the H3 histone mutation into mouse mesenchymal progenitor cells (MPCs)—which generate cartilage, bone, and fat—they watched these cells lose the ability to differentiate in the lab. Next, the scientists injected the mutant cells into living mice, and the animals developed tumors rich in MPCs, known as undifferentiated sarcomas. Finally, the researchers tried to understand how the mutation causes the tumors to develop.

The scientists determined that H3K36M mutant nucleosomes inhibit the enzymatic activities of several H3K36 methyltransferases.

“Depleting H3K36 methyltransferases, or expressing an H3K36I mutant that similarly inhibits H3K36 methylation, is sufficient to phenocopy the H3K36M mutation,” the authors of the Science study wrote. “After the loss of H3K36 methylation, a genome-wide gain in H3K27 methylation leads to a redistribution of polycomb repressive complex 1 and de-repression of its target genes known to block mesenchymal differentiation.”

Essentially, when the H3K36M mutation occurs, the cell becomes locked in a proliferative state—meaning it divides constantly, leading to tumors. Specifically, the mutation inhibits enzymes that normally tag the histone with chemical groups known as methyls, allowing genes to be expressed normally.

In response to this lack of modification, another part of the histone becomes overmodified, or tagged with too many methyl groups. “This leads to an overall resetting of the landscape of chromatin, the complex of DNA and its associated factors, including histones,” explained co-author Peter Lewis, Ph.D., a professor at the University of Wisconsin-Madison and a former postdoctoral fellow in the laboratory of C. David Allis, Ph.D., a professor at Rockefeller.

The finding—that a “resetting” of the chromatin landscape can lock the cell into a proliferative state—suggests that researchers should be on the hunt for more mutations in histones that might be driving tumors. For their part, the Rockefeller researchers are trying to learn more about how this specific mutation in histone H3 causes tumors to develop.

“We want to know which pathways cause the mesenchymal progenitor cells that carry the mutation to continue to divide, and not differentiate into the bone, fat, and cartilage cells they are destined to become,” said co-author Chao Lu, Ph.D., a postdoctoral fellow in the Allis lab.

Once researchers understand more about these pathways, added Dr. Lewis, they can consider ways of blocking them with drugs, particularly in tumors such as MPC-rich sarcomas—which, unlike chondroblastoma, can be deadly. In fact, drugs that block these pathways may already exist and may even be in use for other types of cancers.

“One long-term goal of our collaborative team is to better understand fundamental mechanisms that drive these processes, with the hope of providing new therapeutic approaches,” concluded Dr. Allis.

 

Histone H3K36 mutations promote sarcomagenesis through altered histone methylation landscape

Chao Lu, Siddhant U. Jain, Dominik Hoelper, …, C. David Allis, Nada Jabado, Peter W. Lewis
Science 13 May 2016; 352(6287):844–849. http://dx.doi.org/10.1126/science.aac7272  http://science.sciencemag.org/content/352/6287/844

An oncohistone deranges inhibitory chromatin

Missense mutations (that change one amino acid for another) in histone H3 can produce a so-called oncohistone and are found in a number of pediatric cancers. For example, the lysine-36–to-methionine (K36M) mutation is seen in almost all chondroblastomas. Lu et al. show that K36M mutant histones are oncogenic, and they inhibit the normal methylation of this same residue in wild-type H3 histones. The mutant histones also interfere with the normal development of bone-related cells and the deposition of inhibitory chromatin marks.

Science, this issue p. 844

Several types of pediatric cancers reportedly contain high-frequency missense mutations in histone H3, yet the underlying oncogenic mechanism remains poorly characterized. Here we report that the H3 lysine 36–to–methionine (H3K36M) mutation impairs the differentiation of mesenchymal progenitor cells and generates undifferentiated sarcoma in vivo. H3K36M mutant nucleosomes inhibit the enzymatic activities of several H3K36 methyltransferases. Depleting H3K36 methyltransferases, or expressing an H3K36I mutant that similarly inhibits H3K36 methylation, is sufficient to phenocopy the H3K36M mutation. After the loss of H3K36 methylation, a genome-wide gain in H3K27 methylation leads to a redistribution of polycomb repressive complex 1 and de-repression of its target genes known to block mesenchymal differentiation. Our findings are mirrored in human undifferentiated sarcomas in which novel K36M/I mutations in H3.1 are identified.

 

Mitochondria? We Don’t Need No Stinking Mitochondria!

 

http://www.genengnews.com/Media/images/GENHighlight/thumb_fx11801711851.jpg
Diagram comparing typical eukaryotic cell to the newly discovered mitochondria-free organism. [Karnkowska et al., 2016, Current Biology 26, 1–11]
The organelle that produces a significant portion of energy for eukaryotic cells would seemingly be indispensable, yet over the years a number of organisms have been discovered that challenge that biological premise. These so-called amitochondrial species may lack a defined organelle, but they still retain some residual functions of their mitochondria-containing brethren. Even the intestinal eukaryotic parasite Giardia intestinalis, which was for many years considered to be mitochondria-free, was recently shown to contain a considerably shriveled version of the organelle.

Now, an international group of scientists has released results from a new study that challenge the notion that mitochondria are essential for eukaryotes: they discovered an organism, residing in the gut of chinchillas, that contains absolutely no trace of mitochondria at all.

“In low-oxygen environments, eukaryotes often possess a reduced form of the mitochondrion, but it was believed that some of the mitochondrial functions are so essential that these organelles are indispensable for their life,” explained lead study author Anna Karnkowska, Ph.D., visiting scientist at the University of British Columbia in Vancouver. “We have characterized a eukaryotic microbe which indeed possesses no mitochondrion at all.”

 

Mysterious Eukaryote Missing Mitochondria

Researchers uncover the first example of a eukaryotic organism that lacks the organelles.

By Anna Azvolinsky | May 12, 2016

http://www.the-scientist.com/?articles.view/articleNo/46077/title/Mysterious-Eukaryote-Missing-Mitochondria

http://www.the-scientist.com/images/News/May2016/620_Monocercomonides-Pa203.jpg

Monocercomonoides sp. PA203 [Vladimir Hampl, Charles University, Prague, Czech Republic]

Scientists have long thought that mitochondria—organelles responsible for energy generation—are an essential and defining feature of a eukaryotic cell. Now, researchers from Charles University in Prague and their colleagues are challenging this notion with their discovery of a eukaryotic organism, Monocercomonoides species PA203, which lacks mitochondria. The team’s phylogenetic analysis, published today (May 12) in Current Biology, suggests that Monocercomonoides (which belong to the Oxymonadida group of protozoa and live in low-oxygen environments) did have mitochondria at one point, but eventually lost the organelles.

“This is quite a groundbreaking discovery,” said Thijs Ettema, who studies microbial genome evolution at Uppsala University in Sweden and was not involved in the work.

“This study shows that mitochondria are not so central for all lineages of living eukaryotes,” Toni Gabaldón of the Center for Genomic Regulation in Barcelona, Spain, who also was not involved in the work, wrote in an email to The Scientist. “Yet, this mitochondrial-devoid, single-cell eukaryote is as complex as other eukaryotic cells in almost any other aspect of cellular complexity.”

Charles University’s Vladimir Hampl studies the evolution of protists. Along with Anna Karnkowska and colleagues, Hampl decided to sequence the genome of Monocercomonoides, a little-studied protist that lives in the digestive tracts of vertebrates. The 75-megabase genome—the first of an oxymonad—did not contain any conserved genes found on mitochondrial genomes of other eukaryotes, the researchers found. It also did not contain any nuclear genes associated with mitochondrial functions.

“It was surprising and for a long time, we didn’t believe that the [mitochondria-associated genes were really not there]. We thought we were missing something,” Hampl told The Scientist. “But when the data kept accumulating, we switched to the hypothesis that this organism really didn’t have mitochondria.”

Because researchers had previously found no examples of eukaryotes without some form of mitochondria, the current theory of the origin of eukaryotes posits that the appearance of mitochondria was crucial to the identity of these organisms.

Researchers now view these mitochondria-like organelles as a continuum, ranging from full mitochondria to highly reduced forms. Some anaerobic protists, for example, have only pared-down versions of mitochondria, such as hydrogenosomes and mitosomes, which lack a mitochondrial genome. But these mitochondrion-like organelles still perform essential functions of the iron-sulfur cluster assembly pathway, which is known to be conserved in virtually all eukaryotic organisms studied to date.

Yet, in their analysis, the researchers found no evidence of the presence of any components of this mitochondrial pathway.

As with the scaling down of mitochondria into mitosomes in some organisms, the ancestors of modern Monocercomonoides once had mitochondria. “Because this organism is phylogenetically nested among relatives that had conventional mitochondria, this is most likely a secondary adaptation,” said Michael Gray, a biochemist who studies mitochondria at Dalhousie University in Nova Scotia and was not involved in the study. According to Gray, the finding of a mitochondria-deficient eukaryote does not mean that the organelles did not play a major role in the evolution of eukaryotic cells.

To be sure they were not missing mitochondrial proteins, Hampl’s team also searched for potential mitochondrial protein homologs of other anaerobic species, and for signature sequences of a range of known mitochondrial proteins. While similar searches with other species uncovered a few mitochondrial proteins, the team’s analysis of Monocercomonoides came up empty.
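The logic of such an absence check can be sketched in a few lines. This is a toy illustration only: the sequences and signature motifs below are invented placeholders, and the actual study relied on profile-based homology searches across the full predicted proteome.

```python
# Toy absence check: scan predicted proteins for hallmark mitochondrial
# signatures. Sequences and motifs here are invented placeholders,
# not real mitochondrial markers.
mito_signatures = {
    "ISC_scaffold_motif": "LPPVK",
    "TOM_import_motif": "GWALF",
}

predicted_proteome = {
    "gene_001": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    "gene_002": "MSGRGKQGGKARAKAKTRSSRAGLQFPVGRV",
}

def find_signature_hits(proteome, signatures):
    """Return every (gene, signature) pair where a motif occurs."""
    hits = []
    for gene, seq in proteome.items():
        for name, motif in signatures.items():
            if motif in seq:
                hits.append((gene, name))
    return hits

hits = find_signature_hits(predicted_proteome, mito_signatures)
# An empty hit list is consistent with (but does not by itself prove)
# absence of the pathway, which is why the team combined several
# complementary searches.
print(hits)
```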

“The data is very complete,” said Ettema. “It is difficult to prove the absence of something but [these authors] do a convincing job.”

To form the essential iron-sulfur clusters, the team discovered that Monocercomonoides use a sulfur mobilization system found in the cytosol, and that an ancestor of the organism acquired this system by lateral gene transfer from bacteria. This cytosolic, compensating system allowed Monocercomonoides to lose the otherwise essential iron-sulfur cluster-forming pathway in the mitochondrion, the team proposed.

“This work shows the great evolutionary plasticity of the eukaryotic cell,” said Karnkowska, who participated in the study while she was a postdoc at Charles University. Karnkowska, who is now a visiting researcher at the University of British Columbia in Canada, added: “This is a striking example of how far the evolution of a eukaryotic cell can go that was beyond our expectations.”

“The results highlight how many surprises may await us in the poorly studied eukaryotic phyla that live in under-explored environments,” Gabaldon said.

Ettema agreed. “Now that we’ve found one, we need to look at the bigger picture and see if there are other examples of eukaryotes that have lost their mitochondria, to understand how adaptable eukaryotes are.”

  1. Karnkowska et al., “A eukaryote without a mitochondrial organelle,” Current Biology, doi:10.1016/j.cub.2016.03.053, 2016.


 

A Eukaryote without a Mitochondrial Organelle

Anna Karnkowska, Vojtěch Vacek, Zuzana Zubáčová, …, Čestmír Vlček, Vladimír Hampl
DOI: http://dx.doi.org/10.1016/j.cub.2016.03.053


Highlights

  • Monocercomonoides sp. is a eukaryotic microorganism with no mitochondria
  • The complete absence of mitochondria is a secondary loss, not an ancestral feature
  • The essential mitochondrial ISC pathway was replaced by a bacterial SUF system

The presence of mitochondria and related organelles in every studied eukaryote supports the view that mitochondria are essential cellular components. Here, we report the genome sequence of a microbial eukaryote, the oxymonad Monocercomonoides sp., which revealed that this organism lacks all hallmark mitochondrial proteins. Crucially, the mitochondrial iron-sulfur cluster assembly pathway, thought to be conserved in virtually all eukaryotic cells, has been replaced by a cytosolic sulfur mobilization system (SUF) acquired by lateral gene transfer from bacteria. In the context of eukaryotic phylogeny, our data suggest that Monocercomonoides is not primitively amitochondrial but has lost the mitochondrion secondarily. This is the first example of a eukaryote lacking any form of a mitochondrion, demonstrating that this organelle is not absolutely essential for the viability of a eukaryotic cell.

http://www.cell.com/cms/attachment/2056332410/2061316405/fx1.jpg

 

HIV Particles Used to Trap Intact Mammalian Protein Complexes

Belgian scientists from VIB and UGent developed Virotrap, a viral particle sorting approach for purifying protein complexes under native conditions.

http://www.technologynetworks.com/Proteomics/news.aspx?ID=191122

This method catches a bait protein together with its associated protein partners in virus-like particles that are budded from human cells. In this way, cell lysis is not needed and protein complexes are preserved during purification.

With his feet in both a proteomics lab and an interactomics lab, VIB/UGent professor Sven Eyckerman is well aware of the shortcomings of conventional approaches to analyze protein complexes. The lysis conditions required in mass spectrometry–based strategies to break open cell membranes often affect protein-protein interactions. “The first step in a classical study on protein complexes essentially turns the highly organized cellular structure into a big messy soup”, Eyckerman explains.

Inspired by virus biology, Eyckerman came up with a creative solution. “We used the natural process of HIV particle formation to our benefit by hacking a completely safe form of the virus to abduct intact protein machines from the cell.” It is well known that the HIV virus captures a number of host proteins during its particle formation. By fusing a bait protein to the HIV-1 GAG protein, interaction partners become trapped within virus-like particles that bud from mammalian cells. Standard proteomic approaches are used next to reveal the content of these particles. Fittingly, the team named the method ‘Virotrap’.

The Virotrap approach is exceptional as protein networks can be characterized under natural conditions. By trapping protein complexes in the protective environment of a virus-like shell, the intact complexes are preserved during the purification process. The researchers showed the method was suitable for detection of known binary interactions as well as mass spectrometry-based identification of novel protein partners.

Virotrap is a textbook example of bringing research teams with complementary expertise together. Cross-pollination with the labs of Jan Tavernier (VIB/UGent) and Kris Gevaert (VIB/UGent) enabled the development of this platform.

Jan Tavernier: “Virotrap represents a new concept in co-complex analysis wherein complex stability is physically guaranteed by a protective, physical structure. It is complementary to the arsenal of existing interactomics methods, but also holds potential for other fields, like drug target characterization. We also developed a small molecule-variant of Virotrap that could successfully trap protein partners for small molecule baits.”

Kris Gevaert: “Virotrap can also impact our understanding of disease pathways. We were actually surprised to see that this virus-based system could be used to study antiviral pathways, like Toll-like receptor signaling. Understanding these protein machines in their natural environment is essential if we want to modulate their activity in pathology.”

 

Trapping mammalian protein complexes in viral particles

Sven Eyckerman, Kevin Titeca, …, Kris Gevaert & Jan Tavernier
Nature Communications Apr 2016; 7:11416. http://dx.doi.org/10.1038/ncomms11416

Cell lysis is an inevitable step in classical mass spectrometry–based strategies to analyse protein complexes. Complementary lysis conditions, in situ cross-linking strategies and proximal labelling techniques are currently used to reduce lysis effects on the protein complex. We have developed Virotrap, a viral particle sorting approach that obviates the need for cell homogenization and preserves the protein complexes during purification. By fusing a bait protein to the HIV-1 GAG protein, we show that interaction partners become trapped within virus-like particles (VLPs) that bud from mammalian cells. Using an efficient VLP enrichment protocol, Virotrap allows the detection of known binary interactions and MS-based identification of novel protein partners as well. In addition, we show the identification of stimulus-dependent interactions and demonstrate trapping of protein partners for small molecules. Virotrap constitutes an elegant complementary approach to the arsenal of methods to study protein complexes.

Proteins mostly exert their function within supramolecular complexes. Strategies for detecting protein–protein interactions (PPIs) can be roughly divided into genetic systems [1] and co-purification strategies combined with mass spectrometry (MS) analysis (for example, AP–MS) [2]. The latter approaches typically require cell or tissue homogenization using detergents, followed by capture of the protein complex using affinity tags [3] or specific antibodies [4]. The protein complexes extracted from this ‘soup’ of constituents are then subjected to several washing steps before actual analysis by trypsin digestion and liquid chromatography–MS/MS analysis. Such lysis and purification protocols are typically empirical and have mostly been optimized using model interactions in single labs. In fact, lysis conditions can profoundly affect the number of both specific and nonspecific proteins that are identified in a typical AP–MS set-up. Indeed, recent studies using the nuclear pore complex as a model protein complex describe optimization of purifications for the different proteins in the complex by examining 96 different conditions [5]. Nevertheless, for new purifications, it remains hard to correctly estimate the loss of factors in a standard AP–MS experiment due to washing and dilution effects during treatments (that is, false negatives). These considerations have pushed the concept of stabilizing PPIs before the actual homogenization step. A classical approach involves cross-linking with simple reagents (for example, formaldehyde) or with more advanced isotope-labelled cross-linkers (reviewed in ref. 2). However, experimental challenges such as cell permeability and reactivity still preclude the widespread use of cross-linking agents. Moreover, MS-generated spectra of cross-linked peptides are notoriously difficult to identify correctly.
A recent lysis-independent solution involves the expression of a bait protein fused to a promiscuous biotin ligase, which results in labelling of proteins proximal to the activity of the enzyme-tagged bait protein [6]. When compared with AP–MS, this BioID approach delivers a complementary set of candidate proteins, including novel interaction partners [7,8]. Such studies clearly underscore the need for complementary approaches in co-complex strategies.

The evolutionary stress on viruses promoted highly condensed coding of information and maximal functionality for small genomes. Accordingly, for HIV-1 it is sufficient to express a single protein, the p55 GAG protein, for efficient production of virus-like particles (VLPs) from cells [9,10]. This protein is highly mobile before its accumulation in cholesterol-rich regions of the membrane, where multimerization initiates the budding process [11]. A total of 4,000–5,000 GAG molecules is required to form a single particle of about 145 nm (ref. 12). Both VLPs and mature viruses contain a number of host proteins that are recruited by binding to viral proteins. These proteins can either contribute to the infectivity (for example, Cyclophilin/FKBPA [13]) or act as antiviral proteins preventing the spreading of the virus (for example, APOBEC proteins [14]).

We here describe the development and application of Virotrap, an elegant co-purification strategy based on the trapping of a bait protein together with its associated protein partners in VLPs that are budded from the cell. After enrichment, these particles can be analysed by targeted (for example, western blotting) or unbiased approaches (MS-based proteomics). Virotrap allows detection of known binary PPIs, analysis of protein complexes and their dynamics, and readily detects protein binders for small molecules.

Concept of the Virotrap system

Classical AP–MS approaches rely on cell homogenization to access protein complexes, a step that can vary significantly with the lysis conditions (detergents, salt concentrations, pH conditions and so on) [5]. To eliminate the homogenization step in AP–MS, we reasoned that incorporation of a protein complex inside a secreted VLP traps the interaction partners under native conditions and protects them during further purification. We thus explored the possibility of protein complex packaging by the expression of GAG-bait protein chimeras (Fig. 1), as expression of GAG results in the release of VLPs from the cells [9,10]. As a first PPI pair to evaluate this concept, we selected the HRAS protein as a bait combined with the RAF1 prey protein. We were able to specifically detect the HRAS–RAF1 interaction following enrichment of VLPs via ultracentrifugation (Supplementary Fig. 1a). To prevent tedious ultracentrifugation steps, we designed a novel single-step protocol wherein we co-express the vesicular stomatitis virus glycoprotein (VSV-G) together with a tagged version of this glycoprotein in addition to the GAG bait and prey. Both tagged and untagged VSV-G proteins are probably presented as trimers on the surface of the VLPs, allowing efficient antibody-based recovery from large volumes. The HRAS–RAF1 interaction was confirmed using this single-step protocol (Supplementary Fig. 1b). No associations with unrelated bait or prey proteins were observed for both protocols.

Figure 1: Schematic representation of the Virotrap strategy.

http://www.nature.com/ncomms/2016/160428/ncomms11416/images_article/ncomms11416-f1.jpg

 

Expression of a GAG-bait fusion protein (1) results in submembrane multimerization (2) and subsequent budding of VLPs from cells (3). Interaction partners of the bait protein are also trapped within these VLPs and can be identified after purification by western blotting or MS analysis (4).

Virotrap for the detection of binary interactions

We next explored the reciprocal detection of a set of PPI pairs, which were selected based on published evidence and cytosolic localization [15]. After single-step purification and western blot analysis, we could readily detect reciprocal interactions between CDK2 and CKS1B, LCP2 and GRAP2, and S100A1 and S100B (Fig. 2a). Only for the LCP2 prey did we observe nonspecific association with an irrelevant bait construct. However, the particle levels of the GRAP2 bait were substantially lower than those of the GAG control construct (GAG protein levels in VLPs; Fig. 2a, second panel of the LCP2 prey). After quantifying the intensities of bait and prey proteins and normalizing prey levels to bait levels, we observed a strong enrichment for the GAG-GRAP2 bait (Supplementary Fig. 2).
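The bait-normalization step described above amounts to dividing each prey signal by the corresponding bait signal before comparing constructs, so that a bait with lower particle yield does not mask a real enrichment. A minimal sketch, with invented intensity values (the actual quantification used western blot band intensities):

```python
# Illustrative bait normalization: prey signal in VLPs is divided by the
# corresponding bait (GAG fusion) signal. All intensity values below are
# invented for illustration.
measurements = {
    # bait construct: (bait_intensity, prey_intensity)
    "GAG-GRAP2":   (20.0, 80.0),
    "GAG-control": (100.0, 50.0),
}

def normalized_prey(data):
    """Map each bait construct to its bait-normalized prey signal."""
    return {bait: prey / bait_int for bait, (bait_int, prey) in data.items()}

ratios = normalized_prey(measurements)
# Despite the lower raw bait level, GRAP2 shows a clearly higher
# normalized prey signal than the control construct.
print(ratios)
```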

…..

Virotrap for unbiased discovery of novel interactions

For the detection of novel interaction partners, we scaled up VLP production and purification protocols (Supplementary Fig. 5 and Supplementary Note 1 for an overview of the protocol) and investigated protein partners trapped using the following bait proteins: Fas-associated via death domain (FADD), A20 (TNFAIP3), nuclear factor-κB (NF-κB) essential modifier (IKBKG), TRAF family member-associated NF-κB activator (TANK), MYD88 and ring finger protein 41 (RNF41). To obtain specific interactors from the lists of identified proteins, we challenged the data with a combined protein list of 19 unrelated Virotrap experiments (Supplementary Table 1 for an overview). Figure 3 shows the design and the list of candidate interactors obtained after removal of all proteins that were found in the 19 control samples (including removal of proteins from the control list identified with a single peptide). The remaining list of confident protein identifications (identified with at least two peptides in at least two biological repeats) reveals both known and novel candidate interaction partners. All candidate interactors including single peptide protein identifications are given in Supplementary Data 2 and also include recurrent protein identifications of known interactors based on a single peptide; for example, CASP8 for FADD and TANK for NEMO. Using alternative methods, we confirmed the interaction between A20 and FADD, and the associations with transmembrane proteins (insulin receptor and insulin-like growth factor receptor 1) that were captured using RNF41 as a bait (Supplementary Fig. 6). To address the use of Virotrap for the detection of dynamic interactions, we activated the NF-κB pathway via the tumour necrosis factor (TNF) receptor (TNFRSF1A) using TNFα (TNF) and performed Virotrap analysis using A20 as bait (Fig. 3). 
This resulted in the additional enrichment of receptor-interacting kinase (RIPK1), TNFR1-associated via death domain (TRADD), TNFRSF1A and TNF itself, confirming the expected activated complex20.
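The background-filtering logic described above (discard any protein seen in the unrelated control experiments, counting even single-peptide control hits, then keep proteins identified with at least two peptides in at least two biological repeats) can be sketched as follows; construct names and peptide counts are invented:

```python
# Each run maps protein -> number of identified peptides (invented data).
control_runs = [
    {"GAPDH": 5, "ACTB": 3, "HSPA8": 1},   # unrelated control experiment 1
    {"GAPDH": 4, "TUBB": 2},               # unrelated control experiment 2
]
bait_repeats = [
    {"CASP8": 4, "GAPDH": 6, "RIPK1": 2},  # biological repeat 1
    {"CASP8": 3, "RIPK1": 2, "TRADD": 1},  # biological repeat 2
]

# Any protein seen in a control run is background, whatever its peptide count.
background = {prot for run in control_runs for prot in run}

def confident_candidates(repeats, background, min_peptides=2, min_repeats=2):
    """Proteins with >= min_peptides peptides in >= min_repeats repeats,
    excluding everything on the background list."""
    counts = {}
    for run in repeats:
        for prot, n_peptides in run.items():
            if prot in background or n_peptides < min_peptides:
                continue
            counts[prot] = counts.get(prot, 0) + 1
    return sorted(p for p, n in counts.items() if n >= min_repeats)

print(confident_candidates(bait_repeats, background))
```

Here TRADD drops out for having only one peptide in one repeat, and GAPDH drops out as background, leaving the reproducible candidates.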

Figure 3: Use of Virotrap for unbiased interactome analysis

http://www.nature.com/ncomms/2016/160428/ncomms11416/images_article/ncomms11416-f3.jpg

Figure 4: Use of Virotrap for detection of protein partners of small molecules.

http://www.nature.com/ncomms/2016/160428/ncomms11416/images_article/ncomms11416-f4.jpg

….

Lysis conditions used in AP–MS strategies are critical for the preservation of protein complexes. A multitude of lysis conditions have been described, culminating in a recent report in which protein complex stability was assessed under 96 lysis/purification protocols5. Moreover, the authors suggest optimizing the conditions for every complex, implying a substantial workload for researchers embarking on protein complex analysis using classical AP–MS. As lysis profoundly changes the subcellular context and significantly alters the concentration of proteins, loss of complex integrity during a classical AP–MS protocol can be expected. A clear evolution towards ‘lysis-independent’ approaches in the co-complex analysis field is evident with the introduction of BioID6 and APEX25, where proximal proteins, including proteins residing in the complex, are labelled with biotin by an enzymatic activity fused to a bait protein. A side-by-side comparison between classical AP–MS and BioID showed overlapping and unique candidate binding proteins for both approaches78, supporting the notion that complementary methods are needed to provide a comprehensive view of protein complexes. This has also been clearly demonstrated for binary approaches15 and is a logical consequence of the heterogeneous nature of PPIs (binding mechanism, requirement for post-translational modifications, location, affinity and so on).

In this report, we explore an alternative, yet complementary, method to isolate protein complexes without interfering with cellular integrity. By trapping protein complexes in the protective environment of a virus-like shell, the intact complexes are preserved during the purification process. This constitutes a new concept in co-complex analysis, wherein complex stability is guaranteed by a protective physical structure. A comparison of our Virotrap approach with AP–MS shows complementary data, with specific false positives and false negatives for both methods (Supplementary Fig. 7).

The current implementation of the Virotrap platform implies the use of a GAG-bait construct, resulting in considerable expression of the bait protein. Different strategies are currently being pursued to reduce bait expression, including co-expression of a native GAG protein together with the GAG-bait protein, which not only reduces bait expression but also creates more ‘space’ in the particles, potentially accommodating larger bait protein complexes. Nevertheless, the presence of the bait on the forming GAG scaffold creates an intracellular affinity matrix (comparable to the early in vitro affinity columns for purification of interaction partners from lysates26) that has the potential to compete with endogenous complexes through avidity effects. This avidity effect is a powerful mechanism that aids in the recruitment of cyclophilin to GAG27, a well-known weak interaction (Kd=16 μM (ref. 28)) detectable as a background association in the Virotrap system. Although background binding may be increased by elevated bait expression, weaker associations are readily detectable (for example, the MAL–MYD88 binding study; Fig. 2c).

The size of Virotrap particles (around 145 nm) suggests limitations in the size of the protein complex that can be accommodated in the particles. Further experimentation is required to define the maximum size of proteins or the number of protein complexes that can be trapped inside the particles.

….

In conclusion, Virotrap captures significant parts of known interactomes and reveals new interactions. This cell lysis-free approach purifies protein complexes under native conditions and thus provides a powerful method to complement AP–MS or other PPI data. Future improvements of the system include strategies to reduce bait expression to more physiological levels and application of advanced data analysis options to filter out background. These developments can further aid in the deployment of Virotrap as a powerful extension of the current co-complex technology arsenal.

 

New Autism Blood Biomarker Identified

Researchers at UT Southwestern Medical Center have identified a blood biomarker that may aid in earlier diagnosis of children with autism spectrum disorder (ASD).

http://www.technologynetworks.com/Proteomics/news.aspx?ID=191268

 

In a recent edition of Scientific Reports, UT Southwestern researchers reported the identification of a blood biomarker that could distinguish the majority of ASD study participants from a control group of similar age range. In addition, the biomarker was significantly correlated with the level of communication impairment, suggesting that the blood test may give insight into ASD severity.

“Numerous investigators have long sought a biomarker for ASD,” said Dr. Dwight German, study senior author and Professor of Psychiatry at UT Southwestern. “The blood biomarker reported here along with others we are testing can represent a useful test with over 80 percent accuracy in identifying ASD.”

ASD1 was 66 percent accurate in diagnosing ASD; when combined with thyroid-stimulating hormone level measurements, the ASD1-binding biomarker was 73 percent accurate at diagnosis.

 

A Search for Blood Biomarkers for Autism: Peptoids

Sayed Zaman, Umar Yazdani, …, Laura Hewitson & Dwight C. German
Scientific Reports 2016; 6(19164)   http://dx.doi.org/10.1038/srep19164

Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by impairments in social interaction and communication, and restricted, repetitive patterns of behavior. In order to identify individuals with ASD and initiate interventions at the earliest possible age, biomarkers for the disorder are desirable. Research findings have identified widespread changes in the immune system in children with autism, at both systemic and cellular levels. In an attempt to find candidate antibody biomarkers for ASD, highly complex libraries of peptoids (oligo-N-substituted glycines) were screened for compounds that preferentially bind IgG from boys with ASD over typically developing (TD) boys. Unexpectedly, many peptoids were identified that preferentially bound IgG from TD boys. One of these peptoids was studied further and found to bind significantly higher levels (>2-fold) of the IgG1 subtype in serum from TD boys (n = 60) compared to ASD boys (n = 74), as well as compared to older adult males (n = 53). Together these data suggest that ASD boys have reduced levels (>50%) of an IgG1 antibody, which resembles the level found normally with advanced age. In this discovery study, the ASD1 peptoid was 66% accurate in predicting ASD.
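The 66% accuracy quoted above comes from classifying subjects by their ASD1 binding level against a cutoff. A minimal, hypothetical sketch of choosing the cutoff that maximizes classification accuracy (the binding values below are invented and the groups are far smaller than the real cohorts):

```python
# Toy IgG1-binding values (arbitrary units, invented): TD boys tend to bind
# ASD1 more strongly than ASD boys. We sweep candidate cutoffs and report
# the one with the best accuracy, as a stand-in for the full ROC analysis.
td  = [2.1, 2.4, 1.9, 2.8, 2.2, 1.7]   # typically developing (high binding)
asd = [1.0, 1.3, 0.9, 1.8, 1.1, 2.0]   # ASD group (low binding)

def best_cutoff(high_group, low_group):
    """Return (accuracy, cutoff) for the cutoff maximizing accuracy,
    where 'high_group' members should score >= cutoff."""
    best = (0.0, None)
    for c in sorted(set(high_group + low_group)):
        correct = (sum(v >= c for v in high_group)
                   + sum(v < c for v in low_group))
        acc = correct / (len(high_group) + len(low_group))
        if acc > best[0]:
            best = (acc, c)
    return best

acc, cutoff = best_cutoff(td, asd)
print(f"best cutoff {cutoff}: accuracy {acc:.0%}")
```

Because the two toy distributions overlap, no cutoff separates them perfectly, which mirrors why a single peptoid yields moderate rather than perfect accuracy.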

….

Peptoid libraries have been used previously to search for autoantibodies for neurodegenerative diseases19 and for systemic lupus erythematosus (SLE)21. In the case of SLE, peptoids were identified that could identify subjects with the disease and related syndromes with moderate sensitivity (70%) and excellent specificity (97.5%). Peptoids were used to measure IgG levels from both healthy subjects and SLE patients. Binding to the SLE-peptoid was significantly higher in SLE patients vs. healthy controls. The IgG bound to the SLE-peptoid was found to react with several autoantigens, suggesting that the peptoids are capable of interacting with multiple, structurally similar molecules. These data indicate that IgG binding to peptoids can identify subjects with high levels of pathogenic autoantibodies vs. a single antibody.

In the present study, the ASD1 peptoid binds significantly lower levels of IgG1 in ASD males vs. TD males. This finding suggests that the ASD1 peptoid recognizes antibody(-ies) of an IgG1 subtype that is (are) significantly lower in abundance in the ASD males vs. TD males. Although a previous study14 has demonstrated lower levels of plasma IgG in ASD vs. TD children, here, we additionally quantified serum IgG levels in our individuals and found no difference in IgG between the two groups (data not shown). Furthermore, our IgG levels did not correlate with ASD1 binding levels, indicating that ASD1 does not bind IgG generically, and that the peptoid’s ability to differentiate between ASD and TD males is related to a specific antibody(-ies).

ASD subjects underwent a diagnostic evaluation using the ADOS and ADI-R, and application of the DSM-IV criteria prior to study inclusion. Only those subjects with a diagnosis of Autistic Disorder were included in the study. The ADOS is a semi-structured observation of a child’s behavior that allows examiners to observe the three core domains of ASD symptoms: reciprocal social interaction, communication, and restricted and repetitive behaviors1. When ADOS subdomain scores were compared with peptoid binding, the only significant relationship was with Social Interaction. However, the positive correlation would suggest that lower peptoid binding is associated with better social interaction, not poorer social interaction as anticipated.

The ADI-R is a structured parental interview that measures the core features of ASD symptoms in the areas of reciprocal social interaction, communication and language, and patterns of behavior. Of the three ADI-R subdomains, only the Communication domain was related to ASD1 peptoid binding, and this correlation was negative suggesting that low peptoid binding is associated with greater communication problems. These latter data are similar to the findings of Heuer et al.14 who found that children with autism with low levels of plasma IgG have high scores on the Aberrant Behavior Checklist (p < 0.0001). Thus, peptoid binding to IgG1 may be useful as a severity marker for ASD allowing for further characterization of individuals, but further research is needed.

It is interesting that in serum samples from older men, the ASD1 binding is similar to that in the ASD boys. This is consistent with the observation that with aging there is a reduction in the strength of the immune system, and the changes are gender-specific25. Recent studies using parabiosis26, in which blood from young mice reverse age-related impairments in cognitive function and synaptic plasticity in old mice, reveal that blood constituents from young subjects may contain important substances for maintaining neuronal functions. Work is in progress to identify the antibody/antibodies that are differentially binding to the ASD1 peptoid, which appear as a single band on the electrophoresis gel (Fig. 4).

……..


 

  • (A) Titration of IgG binding to ASD1 using serum pooled from 10 TD males and 10 ASD males demonstrates ASD1’s ability to differentiate between the two groups. (B) Detecting the IgG1 subclass instead of total IgG amplifies this differentiation. (C) IgG1 binding of individual ASD (n=74) and TD (n=60) male serum samples (1:100 dilution) to ASD1 significantly differs, with TD>ASD. In addition, IgG1 binding of older adult male (AM) serum samples (n=53) to ASD1 is significantly lower than that of TD males, and not different from that of ASD males. The three groups were compared with a Kruskal-Wallis ANOVA, H = 10.1781, p<0.006. **p<0.005. Error bars show SEM. (D) Receiver-operating characteristic curve for ASD1’s ability to discriminate between ASD and TD males.

http://www.nature.com/article-assets/npg/srep/2016/160114/srep19164/images_hires/m685/srep19164-f3.jpg
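The Kruskal-Wallis ANOVA used in the figure legend above compares the three groups on ranks rather than raw values. A minimal pure-Python version of the H statistic (ties get average ranks; the tie-correction factor and the chi-squared p-value lookup are omitted for brevity), applied to invented toy groups:

```python
# Kruskal-Wallis H: rank all observations together, then compare the
# per-group rank sums. Large H means the groups occupy different ranges.
def kruskal_h(*groups):
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1                      # group a run of tied values
        avg_rank = (i + 1 + j) / 2      # mean of ranks i+1 .. j
        for k in range(i, j):
            rank_sums[pooled[k][1]] += avg_rank
        i = j
    return 12.0 / (n * (n + 1)) * sum(
        rs * rs / len(g) for rs, g in zip(rank_sums, groups)
    ) - 3 * (n + 1)

# Invented toy data standing in for the TD, ASD and older-adult groups:
td, asd, am = [5, 6, 7, 8], [1, 2, 3, 4], [2, 3, 4, 5]
print(round(kruskal_h(td, asd, am), 3))
```

In practice one would compare H against the chi-squared distribution with (number of groups − 1) degrees of freedom to obtain the p-value reported in the legend.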

 

Association between peptoid binding and ADOS and ADI-R subdomains

Higher scores in any domain on the ADOS and ADI-R are indicative of more abnormal behaviors and/or symptoms. Among ADOS subdomains, there was no significant relationship between Communication and peptoid binding (z = 0.04, p = 0.966), Communication + Social interaction (z = 1.53, p = 0.127), or Stereotyped Behaviors and Restrictive Interests (SBRI) (z = 0.46, p = 0.647). Higher scores on the Social Interaction domain were significantly associated with higher peptoid binding (z = 2.04, p = 0.041).

Among ADI-R subdomains, higher scores on the Communication domain were associated with lower levels of peptoid binding (z = −2.28, p = 0.023). There was not a significant relationship between Social Interaction (z = 0.07, p = 0.941) or Restrictive/Repetitive Stereotyped Behaviors (z = −1.40, p = 0.162) and peptoid binding.

 

 

Computational Model Finds New Protein-Protein Interactions

Researchers at University of Pittsburgh have discovered 500 new protein-protein interactions (PPIs) associated with genes linked to schizophrenia.

http://www.technologynetworks.com/Proteomics/news.aspx?id=190995

Using a computational model they developed, researchers at the University of Pittsburgh School of Medicine have discovered more than 500 new protein-protein interactions (PPIs) associated with genes linked to schizophrenia. The findings, published online in npj Schizophrenia, a Nature Publishing Group journal, could lead to greater understanding of the biological underpinnings of this mental illness, as well as point the way to treatments.

There have been many genome-wide association studies (GWAS) that have identified gene variants associated with an increased risk for schizophrenia, but in most cases there is little known about the proteins that these genes make, what they do and how they interact, said senior investigator Madhavi Ganapathiraju, Ph.D., assistant professor of biomedical informatics, Pitt School of Medicine.

“GWAS studies and other research efforts have shown us what genes might be relevant in schizophrenia,” she said. “What we have done is the next step. We are trying to understand how these genes relate to each other, which could show us the biological pathways that are important in the disease.”

Each gene makes proteins and proteins typically interact with each other in a biological process. Information about interacting partners can shed light on the role of a gene that has not been studied, revealing pathways and biological processes associated with the disease and also its relation to other complex diseases.

Dr. Ganapathiraju’s team developed a computational model called High-Precision Protein Interaction Prediction (HiPPIP) and applied it to discover PPIs of schizophrenia-linked genes identified through GWAS, as well as historically known risk genes. They found 504 never-before-known PPIs, and also noted that while schizophrenia-linked genes identified historically and through GWAS had little overlap, the model showed they shared more than 100 common interactors.

“We can infer what the protein might do by checking out the company it keeps,” Dr. Ganapathiraju explained. “For example, if I know you have many friends who play hockey, it could mean that you are involved in hockey, too. Similarly, if we see that an unknown protein interacts with multiple proteins involved in neural signaling, for example, there is a high likelihood that the unknown entity also is involved in the same.”

Dr. Ganapathiraju and colleagues have drawn such inferences on protein function based on the PPIs of proteins, and made their findings available on a website Schizo-Pi. This information can be used by biologists to explore the schizophrenia interactome with the aim of understanding more about the disease or developing new treatment drugs.

Schizophrenia interactome with 504 novel protein–protein interactions

MK Ganapathiraju, M Thahir, …, CE Loscher, EM Bauer & S Chaparala
npj Schizophrenia 2016;  2(16012)   http://dx.doi.org/10.1038/npjschz.2016.12

Genome-wide association studies (GWAS) have revealed the role of rare and common genetic variants, but the functional effects of the risk variants remain to be understood. Protein interactome-based studies can facilitate the study of molecular mechanisms by which the risk genes relate to schizophrenia (SZ) genesis, but protein–protein interactions (PPIs) are unknown for many of the liability genes. We developed a computational model to discover PPIs, which is found to be highly accurate according to computational evaluations and experimental validations of selected PPIs. We present here 365 novel PPIs of liability genes identified by the SZ Working Group of the Psychiatric Genomics Consortium (PGC). Seventeen genes that had no previously known interactions have 57 novel interactions by our method. Among the new interactors are 19 drug targets that are targeted by 130 drugs. In addition, we computed 147 novel PPIs of 25 candidate genes investigated in the pre-GWAS era. While there is little overlap between the GWAS genes and the pre-GWAS genes, the interactomes reveal that they largely belong to the same pathways, thus reconciling the apparent disparities between the GWAS and prior gene association studies. The interactome, including 504 novel PPIs overall, could motivate other systems biology studies and trials with repurposed drugs. The PPIs are made available on a webserver, called Schizo-Pi, at http://severus.dbmi.pitt.edu/schizo-pi with advanced search capabilities.

Schizophrenia (SZ) is a common, potentially severe psychiatric disorder that afflicts all populations.1 Gene mapping studies suggest that SZ is a complex disorder, with a cumulative impact of variable genetic effects coupled with environmental factors.2 As many as 38 genome-wide association studies (GWAS) have been reported on SZ out of a total of 1,750 GWAS publications on 1,087 traits or diseases reported in the GWAS catalog maintained by the National Human Genome Research Institute of USA3 (as of April 2015), revealing the common variants associated with SZ.4 The SZ Working Group of the Psychiatric Genomics Consortium (PGC) identified 108 genetic loci that likely confer risk for SZ.5 While the role of genetics has been clearly validated by this study, the functional impact of the risk variants is not well-understood.6,7 Several of the genes implicated by the GWAS have unknown functions and could participate in possibly hitherto unknown pathways.8 Further, there is little or no overlap between the genes identified through GWAS and ‘candidate genes’ proposed in the pre-GWAS era.9

Interactome-based studies can be useful in discovering the functional associations of genes. For example, disrupted in schizophrenia 1 (DISC1), an SZ-related candidate gene, originally had no known homolog in humans. Although it had well-characterized protein domains such as coiled-coil domains and leucine-zipper domains, its function was unknown.10,11 Once its protein–protein interactions (PPIs) were determined using yeast 2-hybrid technology,12 investigators successfully linked DISC1 to cAMP signaling, axon elongation and neuronal migration, and accelerated the research pertaining to SZ in general, and DISC1 in particular.13 Typically such studies are carried out on known protein–protein interaction (PPI) networks, or, as in the case of DISC1 when there is a specific gene of interest, its PPIs are determined by methods such as yeast 2-hybrid technology.

Knowledge of human PPI networks is thus valuable for accelerating discovery of protein function, and indeed biomedical research in general. However, of the hundreds of thousands of biophysical PPIs thought to exist in the human interactome,14,15 fewer than 100,000 are known today (Human Protein Reference Database, HPRD16 and BioGRID17 databases). Gold-standard experimental methods for determining all the PPIs in the human interactome are time-consuming, expensive and may not even be feasible, as about 250 million pairs of proteins would need to be tested overall; high-throughput methods such as yeast 2-hybrid have important limitations for whole-interactome determination, as they have a low recall of 23% (i.e., the remaining 77% of true interactions need to be determined by other means) and a low precision (i.e., the screens have to be repeated multiple times to achieve high selectivity).18,19 Computational methods are therefore necessary to complete the interactome expeditiously. Algorithms have begun emerging to predict PPIs using statistical machine learning on the characteristics of the proteins, but these algorithms have been employed predominantly to study yeast. Two significant computational predictions have been reported for the human interactome; although they had high false-positive rates, these methods laid the foundation for computational prediction of human PPIs.20,21
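The "about 250 million pairs" figure follows directly from the number of human proteins. Assuming roughly 22,500 protein-coding genes (the exact count varies by annotation), the number of unordered pairs to test is:

```python
import math

# Unordered pairs among ~22,500 human proteins; 22,500 is an assumed
# round figure for illustration, not a value from the paper.
pairs = math.comb(22500, 2)
print(pairs)  # 253,113,750 -- i.e., about 250 million pairs
```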

We have created a new PPI prediction model called High-Precision Protein–Protein Interaction Prediction (HiPPIP). Novel interactions predicted with this model are already making a translational impact. For example, we discovered a PPI between OASL and DDX58, which on validation showed that an increased expression of OASL could boost innate immunity to combat influenza by activating the RIG-I pathway.22 Also, the interactome of the genes associated with congenital heart disease showed that the disease morphogenesis has a close connection with the structure and function of cilia.23 Here, we describe the HiPPIP model and its application to SZ genes to construct the SZ interactome. After computational evaluations and experimental validations of selected novel PPIs, we present here 504 highly confident novel PPIs in the SZ interactome, shedding new light onto several uncharacterized genes that are associated with SZ.

We developed a computational model called HiPPIP to predict PPIs (see Methods and Supplementary File 1). The model has been evaluated by computational methods and experimental validations and is found to be highly accurate. Evaluations on held-out test data showed a precision of 97.5% and a recall of 5%. A 5% recall of the estimated 150,000 to 600,000 interactions in the human interactome corresponds to 7,500–30,000 novel PPIs in the whole interactome. Note that the real precision is likely higher than 97.5%, because in this test data randomly paired proteins are treated as non-interacting pairs, whereas some of them may actually be interacting pairs with a small probability; thus, some of the pairs treated as false positives in the test set are likely to be true but hitherto unknown interactions. In Figure 1a, we show the precision versus recall of our method on ‘hub proteins’, where we considered all pairs that received a score >0.5 by HiPPIP to be novel interactions. In Figure 1b, we show the number of true positives versus false positives observed in hub proteins. Both figures also show our method to be superior to the prediction of the membrane-receptor interactome by Qi et al.24 True positives versus false positives are also shown for individual hub proteins by our method in Figure 1c and by Qi et al.23 in Figure 1d. These evaluations showed that our predictions contain mostly true positives. Unlike in other domains where ranked lists are commonly used, such as information retrieval, in PPI prediction the ‘false positives’ may actually be unlabeled instances that are indeed true interactions not yet discovered. In fact, such unlabeled pairs predicted as interactors of the hub gene HMGB1 (namely, the pairs HMGB1-KL and HMGB1-FLT1) were validated by experimental methods and found to be true PPIs (see Figures e–g in Supplementary File 3).
Thus, we concluded that protein pairs that received a score of ⩾0.5 are highly likely to be true interactions. Pairs that receive a score less than but close to 0.5 (i.e., in the range 0.4–0.5) may also contain several true PPIs; however, we cannot confidently say that all pairs in this range are true PPIs. Only the PPIs predicted with a score >0.5 are included in the interactome.
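The precision and recall figures discussed above, evaluated at the 0.5 score cutoff, reduce to simple counts over a scored test set. A sketch with invented scores and labels:

```python
# Precision/recall at a fixed score threshold. Each entry is
# (model_score, is_true_interaction); the values are invented.
def precision_recall(scored_pairs, threshold=0.5):
    tp = sum(1 for s, y in scored_pairs if s >= threshold and y)
    fp = sum(1 for s, y in scored_pairs if s >= threshold and not y)
    fn = sum(1 for s, y in scored_pairs if s < threshold and y)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

test_set = [(0.93, True), (0.71, True), (0.55, False), (0.45, True),
            (0.30, True), (0.12, False), (0.05, False)]
p, r = precision_recall(test_set)
print(f"precision {p:.2f}, recall {r:.2f}")
```

Note the caveat from the text: some "false positives" here may be unlabeled true interactions, so the computed precision is a lower bound.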

Figure 1

http://www.nature.com/article-assets/npg/npjschz/2016/npjschz201612/images_hires/w582/npjschz201612-f1.jpg

Computational evaluation of predicted protein–protein interactions on hub proteins: (a) precision–recall curve. (b) True positives versus false positives in ranked lists of hub-type membrane receptors for our method and that by Qi et al. True positives versus false positives are shown for individual membrane receptors by our method in (c) and by Qi et al. in (d). The thick line is the average, which is also the same as shown in (b). Note: the x-axis is recall in (a), whereas it is the number of false positives in (b–d). The range of the y-axis is obtained by varying the threshold from 1.0 to 0 in (a), and to 0.5 in (b–d).

SZ interactome

By applying HiPPIP to the GWAS genes and Historic (pre-GWAS) genes, we predicted over 500 high-confidence new PPIs, adding to about 1,400 previously known PPIs.

Schizophrenia interactome: network view of the schizophrenia interactome is shown as a graph, where genes are shown as nodes and PPIs as edges connecting the nodes. Schizophrenia-associated genes are shown as dark blue nodes, novel interactors as red color nodes and known interactors as blue color nodes. The source of the schizophrenia genes is indicated by its label font, where Historic genes are shown italicized, GWAS genes are shown in bold, and the one gene that is common to both is shown in italicized and bold. For clarity, the source is also indicated by the shape of the node (triangular for GWAS and square for Historic and hexagonal for both). Symbols are shown only for the schizophrenia-associated genes; actual interactions may be accessed on the web. Red edges are the novel interactions, whereas blue edges are known interactions. GWAS, genome-wide association studies of schizophrenia; PPI, protein–protein interaction.

http://www.nature.com/article-assets/npg/npjschz/2016/npjschz201612/images_hires/m685/npjschz201612-f2.jpg

 

Webserver of SZ interactome

We have made the known and novel interactions of all SZ-associated genes available on a webserver called Schizo-Pi, at the address http://severus.dbmi.pitt.edu/schizo-pi. This webserver is similar to Wiki-Pi33, which presents comprehensive annotations of both participating proteins of a PPI side by side. The difference between Wiki-Pi, which we developed earlier, and Schizo-Pi is the inclusion of the novel predicted interactions of the SZ genes in the latter.

Despite the many advances in biomedical research, identifying the molecular mechanisms underlying a disease is still challenging. Studies based on protein interactions have proven valuable in identifying novel gene associations that could shed new light on disease pathology.35 The interactome, including more than 500 novel PPIs, will help to identify pathways and biological processes associated with the disease, as well as its relation to other complex diseases. It also helps to identify potential drugs that could be repurposed for SZ treatment.

Functional and pathway enrichment in SZ interactome

When a gene of interest has little known information, functions of its interacting partners serve as a starting point to hypothesize its own function. We computed statistically significant enrichment of GO biological process terms among the interacting partners of each of the genes using BinGO36 (see online at http://severus.dbmi.pitt.edu/schizo-pi).
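Enrichment of GO biological process terms among a gene's interacting partners, as computed by tools such as BinGO, rests on the one-sided hypergeometric test. A self-contained sketch (all counts below are invented):

```python
import math

# One-sided hypergeometric (Fisher) enrichment p-value: the probability of
# seeing at least k annotated genes among n interactors, when K of the N
# background genes carry the GO term.
def hypergeom_enrichment_p(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N, K, n)."""
    total = math.comb(N, n)
    return sum(
        math.comb(K, i) * math.comb(N - K, n - i)
        for i in range(k, min(K, n) + 1)
    ) / total

# Invented example: 20,000 background genes, 200 carry the term,
# 30 interactors of the gene of interest, 8 of them annotated.
p = hypergeom_enrichment_p(20000, 200, 30, 8)
print(f"enrichment p-value: {p:.2e}")
```

With only 0.3 annotated genes expected by chance among 30 interactors, observing 8 gives a vanishingly small p-value; tools like BinGO additionally correct such p-values for testing many GO terms at once.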

 

Protein aggregation and aggregate toxicity: new insights into protein folding, misfolding diseases and biological evolution

Massimo Stefani · Christopher M. Dobson

Abstract The deposition of proteins in the form of amyloid fibrils and plaques is the characteristic feature of more than 20 degenerative conditions affecting either the central nervous system or a variety of peripheral tissues. As these conditions include Alzheimer’s, Parkinson’s and the prion diseases, several forms of fatal systemic amyloidosis, and at least one condition associated with medical intervention (haemodialysis), they are of enormous importance in the context of present-day human health and welfare. Much remains to be learned about the mechanism by which the proteins associated with these diseases aggregate and form amyloid structures, and how the latter affect the functions of the organs with which they are associated. A great deal of information concerning these diseases has emerged, however, during the past 5 years, much of it causing a number of fundamental assumptions about the amyloid diseases to be reexamined. For example, it is now apparent that the ability to form amyloid structures is not an unusual feature of the small number of proteins associated with these diseases but is instead a general property of polypeptide chains. It has also been found recently that aggregates of proteins not associated with amyloid diseases can impair the ability of cells to function to a similar extent as aggregates of proteins linked with specific neurodegenerative conditions. Moreover, the mature amyloid fibrils or plaques appear to be substantially less toxic than the prefibrillar aggregates that are their precursors. The toxicity of these early aggregates appears to result from an intrinsic ability to impair fundamental cellular processes by interacting with cellular membranes, causing oxidative stress and increases in free Ca2+ that eventually lead to apoptotic or necrotic cell death. The ‘new view’ of these diseases also suggests that other degenerative conditions could have similar underlying origins to those of the amyloidoses. 
In addition, cellular protection mechanisms, such as molecular chaperones and the protein degradation machinery, appear to be crucial in the prevention of disease in normally functioning living organisms. It also suggests some intriguing new factors that could be of great significance in the evolution of biological molecules and the mechanisms that regulate their behaviour.

The genetic information within a cell encodes not only the specific structures and functions of proteins but also the way these structures are attained through the process known as protein folding. In recent years many of the underlying features of the fundamental mechanism of this complex process and the manner in which it is regulated in living systems have emerged from a combination of experimental and theoretical studies [1]. The knowledge gained from these studies has also raised a host of interesting issues. It has become apparent, for example, that the folding and unfolding of proteins is associated with a whole range of cellular processes from the trafficking of molecules to specific organelles to the regulation of the cell cycle and the immune response. Such observations led to the inevitable conclusion that the failure to fold correctly, or to remain correctly folded, gives rise to many different types of biological malfunctions and hence to many different forms of disease [2]. In addition, it has been recognised recently that a large number of eukaryotic genes code for proteins that appear to be ‘natively unfolded’, and that proteins can adopt, under certain circumstances, highly organised multi-molecular assemblies whose structures are not specifically encoded in the amino acid sequence. Both these observations have raised challenging questions about one of the most fundamental principles of biology: the close relationship between the sequence, structure and function of proteins, as we discuss below [3].

It is well established that proteins that are ‘misfolded’, i.e. that are not in their functionally relevant conformation, are devoid of normal biological activity. In addition, they often aggregate and/or interact inappropriately with other cellular components leading to impairment of cell viability and eventually to cell death. Many diseases, often known as misfolding or conformational diseases, ultimately result from the presence in a living system of protein molecules with structures that are ‘incorrect’, i.e. that differ from those in normally functioning organisms [4]. Such diseases include conditions in which a specific protein, or protein complex, fails to fold correctly (e.g. cystic fibrosis, Marfan syndrome, amyotrophic lateral sclerosis) or is not sufficiently stable to perform its normal function (e.g. many forms of cancer). They also include conditions in which aberrant folding behaviour results in the failure of a protein to be correctly trafficked (e.g. familial hypercholesterolaemia, α1-antitrypsin deficiency, and some forms of retinitis pigmentosa) [4]. The tendency of proteins to aggregate, often to give species extremely intractable to dissolution and refolding, is of course also well known in other circumstances. Examples include the formation of inclusion bodies during overexpression of heterologous proteins in bacteria and the precipitation of proteins during laboratory purification procedures. Indeed, protein aggregation is well established as one of the major difficulties associated with the production and handling of proteins in the biotechnology and pharmaceutical industries [5].

Considerable attention is presently focused on a group of protein folding diseases known as amyloidoses. In these diseases specific peptides or proteins fail to fold or to remain correctly folded and then aggregate (often with other components) so as to give rise to ‘amyloid’ deposits in tissue. Amyloid structures can be recognised because they possess a series of specific tinctorial and biophysical characteristics that reflect a common core structure based on the presence of highly organised β-sheets [6]. The deposits in strictly defined amyloidoses are extracellular and can often be observed as thread-like fibrillar structures, sometimes assembled further into larger aggregates or plaques. These diseases include a range of sporadic, familial or transmissible degenerative diseases, some of which affect the brain and the central nervous system (e.g. Alzheimer’s and Creutzfeldt-Jakob diseases), while others involve peripheral tissues and organs such as the liver, heart and spleen (e.g. systemic amyloidoses and type II diabetes) [7, 8]. In other forms of amyloidosis, such as primary or secondary systemic amyloidoses, proteinaceous deposits are found in skeletal tissue and joints (e.g. haemodialysis-related amyloidosis) as well as in several organs (e.g. heart and kidney). Yet other components such as collagen, glycosaminoglycans and proteins (e.g. serum amyloid protein) are often present in the deposits protecting them against degradation [9, 10, 11]. Similar deposits to those in the amyloidoses are, however, found intracellularly in other diseases; these can be localised either in the cytoplasm, in the form of specialised aggregates known as aggresomes or as Lewy or Russell bodies or in the nucleus (see below).

The presence in tissue of proteinaceous deposits is a hallmark of all these diseases, suggesting a causative link between aggregate formation and pathological symptoms (often known as the amyloid hypothesis) [7, 8, 12]. At the present time the link between amyloid formation and disease is widely accepted on the basis of a large number of biochemical and genetic studies. The specific nature of the pathogenic species, and the molecular basis of their ability to damage cells, are, however, the subject of intense debate [13, 14, 15, 16, 17, 18, 19, 20]. In neurodegenerative disorders it is very likely that the impairment of cellular function follows directly from the interactions of the aggregated proteins with cellular components [21, 22]. In the systemic non-neurological diseases, however, it is widely believed that the accumulation in vital organs of large amounts of amyloid deposits can by itself cause at least some of the clinical symptoms [23]. It is quite possible, however, that there are other more specific effects of aggregates on biochemical processes even in these diseases. The presence of extracellular or intracellular aggregates of a specific polypeptide molecule is a characteristic of all the 20 or so recognised amyloid diseases. The polypeptides involved include full length proteins (e.g. lysozyme or immunoglobulin light chains), biological peptides (amylin, atrial natriuretic factor) and fragments of larger proteins produced as a result of specific processing (e.g. the Alzheimer β-peptide) or of more general degradation [e.g. poly(Q) stretches cleaved from proteins with poly(Q) extensions such as huntingtin, ataxins and the androgen receptor]. The peptides and proteins associated with known amyloid diseases are listed in Table 1. In some cases the proteins involved have wild type sequences, as in sporadic forms of the diseases, but in other cases these are variants resulting from genetic mutations associated with familial forms of the diseases.
In some cases both sporadic and familial diseases are associated with a given protein; in this case the mutational variants are usually associated with early-onset forms of the disease. In the case of the neurodegenerative diseases associated with the prion protein some forms of the diseases are transmissible. The existence of familial forms of a number of amyloid diseases has provided significant clues to the origins of the pathologies. For example, there are increasingly strong links between the age at onset of familial forms of disease and the effects of the mutations involved on the propensity of the affected proteins to aggregate in vitro. Such findings also support the link between the process of aggregation and the clinical manifestations of disease [24, 25].

The presence in cells of misfolded or aggregated proteins triggers a complex biological response. In the cytosol, this is referred to as the ‘heat shock response’ and in the endoplasmic reticulum (ER) it is known as the ‘unfolded protein response’. These responses lead to the expression, among others, of the genes for heat shock proteins (Hsp, or molecular chaperone proteins) and proteins involved in the ubiquitin-proteasome pathway [26]. The evolution of such complex biochemical machinery testifies to the fact that it is necessary for cells to isolate and clear rapidly and efficiently any unfolded or incorrectly folded protein as soon as it appears. In itself this fact suggests that these species could have a generally adverse effect on cellular components and cell viability. Indeed, it was a major step forward in understanding many aspects of cell biology when it was recognised that proteins previously associated only with stress, such as heat shock, are in fact crucial in the normal functioning of living systems. This advance, for example, led to the discovery of the role of molecular chaperones in protein folding and in the normal ‘housekeeping’ processes that are inherent in healthy cells [27, 28]. More recently a number of degenerative diseases, both neurological and systemic, have been linked to, or shown to be affected by, impairment of the ubiquitin-proteasome pathway (Table 2). The diseases are primarily associated with a reduction in either the expression or the biological activity of Hsps, ubiquitin, ubiquitinating or deubiquitinating enzymes and the proteasome itself, as we show below [29, 30, 31, 32], or even with the failure of the quality control mechanisms that ensure proper maturation of proteins in the ER. The latter normally leads to degradation of a significant proportion of polypeptide chains before they have attained their native conformations through retrograde translocation to the cytosol [33, 34].

….

It is now well established that the molecular basis of protein aggregation into amyloid structures involves the existence of ‘misfolded’ forms of proteins, i.e. proteins that are not in the structures in which they normally function in vivo or of fragments of proteins resulting from degradation processes that are inherently unable to fold [4, 7, 8, 36]. Aggregation is one of the common consequences of a polypeptide chain failing to reach or maintain its functional three-dimensional structure. Such events can be associated with specific mutations, misprocessing phenomena, aberrant interactions with metal ions, changes in environmental conditions, such as pH or temperature, or chemical modification (oxidation, proteolysis). Perturbations in the conformational properties of the polypeptide chain resulting from such phenomena may affect equilibrium 1 in Fig. 1 increasing the population of partially unfolded, or misfolded, species that are much more aggregation-prone than the native state.

Fig. 1 Overview of the possible fates of a newly synthesised polypeptide chain. The equilibrium ① between the partially folded molecules and the natively folded ones is usually strongly in favour of the latter except as a result of specific mutations, chemical modifications or partially destabilising solution conditions. The increased equilibrium populations of molecules in the partially or completely unfolded ensemble of structures are usually degraded by the proteasome; when this clearance mechanism is impaired, such species often form disordered aggregates or shift equilibrium ② towards the nucleation of pre-fibrillar assemblies that eventually grow into mature fibrils (equilibrium ③). DANGER! indicates that pre-fibrillar aggregates in most cases display much higher toxicity than mature fibrils. Heat shock proteins (Hsp) can suppress the appearance of pre-fibrillar assemblies by minimising the population of partially folded molecules through assisting the correct folding of nascent chains, while the unfolded protein response targets incorrectly folded proteins for degradation.
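The scheme in this caption can be made concrete with a minimal kinetic sketch. All state names and rate constants below are illustrative assumptions chosen only to reproduce the qualitative behaviour (fast, favourable folding; slow leakage into aggregates), not measured values:

```python
# Minimal kinetic sketch of the folding/aggregation scheme of Fig. 1.
# States: P = partially folded, N = native, A = pre-fibrillar, F = fibril.
# All rate constants are illustrative assumptions, not measured values.

def simulate(k_fold=10.0, k_unfold=0.1, k_nucleate=0.05, k_grow=0.5,
             p0=1.0, dt=0.001, steps=20000):
    P, N, A, F = p0, 0.0, 0.0, 0.0
    for _ in range(steps):
        fold = k_fold * P - k_unfold * N   # reversible equilibrium (1)
        nucl = k_nucleate * P              # nucleation step (2)
        grow = k_grow * A                  # fibril growth step (3)
        P += (-fold - nucl) * dt
        N += fold * dt
        A += (nucl - grow) * dt
        F += grow * dt
    return P, N, A, F

P, N, A, F = simulate()
print(round(N, 3), round(F, 3))
```

With folding set much faster than nucleation, almost all of the chain ends up in the native state and only a small fraction leaks into fibrils over the simulated time, mirroring the caption’s point that equilibrium ① normally lies strongly towards the natively folded molecules.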

……

Little is known at present about the detailed arrangement of the polypeptide chains themselves within amyloid fibrils, either in the parts involved in the core β-strands or in the regions that connect the various β-strands. Recent data suggest that the sheets are relatively untwisted and may in some cases at least exist in quite specific supersecondary structure motifs such as β-helices [6, 40] or the recently proposed µ-helix [41]. It seems possible that there may be significant differences in the way the strands are assembled depending on characteristics of the polypeptide chain involved [6, 42]. Factors including length, sequence (and in some cases the presence of disulphide bonds or post-translational modifications such as glycosylation) may be important in determining details of the structures. Several recent papers report structural models for amyloid fibrils containing different polypeptide chains, including the Aβ40 peptide, insulin and fragments of the prion protein, based on data from such techniques as cryo-electron microscopy and solid-state magnetic resonance spectroscopy [43, 44]. These models have much in common and do indeed appear to reflect the fact that the structures of different fibrils are likely to be variations on a common theme [40]. It is also emerging that there may be some common and highly organised assemblies of amyloid protofilaments that are not simply extended threads or ribbons. It is clear, for example, that in some cases large closed loops can be formed [45, 46, 47], and there may be specific types of relatively small spherical or ‘doughnut’ shaped structures that can result in at least some circumstances (see below).

…..

The similarity of some early amyloid aggregates to the pores resulting from oligomerisation of bacterial toxins and pore-forming eukaryotic proteins (see below) also suggests that the basic mechanism of protein aggregation into amyloid structures may not only be associated with diseases but in some cases could result in species with functional significance. Recent evidence indicates that a variety of micro-organisms may exploit the controlled aggregation of specific proteins (or their precursors) to generate functional structures. Examples include bacterial curli [52] and proteins of the interior fibre cells of mammalian ocular lenses, whose β-sheet arrays seem to be organised in an amyloid-like supramolecular order [53]. In this case the inherent stability of amyloid-like protein structure may contribute to the long-term structural integrity and transparency of the lens. Recently it has been hypothesised that amyloid-like aggregates of serum amyloid A found in secondary amyloidoses following chronic inflammatory diseases protect the host against bacterial infections by inducing lysis of bacterial cells [54]. One particularly interesting example is a ‘misfolded’ form of the milk protein α-lactalbumin that is formed at low pH and trapped by the presence of specific lipid molecules [55]. This form of the protein has been reported to trigger apoptosis selectively in tumour cells providing evidence for its importance in protecting infants from certain types of cancer [55]. ….

Amyloid formation is a generic property of polypeptide chains ….

It is clear that the presence of different side chains can influence the details of amyloid structures, particularly the assembly of protofibrils, and that they give rise to the variations on the common structural theme discussed above. More fundamentally, the composition and sequence of a peptide or protein profoundly affect its propensity to form amyloid structures under given conditions (see below).

Because the formation of stable protein aggregates of amyloid type does not normally occur in vivo under physiological conditions, it is likely that the proteins encoded in the genomes of living organisms are endowed with structural adaptations that militate against aggregation under these conditions. A recent survey involving a large number of structures of β-proteins highlights several strategies through which natural proteins avoid intermolecular association of β-strands in their native states [65]. Other surveys of protein databases indicate that nature disfavours sequences of alternating polar and nonpolar residues, as well as clusters of several consecutive hydrophobic residues, both of which enhance the tendency of a protein to aggregate prior to becoming completely folded [66, 67].

……

Precursors of amyloid fibrils can be toxic to cells

It was generally assumed until recently that the proteinaceous aggregates most toxic to cells are likely to be mature amyloid fibrils, the form of aggregates that have been commonly detected in pathological deposits. It therefore appeared probable that the pathogenic features underlying amyloid diseases are a consequence of the interaction with cells of extracellular deposits of aggregated material. As well as forming the basis for understanding the fundamental causes of these diseases, this scenario stimulated the exploration of therapeutic approaches to amyloidoses that focused mainly on the search for molecules able to impair the growth and deposition of fibrillar forms of aggregated proteins. ….

Structural basis and molecular features of amyloid toxicity

The presence of toxic aggregates inside or outside cells can impair a number of cell functions that ultimately lead to cell death by an apoptotic mechanism [95, 96]. Recent research suggests, however, that in most cases initial perturbations to fundamental cellular processes underlie the impairment of cell function induced by aggregates of disease-associated polypeptides. Many pieces of data point to a central role of modifications to the intracellular redox status and free Ca2+ levels in cells exposed to toxic aggregates [45, 89, 97, 98, 99, 100, 101]. A modification of the intracellular redox status in such cells is associated with a sharp increase in the quantity of reactive oxygen species (ROS) that is reminiscent of the oxidative burst by which leukocytes destroy invading foreign cells after phagocytosis. In addition, changes have been observed in reactive nitrogen species, lipid peroxidation, deregulation of NO metabolism [97], protein nitrosylation [102] and upregulation of heme oxygenase-1, a specific marker of oxidative stress [103]. ….

Results have recently been reported concerning the toxicity towards cultured cells of aggregates of poly(Q) peptides that argue against a disease mechanism based on specific toxic features of the aggregates. These results indicate that there is a close relationship between the toxicity of proteins with poly(Q) extensions and their nuclear localisation. In addition they support the hypotheses that the toxicity of poly(Q) aggregates can be a consequence of altered interactions with nuclear coactivator or corepressor molecules including p53, CBP, Sp1 and TAF130 or of the interaction with transcription factors and nuclear coactivators, such as CBP, endowed with short poly(Q) stretches ([95] and references therein)…..

Concluding remarks
The data reported in the past few years strongly suggest that the conversion of normally soluble proteins into amyloid fibrils and the toxicity of small aggregates appearing during the early stages of the formation of the latter are common or generic features of polypeptide chains. Moreover, the molecular basis of this toxicity also appears to display common features between the different systems that have so far been studied. The ability of many, perhaps all, natural polypeptides to ‘misfold’ and convert into toxic aggregates under suitable conditions suggests that one of the most important driving forces in the evolution of proteins must have been the negative selection against sequence changes that increase the tendency of a polypeptide chain to aggregate. Nevertheless, as protein folding is a stochastic process, and no such process can be completely infallible, misfolded proteins or protein folding intermediates in equilibrium with the natively folded molecules must continuously form within cells. Thus mechanisms to deal with such species must have co-evolved with proteins. Indeed, it is clear that misfolding, and the associated tendency to aggregate, is kept under control by molecular chaperones, which render the resulting species harmless by assisting in their refolding or by triggering their degradation by the cellular clearance machinery [166, 167, 168, 169, 170, 171, 172, 173, 175, 177, 178].

Misfolded and aggregated species are likely to owe their toxicity to the exposure on their surfaces of regions of proteins that are buried in the interior of the structures of the correctly folded native states. The exposure of large patches of hydrophobic groups is likely to be particularly significant as such patches favour the interaction of the misfolded species with cell membranes [44, 83, 89, 90, 91, 93]. Interactions of this type are likely to lead to the impairment of the function and integrity of the membranes involved, giving rise to a loss of regulation of the intracellular ion balance and redox status and eventually to cell death. In addition, misfolded proteins undoubtedly interact inappropriately with other cellular components, potentially giving rise to the impairment of a range of other biological processes. Under some conditions the intracellular content of aggregated species may increase directly, due to an enhanced propensity of incompletely folded or misfolded species to aggregate within the cell itself. This could occur as the result of the expression of mutational variants of proteins with decreased stability or cooperativity or with an intrinsically higher propensity to aggregate. It could also occur as a result of the overproduction of some types of protein, for example, because of other genetic factors or other disease conditions, or because of perturbations to the cellular environment that generate conditions favouring aggregation, such as heat shock or oxidative stress. Finally, the accumulation of misfolded or aggregated proteins could arise from the chaperone and clearance mechanisms becoming overwhelmed as a result of specific mutant phenotypes or of the general effects of ageing [173, 174].

The topics discussed in this review not only provide a great deal of evidence for the ‘new view’ that proteins have an intrinsic capability of misfolding and forming structures such as amyloid fibrils but also suggest that the role of molecular chaperones is even more important than was thought in the past. The role of these ubiquitous proteins in enhancing the efficiency of protein folding is well established [185]. It could well be that they are at least as important in controlling the harmful effects of misfolded or aggregated proteins as in enhancing the yield of functional molecules.

 

Nutritional Status is Associated with Faster Cognitive Decline and Worse Functional Impairment in the Progression of Dementia: The Cache County Dementia Progression Study

Sanders, Chelsea | Behrens, Stephanie | Schwartz, Sarah | Wengreen, Heidi | Corcoran, Chris D. | Lyketsos, Constantine G. | Tschanz, JoAnn T.
Journal of Alzheimer’s Disease 2016; 52(1):33-42.    http://content.iospress.com/articles/journal-of-alzheimers-disease/jad150528    http://dx.doi.org/10.3233/JAD-150528

Nutritional status may be a modifiable factor in the progression of dementia. We examined the association of nutritional status and rate of cognitive and functional decline in a U.S. population-based sample. Study design was an observational longitudinal study with annual follow-ups up to 6 years of 292 persons with dementia (72% Alzheimer’s disease, 56% female) in Cache County, UT using the Mini-Mental State Exam (MMSE), Clinical Dementia Rating Sum of Boxes (CDR-sb), and modified Mini Nutritional Assessment (mMNA). mMNA scores declined by approximately 0.50 points/year, suggesting increasing risk for malnutrition. Lower mMNA score predicted faster rate of decline on the MMSE at earlier follow-up times, but slower decline at later follow-up times, whereas higher mMNA scores had the opposite pattern (mMNA by time β = 0.22, p = 0.017; mMNA by time² β = –0.04, p = 0.04). Lower mMNA score was associated with greater impairment on the CDR-sb over the course of dementia (β = 0.35, p < 0.001). Assessment of malnutrition may be useful in predicting rates of progression in dementia and may provide a target for clinical intervention.
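Because the model includes both mMNA × time and mMNA × time² terms, the effect of nutritional status on the rate of MMSE change reverses sign at some point during follow-up. A small worked example, using only the two published coefficient estimates, finds that crossover time:

```python
# Marginal effect of mMNA on the MMSE trajectory, using the published
# fixed-effect estimates: beta1 (mMNA x time) and beta2 (mMNA x time^2).
# The mMNA contribution to the slope at year t is beta1 + 2 * beta2 * t.

beta1 = 0.22    # mMNA x time
beta2 = -0.04   # mMNA x time^2

def mmna_slope_effect(t):
    """Per-point effect of mMNA on the rate of MMSE change at year t."""
    return beta1 + 2.0 * beta2 * t

# Time (years) at which the effect reverses sign: beta1 + 2*beta2*t = 0
crossover = -beta1 / (2.0 * beta2)
print(crossover)  # ~2.75 years into follow-up
```

Under these estimates the mMNA contribution to the MMSE slope is positive before roughly 2.75 years of follow-up and negative afterwards, matching the early-versus-late pattern described in the abstract.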

 

Shared Genetic Risk Factors for Late-Life Depression and Alzheimer’s Disease

Ye, Qing | Bai, Feng* | Zhang, Zhijun
Journal of Alzheimer’s Disease 2016; 52(1): 1-15.    http://dx.doi.org/10.3233/JAD-151129

Background: Considerable evidence has been reported for the comorbidity between late-life depression (LLD) and Alzheimer’s disease (AD), both of which are very common in the general elderly population and represent a large burden on the health of the elderly. The pathophysiological mechanisms underlying the link between LLD and AD are poorly understood. Because both LLD and AD can be heritable and are influenced by multiple risk genes, shared genetic risk factors between LLD and AD may exist. Objective: The objective is to review the existing evidence for genetic risk factors that are common to LLD and AD and to outline the biological substrates proposed to mediate this association. Methods: A literature review was performed. Results: Genetic polymorphisms of brain-derived neurotrophic factor, apolipoprotein E, interleukin 1-beta, and methylenetetrahydrofolate reductase have been demonstrated to confer increased risk to both LLD and AD by studies examining either LLD or AD patients. These results contribute to the understanding of pathophysiological mechanisms that are common to both of these disorders, including deficits in nerve growth factors, inflammatory changes, and dysregulation mechanisms involving lipoprotein and folate. Other conflicting results have also been reviewed, and few studies have investigated the effects of the described polymorphisms on both LLD and AD. Conclusion: The findings suggest that common genetic pathways may underlie LLD and AD comorbidity. Studies to evaluate the genetic relationship between LLD and AD may provide insights into the molecular mechanisms that trigger disease progression as the population ages.

 

Association of Vitamin B12, Folate, and Sulfur Amino Acids With Brain Magnetic Resonance Imaging Measures in Older Adults: A Longitudinal Population-Based Study

B Hooshmand, F Mangialasche, G Kalpouzos…, et al.
JAMA Psychiatry. Published online April 27, 2016.    http://dx.doi.org/10.1001/jamapsychiatry.2016.0274

Importance  Vitamin B12, folate, and sulfur amino acids may be modifiable risk factors for structural brain changes that precede clinical dementia.

Objective  To investigate the association of circulating levels of vitamin B12, red blood cell folate, and sulfur amino acids with the rate of total brain volume loss and the change in white matter hyperintensity volume as measured by fluid-attenuated inversion recovery in older adults.

Design, Setting, and Participants  The magnetic resonance imaging subsample of the Swedish National Study on Aging and Care in Kungsholmen, a population-based longitudinal study in Stockholm, Sweden, was conducted in 501 participants aged 60 years or older who were free of dementia at baseline. A total of 299 participants underwent repeated structural brain magnetic resonance imaging scans from September 17, 2001, to December 17, 2009.

Main Outcomes and Measures  The rate of brain tissue volume loss and the progression of total white matter hyperintensity volume.

Results  In the multi-adjusted linear mixed models, among 501 participants (300 women [59.9%]; mean [SD] age, 70.9 [9.1] years), higher baseline vitamin B12 and holotranscobalamin levels were associated with a decreased rate of total brain volume loss during the study period: for each increase of 1 SD, β (SE) was 0.048 (0.013) for vitamin B12 (P < .001) and 0.040 (0.013) for holotranscobalamin (P = .002). Increased total homocysteine levels were associated with faster rates of total brain volume loss in the whole sample (β [SE] per 1-SD increase, –0.035 [0.015]; P = .02) and with the progression of white matter hyperintensity among participants with systolic blood pressure greater than 140 mm Hg (β [SE] per 1-SD increase, 0.000019 [0.00001]; P = .047). No longitudinal associations were found for red blood cell folate and other sulfur amino acids.

Conclusions and Relevance  This study suggests that both vitamin B12 and total homocysteine concentrations may be related to accelerated aging of the brain. Randomized clinical trials are needed to determine the importance of vitamin B12 supplementation on slowing brain aging in older adults.

 

 

Notes from Kurzweil

This vitamin stops the aging process in organs, say Swiss researchers

A potential breakthrough for regenerative medicine, pending further studies

http://www.kurzweilai.net/this-vitamin-stops-the-aging-process-in-organs-say-swiss-researchers

Improved muscle stem cell numbers and muscle function in NR-treated aged mice: Newly regenerated muscle fibers 7 days after muscle damage in aged mice (left: control group; right: fed NR). (Scale bar = 50 μm). (credit: Hongbo Zhang et al./Science) http://www.kurzweilai.net/images/improved-muscle-fibers.png

EPFL researchers have restored the regenerative capacity of organs in aged mice, and extended the animals’ lives, simply by administering nicotinamide riboside (NR).

NR has been shown in previous studies to be effective in boosting metabolism and treating a number of degenerative diseases. Now, an article by PhD student Hongbo Zhang published in Science also describes the restorative effects of NR on the functioning of stem cells for regenerating organs.

As in all mammals, as mice age, the regenerative capacity of certain organs (such as the liver and kidneys) and muscles (including the heart) diminishes. Their ability to repair them following an injury is also affected. This leads to many of the disorders typical of aging.

Mitochondria —> stem cells —> organs

To understand how the regeneration process deteriorates with age, Zhang teamed up with colleagues from ETH Zurich, the University of Zurich, and universities in Canada and Brazil. By using several biomarkers, they were able to identify the molecular chain that regulates how mitochondria — the “powerhouse” of the cell — function and how they change with age. “We were able to show for the first time that their ability to function properly was important for stem cells,” said senior author Johan Auwerx.

Under normal conditions, these stem cells, reacting to signals sent by the body, regenerate damaged organs by producing new specific cells. At least in young bodies. “We demonstrated that fatigue in stem cells was one of the main causes of poor regeneration or even degeneration in certain tissues or organs,” said Zhang.

How to revitalize stem cells

Which is why the researchers wanted to “revitalize” stem cells in the muscles of elderly mice. And they did so by precisely targeting the molecules that help the mitochondria to function properly. “We gave nicotinamide riboside to 2-year-old mice, which is an advanced age for them,” said Zhang.

“This substance, which is close to vitamin B3, is a precursor of NAD+, a molecule that plays a key role in mitochondrial activity. And our results are extremely promising: muscular regeneration is much better in mice that received NR, and they lived longer than the mice that didn’t get it.”

Parallel studies have revealed a comparable effect on stem cells of the brain and skin. “This work could have very important implications in the field of regenerative medicine,” said Auwerx. This work on the aging process also has potential for treating diseases that can affect, and prove fatal to, young people, such as muscular dystrophy (myopathy).

So far, no negative side effects have been observed following the use of NR, even at high doses. But while NR appears to boost the functioning of all cells, that could include pathological ones, so further in-depth studies are required.

Abstract of NAD+ repletion improves mitochondrial and stem cell function and enhances life span in mice

Adult stem cells (SCs) are essential for tissue maintenance and regeneration yet are susceptible to senescence during aging. We demonstrate the importance of the amount of the oxidized form of cellular nicotinamide adenine dinucleotide (NAD+) and its impact on mitochondrial activity as a pivotal switch to modulate muscle SC (MuSC) senescence. Treatment with the NAD+ precursor nicotinamide riboside (NR) induced the mitochondrial unfolded protein response (UPRmt) and synthesis of prohibitin proteins, and this rejuvenated MuSCs in aged mice. NR also prevented MuSC senescence in the Mdx mouse model of muscular dystrophy. We furthermore demonstrate that NR delays senescence of neural SCs (NSCs) and melanocyte SCs (McSCs), and increases mouse life span. Strategies that conserve cellular NAD+ may reprogram dysfunctional SCs and improve life span in mammals.

references:

Hongbo Zhang, Dongryeol Ryu, Yibo Wu, Karim Gariani, Xu Wang, Peiling Luan, Davide D’amico, Eduardo R. Ropelle, Matthias P. Lutolf, Ruedi Aebersold, Kristina Schoonjans, Keir J. Menzies, Johan Auwerx. NAD+ repletion improves mitochondrial and stem cell function and enhances life span in mice. Science, 2016 DOI: 10.1126/science.aaf2693

 

Enhancer–promoter interactions are encoded by complex genomic signatures on looping chromatin

Sean Whalen, Rebecca M. Truty & Katherine S. Pollard
Nature Genetics 2016; 48:488–496. doi:10.1038/ng.3539

Discriminating the gene target of a distal regulatory element from other nearby transcribed genes is a challenging problem with the potential to illuminate the causal underpinnings of complex diseases. We present TargetFinder, a computational method that reconstructs regulatory landscapes from diverse features along the genome. The resulting models accurately predict individual enhancer–promoter interactions across multiple cell lines with a false discovery rate up to 15 times smaller than that obtained using the closest gene. By evaluating the genomic features driving this accuracy, we uncover interactions between structural proteins, transcription factors, epigenetic modifications, and transcription that together distinguish interacting from non-interacting enhancer–promoter pairs. Most of this signature is not proximal to the enhancers and promoters but instead decorates the looping DNA. We conclude that complex but consistent combinations of marks on the one-dimensional genome encode the three-dimensional structure of fine-scale regulatory interactions.
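The “closest gene” baseline that TargetFinder is benchmarked against can be sketched in a few lines; the gene names and TSS coordinates below are made up purely for illustration:

```python
import bisect

# Sketch of the "closest gene" baseline that TargetFinder is compared
# against: assign each enhancer to the gene whose transcription start
# site (TSS) is nearest. Coordinates and names are hypothetical.

def closest_gene(enhancer_mid, tss_sorted, names):
    """tss_sorted: ascending TSS coordinates; names: parallel gene names."""
    i = bisect.bisect_left(tss_sorted, enhancer_mid)
    candidates = []
    if i > 0:
        candidates.append(i - 1)          # nearest TSS to the left
    if i < len(tss_sorted):
        candidates.append(i)              # nearest TSS to the right
    best = min(candidates, key=lambda j: abs(tss_sorted[j] - enhancer_mid))
    return names[best]

tss = [10_000, 55_000, 120_000]          # hypothetical TSS positions
genes = ["GENE_A", "GENE_B", "GENE_C"]   # hypothetical gene names
print(closest_gene(40_000, tss, genes))  # GENE_B (15 kb away vs 30 kb)
```

TargetFinder’s central result is that features decorating the intervening (looping) chromatin allow a classifier to beat this simple distance heuristic, with a false discovery rate up to 15 times smaller.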

Read Full Post »

Imaging Living Cells and Devices

Curator: Danut Dragoi, PhD

 

Imaging living cells has been, for a good number of years, a hot topic in Biology, Physics, and Chemistry, as well as in Engineering and Technology, where specific devices are produced to visualize living cells. This presentation gives my view of the current state of applied technology for visualizing living cells as well as other very small areas of interest.

Slide #1

Slide1

Slide #2

Slide2

As an overview, slide #2 covers: higher-resolution imaging of living cells based on advanced CT and micro-CT scanners and their current technological trends, advanced optical microscopy, optical magnetic imaging of living cells, and conclusions.

Slide #3

Slide3

Slide #3 shows a schematic of computed tomography applied to a single cell; see the URL address inside the slide. The work is in progress as an SBIR application by a group of researchers from Arizona State. The partial section of the cell is intended to reveal the contents of the cell, which is very important in Biology and Medicine.

Slide #4

Slide4

Slide #4 describes the principle of computed tomography for relatively small objects: the sample is exposed to a soft x-ray source on the left, and an x-ray detector screen on the right records a projection radiograph of the sample. The sample is rotated in small discrete angular steps, with an image recorded at each step. From the absorption contrast in these projections, a 3D object can be reconstructed. The resolution of the reconstructed object is a function of the number of pixels as well as the pitch distance d (0.127 microns in the slide). Because the sample is rotated, the precision of the axis of rotation is very important and becomes a challenging task for small objects.
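The geometry above can be sketched numerically. This is an illustrative calculation, not code from the presentation: the 0.127-micron pitch is the value quoted in the slide, while the 1024-pixel detector width and the common π/2·N rule of thumb for the number of projection angles are assumptions.

```python
# Sketch: relating detector geometry to the reconstructed field of view
# and the angular sampling needed. The 0.127-micron pitch is from slide
# #4; the 1024-pixel detector width is an assumed example value.
import math

def fov_microns(n_pixels: int, pitch_um: float) -> float:
    """Field of view covered by a detector row of n_pixels at the given pitch."""
    return n_pixels * pitch_um

def min_projections(n_pixels: int) -> int:
    """Common rule of thumb: about (pi/2) * N projection angles over 180
    degrees are needed to match the resolution of an N-pixel detector."""
    return math.ceil(math.pi / 2 * n_pixels)

n, d = 1024, 0.127
print(fov_microns(n, d))    # ~130 microns of sample span covered
print(min_projections(n))   # ~1609 discrete rotation steps needed
```

The calculation makes the mechanical challenge concrete: over a thousand rotation steps, each requiring the axis of rotation to stay true to a fraction of the 0.127-micron pitch.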

Slide #5

Slide5

Slide #5 shows a sample image taken from the URL address given below the picture. It shows an insect; future CT development is expected to produce similar images of single living cells.

Slide #6

Slide6

For many bio-labs the inverted optical microscope is the workhorse. The slide above shows such a microscope with a cell culture inside a transparent box. The picture can be found at the address shown inside the slide.

Slide #7

Slide7

Slide #7 describes an innovative digital microscope from Keyence that keeps the entire object in focus, offers a 20× greater depth of field than an optical microscope, allows objects to be viewed from any angle, and measures lengths directly on screen.

Slide #8

Slide8

Slide #8 shows the innovative digital microscope from Keyence itself; see the website address at the bottom of the slide.

Slide #9

Slide9

As we know, samples visualized with a common optical microscope must be flat, because there is no clear image above or below the focal plane, which coincides with the surface of the sample. With a confocal microscope the situation changes: objects can be visualized at different depths, and the recorded image files can be reconstructed into a 3D image of the object.

Slide #10

Slide10

Slide #10 describes the principle of a confocal microscope: a green laser on the left side excites molecules of the specimen at a given focal depth; the molecules emit red light (less energetic than the green light) that travels all the way to the photomultiplier, which has a small pinhole aperture in front of it that blocks red rays (parasitic light) coming from out-of-focus regions. More details can be found at the URL address given at the bottom of the slide.

Slide #11

Confocal microscopy Leica

Slide #11 shows an image of a living specimen taken with a Leica microsystem; see the website address inside the slide.

Slide #12

Slide12

Slide #12 shows the principle of the fluorescence microscope and how it works. A light source is filtered to pass blue light (energetic photons that excite the molecules of the specimen); the emitted green light travels through the objective and ocular lenses and on to the photomultiplier or digital camera.
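The energy ordering in the slide (energetic blue excitation, less energetic green emission) follows from the photon energy relation E = hc/λ. A minimal sketch with illustrative wavelengths (480 nm and 520 nm are assumed values, not taken from the slide):

```python
# Sketch: why emitted green light carries less energy than the blue
# excitation light (the Stokes shift). Wavelengths are illustrative.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy E = h*c/lambda, converted to electron-volts."""
    return H * C / (wavelength_nm * 1e-9) / EV

print(photon_energy_ev(480))  # blue excitation: ~2.58 eV
print(photon_energy_ev(520))  # green emission:  ~2.38 eV, i.e. less energetic
```

The same ordering explains the confocal setup in slide #10, where green excitation yields still-less-energetic red emission.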

Slide #13

Slide13

Eric Betzig, William Moerner, and Stefan Hell won the 2014 Nobel Prize in Chemistry for “the development of super-resolved fluorescence microscopy,” which brings optical microscopy into the nano-dimension.

Slide #14

Slide14

Slide #14 introduces improvements to micro-CT scanners for imaging living cells, an area now under heavy R&D. The goal is to visualize the interior of living cells. Challenging tasks include miniaturization, responding to customer needs, low cost, and versatility.

Slide #15

Slide15

Slide #15 shows the schematic of an optical magnetic imaging microscope for visualizing living cells with at least one dimension smaller than 500 nm. The website address given describes the working principle in detail.

Slide #16

Slide16

Slide #16 shows the picture of a hand-held microscope that is useful for spotting cancer in moments; see the website.

Slide #17

Slide17

Slide #17 shows a hand-held MRI that connects to an iPhone. It is a useful device for detecting cancer cells.

Slide #18

Slide18

Slide #18 compares a portable NMR device (left side) with a lab NMR instrument whose height is greater than 5 ft. The spectrum on the left side is that of toluene; a capillary sample holder is also shown next to the magnetic device.

Slide #19

Slide19

Slide #19 shows that the hand-held MRI can recognize complex molecules, can diagnose cancer faster, can be connected to a smartphone, and delivers accurate, precise measurements.

Slide #20

Slide20

An optical dental camera is shown in slide #20. It costs less than $100, and a USB cable connects it to a computer. It is very useful for any family checking the status of teeth and gums.

Slide #21

Slide21

For detecting dental cavities, an x-ray source packaged as a camera and a sensor that connects to a computer are very useful tools in a dental office.

Slide #22

Slide22

Slide #22 shows the conclusions of the presentation: the automated confocal microscope only partially satisfies current needs for imaging living cells; the optical magnetic imaging microscope for living cells is a promising technique; higher resolution is needed in all current microscopes; advanced CT and micro-CT scanners provide a new avenue for the investigation of living cells; and more research is needed on hand-held MRI, a new solution for recognizing complex molecules, including those related to cancer.

Read Full Post »

Prostate Cancer: Diagnosis and Novel Treatment – Articles of Note  @PharmaceuticalIntelligence.com

Curators: Larry H. Bernstein, MD, FCAP, Aviva Lev-Ari, PhD, RN

 

Tookad appears to be more than OK!

 

VIEW VIDEO

 

Weizmann-developed drug may be speedy prostate cancer cure, studies show

In a trial, a photosynthesis-based therapy eliminated cancer in over 80% of patients – and could be used to attack other cancers, too. After a two-year clinical trial, the therapy was approved for marketing in Mexico; an application has been submitted for Europe.
http://www.timesofisrael.com/weizmann-developed-drug-cures-prostate-cancer-in-90-minutes-studies-show

cancer-cells-541954_1920-635x357

By David Shamah Apr 3, 2016, 5:05 pm

http://cdn.timesofisrael.com/uploads/2016/04/cancer-cells-541954_1920-635×357.jpg

Scientists at the Weizmann Institute may have found the cure for prostate cancer, at least if it is caught in its early stages – via a drug that doctors inject into cancerous cells and treat with infrared laser illumination.

Using a therapy lasting 90 minutes, the drug, called Tookad Soluble, targets and destroys cancerous prostate cells, studies show, allowing patients to check out of the hospital the same day without the debilitating effects of chemical or radiation therapy or the invasive surgery that is usually used to treat this disease.

The drug has been tested in Europe and in several Latin American countries, and is being marketed by Steba Biotech, an Israeli biotech start-up with R&D facilities in Ness Ziona. The drug and its accompanying therapy were developed in the lab of Weizmann Institute professors Yoram Salomon of the Biological Regulation Department and Avigdor Scherz of the Plant and Environmental Sciences Department.

Based on principles of photosynthesis, the drug uses infrared illumination to activate elements that choke off cancer cells, but spares the healthy ones.

The therapy was recently approved for marketing in Mexico, after a two-year Phase III clinical trial in which 80 patients from Mexico, Peru and Panama who suffered from early-stage prostate cancer were treated with the Tookad system. Two years after treatment, over 80% of the study’s subjects remained cancer-free.

A similar study being undertaken in Europe showed similar results, Steba Biotech said, and the company had submitted a marketing authorization application to the European Medicine Agency for authorization of Tookad as a treatment of localized prostate cancer.

The approved therapy was developed by Salomon and Scherz using a clever twist on photosynthesis called photodynamic therapy, in which elements are activated when they are exposed to a specific wavelength of light.

Tookad was first synthesized in Scherz’s lab from bacteriochlorophyll, the photosynthetic pigment of a type of aquatic bacteria that draw their energy supply from sunlight. In photosynthesis style, infrared light, delivered via thin optic fibers inserted into the cancerous prostatic tissue, activates Tookad, generating oxygen and nitric oxide radicals that initiate occlusion and destruction of the tumor blood vessels.

These radicals are toxic to the cancer cells: once Tookad is activated, they invade the cancer cells, preventing them from absorbing oxygen and choking them until they are dead. The Tookad solution, having done its job, is then cleared from the body, with no lingering consequences – and no more cancer.

With the drug approved for prostate cancer – and able to reach cancerous cells that are deep within the body via a minimally invasive procedure – Steba believes it may be able to treat other forms of cancer. In fact, the company said, it is also pursuing early stage studies of Tookad in esophageal cancer, urothelial carcinoma, advanced prostate cancer, renal carcinoma, and triple negative breast cancer in collaboration with Memorial Sloan Kettering Cancer Center, the Weizmann Institute, and Oxford University.

“The use of near-infrared illumination, together with the rapid clearance of the drug from the body and the unique non-thermal mechanism of action, makes it possible to safely treat large, deeply embedded cancerous tissue using a minimally invasive procedure,” according to Steba.

The Weizmann Institute has been working with Steba researchers for some 20 years to develop Tookad, said Amir Naiberg, CEO of the Yeda Research and Development Company, the Weizmann Institute’s technology transfer arm and the licensor of the therapy. “The commitment made by the shareholders of Steba and their personal relationship and effective collaboration with Weizmann Institute scientists and Yeda have enabled this tremendous accomplishment.”

“We are excited to bring a unique and innovative solution to physicians and patients for the management of low-risk prostate cancer in Mexico and subsequently to other Latin American countries,” said Raphael Harari, chief executive officer of Steba Biotech. “This approval is recognition of the tremendous effort deployed over the years by the scientists of Steba Biotech and the Weizmann Institute to develop a therapy that can control effectively low-risk prostate cancer while preserving patients’ quality of life.”

 

 

Original Study

http://www.timesofisrael.com/weizmann-developed-drug-cures-prostate-cancer-in-90-minutes-studies-show/?utm_source=Start-Up+Daily&utm_campaign=db10147d27-2016_04_04_SUI4_4_2016&utm_medium=email&utm_term=0_fb879fad58-db10147d27-54672313

Other articles on Prostate Cancer were published in this Open Access Online Scientific Journal, including the following:

Articles by Larry H. Bernstein

Nanoscale Photodynamic Therapy

http://pharmaceuticalintelligence.com/2016/02/05/nanoscale-photodynamic-therapy

Laser Therapy Opens Blood-Brain Barrier
http://pharmaceuticalintelligence.com/2016/03/17/laser-therapy-opens-blood-brain-barrier

Single Cell Shines Light on Cell Malignant Transformation  
http://pharmaceuticalintelligence.com/2016/01/29/single-cell-shines-light-on-cell-malignant-transformation

Low Energy Photon Intra-Operative Radiotherapy System
http://pharmaceuticalintelligence.com/2015/11/10/low-energy-photon-intra-operative-radiotherapy-system

Articles by the Team @PharmaceuticalIntelligence.com

Castration Resistant Prostate Cancer

University of Liverpool Scientists Report New Urine Test To Detect Potential Biomarkers of Prostate Cancer

Who and when should we screen for prostate cancer?

Reactive Oxygen species in prostate cancer?

Following (or not) the guidelines for use of imaging in management of prostate cancer

Controlling focused-treatment of Prostate cancer with MRI

Combining Nanotube Technology and Genetically Engineered Antibodies to Detect Prostate Cancer Biomarkers

In Search of Clarity on Prostate Cancer Screening, Post-Surgical Followup, and Prediction of Long Term Remission

Prostate Cancer Molecular Diagnostic Market – the Players are: SRI Int’l, Genomic Health w/Cleveland Clinic, Myriad Genetics w/UCSF, GenomeDx and BioTheranostics

Early Detection of Prostate Cancer: American Urological Association (AUA) Guideline

A Blood Test to Identify Aggressive Prostate Cancer: a Discovery @ SRI International, Menlo Park, CA

Prostate Cancer: Androgen-driven “Pathomechanism” in Early-onset Forms of the Disease

Prostate Cancer and Nanotecnology

Prostate Cancer Cells: Histone Deacetylase Inhibitors Induce Epithelial-to-Mesenchymal Transition

Imaging agent to detect Prostate cancer-now a reality

Scientists use natural agents for prostate cancer bone metastasis treatment

Today’s fundamental challenge in Prostate cancer screening

Prostate Cancers Plunged After USPSTF Guidance, Will It Happen Again?

Nanoparticle delivery to cancer drug targets

Perspectives on Anti-metastatic Effects in Cancer Research 2015

Identifying Cancers and Resistance

Peptides and anti-Cancer activity

Breakthrough work in cancer*

Imaging Technology in Cancer Surgery

Immunotherapy in Cancer: A Series of Twelve Articles in the Frontier of Oncology by Larry H Bernstein, MD, FCAP

Urological Cancers of Men

Current Advanced Research Topics in MRI-based Management of Cancer Patients

A Synthesis of the Beauty and Complexity of How We View Cancer

The importance of spatially-localized and quantified image interpretation in cancer management

Cancer Metastasis

Issues in Personalized Medicine in Cancer: Intratumor Heterogeneity and Branched Evolution Revealed by Multiregion Sequencing

In Focus: Identity of Cancer Stem Cells

On the road to improve prostate biopsy

State of the art in oncologic imaging of Prostate.

New clinical results supports Imaging-guidance for targeted prostate biopsy

The Incentive for “Imaging based cancer patient’ management”

Topics in Pathology :Liquid Biopsy Assay May Predict Drug Resistance

Opening Ceremony and Award Presentations from the 2015 AACR Meeting in Philadelphia PA

Read Full Post »

Conduction, graphene, elements and light

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

New 2D material could upstage graphene   Mar 25, 2016

Can function as a conductor or semiconductor, is extremely stable, and uses light, inexpensive earth-abundant elements
http://www.kurzweilai.net/new-2d-material-could-upstage-graphene
The atoms in the new structure are arranged in a hexagonal pattern as in graphene, but that is where the similarity ends. The three elements forming the new material all have different sizes; the bonds connecting the atoms are also different. As a result, the sides of the hexagons formed by these atoms are unequal, unlike in graphene. (credit: Madhu Menon)

A new one-atom-thick flat material made up of silicon, boron, and nitrogen can function as a conductor or semiconductor (unlike graphene) and could upstage graphene and advance digital technology, say scientists at the University of Kentucky, Daimler in Germany, and the Institute for Electronic Structure and Laser (IESL) in Greece.

Reported in Physical Review B, Rapid Communications, the new Si2BN material has so far been predicted only in theory (it has not yet been made in the lab). It uses light, inexpensive, earth-abundant elements and is extremely stable, a property many other graphene alternatives lack, says University of Kentucky Center for Computational Sciences physicist Madhu Menon, PhD.

Limitations of other 2D semiconducting materials

A search for new 2D semiconducting materials has led researchers to a new class of three-layer materials called transition-metal dichalcogenides (TMDCs). TMDCs are mostly semiconductors and can be made into digital processors with greater efficiency than anything possible with silicon. However, these are much bulkier than graphene and made of materials that are not necessarily earth-abundant and inexpensive.

Other graphene-like materials have been proposed but lack the strengths of the new material. Silicene, for example, does not have a flat surface and eventually forms a 3D surface. Other materials are highly unstable, some remaining stable for only a few hours at most.

The new Si2BN material is metallic, but by attaching other elements on top of the silicon atoms, its band gap can be changed (from conductor to semiconductor, for example) — a key advantage over graphene for electronics applications and solar-energy conversion.

The presence of silicon also suggests possible seamless integration with current silicon-based technology, allowing the industry to slowly move away from silicon, rather than precipitously, notes Menon.

https://youtu.be/lKc_PbTD5go

Abstract of Prediction of a new graphenelike Si2BN solid

While the possibility to create a single-atom-thick two-dimensional layer from any material remains, only a few such structures have been obtained other than graphene and a monolayer of boron nitride. Here, based upon ab initio theoretical simulations, we propose a new stable graphenelike single-atomic-layer Si2BN structure that has all of its atoms with sp2 bonding with no out-of-plane buckling. The structure is found to be metallic with a finite density of states at the Fermi level. This structure can be rolled into nanotubes in a manner similar to graphene. Combining first- and second-row elements in the Periodic Table to form a one-atom-thick material that is also flat opens up the possibility for studying new physics beyond graphene. The presence of Si will make the surface more reactive and therefore a promising candidate for hydrogen storage.

 

Nano-enhanced textiles clean themselves with light

Catalytic uses for industrial-scale chemical processes in agrochemicals, pharmaceuticals, and natural products also seen
http://www.kurzweilai.net/nano-enhanced-textiles-clean-themselves-with-light
Close-up of nanostructures grown on cotton textiles. Image magnified 150,000 times. (credit: RMIT University)

Researchers at RMIT University in Australia have developed a cheap, efficient way to grow special copper- and silver-based nanostructures on textiles that can degrade organic matter when exposed to light.

Don’t throw out your washing machine yet, but the work paves the way toward nano-enhanced textiles that can spontaneously clean themselves of stains and grime simply by being put under a light or worn out in the sun.

The nanostructures absorb visible light (via localized surface plasmon resonance — collective electron-charge oscillations in metallic nanoparticles that are excited by light), generating high-energy (“hot”) electrons that cause the nanostructures to act as catalysts for chemical reactions that degrade organic matter.

Steps involved in fabricating copper- and silver-based cotton fabrics: 1. Sensitize the fabric with tin. 2. Form palladium seeds that act as nucleation (clustering) sites. 3. Grow metallic copper and silver nanoparticles on the surface of the cotton fabric. (credit: Samuel R. Anderson et al./Advanced Materials Interfaces)

The challenge for researchers has been to bring the concept out of the lab by working out how to build these nanostructures on an industrial scale and permanently attach them to textiles. The RMIT team’s novel approach was to grow the nanostructures directly onto the textiles by dipping them into specific solutions, resulting in development of stable nanostructures within 30 minutes.

When exposed to light, it took less than six minutes for some of the nano-enhanced textiles to spontaneously clean themselves.

The research was described in the journal Advanced Materials Interfaces.

Scaling up to industrial levels

Rajesh Ramanathan, an RMIT postdoctoral fellow and co-senior author, said the process also had a variety of applications for catalysis-based industries such as agrochemicals, pharmaceuticals, and natural products, and could be easily scaled up to industrial levels. “The advantage of textiles is they already have a 3D structure, so they are great at absorbing light, which in turn speeds up the process of degrading organic matter,” he said.

Cotton textile fabric with copper-based nanostructures. The image is magnified 200 times. (credit: RMIT University)

“Our next step will be to test our nano-enhanced textiles with organic compounds that could be more relevant to consumers, to see how quickly they can handle common stains like tomato sauce or wine,” Ramanathan said.

“There’s more work to do to before we can start throwing out our washing machines, but this advance lays a strong foundation for the future development of fully self-cleaning textiles.”


Abstract of Robust Nanostructured Silver and Copper Fabrics with Localized Surface Plasmon Resonance Property for Effective Visible Light Induced Reductive Catalysis

Inspired by high porosity, absorbency, wettability, and hierarchical ordering on the micrometer and nanometer scale of cotton fabrics, a facile strategy is developed to coat visible light active metal nanostructures of copper and silver on cotton fabric substrates. The fabrication of nanostructured Ag and Cu onto interwoven threads of a cotton fabric by electroless deposition creates metal nanostructures that show a localized surface plasmon resonance (LSPR) effect. The micro/nanoscale hierarchical ordering of the cotton fabrics allows access to catalytically active sites to participate in heterogeneous catalysis with high efficiency. The ability of metals to absorb visible light through LSPR further enhances the catalytic reaction rates under photoexcitation conditions. Understanding the modes of electron transfer during visible light illumination in Ag@Cotton and Cu@Cotton through electrochemical measurements provides mechanistic evidence on the influence of light in promoting electron transfer during heterogeneous catalysis for the first time. The outcomes presented in this work will be helpful in designing new multifunctional fabrics with the ability to absorb visible light and thereby enhance light-activated catalytic processes.

 

New type of molecular tag makes MRI 10,000 times more sensitive

Could detect biochemical processes in opaque tissue without requiring PET radiation or CT x-rays
http://www.kurzweilai.net/new-type-of-molecular-tag-makes-mri-10000-times-more-sensitive

Duke scientists have discovered a new class of inexpensive, long-lived molecular tags that enhance MRI signals by 10,000 times. To activate the tags, the researchers mix them with a newly developed catalyst (center) and a special form of hydrogen (gray), converting them into long-lived magnetic resonance “lightbulbs” that might be used to track disease metabolism in real time. (credit: Thomas Theis, Duke University)

Duke University researchers have discovered a new form of MRI that’s 10,000 times more sensitive and could record actual biochemical reactions, such as those involved in cancer and heart disease, and in real time.

Let’s review how MRI (magnetic resonance imaging) works: MRI takes advantage of a property called spin, which makes the nuclei in hydrogen atoms act like tiny magnets. By generating a strong magnetic field (such as 3 Tesla) and a series of radio-frequency waves, MRI induces these hydrogen magnets in atoms to broadcast their locations. Since most of the hydrogen atoms in the body are bound up in water, the technique is used in clinical settings to create detailed images of soft tissues like organs (such as the brain), blood vessels, and tumors inside the body.
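The frequency at which those tiny hydrogen "magnets" broadcast is the Larmor frequency, f = γB. A quick sketch using the standard ¹H gyromagnetic ratio (≈42.577 MHz/T, a textbook physical constant rather than a value from this article; the 3 T field is the example from the text):

```python
# Sketch: Larmor frequency of hydrogen nuclei, f = gamma * B.
# GAMMA is the standard 1H gyromagnetic ratio in MHz per tesla.
GAMMA_1H_MHZ_PER_T = 42.577

def larmor_mhz(b_tesla: float) -> float:
    """Resonance frequency (MHz) of 1H nuclei in a field of b_tesla."""
    return GAMMA_1H_MHZ_PER_T * b_tesla

print(larmor_mhz(3.0))  # ~127.7 MHz at the 3 T field mentioned in the text
print(larmor_mhz(1.5))  # ~63.9 MHz at a common 1.5 T clinical field
```

This is the radio-frequency band the scanner's transmit and receive coils are tuned to.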


MRI’s ability to track chemical transformations in the body has been limited by the low sensitivity of the technique. That makes it impossible to detect small numbers of molecules (without using unattainably more massive magnetic fields).

So to take MRI a giant step further in sensitivity, the Duke researchers created a new class of molecular “tags” that can track disease metabolism in real time and last for more than an hour, using a technique called hyperpolarization.* These tags are biocompatible, inexpensive to produce, and usable with existing MRI machines.

“This represents a completely new class of molecules that doesn’t look anything at all like what people thought could be made into MRI tags,” said Warren S. Warren, James B. Duke Professor and Chair of Physics at Duke, and senior author on the study. “We envision it could provide a whole new way to use MRI to learn about the biochemistry of disease.”

Sensitive tissue detection without radiation

The new molecular tags open up a new world for medicine and research by making it possible to detect what’s happening in optically opaque tissue without requiring expensive positron emission tomography (PET), which uses a radioactive tracer chemical to look at organs in the body and typically works for only about 20 minutes, or CT x-rays, according to the researchers.

This research was reported in the March 25 issue of Science Advances. It was supported by the National Science Foundation, the National Institutes of Health, the Department of Defense Congressionally Directed Medical Research Programs Breast Cancer grant, the Pratt School of Engineering Research Innovation Seed Fund, the Burroughs Wellcome Fellowship, and the Donors of the American Chemical Society Petroleum Research Fund.

* For the past decade, researchers have been developing methods to “hyperpolarize” biologically important molecules. “Hyperpolarization gives them 10,000 times more signal than they would normally have if they had just been magnetized in an ordinary magnetic field,” Warren said. But while promising, Warren says these hyperpolarization techniques face two fundamental problems: incredibly expensive equipment — around 3 million dollars for one machine — and most of these molecular “lightbulbs” burn out in a matter of seconds.

“It’s hard to take an image with an agent that is only visible for seconds, and there are a lot of biological processes you could never hope to see,” said Warren. “We wanted to try to figure out what molecules could give extremely long-lived signals so that you could look at slower processes.”

So the researchers synthesized a series of molecules containing diazirines — a chemical structure composed of two nitrogen atoms bound together in a ring. Diazirines were a promising target for screening because their geometry traps hyperpolarization in a “hidden state” where it cannot relax quickly. Using a simple and inexpensive approach to hyperpolarization called SABRE-SHEATH, in which the molecular tags are mixed with a spin-polarized form of hydrogen and a catalyst, the researchers were able to rapidly hyperpolarize one of the diazirine-containing molecules, greatly enhancing its magnetic resonance signals for over an hour.
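A rough back-of-envelope check, using the 23-minute singlet decay constant and the >10,000-fold enhancement quoted in the paper's abstract, shows why the signal stays detectable for over an hour (simple exponential decay assumed):

```python
# Sketch: exponential decay of a hyperpolarized signal,
# S(t) = S0 * exp(-t / tau), with the 23-minute singlet time constant
# and 10,000x initial enhancement quoted in the abstract.
import math

def remaining_enhancement(t_min: float, tau_min: float = 23.0,
                          initial: float = 10_000.0) -> float:
    """Enhancement over the thermal signal left after t_min minutes."""
    return initial * math.exp(-t_min / tau_min)

print(remaining_enhancement(60))  # after a full hour: still ~700x thermal signal
print(remaining_enhancement(60, tau_min=5.8))  # magnetization channel fades much faster
```

Even after an hour, the singlet order retains a several-hundred-fold advantage over an ordinary thermally polarized signal, consistent with the "more than an hour" claim.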

The scientists believe their SABRE-SHEATH catalyst could be used to hyperpolarize a wide variety of chemical structures at a fraction of the cost of other methods.


Abstract of Direct and cost-efficient hyperpolarization of long-lived nuclear spin states on universal 15N2-diazirine molecular tags

Conventional magnetic resonance (MR) faces serious sensitivity limitations, which can be overcome by hyperpolarization methods, but the most common method (dynamic nuclear polarization) is complex and expensive, and applications are limited by short spin lifetimes (typically seconds) of biologically relevant molecules. We use a recently developed method, SABRE-SHEATH, to directly hyperpolarize 15N2 magnetization and long-lived 15N2 singlet spin order, with signal decay time constants of 5.8 and 23 min, respectively. We find >10,000-fold enhancements generating detectable nuclear MR signals that last for more than an hour. 15N2-diazirines represent a class of particularly promising and versatile molecular tags, and can be incorporated into a wide range of biomolecules without significantly altering molecular function.


[Seems like they have a great idea, now all they need to do is confirm very specific uses or types of cancers/diseases or other processes they can track or target. Will be interesting to see if they can do more than just see things, maybe they can use this to target and destroy bad things in the body also. Keep up the good work….. this sounds like a game changer.]

 

Scientists time-reverse developed stem cells to make them ‘embryonic’ again

May help avoid ethically controversial use of human embryos for research and support other research goals
http://www.kurzweilai.net/scientists-time-reverse-developed-stem-cells-to-make-them-embryonic-again
Researchers have reversed “primed” (developed) “epiblast” stem cells (top) from early mouse embryos using the drug MM-401, causing the treated cells (bottom) to revert to the original form of the stem cells. (credit: University of Michigan)

University of Michigan Medical School researchers have discovered a way to convert mouse embryonic stem cells that have become “primed” (reached the stage where they can differentiate, or develop into every specialized cell in the body) back to a “naïve” (unspecialized) state by simply adding a drug.

This breakthrough has the potential to one day allow researchers to avoid the ethically controversial use of human embryos left over from infertility treatments. To achieve it, the researchers treated the primed embryonic stem cells (“EpiSCs”) with a drug called MM-401* (a leukemia drug) for a short period of time.

Embryonic stem cells are able to develop into any type of cell, except those of the placenta (credit: Mike Jones/CC)

…..

* The drug, MM-401, specifically targets epigenetic chemical markers on histones, the protein “spools” that DNA coils around to create structures called chromatin. These epigenetic changes signal the cell’s DNA-reading machinery and tell it where to start uncoiling the chromatin in order to read it.

A gene called Mll1 is responsible for the addition of these epigenetic changes, which are like small chemical tags called methyl groups. Mll1 plays a key role in the uncontrolled explosion of white blood cells in leukemia, which is why researchers developed the drug MM-401 to interfere with this process. But Mll1 also plays a role in cell development and the formation of blood cells and other cells in later-stage embryos.

Stem cells do not turn on the Mll1 gene until they are more developed. The MM-401 drug blocks Mll1’s normal activity in developing cells, so the epigenetic chemical markers are missing. These cells are then unable to continue developing into different types of specialized cells but are still able to revert to healthy naive pluripotent stem cells.


Abstract of MLL1 Inhibition Reprograms Epiblast Stem Cells to Naive Pluripotency

The interconversion between naive and primed pluripotent states is accompanied by drastic epigenetic rearrangements. However, it is unclear whether intrinsic epigenetic events can drive reprogramming to naive pluripotency or if distinct chromatin states are instead simply a reflection of discrete pluripotent states. Here, we show that blocking histone H3K4 methyltransferase MLL1 activity with the small-molecule inhibitor MM-401 reprograms mouse epiblast stem cells (EpiSCs) to naive pluripotency. This reversion is highly efficient and synchronized, with more than 50% of treated EpiSCs exhibiting features of naive embryonic stem cells (ESCs) within 3 days. Reverted ESCs reactivate the silenced X chromosome and contribute to embryos following blastocyst injection, generating germline-competent chimeras. Importantly, blocking MLL1 leads to global redistribution of H3K4me1 at enhancers and represses lineage determinant factors and EpiSC markers, which indirectly regulate ESC transcription circuitry. These findings show that discrete perturbation of H3K4 methylation is sufficient to drive reprogramming to naive pluripotency.


Abstract of Naive Pluripotent Stem Cells Derived Directly from Isolated Cells of the Human Inner Cell Mass

Conventional generation of stem cells from human blastocysts produces a developmentally advanced, or primed, stage of pluripotency. In vitro resetting to a more naive phenotype has been reported. However, whether the reset culture conditions of selective kinase inhibition can enable capture of naive epiblast cells directly from the embryo has not been determined. Here, we show that in these specific conditions individual inner cell mass cells grow into colonies that may then be expanded over multiple passages while retaining a diploid karyotype and naive properties. The cells express hallmark naive pluripotency factors and additionally display features of mitochondrial respiration, global gene expression, and genome-wide hypomethylation distinct from primed cells. They transition through primed pluripotency into somatic lineage differentiation. Collectively these attributes suggest classification as human naive embryonic stem cells. Human counterparts of canonical mouse embryonic stem cells would argue for conservation in the phased progression of pluripotency in mammals.

 

 

How to kill bacteria in seconds using gold nanoparticles and light

March 24, 2016

 

Could treat bacterial infections without using antibiotics, which could help reduce the risk of spreading antibiotic resistance

Researchers at the University of Houston have developed a new technique for killing bacteria in 5 to 25 seconds using highly porous gold nanodisks and light, according to a study published today in Optical Materials Express. The method could one day help hospitals treat some common infections without using antibiotics.

Read Full Post »

Functional magnetic resonance imaging

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Demystifying BOLD fMRI Data

What does blood oxygen level–dependent functional magnetic resonance imaging actually tell us about brain activity?

By Tim Vernimmen | February 17, 2016   http://www.the-scientist.com/?articles.view/articleNo/45366/title/Demystifying-BOLD-fMRI-Data

BOLD signal in no-task (“resting state”) fMRI (credit: YouTube, Zeus Chiripa)
http://www.the-scientist.com/images/News/February2016/yC7leMG%20-%20Imgur.gif

The relevance and reliability of blood oxygen level-dependent functional magnetic resonance imaging (BOLD fMRI) data have been hotly debated for years, not least because it is still unclear what aspects of brain activity the technique is picking up. “In many ways, this would seem to be an unacceptable method for neuroscience,” said Ed Bullmore from the University of Cambridge, at a Royal Society-organized gathering of neuroscientists late last month. “But if you’re interested in humans, there isn’t much of a choice.” Bullmore and colleagues had convened in Buckinghamshire, U.K., to discuss what, exactly, BOLD fMRI results can tell us.

“What we do know, of course, is what MRI measures,” said Robert Turner, director emeritus of the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Germany. MRI measures the magnetization of hydrogen protons in water molecules excited by pulses of radio waves that lead their spins to temporarily align. “Over the next few tens of milliseconds,” Turner noted, “their orientations fan out again, and the magnetization we measure will quickly decrease.”
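The dephasing Turner describes is commonly modeled as a mono-exponential T2* decay of the measured signal; because deoxyhemoglobin is paramagnetic and speeds dephasing, well-oxygenated blood leaves more signal at the echo time. A minimal sketch in Python (the T2* and echo-time values here are illustrative assumptions, not measured constants):

```python
import math

def mr_signal(te_ms, s0=1.0, t2_star_ms=40.0):
    """Mono-exponential T2* decay: fraction of signal remaining at echo time te_ms."""
    return s0 * math.exp(-te_ms / t2_star_ms)

TE = 30.0  # a typical BOLD echo time in milliseconds (assumed)

# Deoxyhemoglobin disturbs the local field and dephases spins faster (shorter T2*).
s_oxy = mr_signal(TE, t2_star_ms=50.0)    # assumed T2* with oxygenated inflow
s_deoxy = mr_signal(TE, t2_star_ms=35.0)  # assumed T2* with more deoxyhemoglobin

bold_change = (s_oxy - s_deoxy) / s_deoxy
print(f"signal with oxygenated blood:   {s_oxy:.3f}")
print(f"signal with deoxygenated blood: {s_deoxy:.3f}")
print(f"relative signal increase:       {bold_change:.1%}")
```

With these assumed numbers, flushing out deoxyhemoglobin boosts the measured signal by roughly a quarter, which is the sense in which fresh oxygenated inflow produces a "larger" BOLD signal.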

But what can this tell us about brain activity?

When hemoglobins—the iron-rich oxygen-carrying proteins in our blood—run out of oxygen, Turner explained, “they become paramagnetic, disturbing the local magnetic field. This makes the protons spin out of phase more rapidly.” One might think this means BOLD fMRI highlights oxygen consumption by active neurons, but in reality, such activity is rarely measured.

What BOLD does reveal is what usually happens next: fresh blood rushes into the area, flushing out paramagnetic deoxyhemoglobin and replacing it with new, oxygenated hemoglobin. Since this does not interfere with the proton spins, the result is a larger fMRI signal. So BOLD fMRI reflects a combination of changes in blood flow and oxygen consumption within the brain—not neuronal activity itself.

“This means that if BOLD shows you a large blob of activity, that doesn’t necessarily mean that all the neurons in that region are spiking,” said David Attwell of University College London, one of the meeting’s organizers. “So what we really need to know is how neurons are influencing blood flow.”

To find out, Attwell and his colleagues are studying postmortem slices of rodent brain to better understand the interactions between neurons, blood vessels, and supporting cells such as astrocytes and pericytes. These cells wrap around the vasculature and likely affect its response to local neural activity.

Research on living animals, on the other hand, has suggested that endothelial cells lining the brain’s blood vessels may also play an active role in coordinating such responses, as they are known to do elsewhere in the body. “The wave of vessel dilation resulting in increased blood flow travels much faster and farther than could be explained by astrocytes and pericytes alone,” said Elizabeth Hillman of Columbia University in New York City, whose lab has developed an optical method to look into rat brains directly. “Moreover, if we disable parts of the endothelium, we can see that wave come to a halt.”

More recently, the Hillman lab unexpectedly uncovered what seems to be a convincing link between neural and vascular activity. “While trying to disprove that resting-state activity in the brain could teach us about neural connections, we have actually been able to observe seemingly spontaneous neural activity that correlates with blood flow quite tightly,” Hillman told The Scientist, “which would be hard to show with the very precise single-neuron measurements many neuroscientists prefer. But when you zoom out and look at the larger picture, the synchrony is hard to deny—and believe me, we’ve tried very hard to explain these results away.”

If these unpublished findings stand up to the scrutiny of Hillman’s colleagues, this would be reassuring news for neuroscientists using BOLD fMRI to study neural activity.

But in some brains, BOLD may not work at all, Hillman cautioned. “In the developing brain of young animals, for example, we find that BOLD activity is very unusual,” she said. “Initially, the blood flow response doesn’t seem to be attuned to neural activity at all, so fMRI may be as good as blind.”

Diseased brains can also skew results. “Pathology may affect the BOLD signal in the absence of any changes in neurons themselves,” said Bojana Stefanovic of Toronto’s Sunnybrook Research Institute. In patients who have suffered a stroke, for example, the amount of water may be reduced where cells have died and increased by oedema in some of the surrounding tissues. The brain’s blood flow may also be altered by disruptions to the vasculature or by the formation of scar tissue.

The best way to deal with this depends on the research question, Stefanovic told The Scientist. “There’s this idea that if we can link BOLD to neuronal activity—that would be nirvana,” she said. “Clinicians, however, are looking for measures with a clear link to symptoms. And, fortunately, there is no shortage of disease effects BOLD can sense.”

Cognitive neuroscientist Geraint Rees of University College London sounded a similar note. “If whatever BOLD is measuring reproducibly correlates to the behavior I’m interested in, such as attention or consciousness, I am less worried about the physiological details behind it,” he said. “Which does not mean, of course, I don’t consider them interesting—otherwise, I wouldn’t be here.”

Meanwhile, researchers are developing methods to measure human neural activity more directly, learning more about BOLD fMRI data along the way. “Thanks to over 30 Parkinson’s patients who agreed to play an investment game while undergoing surgery for the placement of a deep-brain stimulation probe, we were able to directly measure the striatal dopamine response we only knew from rodents and human BOLD,” said Read Montague of the Virginia Tech Carilion Research Institute. “Surprisingly, we found that while BOLD responds to expected reward and actual outcome separately, the dopamine response integrates them into one ‘better or worse’ signal.” Montague’s team would next like to explore whether the same is true for people without Parkinson’s disease, which is known to affect dopaminergic neurons.

For now, however, the researchers’ results demonstrate the benefits of applying other techniques in parallel with BOLD fMRI. Not only might this approach reveal insights BOLD cannot, it might also help neuroscientists better understand the results of past fMRI experiments.

Interpreting BOLD: a dialogue between cognitive and cellular neuroscience

Kavli Royal Society Centre, Chicheley Hall, Newport Pagnell, Buckinghamshire, MK16 9JJ

Overview

Theo Murphy international scientific meeting organised by Dr Anusha Mishra, Professor David Attwell FRS, Dr Zebulun Kurth-Nelson, Dr Catherine N. Hall and Dr Clare Howarth

Functional imaging reveals statistical patterns of coordination between brain areas. (Copyright: Crossley N.A. et al., PNAS 2013 110:11583-8.)

Cognitive neuroscientists use BOLD signals to non-invasively study brain activity, although the neurophysiological underpinnings of these signals are poorly understood. By bringing together scientists using BOLD/fMRI as a tool with those studying the underlying neurovascular coupling mechanisms, the aim of this meeting was to create a novel dialogue to understand how BOLD relates to brain activity and inform future neurovascular and cognitive research.

 

Using an achiasmic human visual system to quantify the relationship between the fMRI BOLD signal and neural response

 

Achiasma in humans causes gross mis-wiring of the retinal-fugal projection, resulting in overlapped cortical representations of left and right visual hemifields. We show that in areas V1-V3 this overlap is due to two co-located but non-interacting populations of neurons, each with a receptive field serving only one hemifield. Importantly, the two populations share the same local vascular control, resulting in a unique organization useful for quantifying the relationship between neural and fMRI BOLD responses without direct measurement of neural activity. Specifically, we can non-invasively double local neural responses by stimulating both neuronal populations with identical stimuli presented symmetrically across the vertical meridian to both visual hemifields, versus one population by stimulating in one hemifield. Measurements from a series of such doubling experiments show that the amplitude of BOLD response is proportional to approximately 0.5 power of the underlying neural response. Reanalyzing published data shows that this inferred relationship is general.

DOI: http://dx.doi.org/10.7554/eLife.09600.001

 

eLife digest

When a part of the brain becomes active, more oxygen-rich blood flows to it to keep its neurons supplied with energy. This flow of blood can be measured using a technique called functional magnetic resonance imaging (fMRI). Yet, it was not known exactly how the magnitude of the signal recorded from the oxygenated blood flow – dubbed the BOLD (blood oxygenation level dependent) signal – relates to the level of neural activity.

In most people, the brain area that processes fundamental visual information – called the visual cortex – receives signals from both eyes, sent via the optic nerves. The two eyes’ optic nerves are bridged together with a structure called the optic chiasm, which ensures that each side of the brain gets input from both eyes for one side of the visual field. However, in rare cases, a person may lack an optic chiasm, and instead each side of the brain processes information about both sides of the visual field seen by one eye. This condition is known as achiasma.

Bao et al. have now used fMRI and behavioral experiments to study the brain activity of a volunteer who lacks an optic chiasm. This revealed that each half of the visual field stimulates different neurons in the same brain hemisphere of an achiasmic visual cortex. The two sets of neurons do not interact with each other, but they do share the same local blood supply. Moreover, these sets of neurons are organized in such a way as to preserve normal vision, and can be controlled independently using visual stimulation.

If both sets of neurons are stimulated with the same visual input at the same time, they together trigger twice as much neural activity as when just one set is stimulated. This also causes an increased BOLD signal as more blood flows to that region of the brain. Bao et al. were therefore able to infer a mathematical relationship between neural activity and the BOLD signal. This revealed that the magnitude of the BOLD signal is proportional to the square root of the underlying neural activity. Reanalyzing previously published BOLD data from other fMRI studies of healthy humans and monkeys supports this conclusion.

Bao et al.’s study provides scientists with a human model for noninvasively studying the origins and neural underpinnings of fMRI measurements, which may change how we analyze and interpret brain-imaging results in the future. The biggest challenge that researchers will likely face is in recruiting individuals with this rare condition of achiasma.

DOI:http://dx.doi.org/10.7554/eLife.09600.002

 

Functional magnetic resonance imaging (fMRI) based on the blood oxygenation level dependent (BOLD) signal has provided unprecedented insights into the workings of the human brain. The quantitative relationship between neural signals and the fMRI BOLD response is not precisely known and remains an active area of investigation. Most studies using the BOLD signal to infer brain activity rely on analytical methods (e.g., the general linear model) that assume a linear relationship between the BOLD signal and neural response, despite noticeable deviations from linearity (Boynton et al., 1996).

The BOLD signal is indirectly related to local neural response through mechanisms associated with oxygen metabolism and blood flow (Davis et al., 1998; Hoge et al., 1999; Thompson et al., 2003; Griffeth and Buxton, 2011). The neural response that is associated with information processing is itself multi-faceted. It comprises several interacting components, including subthreshold and suprathreshold electrical activities, the transport, release and reuptake of neurotransmitters, and various maintenance activities. Each of these components has its own metabolic and hemodynamic consequences. The common extracellular measurements of neural response include single- and multi-unit spiking activities and local field potential (LFP). While seminal studies have demonstrated a close relationship between the BOLD signal and these extracellular measurements of neural response (Logothetis et al., 2001; Mukamel et al., 2005), the quantitative nature of this relationship has not been sufficiently characterized. More importantly, since the relationship between these extracellular measurements and the intracellular components of neural activity is complex, the measured relationship between the BOLD signal and any specific extracellular component (e.g., power in the gamma band of LFP) may not reflect the relationship between the BOLD signal and the totality of neural response.

Most applications of fMRI, particularly in human neuroscience, sidestep any need for explicitly estimating neural activity and instead rely on establishing a direct relationship between the BOLD response and the stimulus condition. The general approach is to assume the BOLD responses evoked at different times and in different stimulus conditions sum linearly. Boynton and colleagues (1996) studied how the BOLD signal varied with the contrast and duration of stimulus presentation in the striate cortex and found that the system is approximately linear, in the sense that the BOLD response evoked by a 12 s stimulus was well approximated by summing the responses from two consecutive 6-s stimulations, even though predictions based on stimulations of much shorter durations (e.g., 3 s) failed to accurately predict the long-duration stimulus response. While this and similar studies (Cohen, 1997; Dale and Buckner, 1997; Heckman et al., 2007) have clearly noted the lack of linearity, their general message of an approximately linear system has nevertheless been used to justify the broad application of the general linear model (GLM) in fMRI data analyses. While the neural response is not explicitly involved in this type of analysis, it is always in the background — any nonlinearity observed in the BOLD response, e.g., in surround suppression or adaptation (Grill-Spector and Malach, 2001; Kourtzi and Huberle, 2005; Larsson and Smith, 2012) is often attributed to the underlying nonlinear neural response. The implicit assumption in common practice is that the relationship between the BOLD response and the neural response is essentially linear, a view that is widespread (Logothetis and Wandell, 2004) but under-examined.
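The linear-system assumption that Boynton and colleagues tested can be sketched numerically: under a general linear model, the predicted BOLD time course is the stimulus boxcar convolved with a hemodynamic response function (HRF), so the prediction for a 12-s block equals the sum of the predictions for two consecutive 6-s blocks exactly. A sketch using a toy gamma-shaped HRF (the kernel shape and timings are illustrative assumptions, not the canonical HRF of any fMRI package):

```python
import numpy as np

def hrf(t):
    """Toy gamma-variate hemodynamic response function (shape is illustrative)."""
    return np.where(t >= 0, t**5 * np.exp(-t) / 120.0, 0.0)

dt = 0.1                      # seconds per sample
t = np.arange(0.0, 40.0, dt)  # 40 s of simulated scan time
kernel = hrf(t)

def predicted_bold(stimulus):
    """GLM-style prediction: stimulus boxcar convolved with the HRF."""
    return np.convolve(stimulus, kernel)[: len(stimulus)] * dt

# One 12-s stimulus vs. two consecutive 6-s stimuli.
stim_12s = ((t >= 0) & (t < 12)).astype(float)
stim_6s_a = ((t >= 0) & (t < 6)).astype(float)
stim_6s_b = ((t >= 6) & (t < 12)).astype(float)

b_12s = predicted_bold(stim_12s)
b_sum = predicted_bold(stim_6s_a) + predicted_bold(stim_6s_b)

# A strictly linear system predicts the two time courses are identical.
print("max difference:", np.max(np.abs(b_12s - b_sum)))
```

In this idealized model the two predictions agree to floating-point precision; the empirical finding that real 3-s responses fail to predict long-duration responses is precisely a departure from this idealization.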

An extensive set of biophysical models has been proposed to express either the steady states (Davis et al., 1998; Griffeth and Buxton, 2011) or the dynamics of the BOLD response (Buxton et al., 1998; Mandeville et al., 1999; Feng et al., 2001; Toronov et al., 2003; Blockley et al., 2009; Kim and Ress, 2016) in terms of more basic physiological components, such as blood flow, blood volume, oxygen saturation, and oxygen extraction fraction in different vascular compartments. These biophysical models are foundational in our understanding of the BOLD signal, yet they do not provide any explicit and quantitative linkage between the neural response and the physiological components that are the inputs to these models. Friston et al. (2000) (see also Stephan et al., 2007) proposed a linkage between the evoked neural response and the blood-flow parameter of the Balloon model by Buxton et al. (1998). While the resulting model is a powerful tool for inferring effective connectivity between brain regions from the BOLD signal, direct empirical support for this specific linkage is limited.

How could we empirically determine the quantitative relationship between the BOLD signal and the neural response, and do so when the constituents of the neural response are not comprehensively defined? A condition known as achiasma, or non-decussating retinal-fugal fibre syndrome, may provide an excellent model system for this purpose. This congenital condition prevents the normal crossing of optic nerve fibers from the nasal hemi-retina to the brain hemisphere contralateral to the eye (Apkarian et al., 1994; 1995). The result is a full representation of the entire visual field (as opposed to only half the visual field) in each cerebral hemisphere (Williams et al., 1994; Victor et al., 2000; Hoffmann et al., 2012; Davies-Thompson et al., 2013; Kaule et al., 2014). Specifically, the representations of the two visual hemifields are superimposed in the low-level visual areas (V1-V3) ipsilateral to each eye, such that two points in the visual field located symmetrically across the vertical meridian are mapped to the same point on the cortex (Hoffmann et al., 2012). In other words, there are two population receptive fields (pRFs) for every point on this person’s low-level visual cortex. The two pRFs are symmetrically located across the vertical meridian. Prior to the current study, it was not known whether these pRFs were represented by one or two neural populations, or whether these neural populations interacted.

In the current study, we found that the two pRFs are each represented by an independent population of neurons. The result is an in-vivo system with two independent populations of spatially intermingled neurons that share the same local control of blood vasculature. Because their population receptive fields (pRFs) do not overlap, an experimenter can independently stimulate each population by presenting a stimulus to its respective receptive field. Such a system is ideal for characterizing the relationship between neural and BOLD responses. Even though we may not know the constituents of the neural response, it will be reasonable to assume that the local neural response evoked by presenting identical stimuli to both pRFs, thereby activating both neuronal populations equally, is twice the neural response evoked by presenting the stimulus to just one of the pRFs. Measuring BOLD responses under these conditions allows us to not only directly test for linearity between the BOLD signal and neural response but also quantify the relationship between them, up to an arbitrary scaling factor. This approach does not require us to know the constituents of neural activity, and it is non-invasive.

To determine the relationship between neural response and the corresponding fMRI BOLD signal, we measured BOLD responses in the cortical areas V1-V3 of our achiasmic subject to luminance-defined stimuli. We presented stimuli of different contrasts to either one or both of the pRFs. From this data set, we used a model-free non-parametric method to infer the quantitative relationship between the BOLD signal (B) and neural response (Z). We found that the resulting B vs. Z function is well approximated by a power function with an exponent close to 0.5. The exponent stayed the same for short and long stimulus durations. We successfully cross-validated this result by comparing the inferred neural responses from this and twelve other fMRI studies to the single-unit responses obtained from non-human primates in similar contrast-response experiments.
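The doubling logic behind the inferred exponent can be made concrete: if B = kZ^γ, then doubling the neural response multiplies the BOLD amplitude by 2^γ, so γ = log2(B_double / B_single), with the unknown scaling factor k cancelling out. A small sketch with hypothetical amplitudes (the numbers are illustrative, not the study's measurements):

```python
import math

def gamma_from_doubling(b_single, b_double):
    """If B = k * Z**gamma, then B(2Z) / B(Z) = 2**gamma,
    so gamma = log2(B_double / B_single); k cancels."""
    return math.log2(b_double / b_single)

# Hypothetical BOLD amplitudes (% signal change) at one contrast level:
b_one_hemifield = 1.00    # single-sided stimulation drives neural response Z
b_both_hemifields = 1.41  # double-sided stimulation drives 2Z

gamma = gamma_from_doubling(b_one_hemifield, b_both_hemifields)
print(f"inferred exponent: {gamma:.2f}")  # close to the study's gamma of ~0.5
```

A strictly linear BOLD-neural relationship would double the BOLD amplitude (ratio 2, γ = 1); a ratio near √2, as in this hypothetical example, corresponds to the square-root relationship the study reports.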

 

Article

…..

Figure 4.fMRI BOLD signal as a function of neural response.

(A) Five pairs of BOLD response amplitudes evoked in V1-V3 with the single- and double-sided stimulations, each with two stimulus durations, 6-s (left column) and 1-s (right column). If the neural response to a single-sided stimulus is Zi, then the neural response to the corresponding double-sided stimulus will be 2Zi, given our empirical determinations of co-localization and independence of the neuronal populations in an achiasmic visual cortex. (B) The BOLD vs. neural response (BvZ) functions for V1-V3 as inferred by the stitching procedure for the two stimulus durations. The inferred functions can be well fitted with power-law functions (i.e., straight lines in log-log coordinates). These functions are nonlinear, with a log-log slope significantly shallower than unity (the background gray lines). (C) The exponents (γ) of the power-law fit of the BvZ functions for V1-V3. Error bars denote 95% CI. The red line indicates γ = 0.5. γ estimated from V2 and V3 (γ ~ 0.5) were not significantly different, while that obtained from V1 was biased upward, due to a violation of the co-localization assumption (see Discussion) required for inferring the BvZ function using the summation experiment. We thus inferred the (true) BvZ function of V1-V3 using the average γ estimated from V2 and V3 only.

DOI: http://dx.doi.org/10.7554/eLife.09600.011

…..

Comparing BOLD amplitude and spiking activity

Spike rate is one of the most common measures of neural response, and the BOLD response has been related to spike rate (Heeger et al., 2000; Heeger and Ress, 2002; Logothetis and Wandell, 2004). To cross-validate our finding and to make contact with the broader literature, we used the inferred BvZ function (with γ inferred from V2 and V3) to estimate the neural response Z from the BOLD amplitude data of the single-sided conditions in the BOLD summation experiment, which were typical contrast response measurements. The inferred neural activity in V1 for both the 6-s and 1-s stimuli matched extremely well with the average primate V1 contrast response function measured in terms of single-unit spiking activity by Albrecht (1995) (Figure 5A). Contrary to earlier reports based on the same single-unit data (Heeger et al., 2000), linearly scaling our BOLD amplitude data does not fit the single-unit spiking data. The nonlinearity in our data cannot be attributed to anticipatory and other endogenous responses that might be induced by the task structure (Sirotin and Das, 2009) (Figure 3—figure supplement 3). This is because our subject was engaged in a demanding central fixation task (orientation discrimination) that was asynchronous with the blocked contrast stimuli.
Figure 5.Comparisons between neural response inferred from the BvZ function (B = kZγ) and single-unit spiking activity.  http://dx.doi.org/10.7554/eLife.09600.014

…….

We found that the fMRI BOLD response amplitude is proportional to the local neural response raised to a power of about 0.5. We reached this conclusion by measuring, in the visual cortex of an achiasmic subject, fMRI BOLD amplitudes at five levels of neural activity and also at twice those levels. Our ability to double the local neural response relies on the presence of two co-localized but independent populations of neurons in the visual cortex of the achiasmic subject. The two neuronal populations are equally excitable, and each population has a distinct and non-overlapping population receptive field. We used fMRI retinotopy and localized stimulation to demonstrate co-localization and equal excitability. We used a sensitive contrast detection task and a long-duration fMRI adaptation task to demonstrate independence. Taken together, our results demonstrate that the achiasmic human visual cortex provides a versatile in vivo model for investigating the relationship between evoked neural response and the associated fMRI BOLD signal.

Read Full Post »

2016 – US and International Cardiology Conference Calendar

Reporter: Aviva Lev-Ari, PhD, RN

 

2016

EVENT TITLE ACRONYM DATE LOCATION
CATCH-UP Cardiac Assist Device Therapy Course Mar 18, 2016 to Mar 19, 2016 New York, N.Y.
3-D Echo Workshop at Echo 2016 ECHO Mar 21, 2016 to Mar 23, 2016 New York, N.Y.
Global Summit on Innovations in Interventions Apr 01, 2016
American College of Cardiology ACC Apr 02, 2016 to Apr 04, 2016 Chicago, Ill.
LINC Middle East 2016 Apr 07, 2016 to Apr 08, 2016 Dubai, United Arab Emirates
Echo Fiesta – Review Course for Adult Echo Apr 07, 2016 to Apr 10, 2016 San Antonio, Texas
Venous Symposium VS Apr 14, 2016 to Apr 16, 2016 New York, N.Y.
Korean Cardiology Related Societies Joint Scientific Congress Apr 15, 2016 to Apr 16, 2016 Daegu, Korea
National Interventional Council – Cardiological Society of India CSI-NIC Apr 15, 2016 to Apr 17, 2016 Hyderabad, India
Advanced Revascularization (ARCH) ARCH Apr 21, 2016 to Apr 23, 2016 St. Louis, Mo.
ICCA Stroke Apr 22, 2016 to Apr 23, 2016 Prague, Czech Republic
Charing Cross Interventional Symposium CX Apr 26, 2016 to Apr 29, 2016 London, U.K.
TCT Asia Pacific (TCTAP) TCTAP Apr 29, 2016 Seoul, South Korea
Heart Rhythm Society (HRS) Scientific Sessions HRS May 04, 2016 to May 07, 2016 San Francisco, Calif.
Society for Cardiovascular Angiography and Interventions (SCAI) SCAI May 04, 2016 to May 07, 2016 Orlando, Fla.
Global Embolization Symposium and Technologies (GEST) GEST May 05, 2016 to May 08, 2016 New York, N.Y.
AATS Aortic Symposium May 12, 2016 to May 13, 2016 New York, N.Y.
American Association for Thoracic Surgery (AATS) AATS May 14, 2016 to May 16, 2016 Baltimore, Md.
Euro PCR May 17, 2016 to May 20, 2016 Paris, France
Pacific Northwest Endovascular Conference (PNEC) May 25, 2016 Seattle, Wash.
Basics to Advanced Echocardiography in Nashville May 25, 2016 to May 28, 2016 Nashville, Tenn.
CardioAlex May 31, 2016 to Jun 03, 2016 Alexandria, Egypt
Interventional Cardiology Montreal Live Symposium Jun 01, 2016 to Jun 03, 2016 Montreal, Canada
New Cardiovascular Horizons NCVH Jun 01, 2016 to Jun 03, 2016 New Orleans, La.
World Heart Federation – World Congress of Cardiology WCC Jun 04, 2016 to Jun 07, 2016 Mexico City, Mexico
SOLACI Jun 08, 2016 to Jun 10, 2016 Rio De Janeiro, Brazil
CardioStim Jun 08, 2016 to Jun 11, 2016 Nice, France
Vascular Annual Meeting Jun 09, 2016 to Jun 11, 2016 National Harbor, Md.
American Society of Echocardiography ASE Jun 10, 2016 to Jun 13, 2016 Seattle, Wash.
American Society of Echocardiography (ASE) 2016 ASE Jun 11, 2016 to Jun 13, 2016 Washington State Convention Center, Seattle, Wash.
Society of Nuclear Medicine and Molecular Imaging SNMMI Jun 11, 2016 to Jun 15, 2016 San Diego, Calif.
Complex Coronary, Valvular and Vascular Cases (CCVVC) CCVVC Jun 14, 2016 to Jun 17, 2016 New York, N.Y.
Global Summit on Innovations in Interventions (GI2) GI2 Jun 15, 2016 to Jun 17, 2016 Rio De Janeiro, Brazil
Transcatheter Valve Therapies (TVT) TVT Jun 16, 2016 to Jun 18, 2016 Chicago, Ill.
European Endovascular and Interventional Cardiology Conference (EICC) EICC Jun 17, 2016 Athens, Greece
Catheter Interventions in Congenital, Structural and Valvular Heart Disease (CSI) CSI Jun 22, 2016 to Jun 25, 2016 Frankfurt, Germany
Society of Cardiovascular Computed Tomography (SCCT) SCCT Jun 23, 2016 to Jun 26, 2016 Orlando, Fla.
Complex Cardiovascular Catheter Therapeutics (C3) C3 Jun 28, 2016 to Jul 01, 2016 Orlando, Fla.
Chicago EndoVascular Conference Jul 18, 2016 to Jul 21, 2016 Chicago, Ill.
Complex Interventional Cardiovascular Therapy (CICT) CICT Jul 29, 2016 to Jul 30, 2016 San Francisco, Calif.
International Academy of Cardiology Scientific Sessions / World Congress on Heart Disease Jul 30, 2016 to Aug 01, 2016 Boston, Mass.
AHRA Jul 31, 2016 to Aug 03, 2016 Nashville, Tenn.
International Society for Applied Cardiovascular Biology (ISACB) Bi-annual Meeting ISACB Sep 07, 2016 to Sep 10, 2016 Banff, Alberta, Canada
World Molecular Imaging Congress WMIC Sep 07, 2016 to Sep 10, 2016 New York, N.Y.
Echo Cardio Bordeaux Sep 21, 2016 to Sep 23, 2016 Bordeaux, France
International Society of Hypertension Sep 24, 2016 to Sep 29, 2016 Seoul, South Korea
European Society for Vascular Surgery XXX Annual Meeting ESVS Sep 28, 2016 to Sep 30, 2016 Copenhagen, Denmark
Asian Federation of Cardiology AFCC Oct 14, 2016 to Oct 16, 2016 Myanmar
Transcatheter Cardiovascular Therapeutics (TCT) TCT Oct 29, 2016 to Nov 02, 2016 Washington, D.C.
British Society of Echocardiography Nov 11, 2016 to Nov 12, 2016 London, U.K.
American Heart Association AHA Nov 12, 2016 to Nov 16, 2016 New Orleans, La.
Radiological Society of North America (RSNA) RSNA Nov 27, 2016 to Dec 02, 2016 Chicago, Ill.
World Molecular Imaging Congress WMIC Sep 13, 2017 to Sep 16, 2017 Philadelphia, Pa.
World Congress of Internal Medicine Oct 18, 2018 to Oct 22, 2018 Cape Town, South Africa

SOURCE

http://www.dicardiology.com/events

Read Full Post »

President Carter’s Status

Author: Larry H. Bernstein, MD, FCAP

 

 

Most Experts Not Surprised by Carter’s Status 

But early response does not mean ‘cure’

http://www.medpagetoday.com/HematologyOncology/SkinCancer/55076

 

http://clf1.medpagetoday.com/media/images/55xxx/55076.jpg

by Charles Bankhead
Staff Writer, MedPage Today

 

Former President Jimmy Carter’s announcement that he is free of metastatic melanoma surprised many people, but not most melanoma specialists contacted by MedPage Today.

With the evolution of modern radiation therapy techniques and targeted drugs, more patients with metastatic melanoma achieve complete and partial remissions, including remission of small brain metastases like the ones identified during the evaluation and initial treatment of Carter. However, the experts — none of whom have direct knowledge of Carter’s treatment or medical records — cautioned that early remission offers no assurance that the former president is out of the woods.

“If I had a patient of my own with four small brain mets undergoing [stereotactic radiation therapy], I would tell them that I fully expected the radiation to take care of those four lesions,” said Vernon K. Sondak, MD, of Moffitt Cancer Center in Tampa. “The fact that President Carter reports that it has done just that is not a surprise to me at all.

“I would also tell my patient that the focused radiation only treats the known cancer in the brain, and that if other small areas of cancer are present, they will likely eventually grow large enough to need radiation or other treatment as well, and that periodic brain scans will be required to monitor for this possibility.”

Carter also is being treated with the immune checkpoint inhibitor pembrolizumab (Keytruda), which is known to stimulate immune cells that then migrate to tumor sites to eradicate the lesions, noted Anna Pavlick, DO, of NYU Langone Medical Center in New York City.

“Melanoma is no longer a death sentence, and we are really changing what happens to patients,” said Pavlick. “It really is amazing.”

Carter’s melanoma story began to emerge in early August when he had surgery to remove what was described as “a small mass” from his liver. Following the surgery, Carter announced that his doctors had discovered four small melanoma lesions in his brain, confirming a suspicion the specialists had shared with him at the time of the surgery.

Carter subsequently underwent focused radiation therapy to eradicate the brain lesions and initiated a 12-week course of treatment with pembrolizumab. The radiation therapy-targeted therapy combination was a logical option for Carter, given observations that the PD-1 inhibitor has synergy with radiation, noted Stergios Moschos, MD, of the University of North Carolina Lineberger Comprehensive Cancer Center at Chapel Hill.

“I have seen this in other patients with metastatic melanoma,” said Gary K. Schwartz, MD, of Columbia University Medical Center in New York City. “It is remarkable but absolutely possible within the realm of immunotherapy today.”

Although Carter’s announcement is undeniably good news, the optimism should be tempered by a long-term perspective, suggested Nagla Abdel Karim, MD, PhD, of the University of Cincinnati Medical Center.

“We do have similar stories; however, we would be careful to call it a ‘complete remission’ and ‘disease control’ and not a ‘cure,’ so far,” said Karim. “We would resume therapy and follow-up any autoimmune side effects. Most important is the quality of life, which he seems to enjoy, and we are very happy with that.”

Darrell S. Rigel, MD, also of NYU Langone Medical Center, represented the lone dissenter among specialists who responded to MedPage Today's request for comments.

“I’m happy for him, but it’s very unusual, especially in older men, who usually have a worse prognosis,” said Rigel. “He is on a new drug that may have a little more promise, but there is no definitive cure at this point.”

 

 

Read Full Post »

Tunable light sources

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Putting Tunable Light Sources to the Test

Common goals of spectroscopy applications such as studying the chemical or biological properties of a material often dictate the requirements of the measurement system’s lamp, power supply and monochromator.

JEFF ENG AND JOHN PARK, PH.D., NEWPORT CORP.   http://www.photonics.com/Article.aspx?AID=58302

Many common spectroscopic measurements require the coordinated operation of a detection instrument and light source, as well as data acquisition and processing. Integration of individual components can be challenging and various applications may have different requirements. Conventional lamp-based tunable light sources are a popular choice for applications requiring a measurement system with this degree of capability.

Many types of tunable light sources are available, with differences in individual component performance translating to the performance of the system as a whole. Tunable light sources are finding themselves to be an especially ideal system for one application in particular: quantum efficiency and spectral responsivity characterization of photonic sensors, such as solar cells.

Xenon and mercury xenon lamps, two examples of DC arc lamps.

http://www.photonics.com/images/Web/Articles/2016/2/10/Light_Lamps.jpg


The tunable light source’s (TLS) versatility as both a broadband and high-resolution monochromatic light source makes the unit suitable for a variety of applications, such as the study of wavelength-dependent chemical or biological properties or wavelength-induced physical changes of materials. These light sources can also be used in color analysis and reflectivity measurements of materials for quality purposes.

Among their unique attributes, the TLS can produce monochromatic light from the UV to near-infrared (NIR). Lamp-based TLSs feature two major components: a light source and a monochromator. Common lamps used in TLSs are the DC arc lamp and quartz tungsten halogen (QTH) lamp. While both of these lamps have a broad emission spectrum, arc and QTH lamps differ in the characteristic wavelength emissions or relatively smooth shape of their spectral output curves, respectively. A stable power supply for the lamp is a critical component since most applications require high light output power stability1.

Smooth spectral output vs. monochromator throughput

DC arc lamps are excellent sources of continuous wave, broadband light. They consist of two electrodes (an anode and a cathode) separated by a gas such as neon, argon, mercury or xenon. Light is generated by ionizing the gas between the electrodes. The bright broadband emission from this short arc between the anode and cathode makes these lamps high-intensity point sources, capable of being collimated with the proper lens configuration.

DC arc lamps also offer the advantages of long lifetime, superior monochromator throughput (particularly in the UV range) and a smaller divergence angle. They are particularly well-suited for fiber coupling applications2. (See Figure 1.)


Figure 1. A xenon arc lamp housed in an Oriel Research lamp housing. Photo courtesy of Newport Corp.

Xenon (Xe) arc lamps, in particular, have a relatively smooth emission curve in the UV to visible spectrum, with characteristic wavelengths emitted from 380 to 750 nm. However, strong xenon peaks are emitted between 750 and 1000 nm.

Their sunlike emission spectrum and about 5800 K color temperature make them a popular choice for solar simulation applications. (See Figure 2.)

Arc lamps can have the following specialty characteristics:

Ozone-free: Wavelength emissions below about 260 nm create toxic ozone. Unless an ozone-free lamp is used, an arc lamp should be operated outdoors or in a room with adequate ventilation to protect the user from the ozone created.

UV-enhanced: For applications requiring additional UV light intensity, UV-enhanced lamps should be used. These lamps provide the same visible to NIR performance of an arc lamp while providing high-intensity UV output due to changes in the material of the lamp’s glass envelope.

High-stability: High-stability arc lamps are made of a higher quality cathode than that typically used for arc lamp construction. As a result, no arc wander occurs, allowing the lamp to maintain consistent output intensity throughout its lifetime.


Figure 2. The spectral output of 3000-W Xe and 250-W QTH lamps used in Oriel’s Tunable Light Sources. Photo courtesy of Newport Corp.

QTH lamps produce light by heating a filament wire with an electric current. The hot filament wire is surrounded by a vacuum or inert gas to prevent oxidation. QTH lamps are not very efficient at converting electricity to light, but they offer very accurate color reproduction due to their continuous blackbody spectrum. These lamps are a popular alternative to arc lamps due to their higher output intensity stability and their lack of intense UV emission, of spectral emission lines in the output curve, and of toxic ozone production. These advantages over traditional DC arc lamps make QTH lamps preferable for radiometric and photometric applications as well as for excitation sources of visible to NIR light. QTH lamps are also easier to handle and install, and produce a smooth output spectrum. Selecting the most appropriate lamp type is a matter of deciding which performance criteria are most important.
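The spectral contrast between the two lamp families can be sketched with Planck's law: treating the Xe arc as a roughly 5800 K radiator and the QTH filament as a roughly 3200 K blackbody (both temperatures are nominal, illustrative assumptions, not measured lamp specifications), the ratio of spectral radiances suggests why a QTH lamp is comparatively weak in the UV:

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def planck(wavelength_m: float, temp_k: float) -> float:
    """Blackbody spectral radiance, W / (sr * m^3), from Planck's law."""
    prefactor = 2.0 * H * C**2 / wavelength_m**5
    exponent = H * C / (wavelength_m * K * temp_k)
    return prefactor / math.expm1(exponent)

# Assumed color temperatures: ~5800 K (Xe arc, sunlike) vs ~3200 K (QTH filament).
for wl_nm in (300, 550, 900):
    ratio = planck(wl_nm * 1e-9, 5800.0) / planck(wl_nm * 1e-9, 3200.0)
    print(f"{wl_nm} nm: 5800 K / 3200 K radiance ratio ~ {ratio:.0f}")
```

The ratio grows rapidly toward short wavelengths, consistent with the arc lamp's superior UV throughput noted above.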

Constant current vs. constant power

The power supply is a vital component for operating a DC arc or QTH lamp with minimum light ripple. The lamps are operated in either constant current or constant power mode and are used in applications such as radiometric measurements, where a stable light output is required for accurate measurement. Providing stable electrical power to the lamp is important since fluctuations in the wavelength and output intensity of the light source impact the accuracy of measurement.

There is very little difference in the short-term output stability when operating an arc lamp or QTH lamp in constant current or constant power mode. However, the differences appear as the lamp ages. For arc lamps, even with a stable power supply, deposits on the inside of the lamp envelope are visible as the electrodes degrade, which causes an unstable arc position, changing the electrical characteristics of the arc lamp. The distance between the cathode and anode of the arc lamp increases, raising the lamp’s operating voltage. For QTH lamps, deposits on the inside of the lamp envelope are visible as the lamp filament degrades, changing the electrical and spectral characteristics of the lamp.

In power mode, the lamp is operated at a constant power setting. Because the lamp’s voltage is determined by its own electrical characteristics, the supply raises or lowers the current to maintain the power at the same level. As the lamp ages, the radiant output decreases. However, lamp lifetime is prolonged.

In current mode, the lamp is operated at a constant current setting. Because the lamp’s voltage is determined by its own characteristics, the delivered power rises or falls while the current is held at the same level. As the lamp ages, the input power required for operation increases. This results in greater output power, which, to some extent, may help compensate for a darkening lamp envelope. However, the lamp’s lifetime is greatly reduced due to the increase in power.

Although power supplies are highly regulated, there are factors beyond the control of the power supply that may affect light output. Some of these factors include lamp aging, ambient temperature fluctuations and filament erosion. For applications in which high stability light output intensity is especially critical, optical feedback control of power supply is suggested in order to compensate for such factors3. (See Figure 3.)


Figure 3. Oriel’s OPS Series Power Supplies offer the option of operating a lamp in constant power, constant current or intensity operation modes. Photo courtesy of Newport Corp.

Diffraction gratings narrow the wavelength band

Monochromators use diffraction gratings to spatially isolate and select a narrow band of wavelengths from a wider wavelength emitting light source. They are a valuable piece of equipment because they can be used to create quasi-monochromatic light and also take high precision spectral measurements. A high precision stepper motor is typically used to select the desired wavelength and switch between diffraction gratings quickly, without sacrificing instrument performance.

Determining which slit width to use is based on the trade-off between light throughput and the resolution required for measurement. A larger slit width allows for more light throughput. However, more light throughput results in poorer resolution. When choosing a slit width at which to operate the monochromator, both the input and output ports must be set to the same slit width. (See Figure 4.) Focused light enters the monochromator through the entrance slit, and is redirected by the collimating mirror toward the grating. The grating directs the light toward the focusing mirror, which then redirects the chosen wavelength toward the exit slit. At the exit slits, quasi-monochromatic light is emitted4.
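The slit-width trade-off described above can be estimated with the usual first-order relation: spectral bandpass is approximately the slit width times the grating's reciprocal linear dispersion. A minimal sketch (the 6.4 nm/mm dispersion value is an assumed, illustrative figure, not the specification of any particular instrument):

```python
def bandpass_nm(slit_width_mm: float, rld_nm_per_mm: float) -> float:
    """First-order bandpass estimate: slit width x reciprocal linear dispersion."""
    return slit_width_mm * rld_nm_per_mm

# Hypothetical grating with a reciprocal linear dispersion of 6.4 nm/mm:
# wider slits pass more light but a broader band of wavelengths.
for slit_um in (50, 100, 200, 600):
    bp = bandpass_nm(slit_um / 1000.0, 6.4)
    print(f"{slit_um:4d} um slit -> ~{bp:.2f} nm bandpass")
```

Halving the slit width halves the bandpass (better resolution) at the cost of throughput, which is the trade-off discussed above.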


Figure 4. A fixed width slit being installed into an Oriel Cornerstone 130 monochromator. Photo courtesy of Newport Corp.

Measuring quantum efficiencies

Measuring a device’s quantum efficiency (QE) at each photon energy level over a range of wavelengths is an ideal task for a tunable light source. The QE of a photoelectric material for photons with energy below the band gap is zero. The QE value of a light-sensing device such as a solar cell indicates the amount of current that the cell will produce when irradiated by photons of a particular wavelength. The principle of QE measurement is to count the proportion of carriers extracted from the material’s valence band relative to the number of photons impinging on the surface. To do this, it is necessary to shine a calibrated, tunable light on the cell while simultaneously measuring the output current. The key to accurate measurement of the QE/internal photon-to-current efficiency is to know accurately how much scanning light is incident on the device under test and how much current is generated. Thus, measurement of light output with a NIST (National Institute of Standards and Technology) traceable calibrated detector is necessary prior to testing, since illumination at an absolute optical power is required.

External quantum efficiency (EQE) is the ratio of the number of charge carriers generated to the number of photons incident on a solar cell. Internal quantum efficiency (IQE) also considers the internal losses, that is, the losses associated with the photons absorbed by nonactive layers of the cell. By comparison, EQE is much more straightforward to measure, and it gives a direct parameter of how much output current will be contributed to the output circuit per incident photon at a given wavelength. IQE is a more in-depth parameter, taking into account the photoelectric efficiency of all composite layers of a material. In an IQE measurement, the losses from nonactive layers of the material are measured in order to calculate a net quantum efficiency, a much truer efficiency measurement.

Understanding the conversion efficiency as a function of the wavelength of light impinging on the cell makes QE measurement critical for materials research and solar cell design. With this data, the solar cell composition and topography can be modified to optimize conversion over the broadest possible range of wavelengths.

As a formula, it is given by IQE = EQE/(1 − R), where R is the reflectivity, direct and diffuse, of the solar cell. The IQE is an indication of the capacity of the active layers of the solar cell to make good use of the absorbed photons. It is always higher than the EQE, but should never exceed 100 percent, with the exception of multiple-exciton generation. Figure 5 illustrates how the tunable light source is used to illuminate the solar cell to perform an IQE measurement. The software controls all components of the measurement system, including the monochromator and data acquisition5.
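The photon-counting principle and the formula above can be combined in a short sketch: EQE is the measured current converted to electrons per second, divided by the incident optical power converted to photons per second, and IQE then follows from IQE = EQE/(1 − R). The numerical readings below are hypothetical, chosen only to illustrate the arithmetic:

```python
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
Q = 1.602176634e-19   # elementary charge, C

def eqe(current_a: float, power_w: float, wavelength_m: float) -> float:
    """External quantum efficiency: electrons out per photon in."""
    photons_per_s = power_w * wavelength_m / (H * C)   # P / (hc/lambda)
    electrons_per_s = current_a / Q
    return electrons_per_s / photons_per_s

def iqe(eqe_value: float, reflectivity: float) -> float:
    """Internal quantum efficiency via IQE = EQE / (1 - R)."""
    return eqe_value / (1.0 - reflectivity)

# Hypothetical reading: 1 mW of 550-nm light producing 0.35 mA, 8% reflectivity.
e = eqe(0.35e-3, 1e-3, 550e-9)
print(f"EQE = {e:.3f}, IQE = {iqe(e, 0.08):.3f}")
```

As expected, IQE comes out higher than EQE because the reflected fraction of photons is excluded from the denominator.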


Figure 5. A sample QE measurement system using the components of a tunable light source. Photo courtesy of Newport Corp.

To measure quantum efficiency in 10-nm wavelength steps, the slit size of the monochromator is typically hundreds of microns in width. The slit width is reduced by approximately half if 5-nm wavelength increments are desired. However, the output power of the monochromator is reduced by more than 50 percent if the slit width is halved. Lowering optical power impacts the QE measurement since a solar cell responds to this diminished optical power with low output current. This can result in a poor signal-to-noise ratio, making a QE measurement error more likely. The detection of low current requires very sensitive equipment with the ability to measure current down to the picoampere level. To make for an easier signal measurement, optical power is typically increased. A DC arc source is the better choice for QE measurements made in 5-nm increments or lower because the lamp’s small arc size results in better monochromator throughput. However, a QTH lamp is the better choice if light stability better than 0.1 percent is required, with the trade-off of not being able to measure in wavelength increments as fine as with an arc lamp.

Balance between optical power and resolution is an important consideration as it impacts the quality of the QE measurement. The selection of lamp type and monochromator specifications are important considerations for TLS design. To be considered a suitable component for the majority of spectroscopic applications, high-output power and stability, long lifetime of the lamp, and broadband spectral emission with high resolution capability are required for the TLS.

Meet the authors

John Park, new product development manager at Newport Corp., has designed and developed numerous spectroscopy instruments for the photonics industry for over 10 years. He holds two granted patents and is a graduate from University of California, Irvine, with a Ph.D. in electrical engineering; email: john.park@newport.com. Jeff Eng is a product specialist for Oriel Spectroscopy Products at Newport Corp. His work experience includes application support, business-to-business sales and marketing activity of photonic light sources and detectors. He is a graduate of Rutgers University; email: jeff.eng@newport.com.

References

1. Newport Corp., Oriel Instruments TLS datasheet. Tunable Xe arc lamp sources. http://assets.newport.com/webDocuments-EN/images/39191.pdf.

2. Newport Corp., Oriel Instruments handbook: The Book of Photon Tools, light source section.

3. Newport Corp., Oriel Instruments OPS datasheet. OPS-A series arc lamp power supplies. http://assets.newport.com/webDocuments-EN/images/OPS-A%20Series%20Power%20Supply%20Datasheet.pdf.

4. J. M. Lerner and A. Thevenon (1988). The Optics of Spectroscopy. Edison, N.J.: Optical Systems/Instruments SA Inc.

5. K. Emery (2005). Handbook of Photovoltaic Science and Engineering, eds. A. Luque and S. Hegedus. Chapter 16: Measurement and characterization of solar cells and modules. Hoboken, N.J.: John Wiley & Sons Ltd.

Read Full Post »

Focused Ultrasounds and Their Applications in Medicine

Reporter: Danut Dragoi, PhD

Any wave focused to a point within a material that supports its propagation produces heating effects that are useful in medical applications.

Doctors in Los Angeles applied this heating principle to acoustic waves. They use high-intensity focused ultrasound to kill certain cancer tumors in a procedure that allows the patient to go home the same day. Surgeons at the Keck Medical Center of the University of Southern California became the first doctors to use this procedure on a patient with the help of high-intensity focused ultrasound, or HIFU, and new robotic technology.

The principle of the focused wave is not new, but the technology to apply it is. In many places around the world, research on ultrasound applications is producing important results. Doctors in Europe have imported equipment to apply this technique. An excellent review and description of how the HIFU technique works is given here.

We need to highlight that the temperature rises steeply as the focal point inside the human body is approached, and at the focus nearly instantaneous protein destruction occurs. As remarked in the review paper mentioned in the previous link, the various methods of focusing ultrasound (US) waves have been another important issue.

The simplest and cheapest (often most accurate) method may be self-focusing, for instance with a spherically curved US source (transducer). A US transducer constructed according to this method has a beam focus fixed at the position determined by the geometrical specifications of the transducer. To compensate for its lack of versatility, a flat US transducer with an interchangeable acoustic lens system was devised. The acoustic lens enables variation of focusing properties such as focal length and focal geometry. However, a drawback of the lens system is that the US waves undergo sonic attenuation due to absorption by the lens.

Recently, a phased array US transducer technique was adopted for HIFU therapy. By sending temporally different sets of electronic signals to each specific transducer component, this technique enables beam steering and focusing, which can move a focal spot in virtually any direction within physically allowed ranges.
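The electronic focusing described above amounts to delaying each element's firing so that all wavefronts arrive at the chosen focal spot simultaneously. A minimal sketch for a hypothetical linear array (the element count, 2-mm pitch, and the 1540 m/s soft-tissue sound speed are illustrative assumptions, not parameters of any specific HIFU device):

```python
import math

SPEED_OF_SOUND = 1540.0  # m/s, a typical value for soft tissue

def focusing_delays(element_x_m, focus_x_m, focus_z_m):
    """Per-element firing delays so all wavefronts reach the focus together.

    Elements lie along the x-axis at depth z = 0; the farthest element
    fires first (zero delay), nearer elements are delayed accordingly.
    """
    dists = [math.hypot(x - focus_x_m, focus_z_m) for x in element_x_m]
    d_max = max(dists)
    return [(d_max - d) / SPEED_OF_SOUND for d in dists]

# Hypothetical 8-element array, 2-mm pitch, focused 50 mm deep on axis.
elements = [(i - 3.5) * 2e-3 for i in range(8)]
delays = focusing_delays(elements, 0.0, 50e-3)
for x, t in zip(elements, delays):
    print(f"x = {x*1e3:+5.1f} mm  delay = {t*1e9:7.1f} ns")
```

Shifting the target coordinates recomputes the delay pattern, which is exactly how a phased array steers the focal spot electronically without moving the transducer.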

HIFU clinical applications are listed here. Important clinical applications include:

  • prostate tumors: treated under ultrasonic guidance with several commercially available devices (Ablatherm®, Sonablate®);
  • uterine fibroids: treated with MRgHIFU procedures, available as ExAblate® (InSightec + GE), FDA-approved in 2004, with Philips CE approval in December 2009;
  • breast cancer: in clinical research;
  • bone tumors: in clinical research;
  • brain: small clinical studies, limited by the skull (bone) acoustic interface and the requirement of no motion;
  • liver: treated with Haifu® under ultrasonic guidance; MRgHIFU procedures in small clinical studies, limited by air and bone interfaces and by motion.

From a technological point of view, the most important element of a HIFU system is the piezoelectric transducer, which takes an alternating voltage of high frequency and converts the electrical energy into acoustic energy.

The physics of the generation of ultrasound is shown in the link here. The electronic circuits behind HIFU devices have been refined over a period of about two decades, and commercial devices are available today not only for research but also for private clinics around the world.

The precision of focusing the acoustic power into a small region of soft tissue depends on the working distance of the HIFU device as well as on highly accurate control of the image of the targeted area. Successes of this technology are reported here.

SOURCE

http://www.voanews.com/content/high-intensity-focused-ultrasound-used-to-kill-cancer-tumor/2459185.html

Read Full Post »

anti-neutrophil cytoplasmic antibodies (ANCA)-associated vasculitis (AAV)

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Positron Emission Tomography scanning in Anti-Neutrophil Cytoplasmic Antibodies-Associated Vasculitis

Kemna, Michael J. BSc; Vandergheynst, Frédéric MD; Vöö, Stefan MD, PhD; Blocklet, et al.

Tools for evaluation of disease activity in patients with anti-neutrophil cytoplasmic antibodies (ANCA)-associated vasculitis (AAV) include scoring clinical manifestations, determination of biochemical parameters of inflammation, and obtaining tissue biopsies. These tools, however, are sometimes inconclusive. 2-deoxy-2-[18F]-fluoro-D-glucose (FDG) positron emission tomography (PET) scans are commonly used to detect inflammatory or malignant lesions. Our objective is to explore the ability of PET scanning to assess the extent of disease activity in patients with AAV.

Consecutive PET scans made between December 2006 and March 2014 in Maastricht (MUMC) and between July 2008 and June 2013 in Brussels (EUH) to assess disease activity in patients with AAV were retrospectively included. Scans were re-examined and quantitatively scored using maximum standard uptake values (SUVmax). PET findings were compared with C-reactive protein (CRP) and ANCA positivity at the time of scanning.
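For context, the body-weight-normalized SUV used in such scoring is the measured tissue activity concentration divided by the injected dose per gram of body weight. A minimal sketch (decay correction and other normalizations are omitted, and all numbers are hypothetical, not taken from this study):

```python
def suv(tissue_activity_bq_per_ml: float,
        injected_dose_bq: float,
        body_weight_kg: float) -> float:
    """Body-weight-normalized standardized uptake value.

    SUV = tissue activity concentration / (injected dose / body weight).
    Decay correction to scan time is omitted in this sketch.
    """
    dose_per_gram = injected_dose_bq / (body_weight_kg * 1000.0)
    return tissue_activity_bq_per_ml / dose_per_gram

# Hypothetical lesion: 18 kBq/mL in a 75-kg patient injected with 370 MBq FDG.
print(f"SUV = {suv(18e3, 370e6, 75.0):.2f}")
```

An SUV of 1 would mean the tracer is distributed uniformly throughout the body; SUVmax is simply the highest voxel SUV within the lesion.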

Forty-four scans were performed in 33 patients during a period of suspected active disease. All but 2 scans showed PET-positive sites, most commonly the nasopharynx (n = 22) and the lung (n = 22). Forty-one clinically occult lesions were found, including lesions in the thyroid gland (n = 4 patients), aorta (n = 8), and bone marrow (n = 7). The number of hotspots, but not the highest observed SUVmax value, was higher if CRP levels were elevated. Seventeen follow-up scans were made in 13 patients and showed decreased SUVmax values.

FDG PET scans in AAV patients with active disease show positive findings in multiple sites of the body even when biochemical parameters are inconclusive, including sites clinically unsuspected and difficult to assess otherwise.

 

Granulomatosis with polyangiitis (GPA; Wegener’s) is an inflammatory disease entity affecting small to medium vessels. It is, together with microscopic polyangiitis (MPA) and eosinophilic granulomatosis with polyangiitis (EGPA; Churg Strauss Syndrome), characterized by the presence of anti-neutrophil cytoplasmic antibodies (ANCA) and they are frequently grouped together under the term ANCA-associated vasculitis (AAV).1

Early diagnosis and assessment of the extent of disease activity are important for adequate therapeutic decisions.1 Multiple tools may be helpful, such as biochemical parameters of inflammation, imaging techniques, and tissue biopsies. Even though these tools suffice to diagnose active disease in most episodes, the results can sometimes be inconclusive. In particular, it is sometimes problematic to determine whether symptoms are due to active disease, vasculitic damage, and/or treatment-related side-effects.

2-deoxy-2-[18F]-fluoro-D-glucose (FDG) positron emission tomography (PET) scanning is used for detecting high glucose metabolism in malignancies, infectious, and auto-immune diseases.2–4 Co-registration with computed tomography (CT) allows the increased FDG uptake to be localized to the underlying anatomy. PET scanning has been proven to be a useful diagnostic tool in large vessel vasculitis.5–8 PET scanning can visualize glucose-consuming inflamed vessels, provided that their diameter is >4 mm. The limited spatial resolution was previously thought to be insufficient to detect the involvement of small- and medium-size vessels.6,7 Recent studies, however, have shown that PET scans show abnormalities in patients with ANCA-associated vasculitis.9–11 This novel imaging technique may therefore be a useful tool for diagnosing active disease and, in addition, to assess the severity and the extent of the disease. The latter may be relevant to detect occult diagnostic biopsy sites as previously demonstrated in sarcoidosis.12

The objective of our study is to explore the ability of PET scanning to assess the extent of disease activity in patients with AAV.

 

Study Population

Consecutive PET scans were performed in patients with AAV at Maastricht University Medical Center (MUMC) between December 2006 and March 2014 and at Erasme University Hospital (EUH) in Brussels between July 2008 and June 2013 and were retrospectively included. All patients fulfilled a diagnosis of GPA according to the 2012 revised International Chapel Hill Consensus Conference Nomenclature.13 Patients were previously treated according to the recommendations of the European League Against Rheumatism (EULAR).14 Disease states were defined according to the EULAR recommendations.15 A PET scan was performed in patients with clinically suspected disease activity (diagnosis or relapse), whereas other tools for evaluation of activity were inconclusive. The possibility of an active bacterial or viral infection was excluded by culture, serology, and persistence of symptoms despite empirical antibiotic treatment. This study was carried out in compliance with the Helsinki Declaration.

Diagnostic Parameters

An extensive diagnostic work-up was done in all cases, including analysis of clinical features, laboratory assessment, imaging techniques, and, if appropriate, a biopsy. Laboratory assessment included high-sensitivity C-reactive protein (CRP, cutoff value ≥10 ng/mL) levels, ANCA levels, and urine analysis at the time of scanning. Hematuria was defined as ≥10 erythrocytes in a urinary sediment, combined with dysmorphic erythrocytes and/or red blood cell casts. In Maastricht, ANCA levels were determined using the Fluorescent-Enzyme Immuno-Assay (FEIA) method.16 FEIA detection for both proteinase-3 (PR3) and myeloperoxidase (MPO) antibodies were fully automated as performed in a UniCAP 100 (Pharmacia Diagnostics). Values ≥10 AU were considered positive.

 

A whole-body [18F]-FDG-PET/CT scan was performed in both centers. In Maastricht, a Gemini PET-CT (Philips Medical Systems) scanner with time-of-flight (TOF) capability was used, together with a 64-slice Brilliance CT scanner. This scanner has a transverse and axial Field of View (FOV) of 57.6 and 18 cm, respectively. The spatial resolution is around 5 mm. In Brussels, a Gemini PET-CT (Philips Medical Systems) scanner was used without TOF capability, but with the same PET FOV and spatial resolution, together with a 16-slice Brilliance CT scanner.

 

 

Patient Characteristics

Thirty-three patients were included; an overview of the patient characteristics is shown in Table 1. Twenty patients were positive for PR3-ANCA at diagnosis, 9 patients for MPO-ANCA, and 4 patients were ANCA-negative.

Table 1

Forty-four PET scans were made during an episode of suspected disease activity (Table 2). Eleven scans were performed at diagnosis and 33 scans at a suspected relapse. The suspected relapses occurred after a median of 68 (30–113) months since diagnosis. In 5 patients, ≥2 consecutive episodes occurred during which a PET scan was performed. These patients were in remission between episodes.

Table 2

Results of PET Scans During Suspected Disease Activity

All PET scans during an episode of suspected disease activity except 2 revealed enhanced non-physiological FDG uptake. Table 3 shows the anatomic location of the positive sites and the corresponding median SUVmax values. The majority of these sites disclosed an SUVmax value between 2.5 and 6. Examples of PET/CT images of patients with AAV are shown in Figures 1 and 2.

Table 3

In our study, PET scans in AAV patients revealed positive findings in multiple sites of the body, including sites not clinically suspected and difficult to assess otherwise. PET scans may show FDG-positive findings during episodes in which other tools for evaluation of disease activity are inconclusive.

Similar to our findings using Gallium-67 [67Ga] scintigraphy17 in patients with GPA, PET scans seem to be a sensitive tool to assess disease activity. In our current study, all but 2 scans showed non-physiological FDG uptake during an episode of clinically suspected disease activity. Compared with gallium scanning, however, PET scanning offers additional information. First, Gallium scintigraphy suffers from practical limitations, such as the required interval between time of injection of the radiopharmaceuticals and time of scanning (48–72 hours) and the high radiation exposure. Second, the spatial resolution is higher in PET scans. Third, a low-dose CT scan may be used concomitantly to correlate the FDG uptake with the precise anatomical location. In sarcoidosis, PET scans are of value in detecting occult diagnostic biopsy sites.12 In our cohort, 41 clinically occult sites were found on the PET scan, and in 1 patient this resulted in a diagnostic biopsy.9

Whether hotspots on the PET scan can be attributed to activity of vasculitis is sometimes difficult to assess. A biopsy of PET-positive lesions would result in a definitive diagnosis. However, such a strategy is not realistic, as it does not correspond to routine clinical practice and was not performed in the current study. As we observed a favorable outcome after intensifying immunosuppressive treatment, we hypothesize that these patients indeed had active disease at the time of scanning. It is important to note that PET scans do not differentiate active vasculitis from infections, as observed in 2 of our patients with PET-positive findings due to an underlying fungal infection. In one of these patients, a biopsy of a clinically occult lesion led to the discovery of cryptococcal myositis and masquerading vasculitis.18 The differentiation between infections and ANCA-mediated disease activity remains an area of uncertainty, especially because there is strong evidence that infections may be an important trigger in the multifactorial etiology of ANCA-associated vasculitis.19 In the future, more sensitive diagnostic modalities, such as the combination of PET scanning with magnetic resonance imaging (PET/MRI), may identify the infectious foci, which started the cascade leading to the (re)activation of vasculitis.

Most importantly, PET scans revealed abnormalities during episodes of active disease in which ANCA were sometimes not detected and CRP levels not increased. However, more hotspots were observed if the CRP levels were elevated. In contrast, the highest observed SUVmax values were not related to CRP levels. These findings suggest that the disease may be more extensive, but not more severe, if biochemical parameters of inflammation are increased.

Read Full Post »
