
Effects of Physical Fields and Their Gradients on Living Cells

Author: Danut Dragoi, PhD

All physical fields and their gradients, whether electric, magnetic, or gravitational (see link here), influence living cells. A special field, the contact field associated with the chemical potential, which is the workhorse of the pharmaceutical industry, has a fast and efficient influence on living cells. While the first category of physical fields acts on its target from a distance, the second is invasive, since it assumes ideal contact with the target cells. Each field acting on living cells has a characteristic response time, which is longer the weaker the interaction. The largest effects are produced by the field gradients, whose associated mechanical forces act on micro-objects and organelles irrespective of their electrical or magnetization states, the chemical potential being the exception.
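
For reference, these gradient forces can be written compactly (a standard electrodynamics summary added here for clarity, not part of the original post). A perfectly uniform field exerts no net translational force on a neutral dipole; it is the gradient that does mechanical work on organelle-scale objects:

```latex
% Forces on a small object in external fields (standard textbook expressions):
% an ion of charge q, an electric dipole p, and a magnetic moment m.
\mathbf{F}_{\mathrm{ion}} = q\,\mathbf{E}, \qquad
\mathbf{F}_{\mathrm{el}}  = (\mathbf{p}\cdot\nabla)\,\mathbf{E}, \qquad
\mathbf{F}_{\mathrm{mag}} = \nabla\!\left(\mathbf{m}\cdot\mathbf{B}\right)
```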

The gradient force of weakly interacting fields is most effective when the micro-object in the living cell is an ion such as Na+, K+, or Ca++, or a small molecule. The high-molecular-weight components of living cells that bear an electrical charge are most likely less influenced by physical fields and their gradients because of their large mass. However, some surface-charge interaction with the fields should be considered, because surface states are created and macromolecules slightly modify their functionality in the human body. Based on the principles of quantum mechanics, the motion of ions and nutrients within a submicroscopic volume, such as a cylinder or sphere that can define an organelle or ribosome, can be quantized. To date, no in-vivo studies of applied quantum mechanics have been devised, mainly for lack of methodology and technology.
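
To give a sense of scale for this quantization argument, here is a toy estimate (an illustration of mine, not a result from the post) using the textbook particle-in-a-box formula E_n = n²h²/(8mL²); the choice of ion (Na+) and box length (10 nm) are assumptions picked to match the nanometer sizes tabulated below:

```python
# Toy estimate: particle-in-a-box levels E_n = n^2 h^2 / (8 m L^2) for an ion
# confined to an organelle-scale compartment. The ion (Na+, ~23 u) and the
# 10 nm box length are illustrative assumptions, not values from the post.
H = 6.626e-34    # Planck constant, J*s
EV = 1.602e-19   # joules per electron-volt
AMU = 1.661e-27  # atomic mass unit, kg

def box_levels_ev(mass_amu, length_m, n_max=3):
    """First n_max one-dimensional particle-in-a-box energies, in eV."""
    e1 = H**2 / (8.0 * mass_amu * AMU * length_m**2)  # ground state, joules
    return [n**2 * e1 / EV for n in range(1, n_max + 1)]

for n, e in enumerate(box_levels_ev(23.0, 10e-9), start=1):
    print(f"n = {n}: E = {e:.2e} eV")
```

The resulting level spacings, of order 10⁻⁷ eV, sit far below thermal energy at body temperature (about 0.027 eV), which is one way to see why such quantization would be very hard to resolve in vivo.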

The table below shows bio-objects (see link here, at slide #30) whose sizes, in the nanometer range, make them prone to quantization.

How big is a cell

Image source: http://www.slideshare.net/BiologyIB/cells-powerpoint

It is suggested that the moving electrical charges in living cells can be influenced by electric, magnetic, and electromagnetic (time-dependent) fields. For example, a weak interaction between a magnetic field and red blood cells, which contain magnetic iron ions, Fe++ (the ferrous ion, the form that carries oxygen in hemoglobin; see link here) or Fe+++ (the ferric ion, as in methemoglobin, which cannot carry oxygen; see link here), implies a long interaction time, so the magnetic field would have to be constant or periodic with a very long period. A way to increase the interaction of a magnetic field with living cells is to make it time-varying. In that case the interaction time is short when the frequency of the field is high. When the frequency is high enough, however, an electric field is induced and the effect is no longer purely magnetic.
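
To make the frequency argument concrete, here is a back-of-the-envelope sketch (my numbers, not the post's): by Faraday's law, a sinusoidal field B(t) = B0 sin(2πft) induces, at radius r from the field axis, a peak electric field E_peak = πfB0r, so the induced electric effect grows linearly with frequency and vanishes for a static field:

```python
# Faraday-law estimate of the electric field induced by a time-varying,
# spatially uniform magnetic field: E_peak = pi * f * B0 * r at radius r.
# The 1 T amplitude and 5 um radius (a ~10 um cell) are assumed for illustration.
import math

def induced_e_peak(b0_tesla, freq_hz, radius_m):
    """Peak induced electric field at radius r, in V/m."""
    return math.pi * freq_hz * b0_tesla * radius_m

for f in (0.0, 1e3, 1e6):
    print(f"f = {f:9.0e} Hz -> E_peak ~ {induced_e_peak(1.0, f, 5e-6):.2e} V/m")
```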

An application of variable magnetic fields has been found in TMS (transcranial magnetic stimulation); see link here. The in-vivo application of magnetic fields suggests a weak effect on small moving organelles and on the electrical charges that are usually part of chemical bonds.

Regarding the chirality of the living molecules in the human body, the spinorial effects induced by a magnetic field, even one of low intensity, should be answered through quantum mechanics, in which a wave function associated with a bio-micro-object such as DNA has two components, spin up and spin down. Since human embryonic cells evolve in the Earth's constant magnetic field, with a relatively long interaction time, the cells select the right spin direction according to the lower Gibbs free energy; see link here.

SOURCES

Thermodynamic Modeling for Cancer Cells

http://pharmaceuticalintelligence.com/2016/02/23/a-thermodynamic-modeling-for-cancer-cells/

http://www.hindawi.com/journals/bmri/2013/598461/

http://oregonstate.edu/instruct/bb450/fall12/highlightsecampus/highlightshemoglobin.html

http://health.usnews.com/health-news/patient-advice/articles/2014/12/15/transcranial-magnetic-stimulation-what-is-it-and-who-needs-it

Conduction, graphene, elements and light

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

New 2D material could upstage graphene   Mar 25, 2016

Can function as a conductor or semiconductor, is extremely stable, and uses light, inexpensive, earth-abundant elements
http://www.kurzweilai.net/new-2d-material-could-upstage-graphene
The atoms in the new structure are arranged in a hexagonal pattern as in graphene, but that is where the similarity ends. The three elements forming the new material all have different sizes; the bonds connecting the atoms are also different. As a result, the sides of the hexagons formed by these atoms are unequal, unlike in graphene. (credit: Madhu Menon)

A new one-atom-thick flat material made up of silicon, boron, and nitrogen can function as a conductor or semiconductor (unlike graphene) and could upstage graphene and advance digital technology, say scientists at the University of Kentucky, Daimler in Germany, and the Institute for Electronic Structure and Laser (IESL) in Greece.

Reported in Physical Review B, Rapid Communications, the new Si2BN material was discovered in theory (it has not yet been made in the lab). It uses light, inexpensive, earth-abundant elements and is extremely stable, a property many other graphene alternatives lack, says University of Kentucky Center for Computational Sciences physicist Madhu Menon, PhD.

Limitations of other 2D semiconducting materials

A search for new 2D semiconducting materials has led researchers to a new class of three-layer materials called transition-metal dichalcogenides (TMDCs). TMDCs are mostly semiconductors and can be made into digital processors with greater efficiency than anything possible with silicon. However, these are much bulkier than graphene and made of materials that are not necessarily earth-abundant and inexpensive.

Other graphene-like materials have been proposed but lack the strengths of the new material. Silicene, for example, does not have a flat surface and eventually forms a 3D structure. Other proposed materials are highly unstable, some remaining stable for only a few hours at most.

The new Si2BN material is metallic, but by attaching other elements on top of the silicon atoms, its band gap can be changed (from conductor to semiconductor, for example) — a key advantage over graphene for electronics applications and solar-energy conversion.

The presence of silicon also suggests possible seamless integration with current silicon-based technology, allowing the industry to slowly move away from silicon, rather than precipitously, notes Menon.

https://youtu.be/lKc_PbTD5go

Abstract of Prediction of a new graphenelike Si2BN solid

While the possibility to create a single-atom-thick two-dimensional layer from any material remains, only a few such structures have been obtained other than graphene and a monolayer of boron nitride. Here, based upon ab initio theoretical simulations, we propose a new stable graphenelike single-atomic-layer Si2BN structure that has all of its atoms with sp2 bonding with no out-of-plane buckling. The structure is found to be metallic with a finite density of states at the Fermi level. This structure can be rolled into nanotubes in a manner similar to graphene. Combining first- and second-row elements in the Periodic Table to form a one-atom-thick material that is also flat opens up the possibility for studying new physics beyond graphene. The presence of Si will make the surface more reactive and therefore a promising candidate for hydrogen storage.

 

Nano-enhanced textiles clean themselves with light

Catalytic uses for industrial-scale chemical processes in agrochemicals, pharmaceuticals, and natural products also seen
http://www.kurzweilai.net/nano-enhanced-textiles-clean-themselves-with-light
Close-up of nanostructures grown on cotton textiles. Image magnified 150,000 times. (credit: RMIT University)

Researchers at RMIT University in Australia have developed a cheap, efficient way to grow special copper- and silver-based nanostructures on textiles that can degrade organic matter when exposed to light.

Don’t throw out your washing machine yet, but the work paves the way toward nano-enhanced textiles that can spontaneously clean themselves of stains and grime simply by being put under a light or worn out in the sun.

The nanostructures absorb visible light (via localized surface plasmon resonance — collective electron-charge oscillations in metallic nanoparticles that are excited by light), generating high-energy (“hot”) electrons that cause the nanostructures to act as catalysts for chemical reactions that degrade organic matter.

Steps involved in fabricating copper- and silver-based cotton fabrics: 1. Sensitize the fabric with tin. 2. Form palladium seeds that act as nucleation (clustering) sites. 3. Grow metallic copper and silver nanoparticles on the surface of the cotton fabric. (credit: Samuel R. Anderson et al./Advanced Materials Interfaces)

The challenge for researchers has been to bring the concept out of the lab by working out how to build these nanostructures on an industrial scale and permanently attach them to textiles. The RMIT team’s novel approach was to grow the nanostructures directly onto the textiles by dipping them into specific solutions, resulting in development of stable nanostructures within 30 minutes.

When exposed to light, it took less than six minutes for some of the nano-enhanced textiles to spontaneously clean themselves.

The research was described in the journal Advanced Materials Interfaces.

Scaling up to industrial levels

Rajesh Ramanathan, an RMIT postdoctoral fellow and co-senior author, said the process also had a variety of applications for catalysis-based industries such as agrochemicals, pharmaceuticals, and natural products, and could be easily scaled up to industrial levels. “The advantage of textiles is they already have a 3D structure, so they are great at absorbing light, which in turn speeds up the process of degrading organic matter,” he said.

Cotton textile fabric with copper-based nanostructures. The image is magnified 200 times. (credit: RMIT University)

“Our next step will be to test our nano-enhanced textiles with organic compounds that could be more relevant to consumers, to see how quickly they can handle common stains like tomato sauce or wine,” Ramanathan said.

“There’s more work to do before we can start throwing out our washing machines, but this advance lays a strong foundation for the future development of fully self-cleaning textiles.”


Abstract of Robust Nanostructured Silver and Copper Fabrics with Localized Surface Plasmon Resonance Property for Effective Visible Light Induced Reductive Catalysis

Inspired by high porosity, absorbency, wettability, and hierarchical ordering on the micrometer and nanometer scale of cotton fabrics, a facile strategy is developed to coat visible light active metal nanostructures of copper and silver on cotton fabric substrates. The fabrication of nanostructured Ag and Cu onto interwoven threads of a cotton fabric by electroless deposition creates metal nanostructures that show a localized surface plasmon resonance (LSPR) effect. The micro/nanoscale hierarchical ordering of the cotton fabrics allows access to catalytically active sites to participate in heterogeneous catalysis with high efficiency. The ability of metals to absorb visible light through LSPR further enhances the catalytic reaction rates under photoexcitation conditions. Understanding the modes of electron transfer during visible light illumination in Ag@Cotton and Cu@Cotton through electrochemical measurements provides mechanistic evidence on the influence of light in promoting electron transfer during heterogeneous catalysis for the first time. The outcomes presented in this work will be helpful in designing new multifunctional fabrics with the ability to absorb visible light and thereby enhance light-activated catalytic processes.

 

New type of molecular tag makes MRI 10,000 times more sensitive

Could detect biochemical processes in opaque tissue without requiring PET radiation or CT x-rays
http://www.kurzweilai.net/new-type-of-molecular-tag-makes-mri-10000-times-more-sensitive

Duke scientists have discovered a new class of inexpensive, long-lived molecular tags that enhance MRI signals by 10,000 times. To activate the tags, the researchers mix them with a newly developed catalyst (center) and a special form of hydrogen (gray), converting them into long-lived magnetic resonance “lightbulbs” that might be used to track disease metabolism in real time. (credit: Thomas Theis, Duke University)

Duke University researchers have discovered a new form of MRI that’s 10,000 times more sensitive and could record actual biochemical reactions, such as those involved in cancer and heart disease, and in real time.

Let’s review how MRI (magnetic resonance imaging) works: MRI takes advantage of a property called spin, which makes the nuclei in hydrogen atoms act like tiny magnets. By generating a strong magnetic field (such as 3 Tesla) and a series of radio-frequency waves, MRI induces these hydrogen magnets in atoms to broadcast their locations. Since most of the hydrogen atoms in the body are bound up in water, the technique is used in clinical settings to create detailed images of soft tissues like organs (such as the brain), blood vessels, and tumors inside the body.


MRI’s ability to track chemical transformations in the body has been limited by the low sensitivity of the technique. That makes it impossible to detect small numbers of molecules (without using unattainably more massive magnetic fields).
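
Some illustrative arithmetic (mine, not the article's) behind both the radio-frequency step just described and the sensitivity problem: spins precess at the Larmor frequency f = γB/2π, and at thermal equilibrium only a tiny net fraction of them, the polarization P = tanh(γħB/2kT), contributes signal; a 10,000-fold enhancement lifts that fraction from roughly one part in 100,000 to around ten percent:

```python
# Illustrative MRI numbers (my arithmetic, not from the article).
import math

HBAR = 1.0546e-34    # reduced Planck constant, J*s
K_B = 1.3807e-23     # Boltzmann constant, J/K
GAMMA_1H = 2.675e8   # 1H gyromagnetic ratio, rad/s/T
GAMMA_15N = 2.713e7  # 15N gyromagnetic ratio (magnitude), rad/s/T

def larmor_mhz(gamma, b_tesla):
    """Precession frequency the RF pulses must match, in MHz."""
    return gamma * b_tesla / (2.0 * math.pi) / 1e6

def thermal_polarization(gamma, b_tesla, temp_k):
    """Equilibrium spin-1/2 polarization, P = tanh(gamma*hbar*B / (2*k*T))."""
    return math.tanh(gamma * HBAR * b_tesla / (2.0 * K_B * temp_k))

b = 3.0  # a typical clinical field strength, in tesla
print(f"1H Larmor frequency at {b} T:  {larmor_mhz(GAMMA_1H, b):.1f} MHz")
print(f"15N Larmor frequency at {b} T: {larmor_mhz(GAMMA_15N, b):.1f} MHz")
p = thermal_polarization(GAMMA_1H, b, 300.0)
print(f"Thermal 1H polarization at {b} T, 300 K: {p:.1e}")  # ~1e-5
print(f"After a 10,000x hyperpolarization boost: ~{1e4 * p:.0%}")
```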

So to take MRI a giant step further in sensitivity, the Duke researchers created a new class of molecular “tags” that can track disease metabolism in real time and last for more than an hour, using a technique called hyperpolarization.* These tags are biocompatible and inexpensive to produce, and they can be used in existing MRI machines.

“This represents a completely new class of molecules that doesn’t look anything at all like what people thought could be made into MRI tags,” said Warren S. Warren, James B. Duke Professor and Chair of Physics at Duke, and senior author on the study. “We envision it could provide a whole new way to use MRI to learn about the biochemistry of disease.”

Sensitive tissue detection without radiation

The new molecular tags open up a new world for medicine and research by making it possible to detect what’s happening in optically opaque tissue without requiring expensive positron emission tomography (PET), which uses a radioactive tracer chemical to look at organs in the body and typically works for only about 20 minutes, or CT X-rays, according to the researchers.

This research was reported in the March 25 issue of Science Advances. It was supported by the National Science Foundation, the National Institutes of Health, the Department of Defense Congressionally Directed Medical Research Programs Breast Cancer grant, the Pratt School of Engineering Research Innovation Seed Fund, the Burroughs Wellcome Fellowship, and the Donors of the American Chemical Society Petroleum Research Fund.

* For the past decade, researchers have been developing methods to “hyperpolarize” biologically important molecules. “Hyperpolarization gives them 10,000 times more signal than they would normally have if they had just been magnetized in an ordinary magnetic field,” Warren said. But while promising, Warren says these hyperpolarization techniques face two fundamental problems: incredibly expensive equipment — around 3 million dollars for one machine — and most of these molecular “lightbulbs” burn out in a matter of seconds.

“It’s hard to take an image with an agent that is only visible for seconds, and there are a lot of biological processes you could never hope to see,” said Warren. “We wanted to try to figure out what molecules could give extremely long-lived signals so that you could look at slower processes.”

So the researchers synthesized a series of molecules containing diazirines — a chemical structure composed of two nitrogen atoms bound together in a ring. Diazirines were a promising target for screening because their geometry traps hyperpolarization in a “hidden state” where it cannot relax quickly. Using a simple and inexpensive approach to hyperpolarization called SABRE-SHEATH, in which the molecular tags are mixed with a spin-polarized form of hydrogen and a catalyst, the researchers were able to rapidly hyperpolarize one of the diazirine-containing molecules, greatly enhancing its magnetic resonance signals for over an hour.

The scientists believe their SABRE-SHEATH catalyst could be used to hyperpolarize a wide variety of chemical structures at a fraction of the cost of other methods.


Abstract of Direct and cost-efficient hyperpolarization of long-lived nuclear spin states on universal 15N2-diazirine molecular tags

Conventional magnetic resonance (MR) faces serious sensitivity limitations, which can be overcome by hyperpolarization methods, but the most common method (dynamic nuclear polarization) is complex and expensive, and applications are limited by short spin lifetimes (typically seconds) of biologically relevant molecules. We use a recently developed method, SABRE-SHEATH, to directly hyperpolarize 15N2 magnetization and long-lived 15N2 singlet spin order, with signal decay time constants of 5.8 and 23 min, respectively. We find >10,000-fold enhancements generating detectable nuclear MR signals that last for more than an hour. 15N2-diazirines represent a class of particularly promising and versatile molecular tags, and can be incorporated into a wide range of biomolecules without significantly altering molecular function.
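
A quick consistency check (my arithmetic) on the decay constants quoted above: with exponential decay exp(-t/τ), ordinary 15N2 magnetization (τ = 5.8 min) is essentially gone after an hour, while the long-lived singlet order (τ = 23 min) still retains several percent of its signal, which is what makes hour-long tracking plausible:

```python
# Fraction of hyperpolarized signal remaining after one hour, exp(-t/tau),
# using the two decay constants quoted in the abstract above.
import math

for tau_min in (5.8, 23.0):
    frac = math.exp(-60.0 / tau_min)
    print(f"tau = {tau_min:4.1f} min -> fraction left at 60 min: {frac:.2e}")
```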


[Seems like they have a great idea, now all they need to do is confirm very specific uses or types of cancers/diseases or other processes they can track or target. Will be interesting to see if they can do more than just see things, maybe they can use this to target and destroy bad things in the body also. Keep up the good work….. this sounds like a game changer.]

 

Scientists time-reverse developed stem cells to make them ‘embryonic’ again

May help avoid ethically controversial use of human embryos for research and support other research goals
http://www.kurzweilai.net/scientists-time-reverse-developed-stem-cells-to-make-them-embryonic-again
Researchers have reversed “primed” (developed) “epiblast” stem cells (top) from early mouse embryos using the drug MM-401, causing the treated cells (bottom) to revert to the original form of the stem cells. (credit: University of Michigan)

University of Michigan Medical School researchers have discovered a way to convert mouse stem cells (taken from an embryo) that have become “primed” (reached the stage where they can differentiate, or develop into every specialized cell in the body) to a “naïve” (unspecialized) state by simply adding a drug.

This breakthrough has the potential to one day allow researchers to avoid the ethically controversial use of human embryos left over from infertility treatments. To achieve this breakthrough, the researchers treated the primed embryonic stem cells (“EpiSC”) with a drug called MM-401* (a leukemia drug) for a short period of time.

Embryonic stem cells are able to develop into any type of cell, except those of the placenta (credit: Mike Jones/CC)

…..

* The drug, MM-401, specifically targets epigenetic chemical markers on histones, the protein “spools” that DNA coils around to create structures called chromatin. These epigenetic changes signal the cell’s DNA-reading machinery and tell it where to start uncoiling the chromatin in order to read it.

A gene called Mll1 is responsible for the addition of these epigenetic changes, which are like small chemical tags called methyl groups. Mll1 plays a key role in the uncontrolled explosion of white blood cells in leukemia, which is why researchers developed the drug MM-401 to interfere with this process. But Mll1 also plays a role in cell development and the formation of blood cells and other cells in later-stage embryos.

Stem cells do not turn on the Mll1 gene until they are more developed. The MM-401 drug blocks Mll1’s normal activity in developing cells so the epigenetic chemical markers are missing. These cells are then unable to continue to develop into different types of specialized cells but are still able to revert to healthy naive pluripotent stem cells.


Abstract of MLL1 Inhibition Reprograms Epiblast Stem Cells to Naive Pluripotency

The interconversion between naive and primed pluripotent states is accompanied by drastic epigenetic rearrangements. However, it is unclear whether intrinsic epigenetic events can drive reprogramming to naive pluripotency or if distinct chromatin states are instead simply a reflection of discrete pluripotent states. Here, we show that blocking histone H3K4 methyltransferase MLL1 activity with the small-molecule inhibitor MM-401 reprograms mouse epiblast stem cells (EpiSCs) to naive pluripotency. This reversion is highly efficient and synchronized, with more than 50% of treated EpiSCs exhibiting features of naive embryonic stem cells (ESCs) within 3 days. Reverted ESCs reactivate the silenced X chromosome and contribute to embryos following blastocyst injection, generating germline-competent chimeras. Importantly, blocking MLL1 leads to global redistribution of H3K4me1 at enhancers and represses lineage determinant factors and EpiSC markers, which indirectly regulate ESC transcription circuitry. These findings show that discrete perturbation of H3K4 methylation is sufficient to drive reprogramming to naive pluripotency.


Abstract of Naive Pluripotent Stem Cells Derived Directly from Isolated Cells of the Human Inner Cell Mass

Conventional generation of stem cells from human blastocysts produces a developmentally advanced, or primed, stage of pluripotency. In vitro resetting to a more naive phenotype has been reported. However, whether the reset culture conditions of selective kinase inhibition can enable capture of naive epiblast cells directly from the embryo has not been determined. Here, we show that in these specific conditions individual inner cell mass cells grow into colonies that may then be expanded over multiple passages while retaining a diploid karyotype and naive properties. The cells express hallmark naive pluripotency factors and additionally display features of mitochondrial respiration, global gene expression, and genome-wide hypomethylation distinct from primed cells. They transition through primed pluripotency into somatic lineage differentiation. Collectively these attributes suggest classification as human naive embryonic stem cells. Human counterparts of canonical mouse embryonic stem cells would argue for conservation in the phased progression of pluripotency in mammals.

 

 

How to kill bacteria in seconds using gold nanoparticles and light

March 24, 2016

 

Could treat bacterial infections without using antibiotics, which could help reduce the risk of spreading antibiotic resistance

Researchers at the University of Houston have developed a new technique for killing bacteria in 5 to 25 seconds using highly porous gold nanodisks and light, according to a study published today in Optical Materials Express. The method could one day help hospitals treat some common infections without using antibiotics.

Removing Alzheimer’s plaques

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Transdermal implant releases antibodies to trigger immune system to clear Alzheimer’s plaques            March 21, 2016

Test with mice over 39 weeks showed dramatic reduction of amyloid beta plaque load in the brain and reduced phosphorylation of the protein tau, two signs of Alzheimer’s
http://www.kurzweilai.net/transdermal-implant-releases-antibodies-to-trigger-immune-system-to-clear-alzheimers-plaques

 

An implant that can prevent Alzheimer’s disease. A new capsule can be implanted under the skin to release antibodies that “tag” amyloid beta, signalling the patient’s immune system to clear it before it forms Alzheimer’s plaques. (credit: École polytechnique fédérale de Lausanne)

EPFL scientists have developed an implantable capsule containing genetically engineered cells that can recruit a patient’s immune system to combat Alzheimer’s disease.

Placed under the skin, the capsule releases antibody proteins that make their way to the brain and “tag” amyloid beta proteins, signalling the patient’s own immune system to attack and clear the amyloid beta proteins, which are toxic to neurons.

To be most effective, this treatment has to be given as early as possible, before the first signs of cognitive decline. Currently, this requires repeated vaccine injections, which can cause side effects. The new implant can deliver a steady, safe flow of antibodies.

Protection from immune-system rejection

Cell encapsulation device for long-term subcutaneous therapeutic antibody delivery. (B) Macroscopic view of the encapsulation device, composed of a transparent frame supporting permeable polymer membranes and reinforced with an outer polyester mesh. (C) Dense neovascularization develops around a device containing antibody-secreting C2C12 myoblasts, 8 months after implantation in the mouse subcutaneous tissue. (D and E) Representative photomicrographs showing encapsulated antibody-secreting C2C12 myoblasts surviving at high density within the flat sheet device 39 weeks after implantation. (E) Higher magnification: note that the cells produce a collagen-rich matrix stained in blue with Masson’s trichrome protocol. Asterisk: polypropylene porous membrane. Scale bars = 750 µm (B and C), 100 µm (D), 50 µm (E). (credit: Aurelien Lathuiliere et al./BRAIN)

The lab of Patrick Aebischer at EPFL designed the “macroencapsulation device” (capsule) with two permeable membranes sealed together with a polypropylene frame, containing a hydrogel that facilitates cell growth. All the materials used are biocompatible and the device is reproducible for large-scale manufacturing.

The cells of choice are taken from muscle tissue, and the permeable membranes let them interact with the surrounding tissue to get all the nutrients and molecules they need. The cells have to be compatible with the patient to avoid triggering the immune system against them, like a transplant can. To do that, the capsule’s membranes shield the cells from being identified and attacked by the immune system. This protection also means that cells from a single donor can be used on multiple patients.

The researchers tested the device in mice of a genetic line commonly used to simulate Alzheimer’s disease, over a course of 39 weeks, showing a dramatic reduction of amyloid beta plaque load in the brain. The treatment also reduced the phosphorylation of the protein tau, another sign of Alzheimer’s observed in these mice.

“The proof-of-concept work demonstrates clearly that encapsulated cell implants can be used successfully and safely to deliver antibodies to treat Alzheimer’s disease and other neurodegenerative disorders that feature defective proteins,” according to the researchers.

The work is published in the journal BRAIN. It involved a collaboration between EPFL’s Neurodegenerative Studies Laboratory (Brain Mind Institute), the Swiss Light Source (Paul Scherrer Institute), and F. Hoffmann-La Roche. It was funded by the Swiss Commission for Technology and Innovation and F. Hoffmann-La Roche Ltd.


Abstract of A subcutaneous cellular implant for passive immunization against amyloid-β reduces brain amyloid and tau pathologies

Passive immunization against misfolded toxic proteins is a promising approach to treat neurodegenerative disorders. For effective immunotherapy against Alzheimer’s disease, recent clinical data indicate that monoclonal antibodies directed against the amyloid-β peptide should be administered before the onset of symptoms associated with irreversible brain damage. It is therefore critical to develop technologies for continuous antibody delivery applicable to disease prevention. Here, we addressed this question using a bioactive cellular implant to deliver recombinant anti-amyloid-β antibodies in the subcutaneous tissue. An encapsulating device permeable to macromolecules supports the long-term survival of myogenic cells over more than 10 months in immunocompetent allogeneic recipients. The encapsulated cells are genetically engineered to secrete high levels of anti-amyloid-β antibodies. Peripheral implantation leads to continuous antibody delivery to reach plasma levels that exceed 50 µg/ml. In a proof-of-concept study, we show that the recombinant antibodies produced by this system penetrate the brain and bind amyloid plaques in two mouse models of the Alzheimer’s pathology. When encapsulated cells are implanted before the onset of amyloid plaque deposition in TauPS2APP mice, chronic exposure to anti-amyloid-β antibodies dramatically reduces amyloid-β40 and amyloid-β42 levels in the brain, decreases amyloid plaque burden, and most notably, prevents phospho-tau pathology in the hippocampus. These results support the use of encapsulated cell implants for passive immunotherapy against the misfolded proteins, which accumulate in Alzheimer’s disease and other neurodegenerative disorders.

 

A subcutaneous cellular implant for passive immunization against amyloid-β reduces brain amyloid and tau pathologies

 


Passive immunization using monoclonal antibodies has recently emerged for the treatment of neurological diseases. In particular, monoclonal antibodies can be administered to target the misfolded proteins that progressively aggregate and propagate in the CNS and contribute to the histopathological signature of neurodegenerative diseases. Alzheimer’s disease is the most prevalent proteinopathy, characterized by the deposition of amyloid plaques and neurofibrillary tangles. According to the ‘amyloid cascade hypothesis’, which is supported by strong genetic evidence (Goate and Hardy, 2012), the primary pathogenic event in Alzheimer’s disease is the accumulation and aggregation of amyloid-β into insoluble extracellular plaques in addition to cerebral amyloid angiopathy (Hardy and Selkoe, 2002). High levels of amyloid-β may cause a cascade of deleterious events, including neurofibrillary tangle formation, neuronal dysfunction and death. Anti-amyloid-β antibodies have been developed to interfere with the amyloid-β cascade. Promising data obtained in preclinical studies have validated immunotherapy against Alzheimer’s disease, prompting a series of clinical trials (Bard et al., 2000; Bacskai et al., 2002; Oddo et al., 2004; Wilcock et al., 2004a; Bohrmann et al., 2012). Phase III trials using monoclonal antibodies directed against soluble amyloid-β (bapineuzumab and solanezumab) in patients with mild-to-moderate Alzheimer’s disease showed some effects on biomarkers that are indicative of target engagement. These trials, however, missed the primary endpoints, and it is therefore believed that anti-amyloid-β immunotherapy should be administered at the early presymptomatic stage (secondary prevention) to better potentiate therapeutic effects (Doody et al., 2014; Salloway et al., 2014). For the treatment of Alzheimer’s disease, it is likely that long-term treatment using a high dose of monoclonal antibody will be required. However, bolus administration of anti-amyloid-β antibodies may aggravate dose-dependent adverse effects such as amyloid-related imaging abnormalities (ARIA) (Sperling et al., 2012). In addition, the cost of recombinant antibody production and medical burden associated with repeated subcutaneous or intravenous bolus injections may represent significant constraints, especially in the case of preventive immunotherapy initiated years before the onset of clinical symptoms in patients predisposed to develop Alzheimer’s disease.

Therefore, alternative methods need to be developed for the continuous, long-term administration of antibodies. Here, we used an implant based on a high-capacity encapsulated cell technology (ECT) (Lathuiliere et al., 2014b). The ECT device contains myogenic cells genetically engineered for antibody production. Macromolecules can be exchanged between the implanted cells and the host tissue through a permeable polymer membrane. As the membrane shields the implanted cells from immune rejection in allogeneic conditions, it is possible to use a single donor cell source for multiple recipients. We demonstrate that anti-amyloid immunotherapy using an ECT device implanted in the subcutaneous tissue can achieve therapeutic effects inside the brain. Chronic exposure to anti-amyloid-β monoclonal antibodies produced in vivo using the ECT technology leads to a significant reduction of the amyloid brain pathology in two mouse models of Alzheimer’s disease.

 

Microglial phagocytosis study

The measurement of antibody-mediated amyloid-β phagocytosis was performed as proposed previously (Webster et al., 2001). In this study, we used either purified preparations of full mAb-11 IgG2a antibody, or a purified Fab antibody fragment. A suspension of 530 µM fluorescent fibrillar amyloid-β42 was prepared in 10 mM HEPES (pH 7.4) by stirring overnight at room temperature. The resulting suspension contained 30 µM fluorescein-conjugated amyloid-β42 and 500 µM unconjugated amyloid-β42 (Bachem). IgG-fibrillar amyloid-β42 immune complexes were obtained by preincubating fluorescent fibrillar amyloid-β42 at a concentration of 50 µM in phosphate-buffered saline (PBS) with various concentrations of purified mAb-11 IgG2a or Fab antibody fragment for 30 min at 37 °C. The immune complexes were washed twice by centrifugation for 5 min at 14,000 g and resuspended in the initial volume to obtain a fluorescent fibrillar amyloid-β42 solution (total amyloid-β42 concentration: 530 µM). The day before the experiment, 8 × 10⁴ C8-B4 cells were plated in 24-well plates. The medium was replaced with serum-free DMEM before the addition of the peptides. The cells were incubated for 30 min with fibrillar amyloid-β42 or IgG-fibrillar amyloid-β42 added to the culture medium. Next, the cells were washed twice with Hank’s Balanced Salt Solution (HBSS) and subsequently detached by trypsinization, which also eliminates surface-bound fibrillar amyloid-β42. The cells were fixed for 10 min in 4% paraformaldehyde and finally resuspended in PBS. The cell fluorescence was determined with a flow cytometer (Accuri C6; BD Biosciences), and the data were analysed using the FlowJo software (TreeStar Inc.). To determine the effect of the anti-amyloid-β antibodies on amyloid-β phagocytosis, the concentration of fluorescent fibrillar amyloid-β42 was set at 1.5 µM, which is in the linear region of the dose-response curve depicting fibrillar amyloid-β42 phagocytosis in C8-B4 cells (Fig. 2C). All experiments were performed in duplicate.

 

The implantation of genetically engineered cells within a retrievable subcutaneous device leads to the continuous production of monoclonal antibodies in vivo. This technology achieves steady therapeutic monoclonal antibody levels in the plasma, offering an effective alternative to bolus injections for passive immunization against chronic diseases. Peripheral delivery of anti-amyloid-β monoclonal antibody by ECT leads to a significant reduction of amyloid burden in two mouse models of Alzheimer’s disease. The effect of the ECT treatment is more pronounced when passive immunization is preventively administered in TauPS2APP mice, most notably decreasing the phospho-tau pathology.

With the recent development of biomarkers to monitor Alzheimer’s pathology, it is recognized that a steady increase in cerebral amyloid over the course of decades precedes the appearance of the first cognitive symptoms (reviewed in Sperling et al., 2011). The current consensus therefore suggests applying anti-amyloid-β immunotherapy during this long asymptomatic phase to avoid the downstream consequences of amyloid deposition and to leverage neuroprotective effects. Several preventive clinical trials have been recently initiated for Alzheimer’s disease. The Alzheimer’s Prevention Initiative (API) and the Dominantly Inherited Alzheimer Network (DIAN) will test antibody candidates in presymptomatic dominant mutation carriers, while the Anti-Amyloid treatment in the Asymptomatic Alzheimer’s disease (A4) trial enrols asymptomatic subjects after risk stratification. If individuals with a high risk of developing Alzheimer’s disease can be identified using current biomarker candidates, these patients are the most likely to benefit from chronic long-term anti-amyloid-β immunotherapy. However, such a treatment may pose a challenge to healthcare systems, as the production capacity of the antibody and its related cost would become a challenging issue (Skoldunger et al., 2012). Therefore, the development of alternative technologies to chronically administer anti-amyloid-β antibody is an important aspect for therapeutic interventions at preclinical disease stages.

Here, we show that the ECT technology for the peripheral delivery of anti-amyloid-β monoclonal antibodies can significantly reduce cerebral amyloid pathology in two mouse models of Alzheimer’s disease. The subcutaneous tissue is a site of implantation easily accessible and therefore well adapted to preventive treatment. It is, however, challenging to reach therapeutic efficacy, as only a small fraction of the produced anti-amyloid-β monoclonal antibodies are expected to cross the blood–brain barrier, although they can next persist in the brain for several months (Wang et al., 2011; Bohrmann et al., 2012). Our results are consistent with previous reports, which have shown that the systemic administration of anti-amyloid-β antibodies can decrease brain amyloid burden in preclinical Alzheimer’s disease models (Bard et al., 2000, 2003; DeMattos et al., 2001; Wilcock et al., 2004a, b; Buttini et al., 2005; Adolfsson et al., 2012).

Remarkably, striking differences exist among therapeutic anti-amyloid-β antibodies in their ability to clear already existing plaques. Soluble amyloid-β species can saturate the small fraction of pan-amyloid-β antibodies entering the CNS and inhibit further target engagement (Demattos et al., 2012). Therefore, antibodies recognizing soluble amyloid-β may fail to bind and clear insoluble amyloid deposits (Das et al., 2001; Racke et al., 2005; Levites et al., 2006; Bohrmann et al., 2012). Furthermore, antibody-amyloid-β complexes are drained towards blood vessels, promoting cerebral amyloid angiopathy (CAA) and subsequent microhaemorrhages. The mAb-11 antibody used in the present study is similar to gantenerumab, which is highly specific for amyloid plaques and reduces amyloid burden in patients with Alzheimer’s disease (Bohrmann et al., 2012; Demattos et al., 2012; Ostrowitzki et al., 2012). We find that the murine IgG2a mAb-11 antibody efficiently enhances the phagocytosis of amyloid-β fibrils by microglial cells. In addition, ECT administration of the mAb-11 F(ab’)2 fragment lacking the Fc region fails to recruit microglial cells, and leads only to a trend towards clearance of the amyloid plaques. Therefore, our results suggest a pivotal role for microglial cells in the clearance of amyloid plaques following mAb-11 delivery by ECT. Importantly, we do not find any evidence that this treatment may cause microhaemorrhages in the mouse models used in this study. It remains entirely possible that direct binding to amyloid plaques of a F(ab’)2 fragment lacking effector functionality can contribute to therapeutic efficacy, as suggested by previous studies using antibody fragments (Bacskai et al., 2002; Tamura et al., 2005; Wang et al., 2010; Cattepoel et al., 2011). However, compared to IgG2a, the lower plasma levels achieved with F(ab’)2 are likely to limit the efficacy of peripheral ECT-mediated immunization. The exact role of the effector domain and its interaction with immune cells expressing Fc receptors could be determined by comparing the therapeutic effects of a control antibody carrying a mutated Fc portion, similar to a previous study which addressed this question using deglycosylated anti-amyloid-β antibodies (Wilcock et al., 2006a; Fuller et al., 2014).

Remarkably, continuous administration of mAb-11 initiated before plaque deposition had a dramatic effect on the amyloid pathology in TauPS2APP mice, underlining the efficacy of preventive anti-amyloid-β treatments. In this mouse model, where tau hyperphosphorylation is enhanced by amyloid-β (Grueninger et al., 2010), the treatment decreases the number of AT8- and phospho-S422-positive neurons in the hippocampus. Furthermore, the number of MC1-positive hippocampal neurons is significantly reduced, which also indicates an effect of anti-amyloid-β immunotherapy on the accumulation of misfolded tau. These results highlight the effect of amyloid-β clearance on other manifestations of the Alzheimer’s pathology. In line with these findings, previous studies have shown evidence for a decrease in tau hyperphosphorylation following immunization against amyloid-β, both in animal models and in patients with Alzheimer’s disease (Oddo et al., 2004; Wilcock et al., 2009; Boche et al., 2010; Serrano-Pozo et al., 2010; Salloway et al., 2014).

Similar to the subcutaneous injection of recombinant proteins (Schellekens, 2005), ECT implants can elicit significant immune responses against the secreted recombinant antibody. An anti-drug antibody response was detected in half of the mice treated with the mAb-11-releasing devices, in the absence of any anti-CD4 treatment. The glycosylation profile of the mAb-11 synthesized in C2C12 myoblasts is comparable to standard material produced by myeloma or HEK293 cells (Lathuiliere et al., 2014a). Although we cannot exclude that local release by ECT leads to antibody aggregation and denaturation, it is unlikely that this mode of administration further contributes to compound immunogenicity. Because the Fab regions of the chimeric recombinant mAb-11 IgG2a contain human CDRs, it remains to be determined whether the ECT-mediated delivery of antibodies fully matched with the host species would trigger an anti-drug antibody response.

Further developments will be needed to scale up this delivery system to humans. The possibility of using a single allogeneic cell source for all intended recipients is a crucial advantage of the ECT technology to standardize monoclonal antibody delivery. However, the development of renewable cell sources of human origin will be essential to ECT application in the clinic. Although the ARPE-19 cell line has been successfully adapted to ECT and used in clinical trials (Dunn et al., 1996; Zhang et al., 2011), the development of human myogenic cells (Negroni et al., 2009) is an attractive alternative that is worth exploring. Based on the PK analysis of recombinant mAb-11 antibody subcutaneously injected in mice (Supplementary material), we estimate that the flat sheet devices chronically release mAb-11 at rates of 6.8 and 11.8 µg/h, to reach a plasma level of 50 µg/ml in the implanted animals. In humans, injected IgG1 has a longer half-life (21–25 days), with a volume of distribution of ∼100 ml/kg and an estimated clearance of 0.2 ml/h/kg. These values indicate that the predicted antibody exposure in humans, based on the rate of mAb-11 secretion achieved by ECT in mice, would be only 10- to 20-fold lower than the typical regimens based on monthly bolus injection of 1 mg/kg anti-amyloid-β monoclonal antibody. Hence, it is realistic to consider ECT for therapeutic monoclonal antibody delivery in humans, as the flat sheet device could be scaled up to contain higher amounts of cells. Furthermore, recent progress in engineering antibodies for increased penetration into the brain will enable lower dosing of biotherapeutics to achieve therapeutic efficacy (Bien-Ly et al., 2014; Niewoehner et al., 2014). For some applications, intrathecal implantation could be preferred to chronically deliver monoclonal antibodies directly inside the CNS (Aebischer et al., 1996; Marroquin Belaunzaran et al., 2011).
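
The steady-state reasoning above can be sketched in a few lines (my arithmetic; the clearance figure is the one quoted in the text, while the 70 kg body weight and 30-day month are assumptions): at steady state the plasma level is simply the input rate divided by total clearance, and the comparison with a monthly 1 mg/kg bolus regimen lands in the same 10- to 20-fold range stated above:

```python
# Steady-state pharmacokinetic sketch: C_ss = input_rate / clearance.
# Human clearance (0.2 ml/h/kg) is from the text; the 70 kg body weight and
# the 30-day month are my assumptions for illustration.

def c_ss_ug_per_ml(rate_ug_per_h, cl_ml_per_h_per_kg=0.2, weight_kg=70.0):
    """Steady-state plasma concentration, in ug/ml."""
    return rate_ug_per_h / (cl_ml_per_h_per_kg * weight_kg)

# Device release rates measured in mice, applied unscaled to a human
for rate in (6.8, 11.8):
    print(f"device at {rate:4.1f} ug/h -> C_ss ~ {c_ss_ug_per_ml(rate):.2f} ug/ml")

# Average input rate of a monthly 1 mg/kg bolus, for comparison
bolus_rate = 1.0 * 1000.0 * 70.0 / (30.0 * 24.0)  # 1 mg/kg x 70 kg, in ug/h
print(f"monthly bolus ~ {bolus_rate:.0f} ug/h -> C_ss ~ {c_ss_ug_per_ml(bolus_rate):.1f} ug/ml")
```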

Overall, ECT provides a novel approach for the local and systemic delivery of recombinant monoclonal antibodies in the CNS. It will expand the possible therapeutic options for immunotherapy against neurodegenerative disorders associated with the accumulation of misfolded proteins, including Alzheimer’s and Parkinson’s diseases, dementia with Lewy bodies, frontotemporal lobar dementia and amyotrophic lateral sclerosis (Gros-Louis et al., 2010; Bae et al., 2012; Rosenmann, 2013).

Abbreviations: ECT = encapsulated cell technology; mAb = monoclonal antibody
Comment by sjwilliamspa:

The most powerful part of the article is the methodology of the transdermal patch of genetically engineered cells, which appears to give a constant stream of antibody therapy. This would have been better for the authors to highlight, because the protein delivery system is great work. However, it is a shame that the Alzheimer’s field is still relying on these mouse models, which appear to be leading the field down a path of failed clinical trials. Lilly has just dropped their Alzheimer’s program, and unless the field shifts to a different hypothesis of disease etiology I feel the whole field will be stuck where it is.


Brain, learning and memory

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

March 23, 2016   Exploring long-range communications in the brain
http://www.kurzweilai.net/exploring-long-range-communications-in-the-brain

Red and green dots reveal a region in the brain that is very dense with synapses. An optically activated fluorescent protein allows Ofer Yizhar, PhD, and his group to record the activity of the synapses. (credit: Weizmann Institute of Science)

Weizmann Institute of Science researchers have devised a new way to track long-distance communications between nerve cells in different areas of the brain. They used optogenetic techniques (genetic engineering of neurons combined with laser light delivered through thin optical fibers) to temporarily silence long-range axons, effectively leading to a sustained “disconnect” between two distant brain nodes.

By observing what happens when crucial connections are disabled, the researchers could begin to determine the axons’ role in the brain. Mental and neurological diseases are often thought to result from changes in long-range brain connectivity, so these studies could contribute to a better understanding of the mechanisms behind health and disease in the brain.

The study, published in Nature Neuroscience, “led us to a deeper understanding of the unique properties of the axons and synapses that form the connections between neurons,” said Ofer Yizhar, PhD, in the Weizmann Institute of Science’s Neurobiology Department. “We were able to uncover the responses of axons to various optogenetic manipulations. Understanding these differences will be crucial to unraveling the mechanisms for long-distance communication in the brain.”


Abstract of Biophysical constraints of optogenetic inhibition at presynaptic terminals

We investigated the efficacy of optogenetic inhibition at presynaptic terminals using halorhodopsin, archaerhodopsin and chloride-conducting channelrhodopsins. Precisely timed activation of both archaerhodopsin and halorhodopsin at presynaptic terminals attenuated evoked release. However, sustained archaerhodopsin activation was paradoxically associated with increased spontaneous release. Activation of chloride-conducting channelrhodopsins triggered neurotransmitter release upon light onset. Thus, the biophysical properties of presynaptic terminals dictate unique boundary conditions for optogenetic manipulation.

 

DARPA’s ‘Targeted Neuroplasticity Training’ program aims to accelerate learning ‘beyond normal levels’

The transhumanism-inspired goal: train superspy agents to rapidly master foreign languages and cryptography
New DARPA “TNT” technology will be designed to safely and precisely modulate peripheral nerves to control synaptic plasticity during cognitive skill training. (No mention of NZT.) (credit: DARPA)

DARPA has announced a new program called Targeted Neuroplasticity Training (TNT) aimed at exploring how to use peripheral nerve stimulation and other methods to enhance learning.

DARPA already has research programs underway to use targeted stimulation of the peripheral nervous system as a substitute for drugs to treat diseases and accelerate healing*, to control advanced prosthetic limbs**, and to restore tactile sensation.

But now DARPA plans to take an even more ambitious step: it aims to enlist the body’s peripheral nerves to achieve something that has long been considered the brain’s domain alone: facilitating learning — specifically, training in a wide range of cognitive skills.

The goal is to reduce the cost and duration of the Defense Department’s extensive training regimen, while improving outcomes. If successful, TNT could accelerate learning and reduce the time needed to train foreign language specialists, intelligence analysts, cryptographers, and others.

“Many of these skills, such as understanding and speaking a new foreign language, can be challenging to learn,” says the DARPA statement. “Current training programs are time consuming, require intensive study, and usually require evidence of a more-than-minimal aptitude for eligibility. Thus, improving cognitive skill learning in healthy adults is of great interest to our national security.”

Going beyond normal levels of learning

The program is also notable because it will not just train; it will advance capabilities beyond normal levels — a transhumanist approach.

“Recent research has shown that stimulation of certain peripheral nerves, easily and painlessly achieved through the skin, can activate regions of the brain involved with learning,” by releasing neurochemicals in the brain that reorganize neural connections in response to specific experiences, explained TNT Program Manager Doug Weber.

“This natural process of synaptic plasticity is pivotal for learning, but much is unknown about the physiological mechanisms that link peripheral nerve stimulation to improved plasticity and learning,” Weber said. “You can think of peripheral nerve stimulation as a way to reopen the so-called ‘Critical Period’ when the brain is more facile and adaptive. TNT technology will be designed to safely and precisely modulate peripheral nerves to control plasticity at optimal points in the learning process.”

The goal is to optimize training protocols that expedite the pace of learning and maximize long-term retention of even the most complicated cognitive skills. DARPA intends to take a layered approach to exploring this new terrain:

  • Fundamental research will focus on gaining a clearer and more complete understanding of how nerve stimulation influences synaptic plasticity, how cognitive skill learning processes are regulated in the brain, and how to boost these processes to safely accelerate skill acquisition while avoiding potential side effects.
  • The engineering side of the program will target development of a non-invasive device that delivers peripheral nerve stimulation to enhance plasticity in brain regions responsible for cognitive functions.

Proposers Day

TNT expects to attract multidisciplinary teams spanning backgrounds such as cognitive neuroscience, neural plasticity, electrophysiology, systems neurophysiology, biomedical engineering, human performance, and computational modeling.

To familiarize potential participants with the technical objectives of TNT, DARPA will host a Proposers Day on Friday, April 8, 2016, at the Westin Arlington Gateway in Arlington, Va. (registration closes on Thursday, March 31, 2016). A DARPA Special Notice announces the Proposers Day and describes the specific capabilities sought. A Broad Agency Announcement with full technical details on TNT will be forthcoming. For more information, please email DARPA-SN-16-20@darpa.mil.

* DARPA’s ElectRx program is looking for “demonstrations of feedback-controlled neuromodulation strategies to establish healthy physiological states,” along with “disruptive biological-interface technologies required to monitor biomarkers and peripheral nerve activity … [and] deliver therapeutic signals to peripheral nerve targets, using in vivo, real-time biosensors and novel neural interfaces using optical, acoustic, electromagnetic, or engineered biology strategies to achieve precise targeting with potentially single-axon resolution.”

** DARPA’s HAPTIX (Hand Proprioception and Touch Interfaces) program “seeks to create a prosthetic hand system that moves and provides sensation like a natural hand. … HAPTIX technologies aim to tap in to the motor and sensory signals of the arm, allowing users to control and sense the prosthesis via the same neural signaling pathways used for intact hands and arms. … The system will include electrodes for measuring prosthesis control signals from muscles and motor nerves, and sensory feedback will be delivered through electrodes placed in sensory nerves.”

 

Fading of Epigenetic Memories across Generations Is Regulated
http://www.genengnews.com/gen-news-highlights/fading-of-epigenetic-memories-across-generations-is-regulated/81252537

  • Epigenetic “remembering” is better understood than epigenetic “forgetting,” and so it is an open question whether epigenetic forgetting is, like epigenetic remembering, active—a distinct biomolecular process—or passive—a matter of dilution or decay. New research, however, suggests that epigenetic forgetting is an active process, one in which a feedback mechanism determines the duration of transgenerational epigenetic memories.

    The new research comes out of Tel Aviv University, where researchers have been working with the nematode worm Caenorhabditis elegans to elucidate epigenetic mechanisms. In particular, the researchers, led by Oded Rechavi, Ph.D., have been preoccupied with how the effects of stress, trauma, and other environmental exposures are passed from one generation to the next.

    In previous work, Dr. Rechavi’s team enhanced the state of knowledge of small RNA molecules, short sequences of RNA that regulate the expression of genes. The team identified a “small RNA inheritance” mechanism through which RNA molecules produce a response to the needs of specific cells, and showed how the molecules are regulated between generations.

    “We previously showed that worms inherited small RNAs following the starvation and viral infections of their parents. These small RNAs helped prepare their offspring for similar hardships,” Dr. Rechavi explained. “We also identified a mechanism that amplified heritable small RNAs across generations, so the response was not diluted. We found that enzymes called RdRPs [RNA-dependent RNA polymerases] are required for re-creating new small RNAs to keep the response going in subsequent generations.”

    Most inheritable epigenetic responses in C. elegans were found to persist for only a few generations. This created the assumption that epigenetic effects simply “petered out” over time, through a process of dilution or decay. “But this assumption,” said Dr. Rechavi, “ignored the possibility that this process doesn’t simply die out but is regulated instead.”

    This possibility was explored in the current study, in which C. elegans were treated with small RNAs that target the GFP (green fluorescent protein) gene, a reporter gene commonly used in experiments. “By following heritable small RNAs that regulated GFP—that ‘silenced’ its expression—we revealed an active, tunable inheritance mechanism that can be turned ‘on’ or ‘off,'” declared Dr. Rechavi.

    Details of the work appeared March 24 in the journal Cell, in an article entitled, “A Tunable Mechanism Determines the Duration of the Transgenerational Small RNA Inheritance in C. elegans.” The article shows that exposure to double-stranded RNA (dsRNA) activates a feedback loop whereby gene-specific RNA interference (RNAi) responses “dictate the transgenerational duration of RNAi responses mounted against unrelated genes, elicited separately in previous generations.”

    Essentially, amplification of heritable exo-siRNAs occurs at the expense of endo-siRNAs. Also, a feedback between siRNAs and RNAi genes determines heritable silencing duration.

    “RNA-sequencing analysis reveals that, aside from silencing of genes with complementary sequences, dsRNA-induced RNAi affects the production of heritable endogenous small RNAs, which regulate the expression of RNAi factors,” wrote the authors of the Cell paper. “Manipulating genes in this feedback pathway changes the duration of heritable silencing.”

    The scientists also indicated that specific genes, which they named MOTEK (Modified Transgenerational Epigenetic Kinetics), were involved in turning on and off epigenetic transmissions.

    “We discovered how to manipulate the transgenerational duration of epigenetic inheritance in worms by switching ‘on’ and ‘off’ the small RNAs that worms use to regulate genes,” said Dr. Rechavi. “These switches are controlled by a feedback interaction between gene-regulating small RNAs, which are inheritable, and the MOTEK genes that are required to produce and transmit these small RNAs across generations.

    “The feedback determines whether epigenetic memory will continue to the progeny or not, and how long each epigenetic response will last.”
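
To make the feedback idea easier to follow, here is a minimal difference-equation sketch in Python. Everything in it, the variable names, parameter values, and functional forms, is an illustrative assumption rather than a quantity from the Cell paper; it only shows how RdRP re-amplification that competes with an endo-siRNA pool can make silencing fade after a tunable number of generations.

```python
# Toy sketch of a tunable transgenerational silencing memory.
# All parameters are hypothetical illustrations, not measured values.

def simulate_inheritance(generations=10, amplification=0.75, feedback=0.5,
                         endo_cost=0.3, endo_recovery=0.1,
                         silencing_threshold=0.05):
    """Follow a heritable exo-siRNA pool across generations.

    Each generation, RdRPs re-amplify the exo-siRNA response, but the
    amplification competes with the endo-siRNA pool, and the endo pool
    in turn sets the level of RNAi factors, closing the feedback loop.
    """
    exo, endo = 1.0, 1.0                          # normalized pools at first exposure
    rows = []
    for gen in range(generations):
        rnai_factors = feedback * endo            # endo-siRNAs tune RNAi-factor expression
        exo = amplification * rnai_factors * exo  # RdRP-driven re-amplification
        endo = max(0.0, endo - endo_cost * exo)   # amplification at the endo pool's expense
        endo = min(1.0, endo + endo_recovery)     # endo pool slowly recovers
        rows.append((gen + 1, exo, exo > silencing_threshold))
    return rows

for gen, exo, silenced in simulate_inheritance():
    print(f"generation F{gen}: exo-siRNA = {exo:.3f}, GFP silenced = {silenced}")
```

With these particular numbers the silencing signal drops below threshold after about three generations; raising `feedback` (the stand-in for a MOTEK-like switch) lengthens the memory, which is the qualitative behavior the paper’s feedback mechanism is meant to capture.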

    Although its research was conducted on worms, the team believes that understanding the principles that control the inheritance of epigenetic information is crucial for constructing a comprehensive theory of heredity for all organisms, humans included.

    “We are now planning to study the MOTEK genes to know exactly how these genes affect the duration of epigenetic effects,” said Leah Houri-Ze’evi, a Ph.D. student in Dr. Rechavi’s lab and first author of the paper. “Moreover, we are planning to examine whether similar mechanisms exist in humans.”

    The current study notes that the active control of transgenerational effects could be adaptive, because ancestral responses would be detrimental if the environments of the progeny and the ancestors were different.

    A Tunable Mechanism Determines the Duration of the Transgenerational Small RNA Inheritance in C. elegans

    Leah Houri-Ze’evi, Yael Korem, Hila Sheftel,…, Luba Degani, Uri Alon, Oded Rechavi
    Cell 24 March 2016; Volume 165, Issue 1, p88–99.  http://dx.doi.org/10.1016/j.cell.2016.02.057
    Highlights
  • New RNAi episodes extend the duration of heritable epigenetic effects
  • Amplification of heritable exo-siRNAs occurs at the expense of endo-siRNAs
  • A feedback between siRNAs and RNAi genes determines heritable silencing duration
  • Modified transgenerational epigenetic kinetics (MOTEK) mutants are identified


In C. elegans, small RNAs enable transmission of epigenetic responses across multiple generations. While RNAi inheritance mechanisms that enable “memorization” of ancestral responses are being elucidated, the mechanisms that determine the duration of inherited silencing and the ability to forget the inherited epigenetic effects are not known. We now show that exposure to dsRNA activates a feedback loop whereby gene-specific RNAi responses dictate the transgenerational duration of RNAi responses mounted against unrelated genes, elicited separately in previous generations. RNA-sequencing analysis reveals that, aside from silencing of genes with complementary sequences, dsRNA-induced RNAi affects the production of heritable endogenous small RNAs, which regulate the expression of RNAi factors. Manipulating genes in this feedback pathway changes the duration of heritable silencing. Such active control of transgenerational effects could be adaptive, since ancestral responses would be detrimental if the environments of the progeny and the ancestors were different.

How we are able to keep several things simultaneously in working memory
Pictured is an artist’s interpretation of neurons firing in sporadic, coordinated bursts. “By having these different bursts coming at different moments in time, you can keep different items in memory separate from one another,” Earl Miller says. (credit: Jose-Luis Olivares/MIT)

Think of a sentence you just read. Like that one. You’re now using your working memory, a critical brain system that’s roughly analogous to a computer’s RAM.

Neuroscientists have believed that as information is held in working memory, brain cells associated with that information must be firing continuously. Not so — they fire in sporadic, coordinated bursts, says Earl Miller, the Picower Professor in MIT’s Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences.

That makes sense. These different bursts could help the brain hold multiple items in working memory at the same time, according to the researchers. “By having these different bursts coming at different moments in time, you can keep different items in memory separate from one another,” says Miller, the senior author of a study that appears in the March 17 issue of Neuron.

Bursts of activity, not averaged activity

So why hasn’t anyone noticed this before? Because previous studies averaged the brain’s activity over seconds or even minutes of performing the task, Miller says. “We looked more closely at this activity, not by averaging across time, but from looking from moment to moment. That revealed that something way more complex is going on.”
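
To see concretely why averaging can bury bursts, consider the small simulation below (a sketch with assumed numbers, not the study’s analysis code). Brief 60-Hz bursts occur at random times in each trial, so they stand out trial by trial but nearly cancel in the trial-averaged trace.

```python
# Minimal demonstration that trial-averaging hides sporadic bursts.
import numpy as np

rng = np.random.default_rng(0)
fs, dur, n_trials = 1000, 2.0, 50                  # 1 kHz sampling, 2-s delay, 50 trials
t = np.arange(0, dur, 1 / fs)

def bursty_trial():
    """One trial: three brief 60-Hz bursts at random times, plus noise."""
    sig = 0.2 * rng.standard_normal(t.size)
    for start in rng.uniform(0, dur - 0.15, size=3):   # three 150-ms bursts
        win = (t >= start) & (t < start + 0.15)
        sig[win] += np.sin(2 * np.pi * 60 * t[win] + rng.uniform(0, 2 * np.pi))
    return sig

trials = np.stack([bursty_trial() for _ in range(n_trials)])

print("mean single-trial peak power:", float((trials ** 2).max(axis=1).mean()))
print("peak power of the averaged trace:", float((trials.mean(axis=0) ** 2).max()))
# Because the bursts land at different moments (and phases) on every trial,
# the average looks weak and featureless even though each trial contains
# strong bursts.
```

The first number comes out far larger than the second, which is the moment-to-moment versus averaged contrast Miller’s group describes.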

To do that, Miller and his colleagues recorded neuron activity in animals as they were shown a sequence of three colored squares, each in a different location. Then, the squares were shown again, but one of them had changed color. The animals were trained to respond when they noticed the square that had changed color — a task requiring them to hold all three squares in working memory for about two seconds.

The researchers found that as items were held in working memory, ensembles of neurons in the prefrontal cortex were active in brief bursts, and these bursts only occurred in recording sites in which information about the squares was stored. The bursting was most frequent at the beginning of the task, when the information was encoded, and at the end, when the memories were read out.

The findings fit well with a model that first author Mikael Lundqvist had developed as an alternative to the model of sustained activity as the neural basis of working memory. According to the new model, information is stored in rapid changes in the synaptic strength of the neurons. The brief bursts serve to “imprint” information in the synapses of these neurons, and the bursts reoccur periodically to reinforce the information as long as it is needed.

The bursts create waves of coordinated activity at the gamma frequency (45 to 100 hertz), like the ones that were observed in the data. These waves occur sporadically, with gaps between them, and each ensemble of neurons, encoding a specific item, produces a different burst of gamma waves, like a fingerprint.

Implications for other cognitive functions

The findings suggest that it would be worthwhile to look for this kind of cyclical activity in other cognitive functions such as attention, the researchers say. Oscillations like those seen in this study may help the brain to package information and keep it separate so that different pieces of information don’t interfere with each other.

Robert Knight, a professor of psychology and neuroscience at the University of California at Berkeley, says the new study “provides compelling evidence that nonlinear oscillatory dynamics underlie prefrontal dependent working memory capacity.”

“The work calls for a new view of the computational processes supporting goal-directed behavior,” adds Knight, who was not involved in the research. “The control processes supporting nonlinear dynamics are not understood, but this work provides a critical guidepost for future work aimed at understanding how the brain enables fluid cognition.”


editor’s comments: I’m curious how this relates to forgetting things to make space to learn new things. (Turns out the hippocampus works closely with the prefrontal cortex in working memory, as this open-access Nature paper explains.) Also, what’s the latest on how many things we can keep in working memory (it used to be around five)? Is that number limited by forgetting or by the capacity to differentiate different spike trains? Any tricks for keeping more things in working memory?


Abstract of Gamma and Beta Bursts Underlie Working Memory

Working memory is thought to result from sustained neuron spiking. However, computational models suggest complex dynamics with discrete oscillatory bursts. We analyzed local field potential (LFP) and spiking from the prefrontal cortex (PFC) of monkeys performing a working memory task. There were brief bursts of narrow-band gamma oscillations (45–100 Hz), varied in time and frequency, accompanying encoding and re-activation of sensory information. They appeared at a minority of recording sites associated with spiking reflecting the to-be-remembered items. Beta oscillations (20–35 Hz) also occurred in brief, variable bursts but reflected a default state interrupted by encoding and decoding. Only activity of neurons reflecting encoding/decoding correlated with changes in gamma burst rate. Thus, gamma bursts could gate access to, and prevent sensory interference with, working memory. This supports the hypothesis that working memory is manifested by discrete oscillatory dynamics and spiking, not sustained activity.

Gamma and Beta Bursts Underlie Working Memory

Mikael Lundqvist, Jonas Rose, Pawel Herman, Scott L. Brincat, Timothy J. Buschman, Earl K. Miller

Highlights
  • Working memory information in neuronal spiking is linked to brief gamma bursts
  • The narrow-band gamma bursts increase during encoding, decoding, and with WM load
  • Beta bursting reflects a default network state interrupted by gamma
  • The results support a model of WM based on discrete dynamics, not sustained activity

 


Synaptic Amplifier

Gene discovery reveals mechanism behind how we think

By ELIZABETH COONEY   March 16, 2016
http://hms.harvard.edu/news/synaptic-amplifier

Skyler Jackman and colleagues studied the phenomenon known as synaptic facilitation by using light to turn neuronal connections on and off. The optogenetic protein used in this technique appears yellow. Image: Regehr lab

Our brains are marvels of connectivity, packed with cells that continually communicate with one another. This communication occurs across synapses, the transit points where chemicals called neurotransmitters leap from one neuron to another, allowing us to think, to learn and to remember.
Now Harvard Medical School researchers have discovered a gene that provides a short-term boost in this communication by increasing neurotransmitter release, a phenomenon known as synaptic facilitation. And they did so by turning on a light or two.

The gene is synaptotagmin 7 (syt7 for short), a calcium sensor that dynamically increases neurotransmitter release; each release serves to strengthen communication between neurons for about a second. These swift releases are thought to be critical for the brain’s ability to perform computations involved in short-term memory, spatial navigation and sensory perception.
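
As a rough illustration of that timing, the toy model below treats facilitation as a residual-calcium signal that decays with a one-second time constant and transiently raises release probability, in the spirit of the Syt7 findings. The parameter names and values (p0, f_step, tau_s) are hypothetical choices for this sketch, not numbers from the Nature paper.

```python
# Toy short-term facilitation: each spike leaves a decaying "residual
# calcium" trace that boosts the release probability of later spikes.
import math

def epsc_amplitudes(spike_times_s, p0=0.1, f_step=0.5, tau_s=1.0):
    """Response amplitude of each spike, relative to the first one."""
    amps, facil, last_t = [], 0.0, None
    for t in spike_times_s:
        if last_t is not None:
            facil *= math.exp(-(t - last_t) / tau_s)  # residual signal decays (~1 s)
        p = min(1.0, p0 * (1.0 + facil))              # transiently boosted release probability
        amps.append(p / p0)
        facil += f_step                               # each spike tops up the signal
        last_t = t
    return amps

# A 50-Hz train shows growing responses (facilitation); spacing the same
# spikes seconds apart lets the signal decay and the effect disappears.
print([round(a, 2) for a in epsc_amplitudes([0.00, 0.02, 0.04, 0.06, 0.08])])
print([round(a, 2) for a in epsc_amplitudes([0.0, 2.0, 4.0, 6.0, 8.0])])
```

A calcium-insensitive sensor would correspond to holding `facil` at zero, which mimics, in miniature, the loss of facilitation reported for the knockout recordings.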

A team of researchers who made this discovery was led by Skyler Jackman, a postdoctoral researcher in the lab of Wade Regehr, professor of neurobiology at HMS. They recently reported their findings in Nature.

The calcium sensor synaptotagmin 7 is required for synaptic facilitation
Skyler L. Jackman, Josef Turecek, Justine E. Belinsky & Wade G. Regehr
Nature 529, 88–91 (07 January 2016)    doi:10.1038/nature16507
It has been known for more than 70 years that synaptic strength is dynamically regulated in a use-dependent manner (ref. 1). At synapses with a low initial release probability, closely spaced presynaptic action potentials can result in facilitation, a short-term form of enhancement in which each subsequent action potential evokes greater neurotransmitter release (ref. 2). Facilitation can enhance neurotransmitter release considerably and can profoundly influence information transfer across synapses (ref. 3), but the underlying mechanism remains a mystery. One proposed mechanism is that a specialized calcium sensor for facilitation transiently increases the probability of release (refs 2, 4), and this sensor is distinct from the fast sensors that mediate rapid neurotransmitter release. Yet such a sensor has never been identified, and its very existence has been disputed (refs 5, 6). Here we show that synaptotagmin 7 (Syt7) is a calcium sensor that is required for facilitation at several central synapses. In Syt7-knockout mice, facilitation is eliminated even though the initial probability of release and the presynaptic residual calcium signals are unaltered. Expression of wild-type Syt7 in presynaptic neurons restored facilitation, whereas expression of a mutated Syt7 with a calcium-insensitive C2A domain did not. By revealing the role of Syt7 in synaptic facilitation, these results resolve a longstanding debate about a widespread form of short-term plasticity, and will enable future studies that may lead to a deeper understanding of the functional importance of facilitation.

“We really think one of the most important things the brain can do is change the strength of connections between neurons,” Jackman said. “Now that we have a tool to selectively turn off facilitation, we can test some long-held beliefs about its importance for thinking and working memory.”

Although synaptic facilitation was first described 70 years ago by Te-Pei Feng, known as the father of Chinese physiology, Jackman and colleagues were able to identify the mechanism behind synaptic strengthening by taking advantage of advanced laboratory techniques unavailable to previous generations of scientists.

A dozen years ago, Regehr suspected that syt7 might drive this synaptic strengthening process: The gene turns on slowly and then ramps up in speed, which would fit gradual release of neurotransmitters.

About eight years ago scientists in another lab engineered “knockout” mice that lack the syt7 gene, setting the stage for experiments to test Regehr’s speculations. But when grown in a lab dish, neurons from these knockout mice behaved no differently from other neurons, a result that, at the time, dashed hopes that syt7 could explain the synaptic boost.

A year ago Jackman took another tack. He tested synaptic connections in brain tissue taken from the knockout mice in which the circuits were still intact, an experiment more reflective of how neurons and synapses might work in a living animal.

“It was striking. It was amazing,” Jackman said. “As soon as we probed these connections we saw there was a huge deficit, a complete lack of synaptic facilitation in the knockout mice, completely different from their wild-type brothers and sisters.”

To be certain that knocking out syt7 was responsible for this change, Jackman had to find a way to reinsert syt7 and restore its function. He did that by using optogenetics, a genetic manipulation tool that allows neuronal connections to be turned on and off with light. He augmented this technique with bicistronic expression, a method that packages one optogenetic protein and one syt7 protein into a single virus that infects all neurons equally. Using these two techniques, Jackman could selectively study what happened when syt7 was reinserted into a neuron and measure its effects reliably.

 

We need to forget things to make space to learn new things, scientists discover

A mouse study, if confirmed in people, might point to ways of helping patients forget traumatic experiences
http://www.kurzweilai.net/we-need-to-forget-things-to-make-space-to-learn-new-things-scientists-discover

The three routes into the hippocampus seem to be linked to different aspects of learning: forming memories (green), recalling them (yellow) and forgetting them (red) (credit: John Wood)

While you’re reading this (and learning about this new study), your brain is actively trying to forget something.

We apologize, but that’s what scientists at the European Molecular Biology Laboratory (EMBL) and the University Pablo Olavide in Sevilla, Spain, found in a new study published Friday (March 18) in an open-access paper in Nature Communications.

“This is the first time that a pathway in the brain has been linked to forgetting — to actively erasing memories,” says Cornelius Gross, who led the work at EMBL.

Working with mice, Gross and colleagues studied the hippocampus, a region of the brain known to help form memories. Information enters this part of the brain through three different routes. As memories are formed, connections between neurons along the “main” route become stronger.

When they blocked this main route (dentate gyrus granule cells), the scientists found that the mice were no longer capable of learning (in this case, a specific Pavlovian response).* But surprisingly, blocking that main route also resulted in its connections weakening, meaning the memory was actually being erased.

Limited space in the brain

Gross proposes one explanation: “There is limited space in the brain, so when you’re learning, you have to weaken some connections to make room for others.”

Interestingly, this active push for forgetting only happens in learning situations. When the scientists blocked the main route into the hippocampus under other circumstances, the strength of its connections remained unaltered.

The findings were made using genetically engineered mice, but the scientists demonstrated that it is possible to produce a drug that activates this “forgetting” route in the brain without the need for genetic engineering. This approach, they say, might help people forget traumatic experiences.

* But if the mice had learned that association before the scientists stopped information flow in that main route, they could still retrieve that memory. This confirmed that this route is involved in forming memories, but isn’t essential for recalling those memories. The latter probably involves the second route into the hippocampus, the scientists surmise.


Abstract of Rapid erasure of hippocampal memory following inhibition of dentate gyrus granule cells

The hippocampus is critical for the acquisition and retrieval of episodic and contextual memories. Lesions of the dentate gyrus, a principal input of the hippocampus, block memory acquisition, but it remains unclear whether this region also plays a role in memory retrieval. Here we combine cell-type specific neural inhibition with electrophysiological measurements of learning-associated plasticity in behaving mice to demonstrate that dentate gyrus granule cells are not required for memory retrieval, but instead have an unexpected role in memory maintenance. Furthermore, we demonstrate the translational potential of our findings by showing that pharmacological activation of an endogenous inhibitory receptor expressed selectively in dentate gyrus granule cells can induce a rapid loss of hippocampal memory. These findings open a new avenue for the targeted erasure of episodic and contextual memories.

 

Rapid erasure of hippocampal memory following inhibition of dentate gyrus granule cells

Noelia Madroñal, José M. Delgado-García, Azahara Fernández-Guizán, Jayanta Chatterjee, Maja Köhn, Camilla Mattucci, et al.

Nature Communications 7, Article number: 10923    http://www.nature.com/ncomms/2016/160318/ncomms10923/full/ncomms10923.html

The hippocampus is an evolutionarily ancient part of the cortex that makes reciprocal excitatory connections with neocortical association areas and is critical for the acquisition and retrieval of episodic and contextual memories. The hippocampus has been the subject of extensive investigation over the last 50 years as the site of plasticity thought to be critical for memory encoding. Models of hippocampal function propose that sensory information reaching the hippocampus from the entorhinal cortex via dentate gyrus (DG) granule cells is encoded in CA3 auto-association circuits and can in turn be retrieved via Schaffer collateral (SC) projections linking CA3 and CA1 (refs 1, 2, 3, 4; Fig. 1a). Learning-associated plasticity in CA3–CA3 auto-associative networks encodes the memory trace, and plasticity in SC connections is necessary for the efficient retrieval of this trace (refs 2, 5, 6, 7, 8, 9, 10). In addition, both CA3 and CA1 regions receive direct, monosynaptic inputs from entorhinal cortex that are thought to convey information about ongoing sensory inputs that could modulate CA3 memory trace acquisition and/or retrieval via SC (refs 11, 12, 13; Fig. 1a). In DG granule cells, sensory information is thought to undergo pattern separation into orthogonal cell ensembles before encoding (or reactivating, in the case of retrieval) memories in CA3 (ref. 14). However, how the hippocampus executes both the acquisition and recall of memories stored in CA3 remains a question of debate, with some models attributing a role for DG inputs in memory acquisition, but not retrieval (refs 2, 15, 16, 17).

Rapid and selective inhibition of DG neurotransmission in vivo.

(a) The hippocampal tri-synaptic circuit receives PP inputs from entorhinal cortex to DG, CA3 and CA1. (b) A stimulating electrode was implanted in the PP and a recording electrode in the CA3 pyramidal layer. (c) Strength of CA3 pyramidal layer fEPSPs evoked in anaesthetized mice by electrical stimulation of PP inputs showed fast and slow latency population spike components corresponding to direct PP-CA3 and indirect PP–DG-CA3 inputs, respectively. Systemic administration of the selective Htr1a agonist, 8-OH-DPAT (0.3 mg kg−1, subcutaneous), to Htr1aDG (Tg) mice caused a rapid and selective decrease in the long-latency component that persisted for several hours. Quantification indicated a significant decrease in DG neurotransmission following agonist treatment of Htr1aDG, but not Htr1aKO (KO) littermates or vehicle-treated wild-type mice, that reached 80% suppression and persisted for >2 h (mean ± s.e.m.; n=10; *P<0.05; two-way analysis of variance followed by Holm–Sidak post hoc test). (d) Representative fEPSPs evoked at the CA3 pyramidal layer after stimulation of PP inputs before and after agonist treatment. The fast and the slow latency population spike components are indicated (black arrow, short; grey arrow, long).

 

Figure 2: Inhibition of DG induces rapid and persistent loss of hippocampal memory and plasticity.

Figure 4: Loss of plasticity depends on entorhinal cortex inputs and local adenosine signalling.

In the present study we examined the contribution of DG granule cells to learning and recall and its associated synaptic plasticity in animals that had previously acquired a hippocampal memory. We found that transient pharmacogenetic inhibition of DG granule cells did not impair conditioned responding to CS presentation nor alter SC synaptic plasticity, demonstrating that DG is not required for memory recall (Fig. 3c,d). However, when DG inhibition occurred during paired presentation of CS and US, we observed a rapid loss of SC synaptic plasticity and conditioned responding to CS (Fig. 2d,e and Supplementary Fig. 3). Strikingly, the synaptic plasticity and behavioural impairment persisted in the absence of further stimulus presentation and later relearning occurred at a rate indistinguishable from initial learning, suggesting a loss of the memory trace (Fig. 2f,g).

One possible explanation for the memory loss seen on DG inhibition is that presentation of paired CS–US has a dual effect on CA1 plasticity, on the one hand strengthening SC synapses via a DG-dependent mechanism (indirect inputs to CA1 via the tri-synaptic circuit) and on the other hand weakening SC synapses in a non-DG-dependent manner (direct PP-CA1 inputs). This explanation is consistent with several studies in the literature reporting mechanistic and functional differences between the direct and the indirect inputs to CA1 (refs 12, 13, 30, 31, 32). Furthermore, earlier in vitro (refs 12, 23) and in vivo (ref. 33) electrophysiology studies found that stimulation of PP-CA1 inputs to the hippocampus could depotentiate synaptic plasticity that had been previously acquired at SC synapses, suggesting that the direct PP pathway might promote depotentiation during hippocampal learning. To test this possibility, we used dual, orthogonal pharmacogenetic inhibition of DG and entorhinal cortex to show that the memory loss phenomenon we observed depended on PP inputs (Fig. 4e). Furthermore, one of the earlier studies (ref. 23) had shown that PP stimulation-induced SC depotentiation could be inhibited by blockade of adenosine A1 receptors, but not several other receptors, and we found that bilateral administration of DPCPX to the CA1 region of the hippocampus blocked synaptic depotentiation in our model (Fig. 4g).

Our data lead us to propose a novel function for PP-CA1 inputs to the hippocampus. During CS–US presentation, but not during presentation of unpaired CS–US or CS alone, information arriving via this pathway actively promotes depotentiation of SC synapses, while information arriving via the DG pathway opposes this depotentiation. Thus, in an animal that has successfully acquired a hippocampal-dependent memory, and in which the direct and indirect pathways are intact, SC synaptic strength is stable and memories can be retrieved. However, when the DG pathway is blocked, as we have done artificially in our study, depotentiation is favoured and memory is lost (see scheme, Fig. 6). The precise function of PP-dependent SC depotentiation remains unclear at this point, but we speculate that it may play a role in weakening previously acquired associations to facilitate the encoding of new memories. Existing data show that selective blockade of synaptic activity in entorhinal cortex neurons projecting to CA1 impairs the acquisition of trace fear conditioning (ref. 34) and support our hypothesis of a positive role for this pathway in learning (refs 13, 30, 32, 33). Moreover, our DPCPX experiments suggest that blockade of the depotentiation mechanism promotes SC synaptic plasticity during CS–US presentation in otherwise intact animals (Fig. 4g). However, further loss and gain-of-function manipulations of this pathway coupled with in vivo electrophysiology and learning behaviour are needed to directly test a role of PP-CA1 inputs in memory clearing.

Figure 6: Model for function of PP-CA1 inputs to the hippocampus.

Area CA1 of the hippocampus receives information directly from the entorhinal cortex (direct PP-CA1 pathway) and also indirectly via the tri-synaptic circuit. (a) Presentation of paired CS–US promotes potentiation of SC synapses (+) via the indirect pathway and depotentiation of SC synapses (–) via the PP-CA1 pathway. In an animal having successfully undergone learning, potentiation and depotentiation are balanced, SC synaptic strength is stable and memories can be retrieved. (b) Inhibition of DG during CS–US presentation suppresses potentiation via the indirect pathway, unmasking depotentiation of SC synapses and promoting memory loss.
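
The push-pull arithmetic in this legend is simple enough to sketch. The snippet below uses made-up increments (pot, depot) purely to show the model’s logic: equal potentiation and depotentiation leave SC strength stable, while removing the DG term drives the synapse, and with it the memory, toward erasure.

```python
# Toy version of the Fig. 6 balance model (assumed numbers throughout).

def sc_strength_after_pairing(w=1.0, trials=10, pot=0.08, depot=0.08,
                              dg_intact=True):
    """Schaffer-collateral strength after repeated paired CS-US trials."""
    for _ in range(trials):
        if dg_intact:
            w += pot        # indirect (DG -> CA3 -> CA1) potentiation
        w -= depot          # direct PP-CA1 depotentiation
        w = max(w, 0.0)
    return w

print("DG intact:  SC strength =", round(sc_strength_after_pairing(), 2))
print("DG blocked: SC strength =", round(sc_strength_after_pairing(dg_intact=False), 2))
```

The first call returns the starting strength (balanced pathways); the second decays toward zero, the sketch’s analogue of the rapid memory erasure observed when DG was inhibited during training.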

Our finding that DG granule cells are not required for retrieval of hippocampal memory is consistent with previous data arguing that retrieval of associative information encoded in CA3–CA3 and SC plasticity is achieved via direct PP projections to CA3 (refs 1, 2, 3, 4, 35, 36, 37, 38). However, our data appear to contradict at least one recent study demonstrating a role for DG granule cells in retrieval during contextual fear conditioning (ref. 39). We believe this discrepancy is due to a requirement for DG granule cells in the processing of the contextual CS (ref. 40). However, to rule out the possibility that other methodological differences between the studies underlie the discrepancy, it would be important to determine whether the cell-type specific optogenetic inhibition method used in their study left intact the recall of hippocampal-dependent memories for discrete cues.

Our study raises several questions. First, while we show SC depotentiation is adenosine receptor dependent, the location of adenosine signalling is not clear. Adenosine A1 receptors are expressed highly in CA3 pyramidal cells as well as more modestly in CA1 (ref. 28), and a study in which this receptor was selectively knocked out in one or the other of these structures demonstrated a role for presynaptic CA3, but not postsynaptic CA1 receptors in dampening SC neurotransmission (ref. 41), suggesting a presynaptic mechanism for our effect. The source of adenosine, on the other hand, could involve pre- and/or postsynaptic release as well as release from non-neuronal cells such as astrocytes (refs 27, 42). Second, although our DPCPX experiment pointed to a role for PP-CA1 projections in SC depotentiation, our entorhinal cortex pharmacogenetic inhibition experiment did not allow us to distinguish between contributions of PP-CA1 and PP-CA3 inputs. Although we cannot rule out a contribution of PP-CA3 projections to SC depotentiation, earlier in vitro and in vivo electrophysiology studies clearly demonstrate a role for PP-CA1 in SC depotentiation (refs 12, 22, 33). Third, the method we used to assess SC postsynaptic strength, namely electrical stimulation-evoked field potentials, does not allow us to rule out that changes in synaptic plasticity at non-SC inputs underlie our plasticity effects. Experiments using targeted optogenetic stimulation of CA3 efferents could be used to more selectively measure SC synaptic strength. Fourth, our observation that SC depotentiation and memory loss occurred only during paired, but not unpaired CS–US presentation (Fig. 2d,e) suggests that the memory loss phenomenon we describe is distinct from other well-described avenues for memory degradation, including enhancement of extinction (ref. 43) and blockade of reconsolidation (ref. 44). Finally, our findings demonstrating generalization of DG inhibition-induced memory loss across tasks coupled with our identification of an endogenous pharmacological target that can induce similar memory loss raise the possibility that the novel memory mechanism we have uncovered may be useful for erasing unwanted memories in a clinical setting.

Fastest-growing Biotech Companies in Massachusetts

Reporter: Aviva Lev-Ari, PhD, RN

 

  • Alnylam Pharmaceuticals, Cambridge, John Maraganore, CEO

$5.2 Billion Market cap, $41 Million in 2015 revenue

  • Genocea Biosciences, Cambridge, Chip Clark, CEO

$127 Million Market cap, $670,000 in 2015 revenue

  • ArQule, Burlington, Paolo Pucci, CEO

$109 Million Market cap, $11 Million in 2015 revenue

  • Foundation Medicine, Cambridge, Michael Pellini, CEO

$604 Million Market cap, $93 Million in 2015 revenue

  • bluebird bio, Cambridge, Nick Leschly, CEO

$1.6 Billion Market cap, $14 Million in 2015 revenue

  • Sage Therapeutics, Cambridge, Jeffrey Jonas, CEO

$1.04 Billion Market cap, $0 in 2015 revenue

  • Amag Pharmaceuticals, Waltham, Bill Heiden, CEO

$754 Million Market cap, $418 Million in 2015 revenue

  • Tesaro, Waltham, Leon Moulder, CEO and Co-Founder

$1.8 Billion Market cap, $317,000 in 2015 revenue

  • Radius Health, Waltham, Bob Ward, President and CEO

$1.43 Billion Market cap, $0 in 2015 revenue; lead candidate: the investigational drug abaloparatide for the reduction of fracture risk in postmenopausal osteoporosis

 

Here are the 9 fastest-growing biotech firms in the Bay State

Mar 22, 2016, 11:36am EDT Updated Mar 22, 2016, 11:53am EDT

 


 

The list includes a few of the area’s largest firms — including Alnylam Pharmaceuticals (Nasdaq: ALNY) and Amag Pharmaceuticals (Nasdaq: AMAG) — and mostly consists of those with little or no revenue. In fact, three of the companies had no revenue in 2015 at all.

At the top of the list is Radius Health (Nasdaq: RDUS), a company that is planning to submit the application for its first-ever drug in the coming days and that grew from just 25 employees as of Dec. 31, 2014, to 73 employees a year later (the Waltham-based company has more than 100 employees today). Most of them (48) are in research and development, while 27 are engaged in administration, business development and finance.

“Since our initial IPO in 2014, we have grown exponentially as we evolve from a development stage biopharmaceutical company to a commercial organization,” CEO Robert Ward said in an email. ”We have established a robust and influential presence in Massachusetts, and are looking forward to continuing the momentum with the planned submission of our New Drug Application to the U.S. Food and Drug Administration at the end of this month for our investigational drug abaloparatide for the reduction of fracture risk in postmenopausal osteoporosis.”

In the second slot on the list is Tesaro (Nasdaq: TSRO), also in Waltham, which employed 286 people as of the end of last year.

All data is according to the latest federal filings from the companies as compiled by Bloomberg. Click through the attached slideshow to see what other companies made the list.

 

SOURCE

http://www.bizjournals.com/boston/blog/bioflash/2016/03/here-are-the-9fastest-growing-biotech-firms-in-the.html?surround=etf&ana=e_article&u=fxrP9hU%2FlZ8NaabTSfzdVw05a06531&t=1459026657&j=71767582

Google Glass Meets Organs-on-Chips

 Reporter: Irina Robu, PhD

 

Google Glass is a recently designed wearable device capable of displaying information in a smartphone-like, hands-free format and of communicating wirelessly. Researchers at Brigham and Women’s Hospital have now paired it with organs-on-chips, microdevices that mimic the human physiological system and allow researchers to test drug compounds and predict physiological responses with high accuracy in a laboratory setting.

The Glass also provides control over remote devices, primarily enabled by voice-recognition commands, and offers researchers a hands-free and flexible monitoring system. To make Google Glass work, Zhang et al. custom-developed hardware and software that take advantage of voice commands not only to monitor but also to remotely control their liver- and heart-on-chip systems. The device can monitor physical and physiological parameters such as temperature, pH and the morphology of liver- and heart-on-chips.
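
As a purely illustrative sketch of that control pattern, the snippet below routes a recognized voice command to a networked chip controller and polls temperature and pH in a hands-free loop. Every identifier here (ChipController, read_sensors, set_valve) is hypothetical; the actual hardware and software interfaces built by Zhang et al. are not public APIs and are not reproduced here.

```python
# Hypothetical wearable-to-chip control loop (illustration only).
import time

class ChipController:
    """Stand-in for a networked liver/heart-on-a-chip controller."""
    def read_sensors(self):
        return {"temperature_C": 37.0, "pH": 7.35}
    def set_valve(self, name, open_):
        print(f"valve {name} -> {'open' if open_ else 'closed'}")

def handle_voice_command(cmd, chip):
    """Wearable side: map recognized phrases to remote chip actions."""
    if cmd == "start perfusion":
        chip.set_valve("media_inlet", open_=True)
    elif cmd == "show readings":
        print(chip.read_sensors())

chip = ChipController()
handle_voice_command("show readings", chip)
for _ in range(3):                         # hands-free monitoring loop
    vitals = chip.read_sensors()
    if not (7.2 <= vitals["pH"] <= 7.5):   # alert thresholds are assumptions
        print("alert: pH out of range", vitals)
    time.sleep(0.1)
```

The point of the pattern is separation: the wearable only sends commands and renders readings, so the operator never has to touch the instrument, which is exactly the benefit claimed for hazardous settings.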

The Google Glass has particular importance in cases where the experimental conditions threaten human life, as when researchers work with highly contagious bacteria and viruses or with radioactivity.
Source

Sleep and memory

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Learning with the Lights Out

Researchers are uncovering the link between sleep and learning and how it changes throughout our lives.

By Jenny Rood | March 1, 2016    http://www.the-scientist.com/?articles.view/articleNo/45335/title/Learning-with-the-Lights-Out

NIGHTY NIGHT: Goffredina Spanò from Jamie Edgin’s University of Arizona lab uses polysomnography to measure sleep in a toddler with Down syndrome. PAMELA SPEDER

By the early 2000s, scientists had found that sleep helps young adults consolidate memory by reinforcing and filing away daytime experiences. But the older adults that Rebecca Spencer was studying at the University of Massachusetts Amherst didn’t seem to experience the same benefit. Spencer wondered if developmental stage altered the relationship between sleep and memory, and chose nearby preschool children as subjects. She found that habitual nappers benefitted the most from daytime rest, largely because their memories decayed the most without a nap. “By staying awake, they have more interference from daytime experiences,” Spencer explains.

Until recently, most of the research into the relationship between memory and sleep has been conducted using young adults or animal models. These studies have suggested that dampened sensory inputs during sleep allow the brain to replay the day’s events during a period relatively free of distracting information, helping to solidify connections and transfer daytime hippocampal memories into long-term storage in the cortex. But how sleep and memory interact at different ages has been an open question.

In children younger than 18 months, learning is thought to occur in the cortex because the hippocampus isn’t yet fully developed. As a result, researchers hypothesize that infants don’t replay memories during sleep, the way adults do. Instead, sleep merely seems to prevent infants from forgetting as much as they would if they were awake. “The net effect is that sleep permits infants to retain more of the redundant details of a learning experience,” says experimental psychologist Rebecca Gómez of the University of Arizona. By the time they are two years old, “we think that children have the brain development that supports an active process of consolidation,” she adds.

At that age, adequate nighttime sleep becomes critical for learning. Toddlers who sleep less than 10 hours display lasting cognitive deficits, even if they catch up on sleep later in their development (Sleep, 30:1213-19, 2007). The effects are particularly strong in children with developmental disorders, who often suffer from sleep disruptions. “Kids with Down syndrome that are sleep-impaired look like they have very large differences in language,” says Jamie Edgin of the University of Arizona, who studies sleep and cognition in such children. When comparing children with Down syndrome who are sleep-deprived with those who sleep normally, she has observed a vocabulary difference of more than 190 words on language tests, even after controlling for behavioral differences.

Understanding the impact of sleep on memory could also help another at-risk group of learners at the other end of the age spectrum. Previous research has suggested that older adults don’t remember newly acquired motor skills as well as young adults do, perhaps because the posttraining stages of the learning process appear diminished. But neuroscientist Maria Korman and her colleagues at the University of Haifa in Israel recently demonstrated that a nap soon after learning can allow the elderly to retain procedural memories just as well as younger people (Neurosci Lett, 606:173-76, 2015). Korman hypothesizes that by shortening the interval between learning and consolidation, the nap prevents intervening experiences from weakening the memory before it solidifies. Overnight sleep might be even better, if the motor skills—in this case a complex sequence of finger and thumb movements on the nondominant hand—are taught late enough in the day, something she is testing now.

Optimizing the timing of sleep and training in the elderly takes advantage of something Korman sees as a positive side of growing old. “As we age, our neural system becomes more aware of the relevance of the task,” Korman says. Unlike young adults who solidify all the information they acquire throughout the day, older people consolidate “those experiences that were tagged by the brain as very important.”

Tests for older adults’ memory acuity are generating new findings about the relationship between sleep and memory at other ages as well. After learning at a conference about a memory test for cognitive impairment and dementia in older adults, neuroscientist Jeanne Duffy of Brigham and Women’s Hospital in Boston wondered if sleep could help strengthen the connection between names and faces. She and her colleagues found that young adults who slept overnight after learning a list of 20 names and faces showed a 12 percent increase in retention when tested 12 hours later compared with subjects who didn’t sleep between training and testing (Neurobiol Learn Mem, 126:31-38, 2015). The findings have “an immediate real-world application,” Duffy says, as they address a common memory concern among people of all ages.

A poll by the National Sleep Foundation found that adolescents have a deficit of nearly two hours of sleep per night during the school week compared with the weekend, suggesting the potential for serious learning impairments, according to Jared Saletin, a postdoctoral sleep researcher at Brown University. In fact, one study found that restricting 13- to 17-year-olds to six and a half hours of sleep a night for five nights reduced the information they absorbed in a school-like setting (J Adolesc Health, 47:523-25, 2010). However, other studies have suggested that four nights of just five hours of sleep didn’t impair 14- to 16-year-olds’ performance on tests of skills and vocabulary (Sleep Med, 12:170-78, 2011). A lack of consistency in study design and the ages of the subjects makes these conflicting results difficult to interpret, Gómez writes in a review, and much remains to be discovered about the true impact of sleep deficits on teenagers’ learning (Trans Issues in Psych Sci, 1:116-25, 2015).

Developing a fuller picture of what happens to memories during sleep—and how best to tweak sleep habits to aid the recall process—could benefit some of society’s most sleep-deprived members of every age. “We need to understand this role of sleep in memory because there is such potential for intervention,” Spencer says. “Now that we have a well-founded concept of what sleep can do for memory, it’s time to put it to the test.”

 

Associations Between Sleep Duration Patterns and Behavioral/Cognitive Functioning at School Entry

Évelyne Touchette, MPs, Dominique Petit, PhD, Jean R. Séguin, PhD, Michel Boivin, PhD, Richard E. Tremblay, PhD, and Jacques Y. Montplaisir, MD, CRCP(c), PhD
Sleep. 2007 Sep 1; 30(9): 1213–1219.     http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1978413/

The aim of the study was to investigate the associations between longitudinal sleep duration patterns and behavioral/cognitive functioning at school entry.
Design, Setting, and Participants: Hyperactivity-impulsivity (HI), inattention, and daytime sleepiness scores were measured by questionnaire at 6 years of age in a sample of births from 1997 to 1998 in a Canadian province (N=1492). The Peabody Picture Vocabulary Test – Revised (PPVT-R) was administered at 5 years of age and the Block Design subtest (WISC-III) was administered at 6 years of age. Sleep duration was reported yearly by the children’s mothers from age 2.5 to 6 years. A group-based semiparametric mixture model was used to estimate developmental patterns of sleep duration. The relationships between sleep duration patterns and both behavioral items and neurodevelopmental tasks were tested using weighted multivariate logistic regression models to control for potentially confounding psychosocial factors.

Results: Four sleep duration patterns were identified: short persistent (6.0%), short increasing (4.8%), 10-hour persistent (50.3%), and 11-hour persistent (38.9%). The association of short sleep duration patterns with high HI scores (P=0.001), low PPVT-R performance (P=0.002), and low Block Design subtest performance (P=0.004) remained significant after adjusting for potentially confounding variables.

Conclusions: Shortened sleep duration, especially before the age of 41 months, is associated with externalizing problems such as HI and lower cognitive performance on neurodevelopmental tests. Results highlight the importance of giving a child the opportunity to sleep at least 10 hours per night throughout early childhood.

Citation: Touchette E; Petit D; Séguin JR; Boivin M; Tremblay RE; Montplaisir JY. Associations between sleep duration patterns and behavioral/cognitive functioning at school entry. SLEEP 2007;30(9):1213-1219.
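
For readers unfamiliar with the statistical machinery named in the abstract, here is a hedged sketch of a weighted logistic regression of that general kind, run on simulated data. The cohort size matches the paper, but the covariates, weights, and effect sizes below are invented for illustration.

```python
# Simulated-data sketch of a weighted logistic regression linking a
# short-sleep pattern to the odds of a high hyperactivity-impulsivity
# (HI) score. All coefficients here are assumptions, not study results.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1492                                      # cohort size from the study
short_sleep = rng.binomial(1, 0.11, n)        # ~11% short-sleep patterns (assumed)
psychosocial = rng.standard_normal(n)         # stand-in confounding covariate
logit = -1.5 + 1.0 * short_sleep + 0.4 * psychosocial
high_hi = rng.binomial(1, 1 / (1 + np.exp(-logit)))
weights = rng.integers(1, 4, n)               # toy survey-style frequency weights

X = sm.add_constant(np.column_stack([short_sleep, psychosocial]))
fit = sm.GLM(high_hi, X, family=sm.families.Binomial(),
             freq_weights=weights).fit()
print(fit.summary())  # exp(coef) on short_sleep approximates the adjusted odds ratio
```

The real analysis additionally estimated the sleep-duration trajectories themselves with a group-based mixture model, a step this sketch replaces with a ready-made indicator variable.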

Nap it or leave it in the elderly: A nap after practice relaxes age-related limitations in procedural memory consolidation

M. Korman, Y. Dagan, A. Karni

 Highlights

•   Elderly individuals gain in practicing a new motor task as do young adults.
•   But elderly individuals fail to show delayed (offline) memory related gains.
•   A post-training nap uncovered robust offline skill consolidation in the elderly.
•   Consolidation processes are preserved in aging but are more stringently controlled.
•   Sleep scheduling can relax age related constraints on mnemonic processes.       
Using a training protocol that effectively induces procedural memory consolidation (PMC) in young adults, we show that older adults are good learners, robustly improving their motor performance during training. However, performance declined over the day, and overnight ‘offline’ consolidation phase performance gains were under-expressed. A post-training nap countered these deficits. PMC processes are preserved but under-engaged in the elderly; sleep can relax some of the age-related constraints on long-term plasticity.

 

A new face of sleep: The impact of post-learning sleep on recognition memory for face-name associations

Leonie Maurer, Kirsi-Marja Zitting, Kieran Elliott, Charles A. Czeisler, Joseph M. Ronda, Jeanne F. Duffy

Highlights

•   We tested whether sleep influences the accuracy of remembering face-name associations.
•   Presentation and recall were 12 h apart, once with an 8-h sleep opportunity and once without.
•   More correct face-name pairs were recalled when there was a sleep opportunity.
•   Neither sleep duration nor sleep stage was associated with the improvement between conditions.

Sleep has been demonstrated to improve consolidation of many types of new memories. However, few prior studies have examined how sleep impacts learning of face-name associations. The recognition of a new face along with the associated name is an important human cognitive skill. Here we investigated whether post-presentation sleep impacts recognition memory of new face-name associations in healthy adults.

Fourteen participants were tested twice. Each time, they were presented 20 photos of faces with a corresponding name. Twelve hours later, they were shown each face twice, once with the correct and once with an incorrect name, and asked if each face-name combination was correct and to rate their confidence. In one condition the 12-h interval between presentation and recall included an 8-h nighttime sleep opportunity (“Sleep”), while in the other condition they remained awake (“Wake”).

There were more correct and highly confident correct responses when the interval between presentation and recall included a sleep opportunity, although improvement between the “Wake” and “Sleep” conditions was not related to duration of sleep or any sleep stage.

These data suggest that a nighttime sleep opportunity improves the ability to correctly recognize face-name associations. Further studies investigating the mechanism of this improvement are important, as this finding has implications for individuals with sleep disturbances and/or memory impairments.

Selye’s Riddle solved

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Mathematicians Solve 78-year-old Mystery

Mathematicians developed a solution to Selye’s riddle, which has puzzled scientists for almost 80 years.

In previous research, it was suggested that adaptation of an animal to different factors looks like spending of one resource, and that the animal dies when this resource is exhausted. In 1938, Hans Selye introduced “adaptation energy” and found strong experimental arguments in favor of this hypothesis. However, this term has caused much debate because, as it cannot be measured as a physical quantity, adaptation energy is not strictly energy.

 

Evolution of adaptation mechanisms: Adaptation energy, stress, and oscillating death

Alexander N. Gorban, Tatiana A. Tyukina, Elena V. Smirnova, Lyudmila I. Pokidysheva

Highlights

•   We formalize Selye’s ideas about adaptation energy and dynamics of adaptation.
•   A hierarchy of dynamic models of adaptation is developed.
•   Adaptation energy is considered as an internal coordinate on the ‘dominant path’ in the model of adaptation.
•   The optimal distribution of resources for neutralization of harmful factors is studied.
•   The phenomena of ‘oscillating death’ and ‘oscillating remission’ are predicted.       

In previous research, it was suggested that adaptation of an animal to different factors looks like spending of one resource, and that the animal dies when this resource is exhausted.

In 1938, Selye proposed the notion of adaptation energy and published ‘Experimental evidence supporting the conception of adaptation energy.’ Adaptation of an animal to different factors appears as the spending of one resource. Adaptation energy is a hypothetical extensive quantity spent for adaptation. This term causes much debate when one takes it literally, as a physical quantity, i.e. a sort of energy. The controversial points of view impede the systematic use of the notion of adaptation energy despite experimental evidence. Nevertheless, the response to many harmful factors often has general non-specific form and we suggest that the mechanisms of physiological adaptation admit a very general and nonspecific description.

We aim to demonstrate that Selye’s adaptation energy is the cornerstone of the top-down approach to modelling of non-specific adaptation processes. We analyze Selye’s axioms of adaptation energy together with Goldstone’s modifications and propose a series of models for interpretation of these axioms. Adaptation energy is considered as an internal coordinate on the ‘dominant path’ in the model of adaptation. The phenomena of ‘oscillating death’ and ‘oscillating remission’ are predicted on the base of the dynamical models of adaptation. Natural selection plays a key role in the evolution of mechanisms of physiological adaptation. We use the fitness optimization approach to study the distribution of resources for neutralization of harmful factors during adaptation to a multifactor environment, and analyze the optimal strategies for different systems of factors.

In this work, an international team of researchers, led by Professor Alexander N. Gorban from the University of Leicester, have developed a solution to Selye’s riddle, which has puzzled scientists for almost 80 years.

Alexander N. Gorban, Professor of Applied Mathematics in the Department of Mathematics at the University of Leicester, said: “Nobody can measure adaptation energy directly, indeed, but it can be understood by its place already in simple models. In this work, we develop a hierarchy of top-down models following Selye’s findings and further developments. We trust Selye’s intuition and experiments and use the notion of adaptation energy as a cornerstone in a system of models. We provide a ‘thermodynamic-like’ theory of organism resilience that, just like classical thermodynamics, allows for economics metaphors, such as cost and bankruptcy and, more importantly, is largely independent of a detailed mechanistic explanation of what is ‘going on underneath’.”

Adaptation energy is considered as an internal coordinate on the “dominant path” in the model of adaptation. The phenomena of “oscillating death” and “oscillating remission,” which have been observed in clinic for a long time, are predicted on the basis of the dynamical models of adaptation. The models, based on Selye’s idea of adaptation energy, demonstrate that the oscillating remission and oscillating death do not need exogenous reasons. The developed theory of adaptation to various factors gives the instrument for the early anticipation of crises.
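
A toy dynamical system can make the flavor of such models concrete. The sketch below is not Gorban et al.’s actual system of equations; it simply books a single “adaptation energy” reserve that pays for threshold-triggered repair and is slowly replenished, which is already enough to produce sustained damage oscillations, or collapse when the load outruns the reserve.

```python
# Toy "adaptation energy" bookkeeping (assumed rates throughout).
dt, T = 0.01, 60.0
reserve, damage = 1.0, 0.0
load, replenish = 0.12, 0.05          # harm accrual vs. reserve recovery
repair_rate, repair_cost, threshold = 1.0, 0.3, 0.2

trace = []
for step in range(int(T / dt)):
    if damage > threshold and reserve > 0.0:      # defense engages above a threshold
        damage = max(0.0, damage - repair_rate * dt)
        reserve -= repair_cost * dt               # repair is paid from the reserve
    damage += load * dt                           # the harmful factor keeps acting
    reserve = min(1.0, reserve + replenish * dt)  # slow replenishment
    trace.append(damage)
    if reserve <= 0.0:
        print(f"reserve exhausted at t = {step * dt:.1f} (death in the model)")
        break
else:
    peaks = sum(1 for i in range(1, len(trace) - 1)
                if trace[i - 1] < trace[i] >= trace[i + 1])
    print(f"survived to t = {T:.0f} with {peaks} damage oscillations")
```

With the rates above the run survives and the damage level oscillates around the repair threshold, a cartoon of “oscillating remission”; raising `load` to about 0.4 exhausts the reserve partway through, the corresponding cartoon of exhaustion-driven collapse.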

Professor Alessandro Giuliani from Istituto Superiore di Sanità in Rome commented on the work, saying: “Gorban and his colleagues dare to make science adopting the thermodynamics style: they look for powerful principles endowed with predictive ability in the real world before knowing the microscopic details. This is, in my opinion, the only possible way out from the actual repeatability crisis of mainstream biology, where a fantastic knowledge of the details totally fails to predict anything outside the test tube.”

Citation: Alexander N. Gorban, Tatiana A. Tyukina, Elena V. Smirnova, Lyudmila I. Pokidysheva. Evolution of adaptation mechanisms: Adaptation energy, stress, and oscillating death. Journal of Theoretical Biology, 2016; DOI: 10.1016/j.jtbi.2015.12.017. See also: Voosen P. (2015) Amid a Sea of False Findings, NIH Tries Reform. The Chronicle of Higher Education.


How do we address medical diagnostic errors?

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

It is my observation over the course of 50 years in medicine that there is no simple resolution to this frequently identified problem, which has grown more serious with the implementation of hospital-wide and system-wide health information technology.  Hospital staffing of nurses and physicians has been reduced and the patient load increased, even as the technology has advanced significantly.  Despite this investment in technology and manpower, hospital staff have become dependent on the medical record, and a large portion of their time is drawn away from patient care.  That is not a substantial improvement over the last decade.  The problem has been a lack of focus on the correct design of these systems: they should be interactive, and they should incorporate anticipatory logic.  Instead, they are not much more than large electronic filing cabinets. The large sources of information in laboratory, radiology and pharmacy are siloed.  When one drives a car, one does not need to look under the hood, a sentiment that has been expressed by a real expert.
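
To make “anticipatory logic” concrete, here is a minimal sketch of the kind of rule such a system could run unprompted. The rule, its threshold, and the data are illustrative assumptions, not a validated clinical algorithm.

```python
# Illustrative anticipatory rule: flag a rising creatinine trend before
# anyone asks for it. Threshold and rule are assumptions for the sketch.
from datetime import date

creatinine = [                 # (date, mg/dL) for one hypothetical patient
    (date(2016, 3, 1), 0.9),
    (date(2016, 3, 3), 1.2),
    (date(2016, 3, 5), 1.6),
]

def rising_creatinine(series, rel_increase=0.5):
    """True when creatinine rose >= 50% over the stored window
    (a rough stand-in for an acute kidney injury screen)."""
    baseline, latest = series[0][1], series[-1][1]
    return (latest - baseline) / baseline >= rel_increase

if rising_creatinine(creatinine):
    print("anticipatory flag: creatinine rising; consider AKI work-up")
```

An interactive record would surface this flag in the clinician’s workflow, pulling laboratory, radiology and pharmacy data out of their silos instead of waiting to be opened like a filing cabinet.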

 

Diagnostic Errors Get the Attention of the Institute of Medicine, Reinforcing Efforts by Nation’s Clinical Pathology Laboratory Scientists to Improve Patient Safety

March 25, 2016         DARK DAILY

 

Along with its assessment of the rate of errors in diagnosis, the IOM has a plan to improve, but will doctors accept the IOM’s advice, or continue business as usual?

Diagnostic errors in the American healthcare system are a problem that is now on the radar screen of policymakers at the Institute of Medicine (IOM). Pathologists and clinical laboratory professionals will welcome this development, because recommendations from the IOM carry weight with Congress.

Thus, should the IOM develop specific action items intended to reduce medical errors, these suggestions are likely to involve more effective use of medical laboratory tests by physicians, and Congress may eventually write such recommendations into future healthcare legislation.

The Institute of Medicine is a division of the National Academies of Sciences, Engineering, and Medicine. The IOM recently convened a committee that released a list of recommendations to address the problem of diagnostic errors in medicine. Those recommendations, however, are running up against ingrained mindsets and overconfidence on the part of physicians who are reluctant to include decision-support technology in the diagnostic process.

Misdiagnoses in healthcare have led to tragic consequences. Over the years, the IOM has worked to increase the public’s awareness of the problem of misdiagnosis. It has also encouraged health information technology (HIT) developers to provide clinicians with the decision-support systems they need to prevent misdiagnosis from happening in the first place. But getting physicians to use the new tools has proven a challenge.

IOM Dedicated to Improving Diagnostic Accuracy 

Preventing diagnostic errors has been a primary target of medical reform for many years. A 1999 report by the U.S. Institute of Medicine (IOM) titled “To Err Is Human: Building a Safer Health System” first drew attention to the rate of preventable medical errors in the U.S.

Since then, much has been written about the problem of diagnostic inaccuracy. Although many provider organizations are working to improve diagnostic accuracy, progress has been slow.

Edward Hoffer, MD (above), FACP, FACC, FACMI, is a faculty member at the Massachusetts General Hospital Laboratory of Computer Science. He told Modern Healthcare that getting doctors to use diagnostic software is a hard sell. “The main problem we face is trying to convince physicians that they actually need to use it,” he concluded. (Image copyright: Massachusetts General Hospital, Laboratory of Computer Science.)

Solutions for Preventing Errors Face Many Challenges

From the first evaluation of the patient, to the clinical laboratory, to physician follow-up, errors can occur throughout the chain of care. And a culture that discourages the reporting of medical errors can make it very difficult for healthcare organizations to resolve these issues.

For example, an investigation by the Milwaukee Journal Sentinel uncovered what they say is a “secretive system [that] hides [medical] lab errors from the public and puts patients at risk.”

To make matters worse, according to the Journal Sentinel article, federal agencies that are tasked with oversight of the more than 35,000 clinical laboratories in the U.S. (such as the Joint Commission) are overloaded and thus may be missing many violations that would lead to sanctions or outright loss of accreditation.

The Milwaukee Journal Sentinel stated that, “even when serious violations are identified, offending labs are rarely sanctioned except in the most extreme cases.” And that “in 2013, just 90 sanctions were issued—accounting for not even 1% of the 35,000 labs that do high-level lab testing in the United States.”

Additionally, there are different types of misdiagnoses, which further complicates matters. A study by three professors at the Dartmouth Center for Health Care Delivery Science investigated the cost of what they termed “silent misdiagnoses.”

According to a Dartmouth Now article outlining the study, the disparity between “the treatments patients want and what doctors think they want” accounts for a great number of misdiagnoses. The study’s authors call these misdiagnoses “silent” because they are rarely reported or recorded.

Albert G. Mulley, Jr., MD, MPP (above), is Managing Director, Global Health Care Delivery Science, The Dartmouth Institute for Health Policy and Clinical Practice; Professor of Medicine, Geisel School of Medicine at Dartmouth; and the lead author of the Dartmouth study. Mulley says that listening to the patient is more important than ever. “Today, the rise in treatment options makes this even more critical, not only to reach a correct medical diagnosis but also to understand fully the patients’ preferences, and reduce the huge waste in time and money that comes from delivery of services that patients often neither want nor need.” (Photo copyright: Dartmouth Now.)

IOM Committee on Diagnostic Error in Healthcare Releases Recommendations

In September 2015, the IOM released the latest in a series of reports on medical errors and diagnostic problems that goes back 17 years. The series started in 1999 with “To Err Is Human: Building a Safer Health System” and was followed in 2001 by “Crossing the Quality Chasm: A New Health System for the 21st Century.”

This newest report is titled “Improving Diagnosis in Health Care.” It comes from the National Academies’ Committee on Diagnostic Error in Healthcare. The study lists recommendations that the IOM says will help reduce the rate of diagnostic errors. Included are:

• Facilitate more effective teamwork in the diagnostic process;

• Enhance education in the healthcare community on the topic of improving diagnostics;

• Incorporate diagnostic support into electronic health records (EHRs), and make EHRs fully interoperable;

• Set up systems to identify, learn from, and reduce diagnostic errors;

• Establish a culture that encourages investigation of mistakes;

• Develop a collaborative reporting system that involves clinicians, government agencies, and liability insurers that allows everyone to learn from diagnostic errors and near misses;

• Design a payment system that supports correct diagnosis; and

• Provide funding for research into how to improve the diagnostic process.

Video: the IOM’s public release, in September 2015, of the Committee on Diagnostic Error in Healthcare’s report, “Improving Diagnosis in Health Care.” A full text of the report can be downloaded at http://iom.nationalacademies.org/reports/2015/improving-diagnosis-in-healthcare.

Technology Exists to Support Accurate Diagnoses

One area of development that offers real hope for lowering the rate of misdiagnosis is medical decision-support software—specifically in clinical informatics, where the integration of health data with health information technology (HIT) supports physician decision making at the point of care.

That software may be part of an organization’s EHR system or a standalone program dedicated to reducing diagnostic errors. For example, a research study published in the Journal of Clinical Bioinformatics suggests that health information technology has an important role to play in improving diagnostic accuracy.
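As a concrete illustration of the arithmetic such decision-support software automates at the point of care, here is a minimal sketch of the standard Bayesian pre-test/post-test probability update for a yes/no diagnostic test. The sensitivity, specificity, and pre-test probability below are hypothetical placeholders, not values from any of the cited studies.

    # Bayes' theorem for a dichotomous diagnostic test; the numbers in
    # the example are hypothetical placeholders.
    def post_test_probability(pretest: float, sensitivity: float,
                              specificity: float, positive: bool) -> float:
        """Probability of disease after a positive or negative result."""
        if positive:
            true_pos = sensitivity * pretest
            false_pos = (1.0 - specificity) * (1.0 - pretest)
            return true_pos / (true_pos + false_pos)
        false_neg = (1.0 - sensitivity) * pretest
        true_neg = specificity * (1.0 - pretest)
        return false_neg / (false_neg + true_neg)

    # A disease with 5% pre-test probability, tested with a 90%-sensitive,
    # 95%-specific assay:
    print(post_test_probability(0.05, 0.90, 0.95, positive=True))   # ~0.49
    print(post_test_probability(0.05, 0.90, 0.95, positive=False))  # ~0.006

Surfacing this at the point of care speaks directly to the overconfidence problem noted below: a positive result on a good test can still leave the diagnosis closer to a coin flip than to certainty when the pre-test probability is low.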

The challenge is that, although the technology currently exists, diagnostic error rates remain high. A research study published in JAMA Internal Medicine in 2013 may explain why. It states that overconfidence could prevent some physicians from re-evaluating questionable diagnoses.

In a Modern Healthcare article, Edward Hoffer, MD, FACP, FACC, FACMI, a faculty member at the Massachusetts General Hospital Laboratory of Computer Science, said that getting doctors to use diagnostic software is a hard sell. “The main problem we face is trying to convince physicians that they actually need to use it,” he concluded.

All of the building blocks for improving the diagnostic process are either in place or, as in the case of EHR interoperability, in the works. Clinical laboratory personnel are uniquely positioned to assist physicians in improving diagnosis through communication and the careful use of technology.

—Dava Stewart

Related Information:

To Err Is Human

Weak Oversight Allows Lab Failures to Put Patients at Risk  

Dartmouth Study: ‘Silent’ Misdiagnoses by Doctors Are Common, and Come at Great Cost 

Press Release: Doctors ‘Silent’ Misdiagnoses Cost Patients Dearly . . . And US Health Care Billions

Improving Diagnosis in Health Care

Clinical Decision Support Systems for Improving Diagnostic Accuracy and Achieving Precision Medicine 

Using Software To Avoid Misdiagnoses 

In Conversation with…Mark L. Graber, MD 

 

The claim that all of the building blocks for improving the diagnostic process are either in place or, as in the case of EHR interoperability, in the works could not be further from reality. There is no minimization of keystrokes. There is no anticipatory logic. There is no automated electronic comparison of medications and diagnoses. There is no check of the consistency of laboratory results against the probability of a diagnosis, and no automated identification of the tests that would clarify the situation. A sketch of what such a consistency check might look like follows below.
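As a sketch of the kind of anticipatory logic described above, consider a hypothetical rule set that checks whether charted laboratory results are consistent with a recorded diagnosis and, when they are not, suggests a clarifying test. The diagnosis names, analytes, and cutoffs here are invented for illustration and are not drawn from any deployed system.

    # Hypothetical "anticipatory logic": flag charted diagnoses that
    # conflict with laboratory results and suggest a clarifying test.
    # All diagnoses, analytes, and cutoffs are invented examples.
    RULES = {
        "iron deficiency anemia": {
            "expect_low": {"ferritin": 30.0},    # ng/mL, illustrative
            "clarifying_test": "transferrin saturation",
        },
        "diabetes mellitus": {
            "expect_high": {"hba1c": 6.5},       # %, illustrative
            "clarifying_test": "fasting plasma glucose",
        },
    }

    def check_consistency(diagnosis, labs):
        """Return human-readable flags when labs contradict the diagnosis."""
        flags = []
        rule = RULES.get(diagnosis.lower())
        if rule is None:
            return flags
        for analyte, cutoff in rule.get("expect_low", {}).items():
            if analyte in labs and labs[analyte] >= cutoff:
                flags.append(f"{analyte}={labs[analyte]} is not low; "
                             f"consider {rule['clarifying_test']}")
        for analyte, cutoff in rule.get("expect_high", {}).items():
            if analyte in labs and labs[analyte] < cutoff:
                flags.append(f"{analyte}={labs[analyte]} is not elevated; "
                             f"consider {rule['clarifying_test']}")
        return flags

    # A ferritin of 85 ng/mL argues against the charted diagnosis:
    print(check_consistency("Iron deficiency anemia", {"ferritin": 85.0}))

Even a rule table this crude would do what the paragraph above asks for: it reacts to the chart instead of filing it.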

Art Therapy

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

University Of Houston Brain Study Explores Intersection Of Art And Science

The theory that the brain has a positive response to art is not new to science. But a researcher at the University of Houston is using a different approach to test that belief.

This is your brain. This is your brain on art. Any questions? 🍳🤗🎨🗝 yes in fact this raises a TON of questions!

Jennifer Schwartz

 

When I’m at an art museum, I never know what piece will catch my eye.

On this particular visit to the University of Houston’s Blaffer Art Museum, it’s an art installation by Matthew Buckingham. It consists of a 16-millimeter film projector on a pedestal, projecting a flickering black and white image of the numbers “1720” on a small screen suspended in mid-air. The music coming from the projector is a baroque flute sonata by Bach.

Matthew Buckingham’s exhibit, “1720” (2009) is a continuous 16 mm film projection of the date on a suspended screen. A movement from Bach’s Sonata in G for Flute and Continuo plays as the soundtrack accompanied by the flickering sound of the film reel.

Audio SOURCE: http://www.houstonpublicmedia.org/wp-content/uploads/2016/01/15141909/BRAIN-ON-ART-FEATURE-MP3.mp3

Article SOURCE: http://www.houstonpublicmedia.org/articles/news/2016/01/20/134348/university-of-houston-brain-study-explores-intersection-of-art-and-science/

 

So, if someone could look into my head at this moment and see what’s going on in my brain, would they be able to see that I like what I’m looking at?

Dr. Jose Luis Contreras-Vidal (better known as “Pepe”) is in the process of finding out. The University of Houston College of Engineering professor is collecting neural data from thousands of people while they engage in creative activities, whether dancing, playing music, making art, or, in my case, viewing it.

“(The hypothesis is) that there will be brain patterns associated with aesthetic preference that are recruited when you perceive art and make a judgement about art,” Contreras-Vidal says.

Last October, three local artists – Dario Robleto, JoAnn Fleischhauer, and Lily Cox-Richard – took part in an event that allowed people to watch what was going on in their brains as they created art. The process involved fitting each artist with EEG caps, which look like swim caps with 64 electrodes attached. As they worked on their pieces, a screen on the wall showed their brain activity in blots of blue and yellow.

Contreras-Vidal at the Blaffer’s “Your Brain on Art” event in October.     Amy Bishop | Houston Public Media
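For a sense of what a live display like the one at the Blaffer event might compute, here is a minimal sketch; it is a guess at a generic approach, not the Houston team’s actual pipeline. It reduces one second of 64-channel EEG to a single alpha-band (8 to 12 Hz) power value per electrode, the kind of per-channel number a topographic display could render as blots of color.

    # Generic sketch (NOT the Houston team's pipeline): per-electrode
    # alpha-band power from a short window of 64-channel EEG.
    import numpy as np

    def alpha_band_power(window, fs=256.0):
        """window: (n_channels, n_samples) EEG segment.
        Returns one alpha-band (8-12 Hz) power value per channel."""
        n = window.shape[1]
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        spectrum = np.abs(np.fft.rfft(window, axis=1)) ** 2 / n
        band = (freqs >= 8.0) & (freqs <= 12.0)
        return spectrum[:, band].mean(axis=1)

    # Fake one-second segment from a 64-electrode cap sampled at 256 Hz.
    rng = np.random.default_rng(0)
    segment = rng.normal(size=(64, 256))
    print(alpha_band_power(segment).shape)  # (64,): one value per electrode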

To Cox-Richard, it’s a unique chance to help bridge the worlds of art and science.

“Being able to contribute and have it be a two-way street is part of what seemed like a really excellent opportunity for all of us to push this conversation forward,” she says.

It was just one of a series of similar experiments Contreras-Vidal has launched. The project is being made possible by funding from the National Science Foundation to advance science and health by studying the brain in action. Contreras-Vidal explains that, even though art is used as a form of therapy, there’s still a mystery surrounding what’s taking place up there to make it therapeutic.

While there have already been studies showing how creativity influences the brain, this one is different. What separates it from others is the fact that the brain is being monitored outside of the lab, such as while walking through a museum, creating art in a studio, or even dancing onstage.

“It’s as real as it gets,” Contreras-Vidal says. “We are not showing you pictures inside a scanner, which is a very different environment.”

Which brings me back to that art installation of the film projector at the Blaffer. While staring at it, I wonder, “What does my brain activity look like right now?”

I decided to find out. In the second part of this story, we’ll pick up with my EEG gallery stroll, followed by a visit to Contreras-Vidal’s laboratory to get the results.

Image SOURCE: http://www.houstonpublicmedia.org/wp-content/uploads/2016/01/15173424/IMG_1276.jpg

As Houston Public Media Arts and Culture reporter, Amy Bishop spotlights Houston’s dynamic creative community. Her stories have brought national exposure to the local arts scene through NPR programs such as Here and Now.