
Brain and behavior

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Behavior Brief

A round-up of recent discoveries in behavior research

By Catherine Offord | March 25, 2016

http://www.the-scientist.com/?articles.view/articleNo/45665/title/Behavior-Brief

Manta in the mirror

http://www.the-scientist.com/images/News/March2016/mantamain.jpg

The mirror self-recognition (MSR) test is commonly used to evaluate nonhuman animals’ self-awareness, and has been reportedly passed by several mammals and birds including apes, elephants, dolphins, and magpies. According to a study published earlier this month (March 11) in The Journal of Ethology, there’s now evidence to add manta rays to that list.

Contingency checking and self-directed behaviors in giant manta rays: Do elasmobranchs have self-awareness?

Csilla Ari, Dominic P. D’Agostino. Journal of Ethology, 11 March 2016: 1-8. http://link.springer.com/article/10.1007%2Fs10164-016-0462-z  doi:10.1007/s10164-016-0462-z

Elaborate cognitive skills arose independently in different taxonomic groups. Self-recognition is conventionally identified by the understanding that one’s own mirror reflection does not represent another individual but oneself, which has never been proven in any elasmobranch species to date. Manta rays have a high encephalization quotient, similar to those species that have passed the mirror self-recognition test, and possess the largest brain of all fish species. In this study, mirror exposure experiments were conducted on two captive giant manta rays to document their response to their mirror image. The manta rays did not show signs of social interaction with their mirror image. However, frequent unusual and repetitive movements in front of the mirror suggested contingency checking; in addition, unusual self-directed behaviors could be identified when the manta rays were exposed to the mirror. The present study shows evidence for behavioral responses to a mirror that are prerequisites of self-awareness and that have been used to confirm self-recognition in apes.

X-RAY MAG: How did you become interested in studying the behavior of manta rays?

CA: I knew that I wanted to dedicate my life to study and protect marine life since I was 13 years old. It was during a family vacation in Croatia when I first had the chance to try scuba diving. I was so mesmerized by the experience that when I surfaced I decided to try to find out more about this magical world. I became especially fascinated by the majestic and mysterious manta rays after watching a nature documentary, soon after this first dive. It described how little we know about them and how vulnerable they are.

But growing up in Hungary, a landlocked country, I did not have much option to pursue my dream as a marine biologist, so I got my master’s degree in zoology and my doctorate in neurobiology, while volunteering at oceanography institutes in different countries during the summers. During my PhD studies, I worked on the neuroanatomy and neurohistology of several shark and ray species, including mobulids (mantas and mobulas). During these years, I had the chance to explore the brain structures of mantas and mobulas, which reflected some very unique and surprising features. It was the unusual enlargement of some of their brain parts that got me interested in focusing on their behavior.

“Manta rays are likely the first fish species found to exhibit self-awareness, which implies higher order brain function, as well as sophisticated cognitive and social skills,” study coauthor Csilla Ari told X-Ray Mag.

COGNITION AND SELF AWARENESS IN MANTA RAYS   

Observing two rays in a tank at the Atlantis Aquarium in the Bahamas, the researchers noticed that the animals changed their behavior when a mirror was placed on one of the walls. New behaviors included apparently checking out their fins (see this video) and blowing bubbles at their reflections.     https://youtu.be/LQ1KErB_2oU

X-RAY MAG: What were the findings that caused you to conclude that these animals are using cognition?

CA: Animal cognition, often referred to as animal intelligence, is an exciting scientific field that attempts to describe the mental capacity of an animal. It developed from the field of comparative psychology and it includes exciting research questions, such as perception, attention, selective learning, memory, spatial cognition, tool use, problem solving or consciousness.

There are no easy ways to test these on manta rays, but I found a widely-used and well-established test that can give us insight on their cognitive abilities. The mirror self-recognition (MSR) test is considered to be a reliable behavioral index to show the animal’s ability for self-recognition/self-awareness. Recognizing oneself in a mirror is a very rare capacity among animals. Only a few, large-brained species have passed this test so far, including Asian elephants, bottlenose dolphins and great apes, but no fish species so far.

So, employing a protocol adapted from primates and bottlenose dolphin MSR studies, I exposed captive manta rays to a large mirror and recorded their behavior. The manta rays showed significantly higher frequency of repetitive behavior, such as circling at the mirror or high frequency cephalic fin movements when the mirror was placed in the tank. Contingency checking and self-directed behavior included body turns into a vertical direction, exposing the ventral side of the body to the mirror while staying visually oriented to the mirror. Most surprisingly, such self-directed behaviors were sometimes accompanied by bubble blowing in front of the mirror and sharp downward swims.

“This new discovery is incredibly important,” Marc Bekoff of the University of Colorado in Boulder, who was not involved in the study, told New Scientist. “It shows that we really need to expand the range of animals we study.”

But the MSR test’s developer, Gordon Gallup of the State University of New York at Albany, told New Scientist that the observed movements might reflect curious, rather than self-aware, behavior. “Humans, chimpanzees, and orangutans are the only species for which there is compelling, reproducible evidence for mirror self-recognition,” he said.

Manta rays are first fish to recognise themselves in a mirror  https://www.newscientist.com/article/2081640-manta-rays-are-first-fish-to-recognise-themselves-in-a-mirror

Manta Ray (Manta birostris) feeding on plankton in current, Sangalakki Island, Borneo

Manta ray hears the dinner bell    Norbert Wu/Minden Pictures/FLPA

Giant manta rays have been filmed checking out their reflections in a way that suggests they are self-aware.

Harmless but zippy

Rattlesnakes and other vipers are well-known for their lightning-quick bites, but nonvenomous snakes may be just as speedy, according to a study published this month (March 15) in Biology Letters.

Debunking the viper’s strike: harmless snakes kill a common assumption

David A. Penning, Baxter Sawvel, Brad R. Moon

“There’s this kind of pre-emptive discussion that [vipers] are faster,” study coauthor David Penning of the University of Louisiana at Lafayette told Smithsonian. But, he added, “as sexy as the topic sounds, there’s not that much research on it.”

To Scientists’ Surprise, Even Nonvenomous Snakes Can Strike at Ridiculous Speeds  By Marcus Woo

The Texas rat snake was just as much of a speed demon as deadly vipers, challenging long-held notions about snake adaptations

http://www.smithsonianmag.com/science-nature/scientists-surprise-even-nonvenomous-snakes-can-strike-ridiculous-speeds-180958452

Texas Rat Snake


To put the assumption to the test, Penning and his colleagues used a high-speed camera to film strikes from three snake species—the western cottonmouth and the western diamond-backed rattlesnake (both vipers), and a relatively harmless Texas rat snake that kills its prey using constriction.

When a snake strikes, it literally moves faster than the blink of an eye, whipping its head forward so quickly that it can experience accelerations of more than 20 Gs. “It’s the lynchpin of their strategy as predators,” says Rulon Clark at San Diego State University. “Natural selection has optimized a series of adaptations around striking and using venom that really helps them be effective predators.”

When Penning and his colleagues compared strike speeds across the three species, they found that at least one nonvenomous species was just as quick as the vipers. The results hint that serpents’ need for speed may be much more widespread than thought, which raises questions about snake evolution and physiology. The team put each snake inside a container, inserted a stuffed glove on the end of a stick, and waved the glove around until the animal struck, recording the whole thing with a high-speed camera. In all, they tested 14 rat snakes, 6 cottonmouths and 12 rattlesnakes, recording several strikes for each individual.

The recordings revealed that all the snakes were speed demons, the team reports this week in Biology Letters. The highest head acceleration—279 meters per second squared, or nearly 29 g—did indeed come from a rattlesnake, but one of the rat snakes followed close behind, accelerating its head at 274 meters per second squared. That’s lightning-quick: a car would need to accelerate at about 27 meters per second squared to go from 0 to 60 mph in just one second, and even a Formula One race car falls well short of that.
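
For a quick sanity check on these figures, the conversion from meters per second squared to g-units, and the car comparison, can be reproduced in a few lines of Python; the accelerations are the ones quoted above, and the 0-60 mph comparison is purely illustrative.

```python
# Quick check of the strike accelerations quoted above.
G = 9.81           # standard gravity, m/s^2
MPS_60MPH = 26.82  # 60 mph expressed in m/s

for label, accel in [("rattlesnake", 279.0), ("Texas rat snake", 274.0)]:
    print(f"{label}: {accel:.0f} m/s^2 = {accel / G:.1f} g")

# A car that went from 0 to 60 mph in exactly one second would average
# ~26.8 m/s^2 -- roughly a tenth of the snakes' peak head acceleration.
print(f"0-60 mph in 1 s corresponds to {MPS_60MPH / 1.0:.1f} m/s^2 on average")
```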

“I was really surprised, because this comparison hadn’t been made before,” Rulon Clark of San Diego State University, who was not involved in the work, told Smithsonian. “It’s not that the vipers are slow, it’s that this very high-speed striking ability is something that seems common to a lot of snake species—or a wider array than people might’ve expected.”

Penning told Discover Magazine that the results make sense, since even nonvenomous snakes have to catch their food. “Prey are not passively waiting to be eaten by snakes,” he said.

Even Harmless Snakes Strike at Deadly Speed

Rather than offering the snakes some sacrificial prey animals, the researchers baited the snakes into striking in self-defense. They used a stuffed glove on a stick. The glove would move around the snake until the animal realized the glove was “clearly not going away,” Penning says, and struck at it. High-speed cameras and mirrors captured these attacks, which happened in the blink of an eye.

Early learning

Emerging evidence suggests that both humans and superb fairywrens begin learning the vocal patterns of their mothers even before birth. Now, a study published this month (March 16) in The Auk: Ornithological Advances indicates that the same is true of the red-backed fairywren, offering the possibility of studying the phenomenon across related species.

“Fairywrens have become a new model system in which to test new dimensions in the ontogeny of parent-offspring communication in vertebrates,” study coauthor Mark Hauber of New York City’s Hunter College said in a statement.

Following on their previous discovery of prenatal learning in superb fairywrens, the researchers compared the structure of nestling calls in the red-backed fairywren to the calls of the birds’ mothers. The team found that the more calls per hour that nestlings received when in the egg, the higher the similarity to maternal calls after hatching. (The number of calls received during the nestling period had no effect on call similarity.)

“Prenatal vocal learning has rarely been described in any animal, with the exception of humans and Australian superb fairywrens,” William Feeney of the University of Queensland, Australia, who was not involved in the work, said in the statement. “This result is exciting as it opens the door to investigating the taxonomic diversity of this ability, which could provide insights into why it evolves.”

Vocal imitation of mother’s calls by begging Red-backed Fairywren nestlings increases parental provisioning

Red-backed fairywren (Malurus melanocephalus)  J WELKIN

 

Prenatal imitative learning is an emerging research area in both human and non-human animals. Previous studies in Superb Fairywrens (Malurus cyaneus) showed that mothers are vocal tutors to their embryos and that better imitation of maternal calls yields more parental provisions after hatching. To begin to test if such adaptive behavior is widespread amongst Australasian wrens in Maluridae, we investigated maternal in-nest calling patterns in Red-backed Fairywrens (Malurus melanocephalus). We first compared the structure of maternal and nestling call elements. Next, we examined how in-nest calling behavior varied with parental behaviors and ecological contexts (i.e. prevalence of brood parasitism and nest predation). All Red-backed Fairywren females called to their eggs during incubation and they continued to do so for several days after hatching at a lower rate. Embryos that received more calls per hour during the incubation period (but not the nestling period) developed into hatchlings with higher call element similarity between mother and young. Female call rate was mostly independent of nest predation but in years with more interspecific brood parasitism, nestling element similarity was greater and female call rates tended to be higher. Playback experiments showed that broods with higher element similarity to their mother received more successful feeds. The potential for prenatal tutoring and imitative begging calls in 2 related fairywren taxa sets the stage for a full-scale comparative analysis of the evolution and function of these behaviors across Maluridae and in other vocal-learning lineages.

 

Traveling junk-foodies

White storks may be addicted to junk food, in some cases making trips of tens of kilometers to landfill sites during the breeding season, according to a study published earlier this month (March 15) in Movement Ecology.

“We found that the continuous availability of junk food from landfill has influenced nest use, daily travel distances, and foraging ranges,” study coauthor Aldina Franco of the University of East Anglia said in a statement. “Storks now rely on landfill sites for food—especially during the non-breeding season when other food sources are more scarce.”

Using GPS tracking, the researchers focused on 17 storks traveling between nesting and feeding areas over the course of a year. They found that most long-distance trips were made to landfill sites, and that “having a nest close to a guaranteed food supply also means that the storks are less inclined to leave for the winter,” Franco explained in the statement. “They instead spend their non-breeding season defending their highly desirable nest locations.”

“It’s clear migratory behaviors are quite plastic, in that the [storks] are adaptable and can change quickly,” Andrew Farnsworth of the Cornell Lab of Ornithology, who was not involved in the work, told National Geographic. He added that the new, detailed dataset will help scientists “consider how such changes in behavior may affect the future population of these birds.”

Are white storks addicted to junk food? Impacts of landfill use on the movement and behaviour of resident white storks (Ciconia ciconia) from a partially migratory population

Nathalie I. Gilbert, Ricardo A. Correia, João Paulo Silva, …, Jenny A. Gill and Aldina M. A. Franco

Movement Ecology 2016; 4:7      http://dx.doi.org/10.1186/s40462-016-0070-0

The migratory patterns of animals are changing in response to global environmental change with many species forming resident populations in areas where they were once migratory. The white stork (Ciconia ciconia) was wholly migratory in Europe but recently guaranteed, year-round food from landfill sites has facilitated the establishment of resident populations in Iberia. In this study 17 resident white storks were fitted with GPS/GSM data loggers (including accelerometer) and tracked for 9.1 ± 3.7 months to quantify the extent and consistency of landfill attendance by individuals during the non-breeding and breeding seasons and to assess the influence of landfill use on daily distances travelled, percentage of GPS fixes spent foraging and non-landfill foraging ranges.

Results: Resident white storks used landfill more during non-breeding (20.1 % ± 2.3 of foraging GPS fixes) than during breeding (14.9 % ± 2.2). Landfill attendance declined with increasing distance between nest and landfill in both seasons. During non-breeding a large percentage of GPS fixes occurred on the nest throughout the day (27 % ± 3.0 of fixes) in the majority of tagged storks. This study provides first confirmation of year-round nest use by resident white storks. The percentage of GPS fixes on the nest was not influenced by the distance between nest and the landfill site. Storks travelled up to 48.2 km to visit landfills during non-breeding and a maximum of 28.1 km during breeding, notably further than previous estimates. Storks nesting close to landfill sites used landfill more and had smaller foraging ranges in non-landfill habitat indicating higher reliance on landfill. The majority of non-landfill foraging occurred around the nest and long distance trips were made specifically to visit landfill.

Conclusions: The continuous availability of food resources on landfill has facilitated year-round nest use in white storks and is influencing their home ranges and movement behaviour. White storks rely on landfill sites for foraging especially during the non-breeding season when other food resources are scarcer and this artificial food supplementation probably facilitated the establishment of resident populations. The closure of landfills, as required by EU Landfill Directives, will likely cause dramatic impacts on white stork populations.
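
The travel distances and foraging ranges reported in this abstract are derived from sequences of GPS fixes. As a minimal sketch of how such distances can be computed from logger data, the snippet below sums great-circle (haversine) distances between consecutive fixes; the coordinates are invented for illustration and are not from the study’s dataset.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Hypothetical sequence of fixes (lat, lon) for one stork over a day.
fixes = [(38.70, -9.15), (38.75, -9.05), (38.90, -8.90), (38.70, -9.15)]
daily_km = sum(haversine_km(*fixes[i], *fixes[i + 1]) for i in range(len(fixes) - 1))
print(f"Daily distance travelled: {daily_km:.1f} km")
```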

WEIRD & WILD   Junk Food-Loving Birds Diss Migration, Live on Landfill    By Brian Handwerk

Spain and Portugal’s white storks are forgoing their annual journeys to African wintering grounds, a new study says

http://news.nationalgeographic.com/2016/03/060315-storks-food-animals-science-urban-food/

You’ve heard of the staycation. Some white storks in Europe are now opting for the staygration. 

The big birds are skipping their annual trip to African wintering grounds to remain year-round in Spain and Portugal, a new study shows.

Why? They’ve developed an addiction to junk food at landfills.

“White storks used to be wholly migratory. Before the 1980s, there were no white storks staying in” Spain and Portugal, says study leader Aldina Franco, a conservation ecologist at the University of East Anglia in the U.K.

“During the 1980s, the first individuals started staying, and now we see those numbers increasing exponentially.” (Related: “Beloved Storks, Emblems of Fertility, Rebounding in France.”)

 

Unlikely allies

Israel’s barren Negev desert is home to striped hyenas and gray wolves—two large scavenger species with considerably overlapping diets. But although such conditions might be expected to create fierce competition, researchers in Israel and the U.S. have now presented evidence that—at least in some cases—these animals form alliances and may even hunt collaboratively for food. The findings were published last month (February 10) in Zoology in the Middle East.

Wolves and hyenas in the desert might “just need each other to survive, because food is so, so limited,” study coauthor Vladimir Dinets of the University of Tennessee in Knoxville told The Washington Post.

Collating observations made over the past two decades (including reports of overlapping paw prints, and sightings of hyenas among packs of wolves), the researchers note that the findings could reflect the behavior of a few, oddly behaving hyenas, or a more widespread commensal, or even cooperative, relationship between the species.

“Animal behavior is often more flexible than described in textbooks,” Dinets said in a press release. “When necessary, animals can abandon their usual strategies and learn something completely new and unexpected. It’s a very useful skill for people, too.”


Conduction, graphene, elements and light

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

New 2D material could upstage graphene   Mar 25, 2016

Can function as a conductor or semiconductor, is extremely stable, and uses lightweight, inexpensive, earth-abundant elements
http://www.kurzweilai.net/new-2d-material-could-upstage-graphene
The atoms in the new structure are arranged in a hexagonal pattern as in graphene, but that is where the similarity ends. The three elements forming the new material all have different sizes; the bonds connecting the atoms are also different. As a result, the sides of the hexagons formed by these atoms are unequal, unlike in graphene. (credit: Madhu Menon)

A new one-atom-thick flat material made up of silicon, boron, and nitrogen can function as a conductor or semiconductor (unlike graphene) and could upstage graphene and advance digital technology, say scientists at the University of Kentucky, Daimler in Germany, and the Institute for Electronic Structure and Laser (IESL) in Greece.

Reported in Physical Review B, Rapid Communications, the new Si2BN material was discovered in theory (not yet made in the lab). It uses lightweight, inexpensive, earth-abundant elements and is extremely stable, a property many other graphene alternatives lack, says University of Kentucky Center for Computational Sciences physicist Madhu Menon, PhD.

Limitations of other 2D semiconducting materials

A search for new 2D semiconducting materials has led researchers to a new class of three-layer materials called transition-metal dichalcogenides (TMDCs). TMDCs are mostly semiconductors and can be made into digital processors with greater efficiency than anything possible with silicon. However, these are much bulkier than graphene and made of materials that are not necessarily earth-abundant and inexpensive.

Other graphene-like materials have been proposed but lack the strengths of the new material. Silicene, for example, does not have a flat surface and eventually forms a 3D surface. Other candidates are highly unstable, some remaining stable for only a few hours at most.

The new Si2BN material is metallic, but by attaching other elements on top of the silicon atoms, its band gap can be changed (from conductor to semiconductor, for example) — a key advantage over graphene for electronics applications and solar-energy conversion.

The presence of silicon also suggests possible seamless integration with current silicon-based technology, allowing the industry to move away from silicon gradually rather than abruptly, notes Menon.

https://youtu.be/lKc_PbTD5go

Abstract of Prediction of a new graphenelike Si2BN solid

While the possibility to create a single-atom-thick two-dimensional layer from any material remains, only a few such structures have been obtained other than graphene and a monolayer of boron nitride. Here, based upon ab initio theoretical simulations, we propose a new stable graphenelike single-atomic-layer Si2BN structure that has all of its atoms with sp2 bonding with no out-of-plane buckling. The structure is found to be metallic with a finite density of states at the Fermi level. This structure can be rolled into nanotubes in a manner similar to graphene. Combining first- and second-row elements in the Periodic Table to form a one-atom-thick material that is also flat opens up the possibility for studying new physics beyond graphene. The presence of Si will make the surface more reactive and therefore a promising candidate for hydrogen storage.

 

Nano-enhanced textiles clean themselves with light

Catalytic uses for industrial-scale chemical processes in agrochemicals, pharmaceuticals, and natural products also seen
http://www.kurzweilai.net/nano-enhanced-textiles-clean-themselves-with-light
Close-up of nanostructures grown on cotton textiles. Image magnified 150,000 times. (credit: RMIT University)

Researchers at RMIT University in Australia have developed a cheap, efficient way to grow special copper- and silver-based nanostructures on textiles that can degrade organic matter when exposed to light.

Don’t throw out your washing machine yet, but the work paves the way toward nano-enhanced textiles that can spontaneously clean themselves of stains and grime simply by being put under a light or worn outside in the sun.

The nanostructures absorb visible light (via localized surface plasmon resonance — collective electron-charge oscillations in metallic nanoparticles that are excited by light), generating high-energy (“hot”) electrons that cause the nanostructures to act as catalysts for chemical reactions that degrade organic matter.

Steps involved in fabricating copper- and silver-based cotton fabrics: 1. Sensitize the fabric with tin. 2. Form palladium seeds that act as nucleation (clustering) sites. 3. Grow metallic copper and silver nanoparticles on the surface of the cotton fabric. (credit: Samuel R. Anderson et al./Advanced Materials Interfaces)

The challenge for researchers has been to bring the concept out of the lab by working out how to build these nanostructures on an industrial scale and permanently attach them to textiles. The RMIT team’s novel approach was to grow the nanostructures directly onto the textiles by dipping them into specific solutions, resulting in development of stable nanostructures within 30 minutes.

When exposed to light, it took less than six minutes for some of the nano-enhanced textiles to spontaneously clean themselves.

The research was described in the journal Advanced Materials Interfaces.

Scaling up to industrial levels

Rajesh Ramanathan, an RMIT postdoctoral fellow and co-senior author, said the process also had a variety of applications for catalysis-based industries such as agrochemicals, pharmaceuticals, and natural products, and could be easily scaled up to industrial levels. “The advantage of textiles is they already have a 3D structure, so they are great at absorbing light, which in turn speeds up the process of degrading organic matter,” he said.

Cotton textile fabric with copper-based nanostructures. The image is magnified 200 times. (credit: RMIT University)

“Our next step will be to test our nano-enhanced textiles with organic compounds that could be more relevant to consumers, to see how quickly they can handle common stains like tomato sauce or wine,” Ramanathan said.

“There’s more work to do before we can start throwing out our washing machines, but this advance lays a strong foundation for the future development of fully self-cleaning textiles.”


Abstract of Robust Nanostructured Silver and Copper Fabrics with Localized Surface Plasmon Resonance Property for Effective Visible Light Induced Reductive Catalysis

Inspired by high porosity, absorbency, wettability, and hierarchical ordering on the micrometer and nanometer scale of cotton fabrics, a facile strategy is developed to coat visible light active metal nanostructures of copper and silver on cotton fabric substrates. The fabrication of nanostructured Ag and Cu onto interwoven threads of a cotton fabric by electroless deposition creates metal nanostructures that show a localized surface plasmon resonance (LSPR) effect. The micro/nanoscale hierarchical ordering of the cotton fabrics allows access to catalytically active sites to participate in heterogeneous catalysis with high efficiency. The ability of metals to absorb visible light through LSPR further enhances the catalytic reaction rates under photoexcitation conditions. Understanding the modes of electron transfer during visible light illumination in Ag@Cotton and Cu@Cotton through electrochemical measurements provides mechanistic evidence on the influence of light in promoting electron transfer during heterogeneous catalysis for the first time. The outcomes presented in this work will be helpful in designing new multifunctional fabrics with the ability to absorb visible light and thereby enhance light-activated catalytic processes.

 

New type of molecular tag makes MRI 10,000 times more sensitive

Could detect biochemical processes in opaque tissue without requiring PET radiation or CT x-rays
http://www.kurzweilai.net/new-type-of-molecular-tag-makes-mri-10000-times-more-sensitive

Duke scientists have discovered a new class of inexpensive, long-lived molecular tags that enhance MRI signals by 10,000 times. To activate the tags, the researchers mix them with a newly developed catalyst (center) and a special form of hydrogen (gray), converting them into long-lived magnetic resonance “lightbulbs” that might be used to track disease metabolism in real time. (credit: Thomas Theis, Duke University)

Duke University researchers have discovered a new form of MRI that’s 10,000 times more sensitive and could record actual biochemical reactions, such as those involved in cancer and heart disease, in real time.

Let’s review how MRI (magnetic resonance imaging) works: MRI takes advantage of a property called spin, which makes the nuclei in hydrogen atoms act like tiny magnets. By generating a strong magnetic field (such as 3 Tesla) and a series of radio-frequency waves, MRI induces these hydrogen magnets in atoms to broadcast their locations. Since most of the hydrogen atoms in the body are bound up in water, the technique is used in clinical settings to create detailed images of soft tissues like organs (such as the brain), blood vessels, and tumors inside the body.
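
For a rough sense of the radio frequencies involved: the radio-frequency waves must match the protons’ Larmor precession frequency, which scales linearly with field strength (about 42.58 MHz per tesla for hydrogen nuclei), so a 3-tesla scanner like the one mentioned above operates near 128 MHz. A one-line check:

```python
GAMMA_H_MHZ_PER_T = 42.577  # proton gyromagnetic ratio / (2*pi), MHz per tesla
for b0 in (1.5, 3.0):       # common clinical field strengths, in tesla
    print(f"{b0} T -> {GAMMA_H_MHZ_PER_T * b0:.1f} MHz Larmor frequency")
```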


MRI’s ability to track chemical transformations in the body has been limited by the technique’s low sensitivity, which makes it impossible to detect small numbers of molecules without using unattainably strong magnetic fields.

So to take MRI a giant step further in sensitivity, the Duke researchers created a new class of molecular “tags” that can track disease metabolism in real time and last for more than an hour, using a technique called hyperpolarization.* These tags are biocompatible and inexpensive to produce, and they can be used with existing MRI machines.

“This represents a completely new class of molecules that doesn’t look anything at all like what people thought could be made into MRI tags,” said Warren S. Warren, James B. Duke Professor and Chair of Physics at Duke, and senior author on the study. “We envision it could provide a whole new way to use MRI to learn about the biochemistry of disease.”

Sensitive tissue detection without radiation

The new molecular tags open up a new world for medicine and research by making it possible to detect what’s happening in optically opaque tissue, according to the researchers, without requiring expensive positron emission tomography (PET), which uses a radioactive tracer chemical to look at organs in the body and typically works for only about 20 minutes, or CT X-rays.

This research was reported in the March 25 issue of Science Advances. It was supported by the National Science Foundation, the National Institutes of Health, the Department of Defense Congressionally Directed Medical Research Programs Breast Cancer grant, the Pratt School of Engineering Research Innovation Seed Fund, the Burroughs Wellcome Fellowship, and the Donors of the American Chemical Society Petroleum Research Fund.

* For the past decade, researchers have been developing methods to “hyperpolarize” biologically important molecules. “Hyperpolarization gives them 10,000 times more signal than they would normally have if they had just been magnetized in an ordinary magnetic field,” Warren said. But while promising, Warren says these hyperpolarization techniques face two fundamental problems: incredibly expensive equipment — around 3 million dollars for one machine — and most of these molecular “lightbulbs” burn out in a matter of seconds.

“It’s hard to take an image with an agent that is only visible for seconds, and there are a lot of biological processes you could never hope to see,” said Warren. “We wanted to try to figure out what molecules could give extremely long-lived signals so that you could look at slower processes.”

So the researchers synthesized a series of molecules containing diazirines — a chemical structure composed of two nitrogen atoms bound together in a ring. Diazirines were a promising target for screening because their geometry traps hyperpolarization in a “hidden state” where it cannot relax quickly. Using a simple and inexpensive approach to hyperpolarization called SABRE-SHEATH, in which the molecular tags are mixed with a spin-polarized form of hydrogen and a catalyst, the researchers were able to rapidly hyperpolarize one of the diazirine-containing molecules, greatly enhancing its magnetic resonance signals for over an hour.

The scientists believe their SABRE-SHEATH catalyst could be used to hyperpolarize a wide variety of chemical structures at a fraction of the cost of other methods.


Abstract of Direct and cost-efficient hyperpolarization of long-lived nuclear spin states on universal 15N2-diazirine molecular tags

Conventional magnetic resonance (MR) faces serious sensitivity limitations, which can be overcome by hyperpolarization methods, but the most common method (dynamic nuclear polarization) is complex and expensive, and applications are limited by short spin lifetimes (typically seconds) of biologically relevant molecules. We use a recently developed method, SABRE-SHEATH, to directly hyperpolarize 15N2 magnetization and long-lived 15N2 singlet spin order, with signal decay time constants of 5.8 and 23 min, respectively. We find >10,000-fold enhancements generating detectable nuclear MR signals that last for more than an hour. 15N2-diazirines represent a class of particularly promising and versatile molecular tags, and can be incorporated into a wide range of biomolecules without significantly altering molecular function.
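
Taking the abstract’s numbers at face value, a simple exponential-decay estimate shows why a >10,000-fold enhancement combined with a roughly 23-minute decay constant yields signals that remain well above baseline for over an hour. This is only back-of-the-envelope arithmetic on the quoted figures, not a reanalysis of the study’s data.

```python
import math

enhancement0 = 10_000   # initial signal enhancement (fold), from the abstract
tau_min = 23.0          # decay time constant of the 15N2 singlet order, minutes

for t in (10, 30, 60, 90):
    remaining = enhancement0 * math.exp(-t / tau_min)
    print(f"after {t:3d} min: ~{remaining:7.0f}-fold enhancement remains")

# Time for the enhancement to fall back to 1x (ordinary thermal signal):
t_baseline = tau_min * math.log(enhancement0)
print(f"enhancement reaches baseline after ~{t_baseline:.0f} min")
```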


[Seems like they have a great idea, now all they need to do is confirm very specific uses or types of cancers/diseases or other processes they can track or target. Will be interesting to see if they can do more than just see things, maybe they can use this to target and destroy bad things in the body also. Keep up the good work….. this sounds like a game changer.]

 

Scientists time-reverse developed stem cells to make them ‘embryonic’ again

May help avoid ethically controversial use of human embryos for research and support other research goals
http://www.kurzweilai.net/scientists-time-reverse-developed-stem-cells-to-make-them-embryonic-again
Researchers have reversed “primed” (developed) “epiblast” stem cells (top) from early mouse embryos using the drug MM-401, causing the treated cells (bottom) to revert to the original form of the stem cells. (credit: University of Michigan)

University of Michigan Medical School researchers have discovered a way to convert mouse stem cells (taken from an embryo) that have  become “primed” (reached the stage where they can  differentiate, or develop into every specialized cell in the body) to a “naïve” (unspecialized) state by simply adding a drug.

This breakthrough has the potential to one day allow researchers to avoid the ethically controversial use of human embryos left over from infertility treatments. To achieve it, the researchers treated the primed embryonic stem cells (“EpiSC”) with a drug called MM-401* (a leukemia drug) for a short period of time.

Embryonic stem cells are able to develop into any type of cell, except those of the placenta (credit: Mike Jones/CC)

…..

* The drug, MM-401, specifically targets epigenetic chemical markers on histones, the protein “spools” that DNA coils around to create structures called chromatin. These epigenetic changes signal the cell’s DNA-reading machinery and tell it where to start uncoiling the chromatin in order to read it.

A gene called Mll1 is responsible for the addition of these epigenetic changes, which are like small chemical tags called methyl groups. Mll1 plays a key role in the uncontrolled explosion of white blood cells in leukemia, which is why researchers developed the drug MM-401 to interfere with this process. But Mll1 also plays a role in cell development and the formation of blood cells and other cells in later-stage embryos.

Stem cells do not turn on the Mll1 gene until they are more developed. The MM-401 drug blocks Mll1’s normal activity in developing cells so the epigenetic chemical markers are missing. These cells are then unable to continue to develop into different types of specialized cells but are still able to revert to healthy naive pluripotent stem cells.


Abstract of MLL1 Inhibition Reprograms Epiblast Stem Cells to Naive Pluripotency

The interconversion between naive and primed pluripotent states is accompanied by drastic epigenetic rearrangements. However, it is unclear whether intrinsic epigenetic events can drive reprogramming to naive pluripotency or if distinct chromatin states are instead simply a reflection of discrete pluripotent states. Here, we show that blocking histone H3K4 methyltransferase MLL1 activity with the small-molecule inhibitor MM-401 reprograms mouse epiblast stem cells (EpiSCs) to naive pluripotency. This reversion is highly efficient and synchronized, with more than 50% of treated EpiSCs exhibiting features of naive embryonic stem cells (ESCs) within 3 days. Reverted ESCs reactivate the silenced X chromosome and contribute to embryos following blastocyst injection, generating germline-competent chimeras. Importantly, blocking MLL1 leads to global redistribution of H3K4me1 at enhancers and represses lineage determinant factors and EpiSC markers, which indirectly regulate ESC transcription circuitry. These findings show that discrete perturbation of H3K4 methylation is sufficient to drive reprogramming to naive pluripotency.


Abstract of Naive Pluripotent Stem Cells Derived Directly from Isolated Cells of the Human Inner Cell Mass

Conventional generation of stem cells from human blastocysts produces a developmentally advanced, or primed, stage of pluripotency. In vitro resetting to a more naive phenotype has been reported. However, whether the reset culture conditions of selective kinase inhibition can enable capture of naive epiblast cells directly from the embryo has not been determined. Here, we show that in these specific conditions individual inner cell mass cells grow into colonies that may then be expanded over multiple passages while retaining a diploid karyotype and naive properties. The cells express hallmark naive pluripotency factors and additionally display features of mitochondrial respiration, global gene expression, and genome-wide hypomethylation distinct from primed cells. They transition through primed pluripotency into somatic lineage differentiation. Collectively these attributes suggest classification as human naive embryonic stem cells. Human counterparts of canonical mouse embryonic stem cells would argue for conservation in the phased progression of pluripotency in mammals.

 

 

How to kill bacteria in seconds using gold nanoparticles and light

March 24, 2016

 

Could treat bacterial infections without using antibiotics, which could help reduce the risk of spreading antibiotic resistance

Researchers at the University of Houston have developed a new technique for killing bacteria in 5 to 25 seconds using highly porous gold nanodisks and light, according to a study published today in Optical Materials Express. The method could one day help hospitals treat some common infections without using antibiotics.


Selye’s Riddle solved

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Mathematicians Solve 78-year-old Mystery

Mathematicians developed a solution to Selye’s riddle, which has puzzled scientists for almost 80 years.

In previous research, it was suggested that adaptation of an animal to different factors looks like spending of one resource, and that the animal dies when this resource is exhausted. In 1938, Hans Selye introduced “adaptation energy” and found strong experimental arguments in favor of this hypothesis. However, this term has caused much debate because, as it cannot be measured as a physical quantity, adaptation energy is not strictly energy.

 

Evolution of adaptation mechanisms: Adaptation energy, stress, and oscillating death

Alexander N. Gorban, Tatiana A. Tyukina, Elena V. Smirnova, Lyudmila I. Pokidysheva

Highlights

•   We formalize Selye’s ideas about adaptation energy and dynamics of adaptation.
•   A hierarchy of dynamic models of adaptation is developed.
•   Adaptation energy is considered as an internal coordinate on the ‘dominant path’ in the model of adaptation.
•   The optimal distribution of resources for neutralization of harmful factors is studied.
•   The phenomena of ‘oscillating death’ and ‘oscillating remission’ are predicted.       

In 1938, Selye proposed the notion of adaptation energy and published ‘Experimental evidence supporting the conception of adaptation energy.’ Adaptation of an animal to different factors appears as the spending of one resource. Adaptation energy is a hypothetical extensive quantity spent for adaptation. This term causes much debate when one takes it literally, as a physical quantity, i.e. a sort of energy. The controversial points of view impede the systematic use of the notion of adaptation energy despite experimental evidence. Nevertheless, the response to many harmful factors often has general non-specific form and we suggest that the mechanisms of physiological adaptation admit a very general and nonspecific description.

We aim to demonstrate that Selye’s adaptation energy is the cornerstone of the top-down approach to modelling of non-specific adaptation processes. We analyze Selye’s axioms of adaptation energy together with Goldstone’s modifications and propose a series of models for interpretation of these axioms. Adaptation energy is considered as an internal coordinate on the ‘dominant path’ in the model of adaptation. The phenomena of ‘oscillating death’ and ‘oscillating remission’ are predicted on the basis of the dynamical models of adaptation. Natural selection plays a key role in the evolution of mechanisms of physiological adaptation. We use the fitness optimization approach to study the distribution of resources for neutralization of harmful factors during adaptation to a multifactor environment, and analyze the optimal strategies for different systems of factors.

In this work, an international team of researchers, led by Professor Alexander N. Gorban from the University of Leicester, has developed a solution to Selye’s riddle, which has puzzled scientists for almost 80 years.

Alexander N. Gorban, Professor of Applied Mathematics in the Department of Mathematics at the University of Leicester, said: “Nobody can measure adaptation energy directly, indeed, but it can be understood by its place already in simple models. In this work, we develop a hierarchy of top-down models following Selye’s findings and further developments. We trust Selye’s intuition and experiments and use the notion of adaptation energy as a cornerstone in a system of models. We provide a ‘thermodynamic-like’ theory of organism resilience that, just like classical thermodynamics, allows for economics metaphors, such as cost and bankruptcy and, more importantly, is largely independent of a detailed mechanistic explanation of what is ‘going on underneath’.”

Adaptation energy is considered as an internal coordinate on the “dominant path” in the model of adaptation. The phenomena of “oscillating death” and “oscillating remission,” which have been observed in the clinic for a long time, are predicted on the basis of the dynamical models of adaptation. The models, based on Selye’s idea of adaptation energy, demonstrate that oscillating remission and oscillating death do not need exogenous reasons. The developed theory of adaptation to various factors gives an instrument for the early anticipation of crises.
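
The “oscillating death” and “oscillating remission” results come from the authors’ own dynamical models, which are not reproduced here. Purely to illustrate the qualitative point that a two-variable system coupling a stress burden to an internal adaptation resource can cycle endogenously, with no external driver, here is a generic consumer-resource toy model (a Lotka-Volterra-style sketch, not the published model) integrated with a small-step Euler scheme.

```python
# Generic two-variable "stress vs. internal resource" toy model.
# NOT the model of Gorban et al.; it only illustrates that endogenous
# oscillations (relapse/remission-like cycles) can arise with no external forcing.
a, b, c, d = 1.0, 0.5, 0.5, 1.0   # illustrative rate constants
D, R = 1.0, 1.0                    # initial stress burden and adaptation resource
dt, t_end = 0.001, 30.0

t, next_report = 0.0, 0.0
while t <= t_end:
    if t >= next_report:
        print(f"t={t:5.1f}  stress={D:5.2f}  resource={R:5.2f}")
        next_report += 3.0
    dD = (a * D - b * D * R) * dt   # stress grows, is neutralized by the resource
    dR = (c * D * R - d * R) * dt   # resource is mobilized by stress, otherwise decays
    D, R = D + dD, R + dR
    t += dt
```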

Professor Alessandro Giuliani from Istituto Superiore di Sanità in Rome commented on the work, saying: “Gorban and his colleagues dare to make science adopting the thermodynamics style: they look for powerful principles endowed with predictive ability in the real world before knowing the microscopic details. This is, in my opinion, the only possible way out from the actual repeatability crisis of mainstream biology, where a fantastic knowledge of the details totally fails to predict anything outside the test tube.”

Citation: Alexander N. Gorban, Tatiana A. Tyukina, Elena V. Smirnova, Lyudmila I. Pokidysheva. Evolution of adaptation mechanisms: Adaptation energy, stress, and oscillating death. Journal of Theoretical Biology, 2016; doi:10.1016/j.jtbi.2015.12.017.

See also: Voosen P. (2015) Amid a Sea of False Findings, NIH Tries Reform. The Chronicle of Higher Education.

Nanophotonics Applications

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Copper Plasmonics Explored for Nanophotonics Applications

http://www.photonics.com/Article.aspx?AID=58484

MOSCOW, March 22, 2016 — Experimental demonstration of copper components has expanded the list of potential materials suited to nanophotonic devices beyond gold and silver.

According to researchers from the Moscow Institute of Physics and Technology (MIPT), copper components are not only just as good as components based on noble metals such as gold and silver, they can also be easily implemented in integrated circuits using industry-standard fabrication processes. Because gold and silver are noble metals, they do not readily undergo the chemical reactions needed to create nanostructures and therefore require expensive, difficult processing steps.

Nanoscale copper plasmonic waveguides on a silicon chip in a scanning near-field optical microscope (left) and their image obtained using electron microscopy (right). Courtesy of MIPT.

In nanophotonics, the diffraction limit of light is overcome by using metal-dielectric structures. Light may be converted into surface plasmon polaritons, surface waves propagating along the surface of a metal, which make it possible to switch from conventional 3D photonics to 2D surface plasmon photonics, also known as plasmonics. This allows control of light at the 100-nm scale, far beyond the diffraction limit.

Now researchers from MIPT’s Laboratory of Nanooptics and Plasmonics have found a solution to the problems posed by noble metals. Based on a generalization of the theory for so-called plasmonic metals, in 2012 they found that copper as an optical material is not only able to compete with gold, but can also be a better alternative. Unlike gold, copper can be easily structured using wet or dry etching. This makes it possible to fabricate nanoscale components that are easily integrated into silicon photonic or electronic integrated circuits.

Silicon chip with nanoscale copper plasmonic components. Courtesy of MIPT.

It took more than two years for the researchers to purchase the required equipment, develop the fabrication process, produce samples, conduct several independent measurements and confirm their hypothesis experimentally.

“As a result, we succeeded in fabricating copper chips with optical properties that are in no way inferior to gold-based chips,” says the research leader Dmitry Fedyanin. “Furthermore, we managed to do this in a fabrication process compatible with the CMOS technology, which is the basis for all modern integrated circuits, including microprocessors. It’s a kind of revolution in nanophotonics.”

The researchers said that the optical properties of thin polycrystalline copper films were determined by their internal structure, and that controlling this structure to achieve and consistently reproduce the required parameters in technological cycles was the most difficult task.

Having demonstrated copper’s suitable material characteristics, as well as nanoscale manufacturing capability, the researchers believe the devices could be integrated with both silicon nanoelectronics and silicon nanophotonics. Such technologies could enable LEDs, nanolasers, highly sensitive sensors and transducers for mobile devices, and high-performance optoelectronic processors with several tens of thousands of cores for graphics cards, personal computers and supercomputers.

“We conducted ellipsometry of the copper films and then confirmed these results using near-field scanning optical microscopy of the nanostructures. This proves that the properties of copper are not impaired during the whole process of manufacturing nanoscale plasmonic components,” says Dmitry Fedyanin.

The research was published in Nano Letters (doi: 10.1021/acs.nanolett.5b03942).

 

Ultralow-Loss CMOS Copper Plasmonic Waveguides

Surface plasmon polaritons can give a unique opportunity to manipulate light at a scale well below the diffraction limit, reducing the size of optical components down to that of nanoelectronic circuits. At the same time, plasmonics is mostly based on noble metals, which are not compatible with microelectronics manufacturing technologies. This prevents plasmonic components from integration with both silicon photonics and silicon microelectronics. Here, we demonstrate ultralow-loss copper plasmonic waveguides fabricated in a simple complementary metal-oxide semiconductor (CMOS) compatible process, which can outperform gold plasmonic waveguides simultaneously providing long (>40 μm) propagation length and deep subwavelength (∼λ²/50, where λ is the free-space wavelength) mode confinement in the telecommunication spectral range. These results create the backbone for the development of a CMOS plasmonic platform and its integration in future electronic chips.
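
To put the quoted mode confinement in perspective: at a telecom wavelength of 1550 nm (assumed here for illustration), λ²/50 corresponds to a mode area of roughly 0.05 μm², more than an order of magnitude below the (λ/2)² scale of a diffraction-limited spot in free space. The short calculation below simply rearranges the figures quoted in the abstract.

```python
wavelength_nm = 1550.0                        # telecom wavelength, assumed for illustration
mode_area = wavelength_nm ** 2 / 50.0         # deep-subwavelength confinement quoted above
diffraction_area = (wavelength_nm / 2) ** 2   # rough free-space diffraction-limited scale

print(f"plasmonic mode area      ~ {mode_area / 1e6:.3f} um^2")
print(f"diffraction-limited area ~ {diffraction_area / 1e6:.3f} um^2")
print(f"confinement is           ~ {diffraction_area / mode_area:.1f}x tighter")
```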


Diabetes Mellitus: new insight into genetic role

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

New Study May Lead to Improved Type 2 Diabetes Treatment

http://www.dddmag.com/news/2016/03/new-study-may-lead-improved-type-2-diabetes-treatment

 

Genetic cause found for loss of beta cells during diabetes development.

Worldwide, 400 million people live with diabetes, with rapid increases projected. Patients with diabetes mostly fall into one of two categories, type 1 diabetics, triggered by autoimmunity at a young age, and type 2 diabetics, caused by metabolic dysfunction of the liver. Despite being labeled a “lifestyle disease”, diabetes has a strong genetic basis. New research under the direction of Adrian Liston (VIB/KU Leuven) has discovered that a common genetic defect in beta cells may underlie both forms of diabetes. This research was published in the international scientific journal Nature Genetics.

Adrian Liston (VIB/University of Leuven): “Our research finds that genetics is critical for the survival of beta cells in the pancreas – the cells that make insulin. Thanks to our genetic make-up, some of us have beta cells that are tough and robust, while others have beta cells that are fragile and can’t handle stress. It is these people who develop diabetes, either type 1 or type 2, while others with tougher beta cells will remain healthy even if they suffer from autoimmunity or metabolic dysfunction of the liver.”

Different pathways to diabetes development

Diabetes is a hidden killer. One out of every 11 adults is suffering from the disease, yet half of them have not even been diagnosed. Diabetes is caused by the inability of the body to lower blood glucose, a process normally driven by insulin. In patients with type 1 diabetes (T1D), this is caused by the immune system killing off the beta cells that produce insulin. In patients with type 2 diabetes (T2D), a metabolic dysfunction prevents insulin from working on the liver. In both cases, left untreated, the extra glucose in the blood can cause blindness, cardiovascular disease, diabetic nephropathy, diabetic neuropathy and death.

In this study, an international team of researchers investigated how genetic variation controls the development of diabetes. While most previous work has focused on the effect of genetics in altering the immune system (in T1D) and metabolic dysfunction of the liver (in T2D), this research found that genetics also affected the beta cells that produce insulin. Mice with fragile beta cells that were poor at repairing DNA damage would rapidly develop diabetes when those beta cells were challenged by cellular stress. Other mice, with robust beta cells that were good at repairing DNA damage, were able to stay non-diabetic for life, even when those islets were placed under severe cellular stress. The same pathways for beta cell survival and DNA damage repair were also found to be altered in diabetic patient samples, indicating that a genetic predisposition for fragile beta cells may underlie who develops diabetes.

Adrian Liston (VIB/University of Leuven): “While genetics are really the most important factor for developing diabetes, our food environment can also play a deciding role. Even mice with genetically superior beta cells ended up as diabetic when we increased the fat in their diet.”

A new model for testing type 2 diabetes treatments

Current treatments for T2D rely on improving the metabolic response of the liver to insulin. These antidiabetic drugs, in conjunction with lifestyle interventions, can control the early stages of T2D by allowing insulin to function on the liver again. However during the late stages of T2D, the death of beta cells means that there is no longer any insulin being produced in the pancreas. At this stage, antidiabetic drugs and lifestyle interventions have poor efficacy, and medical complications arise.

Dr Lydia Makaroff (International Diabetes Federation, not an author of the current study): “The health cost for diabetes currently exceeds US$600 billion, 12 percent of the global health budget, and will only increase as diabetes becomes more common. Much of this health care burden is caused by late-stage type 2 diabetes, where we do not have effective treatments, so we desperately need new research into novel therapeutic approaches. This discovery dramatically improves our understanding of type 2 diabetes, which will enable the design of better strategies and medications for diabetes in the future”.

Adrian Liston (VIB/University of Leuven): “The big problem in developing drugs for late-stage T2D is that, until now, there has not been an animal model for the beta cell death stage. Previously, animal models were all based on the early stage of metabolic dysfunction in the liver, which has allowed the development of good drugs for treating early-stage T2D. This new mouse model will allow us, for the first time, to test new antidiabetic drugs that focus on preserving beta cells. There are many promising drugs under development at life sciences companies that have just been waiting for a usable animal model. Who knows, there may even be useful compounds hidden away in alternative or traditional medicines that could be found through a good testing program. If a drug is found that stops late-stage diabetes, it would really be a major medical breakthrough!”

New Method Measures Type 2 Diabetes Risk in Blood

http://www.dddmag.com/news/2016/04/new-method-measures-type-2-diabetes-risk-blood

Researchers at Lund University in Sweden have found a new type of biomarker that can predict the risk of type 2 diabetes, by detecting epigenetic changes in specific genes through a simple blood test. The results are published today in Nature Communications.

“This could motivate a person at risk to change their lifestyle”, said Karl Bacos, researcher in epigenetics at Lund University.

Predicting the onset of diabetes is already possible by measuring average blood glucose over time (HbA1c). However, the predictive power of this approach is modest, and new methods are needed.

The discoveries made by the research group at Lund University have now made it possible to measure the presence of so-called DNA methylations in four specific genes, and thereby predict who is at risk of developing type 2 diabetes long before the disease occurs. Methylations are chemical changes that control gene activity, that is, whether genes are switched on or off.
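The core idea behind such a biomarker, combining methylation levels at a handful of loci into a single risk estimate, can be sketched as a simple logistic model. The sketch below is purely illustrative: the locus names, weights, and intercept are hypothetical placeholders and are not the genes or coefficients from the Lund study.

# Illustrative sketch only: a logistic risk score built from DNA methylation
# levels (beta-values between 0 and 1) at four loci. Locus names, weights and
# intercept are hypothetical, not values reported by the Lund University group.
import math

def t2d_risk_score(methylation):
    weights = {"locus_1": 2.1, "locus_2": -1.4, "locus_3": 1.8, "locus_4": 0.9}  # hypothetical
    intercept = -1.2  # hypothetical
    z = intercept + sum(weights[k] * methylation[k] for k in weights)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link: probability-like score

sample = {"locus_1": 0.62, "locus_2": 0.35, "locus_3": 0.71, "locus_4": 0.50}
print(f"Estimated risk score: {t2d_risk_score(sample):.2f}")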

“The hope is that this will be developed into a better way to predict the disease”, said Karl Bacos, first author of the study.

The researchers started by studying insulin-producing beta cells from deceased persons. They found that the DNA methylations in the four genes in question increased, depending on the donor’s age. This in turn affected the activity of the genes.

When these changes were copied in cultured beta cells, they proved to have a positive effect on insulin secretion.

“We could then see the same DNA methylation changes in the blood which was really cool”, said Karl Bacos.

The blood samples from the participants of two separate research projects – one Danish and one Finnish – were then studied and compared with blood samples taken from the same participants ten years later. The Finnish participants, who had exhibited higher levels of DNA methylation in their first sample, had a lower risk of type 2 diabetes ten years later. In the Danish participants, higher DNA methylation in their first sample was associated with higher insulin secretion ten years later. All of the Danish participants were healthy on both occasions, whereas approximately one-third of the Finnish participants had developed type 2 diabetes.

“Increased insulin secretion actually protects against type 2 diabetes. It could be the body’s way of protecting itself when other tissue becomes resistant to insulin, which often happens as we get older”, said professor and research project manager Charlotte Ling.

The studies were based on a relatively small number of participants and a limited selection of genes. The researchers therefore now want to find markers with stronger predictive potential by applying so-called epigenetic whole-genome sequencing, analysing a person's entire genetic make-up and all of its DNA methylations, in a larger population.

The research group has previously shown that age, diet and exercise affect the so-called epigenetic risk of type 2 diabetes.

“You cannot change your genes and the risks that they entail, but epigenetics means that you can affect the DNA methylations, and thereby gene activity, through lifestyle choices”, said Charlotte Ling.

 

Read Full Post »

Laser Therapy Opens Blood-Brain Barrier

Curator: Larry H. Bernstein, MD, FCAP

 

Laser Surgery Opens Blood-Brain Barrier to Chemotherapy

http://www.photonics.com/Article.aspx?AID=58445

ST. LOUIS, March 11, 2016 — A laser probe has been used to open the brain’s protective cover, enabling delivery of chemotherapy drugs to patients with glioblastoma — the most common and aggressive form of brain cancer.

In a pilot study conducted by the Washington University School of Medicine in St. Louis, Mo., 14 patients with glioblastoma underwent minimally invasive laser surgery to treat a recurrence of their tumors. Heat from the laser was already known to kill brain tumor cells but, unexpectedly, the researchers found that the technology penetrated the blood-brain barrier.

“The laser treatment kept the blood-brain barrier open for four to six weeks, providing us with a therapeutic window of opportunity to deliver chemotherapy drugs to the patients,” said neurosurgery professor Eric Leuthardt, MD, who also treats patients at Barnes-Jewish Hospital. “This is crucial because most chemotherapy drugs can’t get past the protective barrier, greatly limiting treatment options for patients with brain tumors.”

The team is still closely following the patients, though early results indicate they are doing better on average, in terms of survival and clinical outcomes, than what the researchers would expect with other treatment methods.

Glioblastomas are one of the most difficult cancers to treat. Most patients diagnosed with this type of brain tumor survive just 15 months, according to the American Cancer Society.

The research is part of a larger phase II clinical trial that will involve 40 patients. Twenty patients were enrolled in the pilot study, 14 of whom were found to be suitable candidates for the minimally invasive laser surgery, a technology that Leuthardt helped pioneer.

The laser technology was approved by the FDA in 2009 as a surgical tool to treat brain tumors. The Washington team’s research marks the first time the laser has been shown to disrupt the blood-brain barrier, which shields the brain from harmful toxins but inadvertently blocks potentially helpful drugs, such as chemotherapy.

As part of the trial, doxorubicin, a widely used chemotherapy, was delivered intravenously to 13 patients in the weeks following the laser surgery. Preliminary data indicate that 12 patients showed no evidence of tumor progression during the short, 10-week time frame of the study. One patient experienced tumor growth before chemotherapy was delivered; the tumor in another patient progressed after chemotherapy was administered, the researchers reported.

The laser surgery was well tolerated by the patients in the trial; most went home one to two days afterward, and none experienced severe complications. The surgery was performed while the patient lay in an MRI scanner, providing the neurosurgical team with a real-time view of the tumor. Using an incision of only 3 mm, a neurosurgeon robotically inserted the laser to heat up and kill brain tumor cells at a temperature of about 150 °F.

“The laser kills tumor cells, which we anticipated,” said Leuthardt. “But, surprisingly, while reviewing MRI scans of our patients, we noticed changes near the former tumor site that looked consistent with the breakdown of the blood-brain barrier.”

Leuthardt confirmed and further studied these imaging findings with study co-author Dr. Joshua Shimony, a professor of radiology at Washington University.

The researchers, including co-corresponding author Dr. David Tran, a neuro-oncologist now at the University of Florida, performed follow-up testing, which showed that the degree of permeability through the blood-brain barrier peaked one to two weeks after surgery but that the barrier remained open for up to six weeks.

Other successful attempts to breach the barrier have left it open for only a short time — about 24 hours — not long enough for chemotherapy to be consistently delivered, or have resulted in only modest benefits, the researchers said. The laser technology leaves the barrier open for weeks — long enough for patients to receive multiple treatments with chemotherapy. Further, the laser only opens the barrier near the tumor, leaving the protective cover in place in other areas of the brain. This has the potential to limit the harmful effects of chemotherapy drugs in other areas of the brain, the researchers said.

The findings also suggest that other approaches, such as cancer immunotherapy — which harnesses cells of the immune system to seek out and destroy cancer — could also be useful for patients with glioblastomas.

The researchers are planning another clinical trial that combines the laser technology with chemotherapy and immunotherapy, as well as trials to test targeted cancer drugs that normally can’t breach the blood-brain barrier.

The research was published in PLoS ONE (doi: 10.1371/journal.pone.0148613).

 

Hyperthermic Laser Ablation of Recurrent Glioblastoma Leads to Temporary Disruption of the Peritumoral Blood Brain Barrier

Poor central nervous system penetration of cytotoxic drugs due to the blood brain barrier (BBB) is a major limiting factor in the treatment of brain tumors. Most recurrent glioblastomas (GBM) occur within the peritumoral region. In this study, we describe a hyperthermic method to induce temporary disruption of the peritumoral BBB that can potentially be used to enhance drug delivery.

 Methods

Twenty patients with probable recurrent GBM were enrolled in this study. Fourteen patients were evaluable. MRI-guided laser interstitial thermal therapy was applied to achieve both tumor cytoreduction and disruption of the peritumoral BBB. To determine the degree and timing of peritumoral BBB disruption, dynamic contrast-enhanced brain MRI was used to calculate the vascular transfer constant (Ktrans) in the peritumoral region as a direct measure of BBB permeability before and after laser ablation. Serum levels of brain-specific enolase, also known as neuron-specific enolase, were also measured and used as an independent quantification of BBB disruption.
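The abstract does not spell out the pharmacokinetic model used to derive Ktrans. For orientation, DCE-MRI permeability mapping is most commonly based on the standard Tofts model, in which the tissue contrast-agent concentration C_t(t) is related to the plasma (arterial input) concentration C_p(t) by

C_t(t) = K^{\mathrm{trans}} \int_0^{t} C_p(\tau)\, e^{-k_{ep}\,(t-\tau)}\, d\tau, \qquad k_{ep} = K^{\mathrm{trans}} / v_e

where v_e is the extravascular extracellular volume fraction. Whether the authors fitted exactly this formulation is an assumption made here for orientation, not something stated in the excerpt.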

Results

In all 14 evaluable patients, Ktrans levels peaked immediately post laser ablation, followed by a gradual decline over the following 4 weeks. Serum BSE concentrations increased shortly after laser ablation and peaked in 1–3 weeks before decreasing to baseline by 6 weeks.

Conclusions   

The data from our pilot research support the conclusion that disruption of the peritumoral BBB was induced by hyperthermia, with peak permeability occurring within 1–2 weeks after laser ablation and resolving by 4–6 weeks. This provides a therapeutic window of opportunity during which delivery of BBB-impermeant therapeutic agents may be enhanced.

Trial Registration  

ClinicalTrials.gov NCT01851733

Citation: Leuthardt EC, Duan C, Kim MJ, Campian JL, Kim AH, Miller-Thomas MM, et al. (2016) Hyperthermic Laser Ablation of Recurrent Glioblastoma Leads to Temporary Disruption of the Peritumoral Blood Brain Barrier. PLoS ONE 11(2): e0148613.  http://dx.doi.org/10.1371/journal.pone.0148613

Glioblastoma (GBM) is the most common and lethal malignant brain tumor in adults [1]. Despite advanced treatment, median survival is less than 15 months, and fewer than 5% of patients survive past 5 years [2, 3]. Effective treatment options for recurrent GBM remain very limited, and much of the research and development effort in recent years has focused on this area of great unmet need. Up to 90% of recurrent tumors develop within the 2–3 cm margin of the primary site and are thought to arise from microscopic glioma cells that infiltrate the peritumoral brain region prior to resection of the primary tumor [4, 5]. Therefore, elimination of infiltrative GBM cells in this region is likely to improve long-term disease control.

Inadequate CNS delivery of therapeutic drugs due to the blood brain barrier (BBB) has been a major limiting factor in the treatment of brain tumors. The presence of contrast enhancement on standard brain MRI qualitatively reflects a disrupted state of the BBB. For this reason, drug access to the viable contrast-enhanced tumor rim is likely significantly higher than to the peritumoral region, which usually does not show contrast enhancement [6, 7]. Evidence supporting this hypothesis came from studies in which drug levels of cytotoxic agents were sampled in tumors and the surrounding brain tissue at the time of surgery or autopsy. Drug concentrations were highest in the enhancing portion of tumors and then decreased rapidly, falling up to 40-fold by 2–3 cm from the viable tumor edge [8–10]. Overall, these observations suggest that the BBB and its integrity negatively correlate with delivery, and potentially therapeutic effects, of BBB-impermeant drugs.

To circumvent the BBB problem in local drug delivery, recent approaches have focused on bypassing it. A previously described method is the use of Gliadel, a polymer wafer impregnated with the chemotherapeutic agent carmustine (BCNU) and placed intra-operatively in the resection cavity to bypass the BBB. This approach resulted in a statistically significant but modest survival advantage in both newly diagnosed and recurrent GBM [11–13]. The modest benefit of Gliadel could be due to the short duration of drug delivery, as nearly 80% of BCNU is released from the wafer over a period of only 5 days [14]. This observation further supports the notion that the BBB is critical to chemotherapy effect. However, Gliadel is not widely utilized, as it requires an open craniotomy and can impair wound healing. Another approach to bypassing the BBB is convection-enhanced delivery, in which a catheter is surgically inserted into the tumor to deliver chemotherapy [15]. This procedure requires prolonged hospitalization to maintain the external catheter and prevent serious complications, and as a result has not been used extensively.

The role of hyperthermia in inducing BBB disruption has been previously described in animal models of CNS hyperthermia. In a rodent model of glioma, the global heating of the mouse’s head to 42°C for 30 minutes in a warm water bath significantly increased the brain concentration of a thermosensitive liposome encapsulated with adriamycin chemotherapy [16]. To effect more locoregional hyperthermia, retrograde infusion of a saline solution at 43°C into the left external carotid artery in the Wistar rat reversibly increased BBB permeability to Evans-blue albumin in the left cerebral hemisphere [17]. In another approach, neodymium-doped yttrium aluminum garnet (Nd:YAG) laser-induced thermotherapy to the left forebrain of Fischer rats resulted in loco-regional BBB disruption as evidenced by passage of Evans blue dye, serum proteins (e.g. fibrinogen & IgM), and the chemotherapeutic drug paclitaxel for up to several days after thermotherapy [18]. The effect of hyperthermia on the BBB of human brain has not been examined.

Here we describe an approach to induce sustained, local disruption of the peritumoral BBB using MRI-guided laser interstitial thermal therapy, or LITT. The biologic effects of LITT and their correlation with MRI findings have been studied in both animal and human models since the development of LITT over twenty years ago. A well-described zonal distribution of histopathological changes, with corresponding characteristic MR imaging findings centered on the light-guide track, replaces the lesion targeted for thermal therapy. The central treatment zone shows development of coagulative necrosis with complete loss of normal neurons or supporting structures immediately following therapy, corresponding to hyperintense T1-weighted signal intensity relative to normal brain [19–22]. The peripheral zone of the post-treatment lesion is characterized by avid enhancement with intravenous gadolinium contrast agents, which peaks several days following thermal therapy and persists for many weeks after the procedure. Gadolinium contrast enhancement in the brain following LITT is due to leakage of gadolinium contrast into the extravascular space across a disrupted BBB [20–23]. The perilesional zone of hyperintense signal intensity on FLAIR-weighted images develops within 1–3 days of thermal treatment and persists for 15–45 days [22].

We demonstrate that in addition to cytoreductive ablation of the main recurrent tumor, hyperthermic exposure of the peritumoral region resulted in localized, lasting disruption of the BBB as quantified by dynamic contrast-enhanced MRI (DCE-MRI) and serum levels of brain-specific enolase (BSE), thus providing a therapeutic window of opportunity for enhanced delivery of therapeutic agents.

Table 1. Patient Baseline Demographics and Characteristics.
TMZ/RT: Stupp protocol of 60 Gy radiotherapy plus concurrent 75 mg/m2 daily temozolomide. Doxorubicin treatment: Timing of 20 mg/m2 IV weekly doxorubicin treatment after LITT. Early = Starting within 1 week after LITT; Late = Starting at 6 weeks after LITT.  http://dx.doi.org/10.1371/journal.pone.0148613.t001
……
Quantitative measurement of LITT-induced peritumoral BBB disruption by DCE-MRI

Brain MRI obtained within 48 hours following LITT showed the targeted tumor replaced by a post-treatment lesion corresponding to the volume of treated tissue on intraoperative thermometry maps. The post-treatment lesion lost the original rim of tumor-associated contrast enhancement and instead demonstrated central hyperintense T1-weighted signal compared to the pre-treated tumor and normal brain and a faint, newly developed discontinuous rim of peripheral contrast enhancement extending beyond the original tumor-associated enhancing rim (Fig 2A). These findings are consistent with a loss of viable tumor tissue caused by LITT, thus achieving an effective cytoreduction similar to open surgical resection. Of note, the rim of new peripheral contrast enhancement persisted for at least the next 28 days (Fig 2B–2E). Perilesional edema qualitatively evaluated on FLAIR-weighted images increased from pretreatment imaging at week 2 and persisted at week 4 following LITT (Fig 2F–2I). Perilesional edema decreased on subsequent MRI examinations. These findings qualitatively indicate that peritumoral BBB is disrupted by LITT and that the disruption peaks within approximately 2 weeks after the procedure.

……

Fig 3 demonstrates the Ktrans time curves for our cohort of patients. In all subjects, the Ktrans in the ROIs within the enhancing ring around the ablated tumor is highly elevated in the first few days after the procedure and then progressively decreases toward baseline by approximately the 4-week time point. The bottom right subplot in Fig 3 shows the average of the Ktrans time courses from all subjects, with adjacent curves indicating plus and minus one standard error of the mean. This figure demonstrates the peak Ktrans value immediately after the LITT procedure with persistent elevation out to about 4 weeks. Radiographically, persistent contrast enhancement and FLAIR hyperintensity were observed well past 6 weeks and in many cases more than 10 weeks later. Several patients had recurrent tumor by radiographic criteria (increasing size of the edema and enhancing area around the tumor site), and these patients also demonstrated a corresponding increase in the Ktrans value. These recurrences occurred after the 10-week mark and thus were not included in Fig 3. Importantly, no difference in the pattern of Ktrans tracing was consistently observed between the 10 patients receiving late doxorubicin treatment and the 4 patients receiving early doxorubicin treatment. In summary, these results indicate that the peritumoral BBB disruption as measured by Ktrans peaked immediately after LITT and persisted above baseline for an additional 4 weeks.
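The group summary shown in the bottom right subplot of Fig 3, a mean Ktrans time course bracketed by plus and minus one standard error of the mean, reduces to a short calculation once each subject's curve has been sampled at common time points. The sketch below is an illustrative reconstruction under that assumption, not the authors' actual analysis code, and the values are invented.

# Minimal sketch: average Ktrans time courses across subjects and compute +/- 1 SEM.
# Assumes ktrans_by_subject holds one row per subject, sampled at shared time points;
# the values below are invented for illustration.
import numpy as np

ktrans_by_subject = np.array([
    [0.30, 0.22, 0.15, 0.09, 0.06],   # hypothetical subject 1 (weeks 0-4)
    [0.25, 0.20, 0.12, 0.08, 0.05],   # hypothetical subject 2
    [0.35, 0.27, 0.18, 0.11, 0.07],   # hypothetical subject 3
])

mean_curve = ktrans_by_subject.mean(axis=0)
sem_curve = ktrans_by_subject.std(axis=0, ddof=1) / np.sqrt(ktrans_by_subject.shape[0])
print("mean:         ", mean_curve)
print("mean + 1 SEM: ", mean_curve + sem_curve)
print("mean - 1 SEM: ", mean_curve - sem_curve)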

……

To optimize the ELISA assay for BSE, we collected sera from 3 patients with a newly diagnosed low-grade (WHO grade 2) glioma before and after their planned craniotomy and surgical resection, and determined serum concentrations of BSE. WHO grade 2 gliomas were chosen for the optimization because they are generally non-contrast-enhancing tumors on brain MRI, so the tumor-associated BBB is relatively intact and, consequently, serum concentrations of brain-specific factors are predicted to be low pre-operatively and to rise post-operatively due to the BBB compromise from surgery. Serum BSE concentrations were low prior to surgery and then, as predicted, consistently increased after open craniotomy and tumor resection, indicating that this method had adequate sensitivity for detecting changes in serum levels of BSE due to disruption of the BBB (Fig 4).

Fig 4. Optimization of the BSE ELISA assay for measuring BBB disruption.

Serum concentrations of BSE before and after open craniotomy for surgical debulking in 3 subjects (A, B, and C) with a low-grade glioma, WHO grade II. *p<0.05.  http://dx.doi.org/10.1371/journal.pone.0148613.g004

……

Fig 5. BBB disruption induced by LITT as measured by serum biomarkers
Serum concentrations of BSE for each of the 14 evaluable subjects in the study (A-N) and as the mean + SEM (O) as a function of time in days from the LITT procedure. In 7/14 subjects, serum BSE levels slightly decreased immediately after LITT, then in 13/14 subjects, serum BSE levels rose shortly after LITT, peaked between 1–3 weeks after LITT, and then decreased by the 6-week time point. In Patient #12, serum BSE concentration increased at week 10 coincident with an increased Ktrans at the same time point, consistent with a recurrent tumor as demonstrated on diagnostic MR imaging. Patient #15’s serum BSE concentration began to rise by week 4, consistent with early multifocal recurrent disease as demonstrated on diagnostic MR imaging.  http://dx.doi.org/10.1371/journal.pone.0148613.g005
…….

LITT is a minimally invasive neurosurgical technique that achieves effective tumor cytoreduction of brain tumors using a laser to deliver hyperthermic ablation. Here we have demonstrated that an unexpected, potentially useful effect of LITT is its ability to also disrupt the BBB in the peritumoral region that extends outwards 1–2 cm from the viable tumor rim. Importantly, the disruption persists in all 14 evaluable, treated patients for up to 4 weeks after LITT as measured quantitatively by DCE-MRI and up to 6 weeks as measured by serum levels of the brain-specific factor BSE. These observations indicate that after LITT there is a window during which enhanced local delivery of therapeutic agents into the desired location (i.e. peritumoral region) can potentially be achieved.

In all of the patients in this series, the peaks of serum concentrations of BSE showed wider variations and were delayed from several days to 1–2 weeks following the peak of BBB disruption as measured by Ktrans. The wider variations and delay of BSE concentrations led to relatively low correlation coefficients between the two parameters and could be explained by: 1) the higher data-point resolution for the serum values versus the DCE-MRI values (weekly versus biweekly, respectively); 2) interval physiologic breakdown of thermally ablated tissue coupled with subsequent diffusion and equilibration between the intracranial and peripheral compartments; and 3) high inter-tumor heterogeneity among patients, resulting in a wide variation in the rates at which ablated tissues of different compositions are broken down and released into the circulation. Whether these differences may be in part due to tumor-related factors such as IDH1/2 mutations and MGMT promoter methylation is unclear due to the small number of subjects. More importantly, both methods showed that the peritumoral BBB disruption induced by LITT was temporary, decreasing soon after peaking and resolving by 4–6 weeks in most patients. In addition, although no significant difference in any of the BBB measurement parameters was observed between the early and late doxorubicin treatment arms, the number of evaluable subjects was too small to allow generalization at this time.

Read Full Post »

DNA and Origami, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

DNA and Origami

Curator: Larry H. Bernstein, MD, FCAP

 

 

Practical DNA

The promise of DNA origami shows signs of coming to fruition a decade after its debut.

15 March 2016

Science seeks to understand the mechanisms of nature, to develop tools of investigation and to make useful and sometimes revolutionary things with which to build our future. And every now and again, a piece of science comes along that seems like a work of art.

All of this was exemplified by a research paper published in Nature ten years ago that, literally, produced smiles (see Nature 440, 297–302; 2006). Using an astoundingly simple and general method to assemble strands of DNA into arbitrary shapes, the research generated ‘smileys’ that graced the cover of Nature and announced the arrival of DNA origami to the world.

The robustness of this method changed the game for DNA nanotechnology, which has since developed at an astonishing pace. It is a beautiful demonstration of how science can progress.

The concept behind DNA origami was laid down in the early 1980s by crystallographer Nadrian Seeman, who realized that the ability of DNA molecules to carry and transfer information according to strict base-pairing rules could be used to rationally assemble structures with precisely controlled nanoscale features.

This unprecedented level of programmability makes DNA a unique building material. Nanodesigners have embraced the biomolecule to fabricate intricate tiled patterns, boxes with lids that can be opened and arrays of precisely located binding elements that can incorporate proteins, dyes and other functional materials into regular lattices.

Pivotal to the success of DNA as a nanoscale building material have been automated methods to synthesize short DNA molecules of any sequence. A detailed understanding of how base-pairing translates into the formation of DNA double helices has also been crucial. Such helices control the shapes into which DNA molecules with given sequences will fold.

DNA origami provides the missing ingredient: a versatile yet straightforward assembly method. Computer-aided design programs determine how DNA scaffolds can be folded to realize desired structures, as well as which short DNA strands, or staples, are needed to hold the structures in shape.

Individual structures can also be assembled into more complex patterns, and sites that bind to functional materials can be introduced at any position.

The many eye-catching structures that have been built have pleased those of us with an appreciation of beauty. But even the most creative science will ultimately face the question: what is the point?

DNA nanotechnology has long searched for relevance. It is unrivalled in its ability to build complex structures with near-atomic precision, but the results tend to be labile, soft and so small that it is a challenge to put them to practical use.

Yet applications that address basic problems in science have emerged. DNA structures can serve as tools for determining the structures of proteins or as templates for assembling electronic components and basic devices. Responsive DNA structures can target diseased cells, and artificial membrane channels formed from DNA can act as single-molecule sensors.

Real-world applications might become feasible through recent developments — for example, improvements to the folding process that reduce assembly time and boost yield. Initial steps have also been taken to efficiently pair DNA nanostructures with technologically relevant substrates.

Many challenges remain, and DNA nanotechnology is far from maturity. But a growing number of scientists are entering the field to make more than just art. Watch this space.
Nature 531, 276 (17 March 2016)   http://dx.doi.org/10.1038/531276a

 

Editor’s Summary   16 March 2006

DNA origami

DNA is a popular building block for nanostructures as it combines self-assembly with programmability and a plethora of chemical techniques for its manipulation. There is an extensive literature on DNA nanomaterials, but a procedure described this week breaks many of the fabrication rules established in the field. Paradoxically, although it ignores sequence design, strand purity and strand concentration ratios, the new method yields DNA nanostructures that are larger and more complex than previously possible. The one-pot method uses a few hundred short DNA strands to ‘staple’ a very long strand into two-dimensional structures that adopt any desired shape, like the ‘nanoface’ on the cover. Individual staples can be made into nanometre-scale pixels that create surface patterns on a given 100-nm shape (like the Americas map and snowflakes), or to combine shapes into larger structures (the hexagon of triangles).

NEWS AND VIEWS | Nanostructures: The manifold faces of DNA

When it comes to making shapes out of DNA, the material is there, and its properties are understood. What was missing was a convincing, universal design scheme to allow our capabilities to unfold to the full.

As civilization has developed over the past 10,000 years, humankind has learned how to build larger and larger structures; over the past two decades, we have begun to learn how to build smaller and smaller structures. On page 297 of this issue, Paul Rothemund presents a material step forward in this second arena: he describes a stunningly simple and versatile approach to the fabrication, by self-assembly, of two-dimensional DNA nanostructures of arbitrary shape.

Lloyd M. Smith   http://dx.doi.org/10.1038/440283a

‘Bottom-up fabrication’, which exploits the intrinsic properties of atoms and molecules to direct their self-organization, is widely used to make relatively simple nanostructures. A key goal for this approach is to create nanostructures of high complexity, matching that routinely achieved by ‘top-down’ methods. The self-assembly of DNA molecules provides an attractive route towards this goal. Here I describe a simple method for folding long, single-stranded DNA molecules into arbitrary two-dimensional shapes. The design for a desired shape is made by raster-filling the shape with a 7-kilobase single-stranded scaffold and by choosing over 200 short oligonucleotide ‘staple strands’ to hold the scaffold in place. Once synthesized and mixed, the staple and scaffold strands self-assemble in a single step. The resulting DNA structures are roughly 100 nm in diameter and approximate desired shapes such as squares, disks and five-pointed stars with a spatial resolution of 6 nm. Because each oligonucleotide can serve as a 6-nm pixel, the structures can be programmed to bear complex patterns such as words and images on their surfaces. Finally, individual DNA structures can be programmed to form larger assemblies, including extended periodic lattices and a hexamer of triangles (which constitutes a 30-megadalton molecular complex).

Figure 1   Design of DNA origami.

Figure 2   DNA origami shapes.

Figure 3     Patterning and combining DNA origami.

Supplementary Notes 1–11
Notes on the design process; helix bending and the inter-helix gap; models and sequences; experimental methods; control experiments; patterning with dumbbell hairpins; the combination of shapes into larger structures; secondary structure of the scaffold and staples; the robustness of the scaffolded approach; the cost of the scaffold versus staples; and additional references.

Full designs for all structures. Staple sequences are drawn out explicitly where they occur in the design.

 

Designed DNA molecules: principles and applications of molecular nanotechnology

Anne Condon     Nature Reviews Genetics 7, 565-575 (July 2006) | http://dx.doi.org/10.1038/nrg1892

Long admired for its informational role in the cell, DNA is now emerging as an ideal molecule for molecular nanotechnology. Biologists and biochemists have discovered DNA sequences and structures with new functional properties, which are able to prevent the expression of harmful genes or detect macromolecules at low concentrations. Physical and computational scientists can design rigid DNA structures that serve as scaffolds for the organization of matter at the molecular scale, and can build simple DNA-computing devices, diagnostic machines and DNA motors. The integration of biological and engineering advances offers great potential for therapeutic and diagnostic applications, and for nanoscale electronic engineering.

Single-molecule chemical reactions on DNA origami

Niels V. Voigt, Thomas Tørring, Alexandru Rotaru, Mikkel F. Jacobsen, Jens B. Ravnsbæk, Ramesh Subramani, Wael Mamdouh, Jørgen Kjems, Andriy Mokhir, Flemming Besenbacher & Kurt Vesterager Gothelf
Nature Nanotechnology 5, 200 – 203 (2010)
     Published online: 28 February 2010 | doi:10.1038/nnano.2010.5

DNA nanotechnology [1, 2] and particularly DNA origami [3], in which long, single-stranded DNA molecules are folded into predetermined shapes, can be used to form complex self-assembled nanostructures [4–10]. Although DNA itself has limited chemical, optical or electronic functionality, DNA nanostructures can serve as templates for building materials with new functional properties. Relatively large nanocomponents such as nanoparticles and biomolecules can also be integrated into DNA nanostructures and imaged [11–13]. Here, we show that chemical reactions with single molecules can be performed and imaged at a local position on a DNA origami scaffold by atomic force microscopy. The high yields and chemoselectivities of successive cleavage and bond-forming reactions observed in these experiments demonstrate the feasibility of post-assembly chemical modification of DNA nanostructures and their potential use as locally addressable solid supports.

Large-area spatially ordered arrays of gold nanoparticles directed by lithographically confined DNA origami

Albert M. Hung, Christine M. Micheel, Luisa D. Bozano, Lucas W. Osterbur, Greg M. Wallraff & Jennifer N. Cha
Nature Nanotechnology 5, 121 – 126 (2010) Published online: 20 December 2009 | http://dx.doi.org/10.1038/nnano.2009.450

The development of nanoscale electronic and photonic devices will require a combination of the high throughput of lithographic patterning and the high resolution and chemical precision afforded by self-assembly [1–4]. However, the incorporation of nanomaterials with dimensions of less than 10 nm into functional devices has been hindered by the disparity between their size and the 100 nm feature sizes that can be routinely generated by lithography. Biomolecules offer a bridge between the two size regimes, with sub-10 nm dimensions, synthetic flexibility and a capability for self-recognition. Here, we report the directed assembly of 5-nm gold particles into large-area, spatially ordered, two-dimensional arrays through the site-selective deposition of mesoscopic DNA origami [5] onto lithographically patterned substrates [6] and the precise binding of gold nanocrystals to each DNA structure. We show organization with registry both within an individual DNA template and between components on neighbouring DNA origami, expanding the generality of this method towards many types of patterns and sizes.

DNA Origami Could Help Build Faster, Cheaper Computer Chips

American Chemical Society  http://www.scientificcomputing.com/news/2016/03/dna-origami-could-help-build-faster-cheaper-computer-chips

SAN DIEGO — Electronics manufacturers constantly hunt for ways to make faster, cheaper computer chips, often by cutting production costs or by shrinking component sizes. Now, researchers report that DNA, the genetic material of life, might help accomplish this goal when it is formed into specific shapes through a process reminiscent of the ancient art of paper folding.

The researchers presented their work at the 251st National Meeting & Exposition of the American Chemical Society (ACS).

Prototypes for cheaper computer chips are being built with metal-containing DNA origami structures. Courtesy of Zoie Young, Kenny Lee and Adam Woolley

http://www.scientificcomputing.com/sites/scientificcomputing.com/files/DNA_Origami_Could_Help_Build_Faster_Cheaper_Computer_Chips_440.jpg


“We would like to use DNA’s very small size, base-pairing capabilities and ability to self-assemble, and direct it to make nanoscale structures that could be used for electronics,” Adam T. Woolley, Ph.D., says. He explains that the smallest features on chips currently produced by electronics manufacturers are 14 nanometers wide. That’s more than 10 times larger than the diameter of single-stranded DNA, meaning that this genetic material could form the basis for smaller-scale chips.

“The problem, however, is that DNA does not conduct electricity very well,” he says. “So, we use the DNA as a scaffold and then assemble other materials on the DNA to form electronics.”

To design computer chips similar in function to those that Silicon Valley churns out, Woolley, in collaboration with Robert C. Davis, Ph.D., and John N. Harb, Ph.D., at Brigham Young University, is building on other groups’ prior work on DNA origami and DNA nanofabrication.

The most familiar form of DNA is a double helix, which consists of two single strands of DNA. Complementary bases on each strand pair up to connect the two strands, much like rungs on a twisted ladder. But to create a DNA origami structure, researchers begin with a long single strand of DNA. The strand is flexible and floppy, somewhat like a shoelace. Scientists then mix it with many other short strands of DNA — known as “staples” — that use base pairing to pull together and crosslink multiple, specific segments of the long strand to form a desired shape.
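The staple principle lends itself to a toy calculation: a staple is simply the reverse complement of the scaffold segments it is meant to bind, so base pairing pulls those segments together. The sketch below is a deliberately simplified illustration; the sequences, segment choices and single-crossover staple layout are made up and ignore the crossover geometry that real origami design software handles.

# Toy illustration of the DNA-origami "staple" idea: a staple strand is the
# reverse complement of two (possibly distant) scaffold segments, so base pairing
# pulls those two segments next to each other. All sequences here are arbitrary.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def design_staple(scaffold, seg1, seg2):
    """Return a staple binding scaffold[seg1] and scaffold[seg2] (toy simplification)."""
    a = scaffold[seg1[0]:seg1[1]]
    b = scaffold[seg2[0]:seg2[1]]
    # Pair with segment 2 first, then segment 1; real designs fix this order
    # from the scaffold routing, which this toy example ignores.
    return reverse_complement(b) + reverse_complement(a)

scaffold = "ATGCGTACCGGATTACGCTAGGCTTAACCGTA"      # made-up 32-nt "scaffold"
staple = design_staple(scaffold, (0, 8), (24, 32))  # fold the two ends together
print(staple)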

However, Woolley’s team isn’t content with merely replicating the flat shapes typically used in traditional two-dimensional circuits. “With two dimensions, you are limited in the density of components you can place on a chip,” Woolley explains. “If you can access the third dimension, you can pack in a lot more components.”

Kenneth Lee, an undergraduate who works with Woolley, has built a 3-D, tube-shaped DNA origami structure that sticks up like a smokestack from substrates, such as silicon, that will form the bottom layer of their chip. Lee has been experimenting with attaching additional short strands of DNA to fasten other components such as nano-sized gold particles at specific sites on the inside of the tube. The researchers’ ultimate goal is to place such tubes, and other DNA origami structures, at particular sites on the substrate. The team would also link the structures’ gold nanoparticles with semiconductor nanowires to form a circuit. In essence, the DNA structures serve as girders on which to build an integrated circuit.

Lee is currently testing the characteristics of the tubular DNA. He plans to attach additional components inside the tube, with the eventual aim of forming a semiconductor.

Woolley notes that a conventional chip fabrication facility costs more than $1 billion, in part because the equipment necessary to achieve the minuscule dimensions of chip components is expensive and because the multi-step manufacturing process requires hundreds of instruments. In contrast, a facility that harnesses DNA’s knack for self-assembly would likely entail much lower start-up funding, he states. “Nature works on a large scale, and it is really good at assembling things reliably and efficiently,” he says. “If that could be applied in making circuits for computers, there’s potential for huge cost savings.”

 

Read Full Post »

Sleep science

Larry H. Bernstein,MD, FCAP, Curator

LPBI

 

Perchance to Dream

Mapping the dreaming brain through neuroimaging and studies of brain damage

By Karen Zusi | March 1, 2016

Prefrontal leucotomies—surgeries to cut a section of white matter in the front of the brain, thus severing the frontal lobe’s connections to other brain regions—were all the rage through the 1950s as treatments for psychoses. The operations drastically altered the mental state of most patients. But along with personality changes, dulled initiative, and reduced imagination came a seemingly innocuous effect of many of these procedures: the patients stopped dreaming.

Mark Solms, a neuropsychologist at the University of Cape Town in South Africa, uncovered the correlation in historical data from around the globe as part of a long-term study to assess the impact, on dreams and dreaming, of damage to different parts of the brain. Between 1985 and 1995, Solms interviewed 332 of his own patients at hospitals in Johannesburg and London who had various types of brain trauma, asking them about their nightly experiences.

Solms identified two brain regions that appeared critical for the experience of dreaming. The first was at the junction of the parietal, temporal, and occipital lobes—a cortical area that supports spatial cognition and mental imagery. The second was the ventromesial quadrant of the frontal lobes, a lump of white matter commonly associated with goal-seeking behavior that links the limbic structures to the frontal cortex. “This lesion site rang a historical bell in my mind—that’s where the prefrontal leucotomy used to be done,” says Solms, adding that the operation controlled the hallucinations and delusions that came with psychosis. “That sort of struck me as, ‘Gosh, that’s what dreaming is.’” Lesions in other areas could intensify or reduce certain aspects of dreams, but damage to either of the regions Solms pinpointed reportedly caused dreaming to cease completely (Psychoanal Q, 64:43-67, 1995).

Advances in neuroimaging have lent more support to Solms’s brain map, and pinned down other areas that researchers now understand play a part in dream development. In 2013, Bill Domhoff, a psychologist from the University of California, Santa Cruz, and colleagues from the University of British Columbia published results that combined neuroimaging scans from separate studies of REM sleep and daydreaming. They discovered that brain regions that light up when there’s a high chance that one is dreaming overlapped with parts of the brain’s default mode network—regions active when the brain is awake but not focused on a specific external task (Front Hum Neurosci, 7:412, 2013). “It very much lines up,” says Domhoff. “It’s just stunning.”

The default mode network allows us to turn our attention inward, and dreaming is the extreme example, explains Jessica Andrews-Hanna, a cognitive scientist at the University of Colorado Boulder. The network takes up a large amount of cortical real estate. Key players are regions on the midline of the brain that support memories and future planning; these brain sections connect to other areas affecting how we process social encounters and imagine other individuals’ thoughts. “When people are sleeping—in particular, when they’re dreaming—the default mode network actually stays very active,” says Andrews-Hanna. With external stimuli largely cut off, the brain operates in a closed loop, and flights of fancy often ensue.

We usually take the bizarre nature of these experiences at face value.  “Even in a completely crazy dream, we all think that it’s normal,” says Martin Dresler, a cognitive neuroscientist at Radboud University in the Netherlands. Dresler and many other researchers attribute this blasé acceptance to the deactivation of a brain region called the dorsolateral prefrontal cortex. When we sleep, the dorsolateral prefrontal cortex powers down, and higher executive control—which would normally flag a nonsensical concern, such as running late for a class when you haven’t been in school for a decade, as unimportant—evaporates. “You have this overactive default mode network with no connectivity, with no communication with regions that are important for making sense of the thoughts,” says Andrews-Hanna.

In healthy sleeping subjects, these executive functions can be unlocked in what’s known as lucid dreaming, when the prefrontal cortex reactivates and sleepers gain awareness of and control over their imagined actions. A lucid dreamer can actually “direct” a dream as it unfolds, deciding to fly, for example, or turning a nightmarish monster into a docile pet.

Records of lucid dreaming are limited to REM sleep, the sleep stage where the brain is most active. REM sleep normally induces paralysis to prevent people from acting out their dreams, but the eye muscles are exempt, and this gives skilled lucid dreamers a way to signal their lucidity to researchers.

Dresler’s team is using this phenomenon as a tool to ask specific questions about dreams. Before trained lucid dreamers fall asleep in Dresler’s lab, they agree to flick their eyes from left to right as soon as they realize within a dream that they’re asleep. The dreamed movement causes their actual eyes to move in a similar way under their closed eyelids. Researchers mark this signal as the beginning of a lucid dream, and then track brain patterns associated with specific dreamed actions. Dreaming also occurs in non-REM sleep, but with the brain less active, the eye muscles won’t respond to dream input—so there’s no robust way to tell if lucid dreaming takes place.
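A crude version of that marker detection, scanning a horizontal EOG trace for the agreed left-right-left-right excursion pattern, might look like the sketch below. The sampling rate, amplitude threshold and timing window are invented for illustration and are not the criteria used in Dresler's laboratory.

# Illustrative sketch: flag a left-right-left-right (LRLR) eye signal in a
# horizontal EOG trace as four alternating large excursions within a short window.
# Threshold, sampling rate and window length are hypothetical.
def detect_lrlr(eog, fs=100, threshold=200.0, max_window_s=4.0):
    """Return the sample index where an LRLR pattern completes, or None."""
    events = []  # (sample index, sign) of each new suprathreshold excursion
    for i, v in enumerate(eog):
        if abs(v) >= threshold:
            sign = 1 if v > 0 else -1
            if not events or events[-1][1] != sign:   # keep only alternations
                events.append((i, sign))
    # An LRLR signal is four alternating excursions close together in time.
    for k in range(len(events) - 3):
        if events[k + 3][0] - events[k][0] <= max_window_s * fs:
            return events[k + 3][0]
    return None

# Usage: mark the onset of a lucid dream at the index returned by detect_lrlr(trace).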

When subjects achieved lucidity and consciously dreamed that they performed a predetermined hand movement, Dresler’s research team observed activity in the sensorimotor cortex matching what would occur if the subjects actually moved their hands while awake (Curr Biol, 21:1833-37, 2011). “It’s probably the case that, for most of what we are dreaming about, the very same machinery and the very same brain regions are active compared to wakefulness,” says Dresler. “It’s just that the motor execution is stopped at the spinal level.”

Beyond sleep research, tracking lucid and normal dreaming offers an investigative model to study aspects of psychosis, according to some researchers. “These regions that are activated during lucid dreaming are typically impaired in patients with psychosis,” explains Dresler. “Having insight into your non-normal mental state in dreaming shares neural correlates with having insights into your non-normal state of consciousness in psychosis.” Dresler proposes training patients in early stages of psychosis to dream lucidly, in the hope that it might grant them some therapeutically relevant understanding of their illness.

While executive functions are impaired in many patients suffering from psychosis, their default networks seem to be overactive, says Andrews-Hanna. But how much similarity exists between the brain states of dreaming and psychosis remains controversial. Domhoff emphasizes the unique nature of dreams. “They’re not like schizophrenia, they’re not like meditation, they’re not like any kind of drug trip,” he says. “They’re an enactment of a scenario that is based upon various wishes and concerns.”

Ultimately, says Solms, deciphering dreaming furthers the field’s knowledge of what the brain does, as much as studies conducted during waking hours. “If you’re a clinician, and you understand what the different parts of the brain do in relation to dreaming, then it’s one of the things you can use as a road map for evaluating your patients.”

Dreamed Movement Elicits Activation in the Sensorimotor Cortex

Martin Dresler, Stefan P. Koch, Renate Wehrle, Victor I. Spoormaker, et al.   Curr Biol. 8 Nov 2011; 21(21): 1833–1837   doi:10.1016/j.cub.2011.09.029

Since the discovery of the close association between rapid eye movement (REM) sleep and dreaming, much effort has been devoted to link physiological signatures of REM sleep to the contents of associated dreams [1, 2, 3 and 4]. Due to the impossibility of experimentally controlling spontaneous dream activity, however, a direct demonstration of dream contents by neuroimaging methods is lacking. By combining brain imaging with polysomnography and exploiting the state of “lucid dreaming,” we show here that a predefined motor task performed during dreaming elicits neuronal activation in the sensorimotor cortex. In lucid dreams, the subject is aware of the dreaming state and capable of performing predefined actions while all standard polysomnographic criteria of REM sleep are fulfilled [5 and 6]. Using eye signals as temporal markers, neural activity measured by functional magnetic resonance imaging (fMRI) and near-infrared spectroscopy (NIRS) was related to dreamed hand movements during lucid REM sleep. Though preliminary, we provide first evidence that specific contents of REM-associated dreaming can be visualized by neuroimaging.


Highlights

► Eye signals can be used to access dream content with concurrent EEG and neuroimaging

► Dreamed hand movements correspond to activity in the contralateral sensorimotor cortex

 

Lucid dreaming is a rare but robust state of sleep that can be trained [5]. Phenomenologically, it comprises features of both waking and dreaming [7]: in lucid dreams, the sleeping subject becomes aware of his or her dreaming state, has full access to memory, and is able to volitionally control dreamed actions [6]. Although all standard polysomnographic criteria of rapid eye movement (REM) sleep [8] are maintained and REM sleep muscle atonia prevents overt motor behavior, lucid dreamers are able to communicate their state by predefined volitional eye movements [6], clearly discernable in the electrooculogram (EOG) (Figure 1). Combining the techniques of lucid dreaming, polysomnography, and brain imaging via functional magnetic resonance imaging (fMRI) or near-infrared spectroscopy (NIRS), we demonstrate the possibility to investigate the neural underpinnings of specific dream contents—in this case, dreamed hand clenching. Predecided eye movements served as temporal markers for the onset of hand clenching and for hand switching. Previous studies have shown that muscle atonia prevents the overt execution of dreamed hand movements, which are visible as minor muscle twitches at most [3 and 9].


Figure 1.

Exemplary Lucid REM Sleep as Captured by Polysomnography during Simultaneous fMRI

Note high-frequency electroencephalogram (EEG) and minimal electromyogram (EMG) amplitude due to muscle atonia characteristic of rapid eye movement (REM) sleep (left), with wakefulness for comparison (right). Subjects were instructed to communicate the state of lucidity by quick left-right-left-right (LRLR) eye movements. Filter settings are as follows: EEG, bandpass filter 0.5−70 Hz, with additional notch filter at 50 Hz; electrooculogram (EOG), bandpass filter 0.1–30 Hz; EMG, bandpass filter 16–250 Hz.

 


Figure 2.

Comparison of Sensorimotor Activation during Wakefulness and Sleep

Functional magnetic resonance imaging (fMRI) blood oxygen level-dependent (BOLD)-response increases were contrasted between left and right hand movements (columns) in the three conditions (rows): executed hand movement during wakefulness (WE) (A), imagined hand movement during wakefulness (WI) (B), and dreamed hand movement during lucid REM sleep (LD) (C). Effects of left (right) hand movements were calculated in a fixed-effects analysis as a contrast “left > right” and “right > left,” respectively. Subpanels depict results in an SPM glass-brain view (sagittal and coronal orientation) to demonstrate the regional specificity of the associated cortical activation, along with sensorimotor activation overlaid on an axial slice of the subject’s T1-weighted anatomical scan (position indicated on the glass brain for condition A). Clusters of activation in the glass-brain views are marked using the numbering given in Table S1. Red outlines in the glass-brain views mark the extent of activation found in the WE condition. This region of interest (ROI) was derived from the respective activation map during executed hand movement (A), thresholded at whole-brain corrected pFWE < 0.005, cluster extent >50 voxels, and served as an ROI for analysis of the WI and LD conditions in (B) and (C), respectively. T values are color-coded as indicated. The time course of the peak voxel inside the ROI is depicted (black) along with the predicted hemodynamic response based on the external pacing (A and B) or the predefined LRLR-eye signals during (C). The maximal difference in activation of the peak voxel between conditions is indicated as percentage of BOLD signal fluctuations of the predicted time course (gray).

The fMRI results were confirmed by an independent imaging method in a second subject: NIRS data showed a typical hemodynamic response pattern of increased contralateral oxygenation over the sensorimotor region during successful task performance in lucid REM sleep (Figure 3; Figure 4). Notably, during dreaming, the hemodynamic responses were smaller in the sensorimotor cortex but of similar amplitude in the supplementary motor area (SMA) when compared to overt motor performance during wakefulness.


Figure 3.

Near-Infrared Spectroscopy Topography

Concentration changes of oxygenated (Δ[HbO], upper panel) and deoxygenated hemoglobin (Δ[HbR], lower panel) during executed (WE) and imagined (WI) hand clenching in the awake state and dreamed hand clenching (LD). The optical probe array covered an area of ∼7.5 × 12.5 cm2 over the right sensorimotor area. The solid box indicates the ROI over the right sensorimotor cortex with near-infrared spectroscopy (NIRS)-channels surrounding the C4-EEG electrode position. NIRS channels located centrally over midline and more anterior compared to sensorimotor ROI were chosen as ROI for the supplementary motor area (SMA, dotted box).


Figure 4.

Condition-Related NIRS Time Courses

Time courses of HbO (red traces) and HbR (blue traces) from the right sensorimotor ROI (left panel) and the supplementary motor ROI SMA (right panel) for executed (WE) and imagined (WI) hand clenching in the awake state and dreamed hand clenching (LD). The time courses represent averaged time courses from NIRS channels within the respective ROI (Figure 3). For each condition, 0 s denotes the onset of hand clenching indicated by LRLR-signals. Note that the temporal dynamics, i.e., an increase in HbO and a decrease in HbR, are in line with the typical hemodynamic response. Overt movement during wakefulness (dark red/blue traces) showed the strongest hemodynamic response, whereas the motor task during dreaming led to smaller changes (light red/blue traces). In the SMA, the hemodynamic response was stronger during the dreamed task than during imagined movement during wakefulness.

Neurophysiological studies suggest that during REM sleep, the brain functions as a closed loop system, in which activation is triggered in pontine regions while sensory input is gated by enhanced thalamic inhibition and motor output is suppressed by atonia generated at the brain stem level [4 and 12].

Efforts have been made to correlate REMs to gaze direction during dreams—the “scanning hypothesis” [1 and 2]—and indeed similar cortical areas are involved in eye movement generation in wake and REM sleep [17]. In a similar vein, small muscle twitches during REM sleep were presumed to signal a change in the dream content [3]. Dream research methodology mostly relies on the evaluation of subjective reports of very diverse dream contents.

During dreaming, activation was much more localized in small clusters representing either generally weaker activation or focal activation of hand areas only, with signal fluctuations only on the order of 50% of those seen for the actually executed task during wakefulness. The SMA is involved in timing, preparation, and monitoring of movements [21], and is linked to the retrieval of a learned motor sequence, especially in the absence of external cues [22]. Our NIRS data support activation of the SMA even during simple movements. This is in line with several PET and fMRI studies reporting SMA activation for simple tasks such as hand clenching, single finger-tapping, and alternated finger-tapping.

 

While You Were Sleeping

Assessing body position in addition to activity may improve monitoring of sleep-wake periods.

By Ruth Williams | March 1, 2016

Polysomnography—the combined assessment of brain waves, heart rate, oxygen saturation, muscle activity, and other parameters—is the most precise way to track a person’s sleeping patterns. However, the equipment required for such analyses is expensive, bulky, and disruptive to natural behavior.

Researchers are thus searching for ways to improve the accuracy of wearable devices while maintaining user-friendliness. Maria Angeles Rol of the University of Murcia in Spain and her colleagues have now discovered that by using a device strapped to the patient’s upper arm that measures both arm activity and position (the degree of tilt), they can more precisely detect periods of sleep.

The researchers studied just 13 people in this pilot study, notes Barbara Galland of the University of Otago in New Zealand, but she adds that the work nonetheless “provide[s] an opening for further investigations to demonstrate the value of this novel technique.” (Chronobiol Int, 32:701-10, 2015)

 

Validation of an innovative method, based on tilt sensing, for the assessment of activity and body position  

M. A. Bonmati-Carrion, B. Middleton, V. L. Revell, D. J. Skene, M. A. Rol & J. A. Madrid
Chronobiology International: The Journal of Biological and Medical Rhythm Research 2015; 32(5):701-710. http://dx.doi.org/10.3109/07420528.2015.1016613

Since there is less movement during sleep than during wake, the recording of body movements by actigraphy has been used to indirectly evaluate the sleep–wake cycle. In general, most actigraphic devices are placed on the wrist and their measures are based on acceleration detection. Here, we propose an alternative way of measuring actigraphy at the level of the arm for joint evaluation of activity and body position. This method analyzes the tilt of three axes, scoring activity as the cumulative change of degrees per minute with respect to the previous sampling, and measuring arm tilt for the body position inference. In this study, subjects (N = 13) went about their daily routine for 7 days, kept daily sleep logs, wore three ambulatory monitoring devices and collected sequential saliva samples during evenings for the measurement of dim light melatonin onset (DLMO). These devices measured motor activity (arm activity, AA) and body position (P) using the tilt sensing of the arm, with acceleration (wrist acceleration, WA) and skin temperature at wrist level (WT). Cosinor, Fourier and non-parametric rhythmic analyses were performed for the different variables, and the results were compared by the ANOVA test. Linear correlations were also performed between actimetry methods (AA and WA) and WT. The AA and WA suitability for circadian phase prediction and for evaluating the sleep–wake cycle was assessed by comparison with the DLMO and sleep logs, respectively. All correlations between rhythmic parameters obtained from AA and WA were highly significant. Only parameters related to activity levels, such as mesor, RA (relative amplitude), VL5 and VM10 (value for the 5 and 10 consecutive hours of minimum and maximum activity, respectively) showed significant differences between AA and WA records. However, when a correlation analysis was performed on the phase markers acrophase, mid-time for the 10 consecutive hours of highest (M10) and mid-time for the five consecutive hours of lowest activity (L5) with DLMO, all of them showed a significant correlation for AA (R = 0.607, p = 0.028; R = 0.582, p = 0.037; R = 0.620, p = 0.031, respectively), while for WA, only acrophase did (R = 0.621, p = 0.031). Regarding sleep detection, WA showed higher specificity than AA (0.95 ± 0.01 versus 0.86 ± 0.02), while the agreement rate and sensitivity were higher for AA (0.76 ± 0.02 versus 0.66 ± 0.02 and 0.71 ± 0.03 versus 0.53 ± 0.03, respectively). Cohen’s kappa coefficient also presented the highest values for AA (0.49 ± 0.04) and AP (0.64 ± 0.04), followed by WT (0.45 ± 0.06) and WA (0.37 ± 0.04). The findings demonstrate that this alternative actigraphy method (AA), based on tilt sensing of the arm, can be used to reliably evaluate the activity and sleep–wake rhythm, since it presents a higher agreement rate and sensitivity for detecting sleep, at the same time allows the detection of body position and improves circadian phase assessment compared to the classical actigraphic method based on wrist acceleration.
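
As a rough illustration of the scoring logic described in the abstract, the sketch below computes an arm-activity (AA) value as the cumulative change in tilt angle per minute and then scores epoch-by-epoch sleep detection against a sleep log (agreement rate, sensitivity, specificity, Cohen’s kappa). The sampling interval, epoching, and function names are assumptions for illustration, not the authors’ implementation.

import numpy as np

def arm_activity(tilt_deg, samples_per_min=2):
    """Arm activity (AA): cumulative change, in degrees per minute, of the three
    tilt axes with respect to the previous sample (samples_per_min is assumed)."""
    step = np.abs(np.diff(tilt_deg, axis=0)).sum(axis=1)          # degrees changed per sample
    step = np.concatenate(([0.0], step))
    n_min = len(step) // samples_per_min
    return step[:n_min * samples_per_min].reshape(n_min, samples_per_min).sum(axis=1)

def agreement_stats(predicted_sleep, logged_sleep):
    """Agreement rate, sensitivity, specificity and Cohen's kappa for epoch-wise
    sleep detection, scored against a sleep log."""
    pred = np.asarray(predicted_sleep, dtype=bool)
    ref = np.asarray(logged_sleep, dtype=bool)
    tp, tn = np.sum(pred & ref), np.sum(~pred & ~ref)
    fp, fn = np.sum(pred & ~ref), np.sum(~pred & ref)
    n = pred.size
    po = (tp + tn) / n                                            # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
    return {"agreement": po,
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "kappa": (po - pe) / (1 - pe)}
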
Sleep’s Kernel

Surprisingly small sections of brain, and even neuronal and glial networks in a dish, display many electrical indicators of sleep.

By James M. Krueger and Sandip Roy | March 1, 2016   http://www.the-scientist.com/?articles.view/articleNo/45394/title/Sleep-s-Kernel

Sleep is usually considered a whole-brain phenomenon in which neuronal regulatory circuits impose sleep on the brain. This paradigm has its origins in the historically important work of Viennese neurologist Constantin von Economo, who found that people who suffered from brain infections that damaged the anterior hypothalamus slept less. The finding was a turning point in sleep research, as it suggested that sleep was a consequence of active processes within the brain. This stood in stark contrast to the ideas of renowned St. Petersburg physiologist Ivan Pavlov, who believed that sleep resulted from the passive withdrawal of sensory input. Although the withdrawal of sensory input remains recognized as playing a role in sleep initiation, there is now much evidence supporting the idea that neuronal and glial activity in the anterior hypothalamus leads to the inhibition of multiple excitatory neuronal networks that project widely throughout the brain.

But we also know from millions of stroke cases that cause brain damage and from experimentally induced brain damage in animal models that, regardless of where a lesion occurs in the brain, including the anterior hypothalamus, all humans or animals that survive the brain damage will continue to sleep. Further, a key question remains inadequately answered: How does the hypothalamus know to initiate sleep? Unless one believes in the separation of mind and brain, then, one must ask: What is telling the hypothalamus to initiate sleep? If an answer is found, it leads to: What is telling the structure that told the hypothalamus? This is what philosophers call an infinite regress, an unacceptable spiral of logic.

For these reasons, 25 years ago the late Ferenc Obál Jr. of A. Szent-Györgyi Medical University in Szeged, Hungary, and I (J.K.) began questioning the prevailing ideas of how sleep is regulated. The field needed answers to fundamental questions. What is the minimum amount of brain tissue required for sleep to manifest? Where is sleep located? What actually sleeps? Without knowing what sleeps or where sleep is, how can one talk with any degree of precision about sleep regulation or sleep function? A new paradigm was needed.

CHARACTERIZING SLEEP: Sleep-like patterns of neural activity are apparent not just at the level of the whole brain, but also in isolated neural circuits. Researchers have even documented sleep-like behavior in cultures of glial and neural cells. By increasing the number of electrophysiological measurements we use to characterize sleep states, the homology between sleep-like states in culture and sleep in intact animals becomes stronger.
© CATHERINE DELPHIA

There is no direct measure of sleep, and no single measure is always indicative of sleep. Quiescent behavior and muscle relaxation usually occur simultaneously with sleep but are also found in other circumstances, such as during meditation or watching a boring TV show. Sleep is thus defined in the clinic and in experimental animals using a combination of multiple parameters that typically correlate with sleep.

The primary tool for assessing sleep state in mammals and birds is the electroencephalogram (EEG). High-amplitude delta waves (0.5–4 Hz) are a defining characteristic of the deepest stage of non–rapid eye movement (non-REM) sleep. However, similar waves are evident in adolescents who hyperventilate for a few seconds while wide awake. Other measures used to characterize sleep include synchronization of electrical activity between EEG electrodes and the quantification of EEG delta wave amplitudes. Within specific sensory circuits, the cortical electrical responses induced by sensory stimulation (called evoked response potentials, or ERPs) are higher during sleep than during waking. And individual neurons in the cerebral cortex and thalamus display action potential burst-pause patterns of firing during sleep.
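
A common way to quantify the delta activity mentioned above is to estimate spectral power in the 0.5–4 Hz band. The snippet below is a generic sketch with an assumed sampling rate and window length, not the method of any particular study cited here.

import numpy as np
from scipy.signal import welch

def delta_power(eeg, fs=256.0, band=(0.5, 4.0)):
    """Absolute power in the delta band (0.5-4 Hz) of one EEG channel,
    estimated with Welch's method (fs and window length are assumed values)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))   # 4-s windows
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])               # integrate the PSD over the band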

Using such measures, researchers have shown that different parts of the mammalian brain can sleep independently of one another. Well-characterized sleep regulatory substances, or somnogens, such as growth hormone releasing hormone (GHRH) and tumor necrosis factor α (TNF-α), can induce supranormal EEG delta waves during non-REM sleep in the specific half of the rat brain where the molecules were injected. Conversely, if endogenous TNF-α or GHRH production is inhibited, spontaneous EEG delta waves during non-REM sleep are lower on the side receiving the inhibitor. A more natural example of sleep lateralization is found in the normal unihemispheric sleep of some marine mammals. (See “Who Sleeps?”)

Much smaller parts of the brain also exhibit sleep-like cycles. As early as 1949, Kristian Kristiansen and Guy Courtois at McGill University and the Montreal Neurological Institute showed that, when neurons carrying input from the thalamus and surrounding cortical tissue are surgically severed, clusters of neurons called cerebral cortical islands will alternate between periods of high-amplitude slow waves that characterize sleep and low-amplitude fast waves typical of waking, independently of surrounding tissue.1 This suggests that sleep is self-organizing within small brain units.

In 1997, Ivan Pigarev of the Russian Academy of Sciences in Moscow and colleagues provided more-concrete evidence that sleep is a property of local networks. Measuring the firing patterns of neurons in monkeys’ visual cortices as the animals fell asleep while performing a visual task, they found that some of the neurons began to stop firing even while performance persisted. Specifically, the researchers found that, within the visual receptive field being engaged, cells on the outer edges of the field stopped firing first. Then, as the animal progressed deeper into a sleep state, cells in more-central areas stopped firing. This characteristic spatial distribution of the firing failures is likely a consequence of network behavior. The researchers thus concluded that sleep is a property of small networks.2

More recently, David Rector at Washington State University and colleagues provided support for the idea of locally occurring sleep-like states. In a series of experiments, they recorded electrical activity from single cortical columns using a small array of 60 electrodes placed over the rat somatosensory cortex. The sensory input from individual facial whiskers maps onto individual cortical columns. As expected, ERPs in the cortical columns induced by twitching a whisker were higher during sleep than during waking. But looking at the activity of individual columns, the researchers observed that they could behave somewhat independently of each other. When a rat slept, most—but not all—of the columns exhibited the sleep-like high-amplitude ERPs; during waking, most—but not all—of the columns were in a wake-like state. Interestingly, the individual cortical columns also exhibited patterns that resembled a sleep rebound response: the longer a column was in the wake-like state, the higher the probability that it would soon transition into a sleep-like state.3

To test how cortical-column state can affect whole-animal behavior, Rector and his team trained rats to lick a sucrose solution upon the stimulation of a single whisker, then characterized the whisker’s cortical-column state. If the column receiving input from the stimulated whisker was in a wake-like state (low-magnitude ERP), the rats did not make mistakes. But if the column was in the sleep-like state (high-magnitude ERP), the animals would fail to lick the sucrose when stimulated and would sometimes lick it even when their whisker was not flicked.4 Even though the animal was awake, if a cortical column receiving stimulation was asleep, it compromised the animal’s performance. These experiments indicate that even very small neuronal networks sleep and that the performance of learned behavior can depend on the state of such networks.
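
A toy restatement of the logic in these experiments: classify a cortical column as sleep-like or wake-like from its evoked-response magnitude, and let the probability of a switch into the sleep-like state grow with time spent in the wake-like state. The threshold and the saturating form below are illustrative assumptions, not Rector’s analysis.

import numpy as np

def column_state(erp_amplitudes, threshold):
    """Label a cortical column sleep-like or wake-like from its evoked-response
    magnitude: high-amplitude ERPs are treated as sleep-like (threshold assumed)."""
    return np.where(np.asarray(erp_amplitudes) > threshold, "sleep-like", "wake-like")

def sleep_transition_probability(minutes_in_wake_state, rate=0.05):
    """Toy rebound rule: the longer a column has been in the wake-like state, the
    more likely it is to switch to the sleep-like state (saturating form assumed)."""
    return 1.0 - np.exp(-rate * np.asarray(minutes_in_wake_state))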

Given that sleep can manifest in relatively small brain regions, perhaps it should not be too surprising that co-cultures of neurons and glia possess many of the electrophysiological sleep phenotypes that are used to define sleep in intact animal brains. During sleep, cortical and thalamic neurons display bursts of action potentials lasting about 500 ms, followed by periods of hyperpolarization lasting about the same length of time. The synchronization of this firing pattern across many neurons is thought to generate the EEG activity characteristic of delta-wave sleep, and undisturbed co-cultures of glia and neurons display periodic bursts of action potentials, suggesting that the culture default state is sleep-like. In contrast, if neuronal and glia networks are stimulated with excitatory neurotransmitters, the culture’s “burstiness”—the fraction of all action potentials found within bursts—is reduced, indicating a transition to a wake-like state. Treatment of co-cultures with excitatory neurotransmitters also converts their gene expression profile from a spontaneous sleep-like pattern to a wake-like pattern.5
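
Burstiness, as defined in the text, is simply the fraction of all action potentials that fall within bursts. One straightforward way to compute it from a list of spike times is sketched below; the inter-spike-interval criterion for what counts as a burst is an arbitrary choice for illustration.

import numpy as np

def burstiness(spike_times, max_isi=0.1, min_spikes=3):
    """Fraction of all spikes that fall within bursts.

    A burst is taken here as a run of >= min_spikes spikes whose consecutive
    inter-spike intervals are all <= max_isi seconds (assumed criterion)."""
    t = np.sort(np.asarray(spike_times))
    if t.size == 0:
        return 0.0
    spikes_in_bursts = 0
    run = 1
    for isi in np.diff(t):
        if isi <= max_isi:
            run += 1
        else:
            if run >= min_spikes:
                spikes_in_bursts += run
            run = 1
    if run >= min_spikes:
        spikes_in_bursts += run
    return spikes_in_bursts / t.size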

SLEEP IN VITRO: Neurons co-cultured with glial cells display patterns of action potentials and slow (delta) waves, suggesting that small neural networks can and do sleep, even outside of the body. In culture, neurons fire in bursts, and slow-wave electrical activity is synchronized while in a default sleep-like state. However, if the culture is stimulated with electricity or excitatory neurotransmitters, delta-wave amplitude and the neurons’ synchrony, or burstiness, are reduced, suggesting that the culture “wakes up.” Conversely, the addition of TNF-α, a sleep-inducing agent, increases burstiness and the amplitudes of delta waves.
© CATHERINE DELPHIA

Cell cultures also respond to sleep-inducing agents similarly to whole organisms. If a neuronal and glial culture is treated with TNF-α, the synchronization and amplitudes of slow-wave electrical activity increase, indicating a deeper sleep-like state. Moreover, ERPs are of greater magnitude after cultures are treated with TNF-α than during the sleep-like default state, suggesting that the somnogen induces a deeper sleep-like state in vitro as it does in vivo.6

Researchers have even studied the developmental pattern of such sleep phenotypes, using multielectrode arrays to characterize network activity throughout the culture; the emergence of network properties follows a time course similar to that seen in intact mouse pups. Spontaneous action potentials occur during the first few days in culture, but network emergent properties are not evident until after about 10 days. Then, synchronization of electrical potentials begins to emerge, and the network’s slow waves begin to increase in amplitude. If the cultures are electrically stimulated, slow-wave synchronization and amplitudes are reduced, suggesting the networks wake up. This is followed by rebound-enhanced slow-wave synchronization and amplitudes the next day, suggesting sleep homeostasis is also a characteristic of cultured networks.6

Clearly, even small neural networks can exhibit sleep-like behavior, in a dish or in the brain. But the question remains: What is driving the oscillations between sleep- and wake-like states?

Sleep emerges

In the intact brain, communication among neurons and between neurons and other cells is ever changing. Bursts of action potentials trigger the release of multiple substances and changes in gene expression, both of which alter the efficacy of signal transmission. For instance, neural or glial activity induces the release of ATP into the local extracellular space. Extracellular ATP, in turn, induces changes in the expression of TNF-α and other somnogens known to induce a sleep-like state. Because these effects take place in the immediate vicinity of the cell activity, they target sleep to local areas that were active during prior wakefulness.

In 1993, Obál and I (J.K.) proposed that sleep is initiated within local networks as a function of prior activity.7 The following year, Derk-Jan Dijk and Alex Borbely of the University of Zurich provided support for this idea when they had volunteers hold hand vibrators in one hand during waking to stimulate one side of the somatosensory cortex. In subsequent sleep, the side of the brain that received input from the stimulated hand exhibited greater sleep intensity, determined from amplitudes of EEG slow waves, than the opposite side of the brain. And in 2006, Reto Huber, then at the University of Wisconsin, showed that if an arm is immobilized during waking, amplitudes of EEG slow waves from the side of the brain receiving input from that arm are lower in subsequent sleep.

These experiments indicate that local sleep depth is a function of the activity of the local network during waking—an idea that has been confirmed by multiple human and animal studies. Moreover, local network state oscillations strongly indicate that sleep is initiated within local networks such as cortical columns. But how do the states of a population of small networks translate into whole-animal sleep?

Small local clusters of neurons and glia are loosely connected with each other via electrophysiological and biochemical signaling, allowing for constant communication between local networks. Steven Strogatz of Cornell University showed that dynamically coupled entities, including small neuronal circuits, will synchronize with each other spontaneously without requiring direction by an external actor. Synchronization of loosely coupled entities occurs at multiple levels of complexity in nature from intact animals to molecules—for example, birds flocking, or the transition from water to ice. The patterns generated by bird flocking, or the hardness of ice, are called emergent properties.
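
Strogatz’s observation about spontaneous synchronization is usually illustrated with the Kuramoto model of coupled phase oscillators. The sketch below is that textbook model, offered only as an analogy for loosely coupled local networks; the parameter values are arbitrary, and it is not a model of cortical columns.

import numpy as np

def kuramoto_order_parameter(n=100, coupling=3.0, dt=0.01, steps=5000, seed=0):
    """Kuramoto model: n coupled phase oscillators with randomly drawn natural
    frequencies synchronize spontaneously once the coupling is strong enough."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 1.0, n)           # natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, n)    # initial phases
    for _ in range(steps):
        mean_field = np.mean(np.exp(1j * theta))
        r, psi = np.abs(mean_field), np.angle(mean_field)
        theta += dt * (omega + coupling * r * np.sin(psi - theta))
    return np.abs(np.mean(np.exp(1j * theta)))  # ~0 = incoherent, ~1 = synchronized

With coupling well above the critical value, the order parameter becomes large because most oscillators lock to a common rhythm; below it, the phases drift incoherently and the order parameter stays near zero, which is the qualitative point being made about coupled local networks.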

We, Obál, and our colleagues proposed that whole-brain sleep is an emergent property resulting from the synchronization of local neuronal network states.7,8,9 This would explain why sleep continues to occur after brain damage: because the remaining local circuits will spontaneously synchronize with each other. This view also allows one to easily envision variations in the depth or degree of sleep and waking because it allows for some parts of the brain to be in sleep-like states while other areas are in wake-like states, just as Rector observed. These independent states of local networks may account for sleep inertia, the minutes-long period upon awakening of poor cognitive performance and fuzzy-mindedness, and may also play a role in the manifestation of dissociated states such as sleepwalking. Most importantly, this paradigm frees sleep regulation from the dualism trap of mind/brain separation: top-down imposition of state is not required for the initiation of local state oscillations or for subsequent whole-organism sleep to ensue.

Our theory is also consistent with the modulation of sleep and wakefulness by sleep regulatory circuits such as those in the hypothalamus. For example, if interleukin-1, a sleep regulatory substance, is applied locally to the surface of the rat cortex, it induces local high-amplitude EEG slow waves indicative of a greater local depth of sleep.10 The responses induced by interleukin-1 in the cortex enhanced neuronal activity in anterior hypothalamic sleep regulatory areas.11 That hypothalamic neuronal activity likely provides information on local sleep- and wake-like states occurring in the cortex to the hypothalamus, where it can modulate the orchestration of the sleep initiated within the smaller brain units.

Finally, our ideas may inform the study of how sleep influences the formation of memories. A fundamental problem a living brain faces is the incorporation of new memories and behaviors while conserving existing ones. We know that cell activity enhances neuronal connectivity and the efficacy of neurotransmission within active circuits, a phenomenon that has been posited to be a mechanism by which memories are formed and solidified. By themselves, however, these use-dependent mechanisms would lead to unchecked growth of connectivity (in response to activity patterns) and positive feedback (since increased connectivity leads to reuse), ultimately resulting in a rigid, non-plastic network.7 Instead, we suggest that biochemical mechanisms—specifically, the use-dependent expression of genes involved in sleep regulation and memory—induce oscillations, representing local wake- and sleep-like states, which serve to stabilize and preserve brain plasticity.7

For more than a century, researchers have struggled to understand how sleep works and what it does. Perhaps this lack of answers stems from a fundamental misconception about what sleeps. By thinking about sleep in smaller units, such as individual networks in the brain, hopefully the field will start to understand what exactly is going on during this enigmatic—but very common—phenomenon.

James M. Krueger is a regents professor of neuroscience and Sandip Roy is an associate professor of electrical engineering at Washington State University.

References

  1. K. Kristiansen, G. Courtois, “Rhythmic electrical activity from isolated cerebral cortex,” Electroen Clin Neuro, 1:265-72, 1949.
  2. I.N. Pigarev et al., “Evidence for asynchronous development of sleep in cortical areas,” Neuroreport, 8:2557-60, 1997.
  3. D.M. Rector et al., “Local functional state differences between rat cortical columns,” Brain Res, 1047:45-55, 2005.
  4. J.M. Krueger et al., “Sleep: A synchrony of cell activity-driven small network states,” Eur J Neurosci, 38:2199-209, 2013.
  5. V. Hinard et al., “Key electrophysiological, molecular, and metabolic signatures of sleep and wakefulness revealed in primary cortical cultures,” J Neurosci, 32:12506-17, 2012.
  6. K.A. Jewett et al., “Tumor necrosis factor enhances the sleep-like state and electrical stimulation induces a wake-like state in co-cultures of neurons and glia,” Eur J Neurosci, 42:2078-90, 2015.
  7. J.M. Krueger, F. Obál, “A neuronal group theory of sleep function,” J Sleep Res, 2:63-69, 1993.
  8. J.M. Krueger et al., “Sleep as a fundamental property of neuronal assemblies,” Nat Rev Neurosci, 9:910-19, 2008.
  9. S. Roy et al., “A network model for activity-dependent sleep regulation,” J Theor Biol, 253:462-68, 2008.
  10. T. Yasuda et al., “Interleukin-1 beta has a role in cerebral cortical state-dependent electro-encephalographic slow-wave activity,” Sleep, 28:177-84, 2005.
  11. K. Yasuda et al., “Unilateral cortical application of interleukin-1β (IL1β) induces asymmetry in Fos- and IL1β-immunoreactivity: Implications for sleep regulation,” Brain Res, 1131:44-59, 2007.

 In Dogged Pursuit of Sleep

Unearthing the root causes of narcolepsy keeps Emmanuel Mignot tackling one of sleep science’s toughest questions.

By Anna Azvolinsky | March 1, 2016   http://www.the-scientist.com/?articles.view/articleNo/45347/title/In-Dogged-Pursuit-of-Sleep

In November 1986, Emmanuel Mignot arrived at Stanford University’s  Center for Sleep Sciences and Medicine  for a 16-month stint as a research associate. His goal was to find effective drugs to treat narcolepsy; his study subjects belonged to a colony of canines that suffered from the malady. “[When I got there], the dogs were being maintained, but not much was being done with them other than some chemistry studies on known neurotransmitters,” says Mignot, a professor of psychiatry and behavioral sciences at Stanford University and now director of the center. “As a pharmacologist, I wanted to study potential treatments for narcolepsy and understand the molecular biology to improve treatment in humans.”

The first narcoleptic dog, a French poodle named Monique, was brought to Stanford in 1974 by William Dement, the so-called “father of sleep medicine,” who had founded the center in 1970, the first in the world dedicated to the study of sleep. Dement and other researchers there established a full breeding colony in 1977 when dogs with a genetic form of the neurological disorder were discovered—initially, some puppies from a litter of Dobermans and, later, some Labradors. Narcoleptic dogs and humans both exhibit a combination of symptoms: perpetual sleepiness, cataplexy—muscle paralysis attacks triggered by emotions—and abnormal rapid eye movement (REM) sleep. While the condition in humans and dogs is treatable, there is no cure.

To study which narcolepsy drugs increased wakefulness and decreased cataplexy in the dogs, Mignot and psychiatry professor Seiji Nishino used a food-elicited cataplexy test: administration of the drug followed by release into a room with pieces of food on the floor and careful observation. “The dog would rush into the room and be so happy to eat the treats, and then would have an attack and collapse on the floor.” The researchers counted the number and duration of the attacks after treatment with a drug at various doses. In humans, cataplexy episodes are triggered by a positive emotion such as laughter at a joke or pleasant surprise. “For the dogs, it is food or the joy of playing. That is what is great about dogs as a model for this condition. When you give a treatment to a rat or mouse and they stop having cataplexy, you really don’t know if it is because they don’t feel good or if it is a genuine effect. But the dogs show you emotions like humans. I knew all of these dogs by name. They were my friends. I could see if they were worried or didn’t feel well.”

Mignot worked mostly with the Dobermans and Labs, but there were also dogs donated to the colony that seemed to have a sporadic form of narcolepsy: “There was Vern, a miniature poodle; Wally, a big poodle; Tucker, a mutt; and Beau, my beloved dachshund.” Using the cataplexy test in animals along with in vitro studies of the drugs’ chemical properties, Mignot and Nishino found that antidepressants suppress cataplexy by inhibiting adrenergic reuptake, and that amphetamine-like stimulants promote wakefulness in narcoleptics by increasing the availability of dopamine. “We improved the then-current treatments and started to understand the kinds of chemicals important to regulate narcolepsy symptoms.”

But Mignot wanted to understand the molecular mechanism of narcolepsy, so he turned his focus to the genetic basis of the disorder. A lack of genetics training and no map of the dog genome to guide him did not deter Mignot. He has tirelessly pursued this previously little-studied and, so far, only known neurological disorder that fundamentally perturbs the nature of sleep states.

Here, Mignot talks about pursuing a master’s, PhD, and MD simultaneously, the paper retraction that has been the most difficult episode in his career so far, and his unexpected devotion to a Chihuahua.

Mignot Motivated

Sir Mix-a-Lot. The youngest of six siblings, Mignot had a penchant for collecting fossils and for conducting chemistry experiments in the bathroom of his family’s home in Paris. “I bought chemicals sold by a Chinese shopkeeper on Rue Saint-Dominique to do all kinds of experiments, mixed them, and occasionally made mistakes. There were burn marks and projections on the walls of my bathroom.” In high school, the self-proclaimed “nerd with glasses” became interested in biology, and, after graduation in 1977, went to study for a medical degree at the René Descartes University Faculty of Medicine in Paris.

Collecting degrees. “In the second year of medical school, I got bored from all of the memorization.” He took the entrance exam for the prestigious École Normale Supérieure (ENS), which gives students freedom to pursue their academic interests at other institutions while providing a stipend, housing, and the support of professor mentors. He passed, and entered the ENS in 1979. Mignot worked towards a master’s in biochemistry, and then a PhD in molecular pharmacology while still continuing his medical studies. “Nothing was set up for MD-PhD programs at the time. It was all in parallel, which was crazy. I had an exam every few weeks,” says Mignot. In 1984, he received both his medical degree and, later, a PhD from Pierre and Marie Curie University.

New to narcolepsy. Mignot became interested in the effects of drugs on the brains of psychiatric patients, studying how different compounds affected the metabolism of neurotransmitters in the brains of rats, and pursued a residency in psychiatry to complement his laboratory research. In 1986, he was offered a professorship in pharmacology at the Paris V University School of Medicine. But first, Mignot needed to complete the mandatory military service that he had deferred. “Instead of going to a former French colony to practice medicine, I convinced the French government to send me to Stanford to study modafinil, a wakefulness-promoting drug created by a French pharmaceutical company called Lafon Laboratories for the treatment of narcolepsy. I had never heard about [narcolepsy] during medical school—it must have been a single line in my textbooks. I discovered that Stanford was doing work on sleep and that Dement had started a colony of narcoleptic dogs there. I thought I could study these animals and figure out how modafinil worked.”

So Mignot came to Stanford for 16 months as part of his military service with financial support from Lafon Laboratories. “The company had claimed modafinil worked by a novel mechanism, unrelated to how stimulants work,” says Mignot. But Mignot found that modafinil bound the dopamine transporter, inhibiting the reuptake of the neurotransmitter, boosting wakefulness. “This is a similar mode of action as Ritalin, but the company was claiming otherwise. It took 10 years for my results to be validated, finally, by Nora D. Volkow, now director of the National Institute on Drug Abuse, who showed . . . that indeed the drug displaces the dopamine transporter at doses that increase wakefulness in humans.”

Mignot Moves

Going to the dogs. At Stanford, Mignot immersed himself in his work with the dog colony. “I worked all the time and came home just to sleep. I was definitely not very successful with girls then, because I smelled like dog all the time. I spent all day with the dogs, going to the facility, hugging, playing, and working with them. When we bred them, sometimes the mothers rejected their puppies so we had to come in every few hours, even in the middle of the night, to bottle-feed the puppies. Even after I took a shower, you could still smell the dogs. It was a strange part of my life.”

From pharmacology to genetics. Mignot kept extending his stay at Stanford. “After a few years I realized our pharmacology studies were never going to lead to narcolepsy’s cause. We needed to find the genetic cause in the dog.” In 1988, he resigned a faculty position in Paris—which was being held for him even as he continued to extend his time at Stanford—deciding to search for the mutated gene responsible for narcolepsy in dogs. In 1993, Mignot became the head of the Center for Narcolepsy at Stanford. A connection between an immune gene, the human leukocyte antigen (HLA) allele HLA-DR2, and narcolepsy in humans had already been identified by Yutaka Honda at the University of Tokyo, so Mignot’s lab tried to ascertain whether the same connection was true in the dogs or if the immune gene was simply a genetic linkage marker. These were the days before the dog or human genome had been sequenced, so the work took Mignot’s lab 10 years, and almost 200 narcoleptic Dobermans and Labradors: years of painstaking chromosome walking experiments, DNA fingerprinting, and the construction of a bacterial artificial chromosome library of dog genomic pieces. “What helped us a lot was that we knew the Dobermans and Labs had the same genetic defect because we interbred and got narcoleptic puppies—what’s called a complementation test.” In 1999, Mignot’s team identified the mutated gene as hypocretin receptor 2, whose protein binds hypocretin (also called orexin), a neuropeptide that regulates arousal and wakefulness. Several weeks later, after seeing these findings, Masashi Yanagisawa’s lab independently published a confirmation, showing that hypocretin knockout mice also have narcolepsy.

In parallel narcolepsy studies across ethnic groups, Mignot’s lab found that it was not the initial HLA-DR2 allele that predisposed humans to narcolepsy, but another, nearby HLA gene, DQB1*0602.

Humans are not like dogs. “After we found the gene, the research went fast. We decided to look at hypocretin itself and see if it’s abnormal in humans.” Mignot’s lab sequenced the genes for the hypocretin receptor and its ligand in narcoleptic patients, expecting mutations in either to be rare because of the known HLA-narcolepsy linkage and the fact that most cases in humans, unlike in dogs, are not familial. Only one documented case, a child who had narcolepsy onset at six months of age, has been found to harbor a hypocretin gene mutation. “I think you need to knock out both receptor 1 and 2 in humans to get the full narcoleptic phenotype,” says Mignot. “Those with just one mutation may be more prone to tiredness but not full narcolepsy.”

In 2000, Mignot’s and Nishino’s groups reported that hypocretin was not present in narcoleptic patients’ cerebrospinal fluid—a test still used diagnostically today. The same year, independent studies from Mignot’s laboratory and that of Jerome Siegel at the University of California, Los Angeles, found that the lack of hypocretin was not due to gene mutations but to the fact hypocretin cells were missing in the brains of narcoleptic patients. HLA genes were well known to be associated with many autoimmune diseases, and Mignot hypothesized that hypocretin was missing due to an autoimmune attack against hypocretin-secreting neurons. What the abnormality is in those narcolepsy patients with normal hypocretin levels remains a mystery.

Mignot Moves Forward

Still a missing link. “I have been working on this [autoimmunity] hypothesis for 10 years, and we see that this hypothesis is more and more likely, but we cannot find any direct proof. It’s frustrating, but that kind of struggle is the story of my life.” All known autoimmune diseases result in the generation of antibodies in patients, but antibodies against hypocretin or the hypocretin cells have never been detected. So Mignot’s lab tested whether T-cells were the immune component attacking hypocretin. In 2013, his lab published a study identifying the T-cell culprits. But the study was retracted by Mignot himself one year later, when Mignot’s group couldn’t reproduce the results after the scientist who did most of the experiments had left the lab. “It was really painful and the worst time in my career.”

A new lead. “In 2010, a lot of people suddenly started to develop narcolepsy after receiving the Pandemrix vaccine against swine flu. It’s very odd. We still don’t understand why this particular vaccine increased the risk of narcolepsy.” Mignot thinks that a component of the vaccine or the virus itself triggers the immune system to attack hypocretin-producing neurons. “So now I am doing a lot of studies comparing the different vaccines and the wild-type virus to try to understand what could be common to produce this response. I think the vaccine will give us a final clue to isolate the immune T-cells involved in narcolepsy.”

Genetics of sleep. Mignot’s lab is working on a genome-wide association study, which shows that the genetic variants linked to narcolepsy are mostly immune-related, similar to Type 1 diabetes, celiac disease and other autoimmune diseases, further supporting the autoimmune hypothesis. Mignot is also getting a large human study off the ground. “I want to study the genetics of 40,000 people with sleep issues to see if there are genetic traits that cause people to sleep well or not sleep well, to need more sleep or less sleep. This hasn’t been done yet. I think this will help us crack open the mysteries of sleep.”

A new companion. “The dog colony was officially dismantled in 2000 after we found the canine narcolepsy gene. The dogs were adopted and we got Bear, a narcoleptic Schipperke. He passed away over a year ago. I loved that dog and miss him a lot. He was an unusually kind soul. Three months later, a breeder from Vermont called and said he had a narcoleptic Chihuahua. I flew to Vermont and adopted Watson and he’s been with us ever since. I never would have thought to adopt a Chihuahua, but now I can’t think of life without Watson. He is faithful and cuddly. I really think you can bond with any dog.”

The journey continues. “This story of narcolepsy, it’s a difficult story. Finding the gene was very difficult, and finding the autoimmune connection should have been trivial, but it has been an ordeal because there is absolutely no collateral damage. As [Stanford neurologist] Larry Steinman said to me, it’s like a ‘hit and run’—it looks like it was cleaned up and the players disappear. It’s hard, but by learning about this disease, we may discover other diseases where a similar autoimmune destruction happens in the brain but we have never realized it. I wouldn’t be surprised if some forms of depression and schizophrenia have an autoimmune basis in the brain. By experience, the more difficult it is, the more interesting the answer will be.”

Greatest Hits

  • Identified the gene for hypocretin receptor 2, which, when mutated, causes an inherited form of narcolepsy in Dobermans and Labradors
  • Identified how antidepressant and stimulant drugs work as treatments for narcolepsy
  • Identified DQB1*0602 as the main human gene associated with narcolepsy
  • By genome-wide association, found immune polymorphisms, such as one in the T-cell receptor alpha, that also predispose people to the disease, further suggesting the disease is autoimmune
  • Found that human narcolepsy, unlike canine narcolepsy, is not caused by mutations in the hypocretin receptor 2 gene but is due to an immune-mediated destruction of hypocretin-producing neurons in the brain

DQB1*0602 and DQA1*0102 (DQ1) are better markers than DR2 for narcolepsy in Caucasian and black Americans.

Sleep. 1994 Dec;17(8 Suppl):S60-7.    http://www.ncbi.nlm.nih.gov/pubmed/7701202
In the present study, we tested 19 Caucasian and 28 Black American narcoleptics for the presence of the human leucocyte antigen (HLA) DQB1*0602 and DQA1*0102 (DQ1) genes using a specific polymerase chain reaction (PCR)-oligotyping technique. A similar technique was also used to identify DRB1*1501 and DRB1*1503 (DR2). Results indicate that all but one Caucasian patient (previously identified) were DRB1*1501 (DR2) and DQB1*0602/DQA1*0102 (DQ1) positive. In Black Americans, however, DRB1*1501 (DR2) was a poor marker for narcolepsy. Only 75% of patients were DR2 positive, most of them being DRB1*1503, but not DRB1*1501 positive. DQB1*0602 was found in all but one Black narcoleptic patient. The clinical and polygraphic results for this patient were typical, thus confirming the existence of a rare, but genuine form of DQB1*0602 negative narcolepsy. These results demonstrate that DQB1*0602/DQA1*0102 is the best marker for narcolepsy across all ethnic groups.
Genetic studies in the sleep disorder narcolepsy.
Kadotani H, Faraco J, Mignot E. Genome Res. 1998 May;8(5):427-34.
Narcolepsy is a chronic neurologic disorder characterized by excessive daytime sleepiness and abnormal manifestations of REM sleep including cataplexy, sleep paralysis, and hypnagogic hallucinations. Narcolepsy is both a significant medical problem and a unique disease model for the study of sleep. Research in human narcolepsy has led to the identification of specific HLA alleles (DQB1*0602 and DQA1*0102) that predispose to the disorder. This has suggested the possibility that narcolepsy may be an autoimmune disorder, a hypothesis that has not been confirmed to date. Genetic factors other than HLA are also likely to be involved. In a canine model of narcolepsy, the disorder is transmitted as a non-MHC single autosomal recessive trait with full penetrance (canarc-1). A tightly linked marker for canarc-1 has been identified, and positional cloning studies are under way to isolate canarc-1 from a newly developed canine genomic BAC library. The molecular cloning of this gene may lead to a better understanding of sleep mechanisms, as has been the case for circadian rhythms following the cloning of frq, per, and Clock.

Sleep consumes almost one-third of any human lifetime, yet its biological function remains unknown. Electrophysiological studies have shown that sleep is physiologically heterogeneous. Sleep onset is first characterized by light nonrapid eye movement (NREM) sleep (stage I and II), followed by deep NREM sleep or slow-wave sleep (stage III and IV) and finally rapid eye movement (REM) sleep. This sleep cycle is ∼90 min long and is repeated multiple times during nocturnal sleep. REM sleep, also called paradoxical sleep, is characterized by low-voltage fast electroencephalogram activity, increased brain metabolism, skeletal muscle atonia, rapid eye movements, and dreaming. Total sleep deprivation and/or REM sleep deprivation are both lethal in animals.

NREM and REM sleep are mainly regulated by circadian and homeostatic processes. Recent studies have suggested that across the animal kingdom, circadian rhythms are regulated by similar negative feedback loops involving the rhythmic expression of RNAs encoding proteins that act to shut off the genes encoding them (Hall 1995; Dunlap 1996; Rosbash et al. 1996; Young et al. 1996). From a genetic perspective, much less progress has been made in the noncircadian aspects of sleep regulation. This review demonstrates that a genetic approach to narcolepsy will in time provide a novel insight into the molecular basis of sleep control.

Narcolepsy, a Disorder of REM Sleep Regulation

Narcolepsy most often begins in the second decade of life but may be observed at the age of 5 or younger (Honda 1988). The cardinal symptom in narcolepsy is a persistent and disabling excessive daytime sleepiness. Sleep attacks are unpredictable, irresistible, and may lead to continuing activities in a semiconscious manner, a phenomenon referred to as automatic behavior. Naps are usually refreshing, but the restorative effect vanishes quickly.

Sleepiness is not sufficient to diagnose the disorder. Narcoleptic patients also experience symptoms that are secondary to abnormal transitions to REM sleep (Aldrich 1992; Bassetti and Aldrich 1996). The most important of these symptoms is cataplexy, a pathognomonic symptom for the disorder. In cataplexy, humor, laughter, or anger triggers sudden episodes of muscle weakness ranging from sagging of the jaw, slurred speech, buckling of the knees or transient head dropping, to total collapse to the floor (Aldrich 1992; Bassetti and Aldrich 1996). Patients typically remain conscious during the attack, which may last a few seconds or a few minutes. Reflexes are abolished during the attack, as they are during natural REM sleep atonia. Sleep paralysis, another manifestation of REM sleep atonia, is characterized by an inability to move and speak while falling asleep or upon awakening. Episodes last a few seconds to several minutes and can be very frightening. Hypnagogic hallucinations are vivid perceptual dream-like experiences (generally visual) occurring at sleep onset. Sleep paralysis and hypnagogic hallucinations occasionally occur in normal individuals under extreme circumstances of sleep deprivation or after a change in sleep schedule (Aldrich 1992; Bassetti and Aldrich 1996) and thus have little diagnostic value in isolation.

Nocturnal sleep polysomnography is conducted to exclude other possible causes of daytime sleepiness such as sleep apnea or periodic limb movements (Aldrich 1992). The Multiple Sleep Latency Test (MSLT) is also carried out to demonstrate daytime sleepiness objectively. In this test, patients are requested to take four or five naps at 2-hr intervals, during which the time to sleep onset (sleep latency) is measured. Short sleep latencies under 5 min are usually observed in narcoleptic patients, together with abnormal REM sleep episodes, referred to as sleep-onset REM periods (SOREMPs). The combination of a history of cataplexy, short sleep latencies, and two or more SOREMPs during MSLT is diagnostic for narcolepsy (Bassetti and Aldrich 1996; Mignot 1996). Note that many naps consist only of NREM sleep, suggesting that there is also a broader problem of impaired sleep–wake regulation, with indistinct boundaries between sleep and wakefulness in narcolepsy (Broughton et al. 1986; Bassetti and Aldrich 1996).
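
The diagnostic rule described above can be restated compactly, as in the hedged sketch below; the 5-min cutoff follows the figure quoted in the text, and the function is illustrative only, not a clinical tool.

def mslt_supports_narcolepsy(nap_latencies_min, soremp_count, cataplexy,
                             latency_cutoff=5.0, soremp_cutoff=2):
    """Rough restatement of the criteria in the text: a history of cataplexy,
    a short mean sleep latency on the MSLT, and >= 2 sleep-onset REM periods."""
    mean_latency = sum(nap_latencies_min) / len(nap_latencies_min)
    return cataplexy and mean_latency < latency_cutoff and soremp_count >= soremp_cutoff

# Example: five naps with latencies of 2, 4, 3, 6 and 1 min, 3 SOREMPs, cataplexy present
print(mslt_supports_narcolepsy([2, 4, 3, 6, 1], soremp_count=3, cataplexy=True))  # True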

The disorder has a large psychosocial impact. Two-thirds of patients have fallen asleep while driving, and 78% suffer from reduced performance at work (Broughton et al. 1981). Depression occurs in up to 23% of cases (Roth 1980). Treatment is purely symptomatic and generally involves amphetamine-like stimulants for excessive daytime sleepiness and antidepressive treatment for cataplexy and other symptoms of abnormal REM sleep (Bassetti and Aldrich 1996; Nishino and Mignot 1997).

Familial and Genetic Aspects of Human Narcolepsy

Narcolepsy–cataplexy affects 0.02%–0.18% of the general population in various ethnic groups (Mignot 1998). A familial tendency for narcolepsy has long been recognized (Roth 1980). The familial risk of a first-degree relative is 0.9%–2.3% for narcolepsy–cataplexy, which is 10–40 times higher than the prevalence in the general population (Mignot 1998).

In a Finnish twin cohort study consisting of 13,888 monozygotic (MZ) and same-sexed dizygotic (DZ) twin pairs, three narcoleptic individuals were found and each of them was discordant DZ with a negative family history (Hublin et al. 1994). In the literature, 16 MZ pairs with at least one affected twin have been reported and five of these pairs were concordant for narcolepsy (Mignot 1998). Although narcolepsy is likely to have a genetic predisposition, the low rate of concordance in narcoleptic MZ twins indicates that environmental factors play an important role in the development of the disease.

HLA DQA1*0102 and DQB1*0602 Are Primary Susceptibility Factors for Narcolepsy

Narcolepsy was shown to be associated with the human leukocyte antigen (HLA) DR2 in the Japanese population (Honda et al. 1984; Juji et al. 1984). DR2 is observed in all Japanese patients versus 33% of Japanese controls (Juji et al. 1984; Matsuki et al. 1988a). A similar association is observed in Caucasians, with >85% versus 22% DR2 positivity (Langdon et al. 1984; Billiard et al. 1986; Rogers et al. 1997). Strikingly, however, the DR2 association is much lower in African–Americans (65%–67% in narcoleptic patients vs. 27%–38% in controls) (Neely et al. 1987; Matsuki et al. 1992; Rogers et al. 1997). Further studies have shown that HLA DQ alleles, located ∼80 kb from the DR region, are more tightly associated with narcolepsy than HLA DR subtypes. More than 90% of narcolepsy–cataplexy patients across all ethnic groups carry a specific allele of HLA DQB1, DQB1*0602 (Matsuki et al. 1992; Mignot et al. 1994); this allele is present in 12%–38% of the general population across many ethnic groups (Matsuki et al. 1992; Mignot et al. 1994; Lin et al. 1997). DQB1*0602 is associated almost exclusively with DR2 in Japanese (Lin et al. 1997) and Caucasians (Begovich et al. 1992), whereas it is observed frequently in association with DR2, DR5, or other DR subtypes in African–Americans (Mignot et al. 1994, 1997a). The increased DR–DQ haplotypic diversity in African–Americans explains the low DR2 association observed in this population.
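
The strength of these associations can be summarized as an odds ratio computed from the carrier frequencies in patients and controls. The sketch below applies the standard formula to the percentages quoted above, purely as a worked illustration; approximating the 100% Japanese patient frequency as 0.99 is an assumption made to keep the odds finite.

def carrier_odds_ratio(freq_cases, freq_controls):
    """Odds ratio from carrier frequencies (proportions between 0 and 1)."""
    odds_cases = freq_cases / (1 - freq_cases)
    odds_controls = freq_controls / (1 - freq_controls)
    return odds_cases / odds_controls

# DR2 in Japanese narcoleptics (~100%, taken as 0.99) vs 33% of controls
print(round(carrier_odds_ratio(0.99, 0.33), 1))
# DQB1*0602 across ethnic groups: >90% of patients vs roughly 12%-38% of controls (25% used here)
print(round(carrier_odds_ratio(0.90, 0.25), 1))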

To further characterize the DQB1 region in narcoleptic subjects, novel polymorphic markers were isolated and characterized (Mignot et al. 1997a). The markers tested included six novel microsatellite markers (DQCAR, DQCARII, G51152, DQRIV, T16CAR, and G411624R). DQA1, a DQ gene whose product is known to pair with DQB1-encoding polypeptides to form the biologically active DQ heterodimer molecule, was also studied. The results obtained are summarized in Figure 1. The association with narcolepsy decreases in the T16CAR–DQB2 region (Mignot et al. 1997a) and in the DRB1 region (Mignot et al. 1994, 1997b). The G411624R and T16CAR microsatellites are complex repeats with drastically different sizes, all of which are frequently observed in narcolepsy susceptibility haplotypes, a result suggesting crossovers in the region. In the DRB1 region, association with narcolepsy is still tight with DRB1*1501 (DR2) in Caucasians and Asians but is significantly lower in African–Americans, which suggests crossovers in the region among ethnic groups.

Figure 1.

Schematic summary of the narcolepsy susceptibility region within the HLA complex. Genes and markers are depicted by vertical bars; alleles observed in narcoleptic patients are listed above each marker. DQB2, DQB3, DQB1, DQA1, and DRB1 are HLA genes and pseudogenes. QBP and QAP are the promoter regions of DQB1 and DQA1, respectively. G411624R, T16CAR, G51152, DQCAR, and DQCARII are microsatellite CA repeats identified in the HLA DQ region (Mignot et al. 1997a). DQRIV is a compound tandem repeat of 4- and 2-bp units located between DQB1 and G51152. The DQA1*0102 allele is subdivided into 01021 and 01022 based on a codon 109 synonymous substitution. Genomic segments in which frequent recombination was detected are indicated by vertical solid lines. Broken lines indicate rare possible ancestral crossovers detected in the area. Crossovers between T16CAR and G51152 occur within ethnic groups; crossovers between QAP and DRB1 are frequently observed among ethnic groups (Mignot et al. 1997a). Note that the genomic region shared by most narcoleptic patients extends from a region between T16CAR and G51152 to a region between QAP and DRB1. No other genes were found in 86 kb of genomic sequence surrounding the DQB1*0602 gene (Ellis et al. 1997). Additional diversity is also found at the level of G51152 and DQRIV, this being most likely due to a slippage mechanism rather than crossover (Lin et al. 1997; Mignot et al. 1997a). (+, Δ, *) Frequent alleles found predominantly in Caucasian, Asian, and African–American populations, respectively; (kb) kilobase pairs. Alleles frequently observed in the DQB1*0602/DQA1*0102 haplotype are underlined. DRB1*1501, DRB1*1503, and DRB1*1602 are DR2 subtypes. DRB1*1101 and DRB1*12022 are DR5 subtypes.

The DQA1*0102/DQB1*0602 haplotype is common in narcoleptic patients (Mignot et al. 1994). Other haplotypes with DQA1*0102 but not DQB1*0602, such as DQA1*0102 and DQB1*0604, are frequent in control populations in all ethnic groups and do not predispose to narcolepsy. DQA1*0102 alone is thus not likely to confer susceptibility but may be involved in addition to DQB1*0602 for the development of narcolepsy (Mignot et al. 1994, 1997a).

Microsatellite analysis in the HLA DQ region revealed that only the area surrounding the coding regions of DQB1 and DQA1 is well conserved across all susceptibility haplotypes. Polymorphism can be observed in microsatellite and/or in the promoter regions flanking the DQB1*0602 and DQA1*0102 alleles and in the region between these two genes (Mignot et al. 1997a). Mutations by slippage for some loci, and rare ancestral crossovers in a few instances, contribute to this diversity (Mignot et al. 1997a). Sequence analysis of DQ genes from narcoleptic and control individuals has revealed no sequence variation that correlates with the disease (Lock et al. 1988; Uryu et al. 1989; Ellis et al. 1997; Mignot et al. 1997a). No new gene was found in 86 kb of genomic sequence surrounding the HLA DQ gene (Ellis et al. 1997). A study on the dosage effect of the DQB1*0602 allele on narcolepsy susceptibility revealed that DQB1*0602 homozygous subjects are at two to four times greater risk than heterozygous subjects for developing narcolepsy (Pelin et al. 1998). Taken together, these results strongly suggest that the DQA1*0102 and DQB1*0602 alleles themselves, rather than an unknown gene in the region, are the actual susceptibility genes for narcolepsy.

HLA DQB1*0602 Is Neither Sufficient nor Necessary for the Development of Narcolepsy

Of the general population, 12%–38% carry HLA DQB1*0602, yet narcolepsy affects only 0.02%–0.18% of the general population. No sequence variation that correlates with the disease was detected in sequence analysis of DQ genes. Nevertheless, a few narcoleptic patients with cataplexy do not carry the DQB1*0602 allele (Mignot et al. 1992, 1997a). HLA DQB1*0602 is thus neither necessary nor sufficient for development of narcolepsy–cataplexy.
…….
Canine Narcolepsy as a Model for the Human Disorder
….narcolepsy was identified in numerous canine breeds, including Doberman pinschers, Labrador retrievers, miniature poodles, dachshunds, beagles, and Saint Bernards. All animals display similar symptoms, but the age of onset, severity, and the clinical course vary significantly among breeds (Baker et al. 1982).
….Similar to human narcoleptic patients, animals affected with the disorder display emotionally triggered cataplexy, fragmented sleep, and increased daytime sleepiness. Sleep paralysis and hypnagogic hallucinations cannot be documented because of difficulties in assessing the symptoms in canines. The validity of this model of narcolepsy has also been established through neurophysiological and neuropharmacological similarities with the human disorder. Pharmacological and neurochemical studies suggest abnormal monoaminergic and cholinergic mechanisms in narcolepsy both in human and canines (Aldrich 1991; Nishino and Mignot 1997, 1998). Interestingly, it is also possible to induce brief episodes of cataplexy in otherwise asymptomatic canarc-1 heterozygous animals using specific drug combinations (Mignot et al. 1993).

……

Narcolepsy is both a significant medical problem and a unique disease model. Research in humans has led to the identification of specific HLA alleles that predispose to the disorder. This has suggested the possibility that narcolepsy may be an autoimmune disorder, a hypothesis that has not been confirmed to date. Cells of the central and peripheral nervous systems and immune systems are known to interact at multiple levels (Morganti-Kossmann et al. 1992; Wilder 1995). For example, peripheral immunity is modulated by the brain via autonomic or neuroendocrinal interactions, whereas the immune system affects the nervous system through the release of cytokines. Cytokines have been shown to modulate sleep directly and have established effects on neurotransmission and neuronal differentiation (Krueger and Karnovsky 1995; Mehler and Kessler 1997). It is therefore possible that neuroimmune interactions that are not autoimmune in nature might be involved in the pathophysiology of narcolepsy.

NREM and REM sleep are mainly regulated by circadian and homeostatic processes. Single gene circadian mutations have been isolated from species as diverse as Arabidopsis (toc1), Neurospora (frq), Drosophila (per and tim), and mouse (Clock) (Hall 1995). The per and Clock genes isolated in Drosophila and mouse, respectively, have been shown to belong to the same family, the PAS domain family (Hall 1995; Rosbash et al. 1996; Young et al. 1996; King et al. 1997). Analyses of frq, tim, and per demonstrate that circadian rhythms of diverse species are regulated by similar negative feedback loops in which gene products negatively regulate their own transcripts (Hall 1995; Dunlap 1996; Rosbash et al. 1996; Young et al. 1996). Putative homologs of the per gene have also been isolated in mammals (Albrecht et al. 1997; Tei et al. 1997). In mouse, RNAs for two per homologs are expressed rhythmically within the suprachiasmatic nucleus (SCN), a brain region with an established role in generating mammalian circadian rhythms (Shearman et al. 1997; Shigeyoshi et al. 1997; Tei et al. 1997).

Much less progress has been made in the noncircadian aspect of sleep regulation. Sleep can only be recognized and characterized electrophysiologically in mammals and birds, and single gene mutants for this behavior have not been described in the mouse. Canine narcolepsy is the only known single gene mutation affecting sleep state organization as opposed to circadian control of behavior. The molecular cloning of this gene may lead to a better understanding of the molecular basis and biological role of sleep, as has been the case for circadian rhythms following the cloning of frq, per, and Clock.

 

Sleep Circuit

By Karen Zusi

A web of cell types in one of the brain’s chief wake centers keeps animals up—but also puts them to sleep.

Feature: Desperately Seeking Shut-Eye

By Anna Azvolinsky

New insomnia drugs are coming on the market, but drug-free therapy remains the most durable treatment.

 


 

Selectively driving cholinergic fibers optically in the thalamic reticular nucleus promotes sleep

Kun-Ming Ni et al.

Zhejiang University School of Medicine, China; Fuzhou Children’s Hospital, China; University of California, San Diego, United States; Zhejiang University, China
Published February 11, 2016
Cite as eLife 2016;5:e10382

Cholinergic projections from the basal forebrain and brainstem are thought to play important roles in rapid eye movement (REM) sleep and arousal. Using transgenic mice in which channelrhodopsin-2 is selectively expressed in cholinergic neurons, we show that optical stimulation of cholinergic inputs to the thalamic reticular nucleus (TRN) activates local GABAergic neurons to promote sleep and protect non-rapid eye movement (NREM) sleep. It does not affect REM sleep. Instead, direct activation of cholinergic input to the TRN shortens the time to sleep onset and generates spindle oscillations that correlate with NREM sleep. It does so by evoking excitatory postsynaptic currents via α7-containing nicotinic acetylcholine receptors and inducing bursts of action potentials in local GABAergic neurons. These findings stand in sharp contrast to previous reports of cholinergic activity driving arousal. Our results provide new insight into the mechanisms controlling sleep.

 

Read Full Post »

Photo-Receptor Production

Curator: Larry H. Bernstein, MD, FCAP

 

Using Zinc Finger Nuclease Technology to Generate CRX-Reporter Human Embryonic Stem Cells as a Tool to Identify and Study the Emergence of Photoreceptors Precursors During Pluripotent Stem Cell Differentiation

Joseph Collin, Carla B. Mellough, Birthe Dorgau, Stefan Przyborski, Inmaculada Moreno-Gimeno, and Majlinda Lako

STEM CELLS Feb 2016; 34(2): 311–321. http://dx.doi.org/10.1002/stem.2240

 

The purpose of this study was to generate human embryonic stem cell (hESC) lines harboring the green fluorescent protein (GFP) reporter at the endogenous loci of the Cone-Rod Homeobox (CRX) gene, a key transcription factor in retinal development. Zinc finger nucleases (ZFNs) designed to cleave in the 3′ UTR of CRX were transfected into hESCs along with a donor construct containing homology to the target region, an eGFP reporter, and a puromycin selection cassette. Following selection, polymerase chain reaction (PCR) and sequencing analysis of antibiotic-resistant clones indicated targeted integration of the reporter cassette at the 3′ end of the CRX gene, generating a CRX-GFP fusion. Further analysis of a clone exhibiting homozygous integration of the GFP reporter suggested that genomic stability was preserved and that no other copies of the targeting cassette were inserted elsewhere in the genome. This clone was selected for differentiation towards the retinal lineage. Immunocytochemistry of sections obtained from embryoid bodies and quantitative reverse transcriptase PCR of GFP-positive and -negative subpopulations purified by fluorescence-activated cell sorting during the differentiation indicated a significant correlation between GFP and endogenous CRX expression. Furthermore, GFP expression was found in photoreceptor precursors emerging during hESC differentiation, but not in the retinal pigmented epithelium, retinal ganglion cells, or neurons of the developing inner nuclear layer. Together our data demonstrate the successful application of ZFN technology to generate CRX-GFP labeled hESC lines, which can be used to study and isolate photoreceptor precursors during hESC differentiation. Stem Cells 2016;34:311–321

 

A New Tool for Photoreceptor Production to Treat Vision Loss

     

Review of “Using Zinc Finger Nuclease Technology to Generate CRX-Reporter Human Embryonic Stem Cells as a Tool to Identify and Study the Emergence of Photoreceptors Precursors during Pluripotent Stem Cell Differentiation” from Stem Cells by Stuart P. Atkinson

The production of replacement cells from human pluripotent stem cell (hPSC) sources has great potential for the treatment of certain forms of vision impairment and blindness. The production of functional stem cell-derived retinal-pigmented epithelium (RPE) is already a notable success, although the equivalent success in photoreceptor cell production has so far lagged behind, due partly to the lack of robust human cell surface markers to allow their purification.

To get round this problem, canny researchers from the laboratory of Majlinda Lako (Newcastle University, United Kingdom) have used zinc finger nuclease (ZFN) gene editing technology to create a reporter embryonic stem cell (ESC) line suitable for the enhanced production of photoreceptor cells [1].

The authors targeted a green fluorescent protein (GFP) reporter into the endogenous locus of the Cone-Rod Homeobox (CRX) transcription factor gene, which is known to be selectively expressed in post-mitotic retinal photoreceptor precursors. The integration of this reporter into hESCs did not negatively affect genomic stability or pluripotency and, following 3D differentiation to form laminated neural retina [2], GFP expression faithfully mimicked the known expression patterns of CRX (See Figure).

In-depth expression analysis of CRX-positive cells then demonstrated the restriction of GFP-CRX to only two cell types within the 90-day differentiation protocol: RECOVERIN-expressing photoreceptor precursors situated in the developing outer nuclear layer of the optic cup and a subpopulation of non-proliferative retinal progenitors. Importantly, the study detected the expression of genes known to be activated by CRX, suggesting that GFP targeting does not affect the functionality of the transcription factor.

In conclusion, the authors have created a CRX-GFP-labeled hESC line which can be used to identify, purify, and study photoreceptor precursors during hESC differentiation, in the hope of improving differentiation protocols, discovering cell surface markers, and developing clinically applicable strategies for transplantation. A great tool for those working towards generating treatments for vision impairment and blindness.

References

  1. Collin J, Mellough CB, Dorgau B, et al. Using Zinc Finger Nuclease Technology to Generate CRX-Reporter Human Embryonic Stem Cells as a Tool to Identify and Study the Emergence of Photoreceptors Precursors During Pluripotent Stem Cell Differentiation. STEM CELLS 2016;34:311-321.
  2. Mellough CB, Collin J, Khazim M, et al. IGF-1 Signaling Plays an Important Role in the Formation of Three-Dimensional Laminated Neural Retina and Other Ocular Structures From Human Embryonic Stem Cells. Stem Cells 2015;33:2416-2430.

 

Read Full Post »

Metformin and vitamin B12 deficiency?

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Years of taking popular diabetes drug tied to risk of B12 deficiency

 

Long-term Metformin Use and Vitamin B12 Deficiency in the Diabetes Prevention Program Outcomes Study

 

Metformin linked to vitamin B12 deficiency

David Holmes. Nature Reviews Endocrinology (2016). http://dx.doi.org/10.1038/nrendo.2016.39

Secondary analysis of data from the Diabetes Prevention Program Outcomes Study (DPPOS), one of the largest and longest studies of metformin treatment in patients at high risk of developing type 2 diabetes mellitus, shows that long-term use of metformin is associated with vitamin B12 deficiency.

Aroda, V. R. et al. Long-term metformin use and vitamin B12 deficiency in the Diabetes Prevention Program Outcomes Study. J. Clin. Endocrinol. Metab. http://dx.doi.org/10.1210/jc.2015-3754 (2016)

 

Long-term Follow-up of Diabetes Prevention Program Shows Continued Reduction in Diabetes Development

http://www.diabetes.org/newsroom/press-releases/2014/long-term-follow-up-of-diabetes-prevention-program-shows-reduction-in-diabetes-development.html

San Francisco, California
June 16, 2014

Treatments used to decrease the development of type 2 diabetes continue to be effective an average of 15 years later, according to the latest findings of the Diabetes Prevention Program Outcomes Study, a landmark study funded by the National Institutes of Health (NIH).

The results, presented at the American Diabetes Association’s 74th Scientific Sessions®, come more than a decade after the Diabetes Prevention Program, or DPP, reported its original findings. In 2001, after an average of three years of study, the DPP announced that the study’s two interventions, a lifestyle program designed to reduce weight and increase activity levels and the diabetes medicine metformin, decreased the development of type 2 diabetes in a diverse group of people, all of whom were at high risk for the disease, by 58 and 31 percent, respectively, compared with a group taking placebo.

The Diabetes Prevention Program Outcomes Study, or DPPOS, was conducted as an extension of the DPP to determine the longer-term effects of the two interventions, including further reduction in diabetes development and whether delaying diabetes would reduce the development of the diabetes complications that can lead to blindness, kidney failure, amputations and heart disease. Funded largely by the NIH’s National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), the new findings show that the lifestyle intervention and metformin treatment have beneficial effects, even years later, but did not reduce microvascular complications.

Delaying Type 2 Diabetes

Participants in the study who were originally assigned to the lifestyle intervention and metformin during DPP continued to have lower rates of type 2 diabetes development than those assigned to placebo, with 27 percent and 17 percent reductions, respectively, after 15 years.

“What we’re finding is that we can prevent or delay the onset of type 2 diabetes, a chronic disease, through lifestyle intervention or with metformin, over a very long period of time,” said David M. Nathan, MD, Chairman of the DPP/DPPOS and Professor of Medicine at Harvard Medical School. “After the initial randomized treatment phase in DPP, all participants were offered lifestyle intervention and the rates of diabetes development fell in the metformin and former placebo groups, leading to a reduction in the treatment group differences over time. However, the lifestyle intervention and metformin are still quite effective at delaying, if not preventing, type 2 diabetes,” Dr. Nathan said. Currently, an estimated 79 million American adults are at high risk for developing type 2 diabetes.

Microvascular Complications

The DPPOS investigators followed participants for an additional 12 years after the end of the DPP to determine both the extent of diabetes prevention over time and whether the study treatments would also decrease the small-vessel, or microvascular, complications, such as eye, nerve, and kidney disease. These long-term results did not demonstrate significant differences among the lifestyle intervention, metformin or placebo groups on the microvascular complications, reported Kieren Mather, MD, Professor of Medicine at Indiana University School of Medicine and a study investigator.

“However, regardless of type of initial treatment, participants who didn’t develop diabetes had a 28 percent lower occurrence of the microvascular complications than those participants who did develop diabetes. These findings show that intervening in the prediabetes phase is important in reducing early stage complications,” Dr. Mather noted. The absence of differences in microvascular complications among the intervention groups may be explained by the small differences in average glucose levels among the groups at this stage of follow-up.

Risk for Cardiovascular Disease

The DPP population was relatively young and healthy at the beginning of the study, and few participants had experienced any severe cardiovascular events, such as heart attack or stroke, 15 years later. The relatively small number of events meant that the DPPOS researchers could not test the effects of interventions on cardiovascular disease. However, the research team did examine whether the study interventions, or a delay in the onset of type 2 diabetes, improved cardiovascular risk factors.

“We found that cardiovascular risk factors, such as hypertension, are generally improved by the lifestyle intervention and somewhat less by metformin,” said Ronald Goldberg, MD, Professor of Medicine at the University of Miami and one of the DPPOS investigators. “We know that people with type 2 diabetes are at much higher risk for heart disease and stroke than those who do not have diabetes, so a delay in risk factor development or improvement in risk factors may prove to be beneficial.”

Long-term Results with Metformin

The DPP/DPPOS is the largest and longest duration study to examine the effects of metformin, an inexpensive, well-known and generally safe diabetes medicine, in people who have not been diagnosed with diabetes. For DPPOS participants, metformin treatment was associated with a modest degree of long-term weight loss. “Other than a small increase in vitamin B-12 deficiency, which is a recognized consequence of metformin therapy, it has been extremely safe and well-tolerated over the 15 years of our study,” said Jill Crandall, MD, Professor of Medicine at Albert Einstein College of Medicine and a DPPOS investigator. “Further study will help show whether metformin has beneficial effects on heart disease and cancer, which are both increased in people with type 2 diabetes.”

Looking to the Future

In addition to the current findings, the DPPOS includes a uniquely valuable population that can help researchers understand the clinical course of type 2 diabetes.  Since the participants did not have diabetes at the beginning of the DPP, for those who have developed diabetes, the data show precisely when they developed the disease, which is rare in previous studies. “The DPP and DPPOS have given us an incredible wealth of information by following a very diverse group of people with regard to race and age as they have progressed from prediabetes to diabetes,” said Judith Fradkin, MD, Director of the NIDDK Division of Diabetes, Endocrinology and Metabolic Diseases. “The study provides us with an opportunity to make crucial discoveries about the clinical course of type 2 diabetes.”

Dr. Fradkin noted that the study population held promise for further analyses because researchers would now be able to examine how developing diabetes at different periods of life may cause the disease to progress differently. “We can look at whether diabetes behaves differently if you develop it before the age of 50 or after the age of 60,” she said. “Thanks to the large and diverse population of DPPOS that has remained very loyal to the study, we will be able to see how and when complications first develop and understand how to intervene most effectively.”

She added that NIDDK had invited the researchers to submit an application for a grant to follow the study population for an additional 10 years.

The Diabetes Prevention Program Outcomes Study was funded under NIH grant U01DK048489 by the NIDDK; National Institute on Aging; National Cancer Institute; National Heart, Lung, and Blood Institute; National Eye Institute; National Center on Minority Health and Health Disparities; and the Office of the NIH Director; Eunice Kennedy Shriver National Institute of Child Health and Human Development; Office of Research on Women’s Health; and Office of Dietary Supplements, all part of the NIH, as well as the Indian Health Service, Centers for Disease Control and Prevention and American Diabetes Association. Funding in the form of supplies was provided by Merck Sante, Merck KGaA and LifeScan.

The American Diabetes Association is leading the fight to Stop Diabetes® and its deadly consequences and fighting for those affected by diabetes. The Association funds research to prevent, cure and manage diabetes; delivers services to hundreds of communities; provides objective and credible information; and gives voice to those denied their rights because of diabetes. Founded in 1940, our mission is to prevent and cure diabetes and to improve the lives of all people affected by diabetes. For more information please call the American Diabetes Association at 1-800-DIABETES (1-800-342-2383) or visit http://www.diabetes.org. Information from both these sources is available in English and Spanish.

Association of Biochemical B12 Deficiency With Metformin Therapy and Vitamin B12 Supplements

The National Health and Nutrition Examination Survey, 1999–2006

Lael Reinstatler, Yan Ping Qi, Rebecca S. Williamson, Joshua V. Garn, and Godfrey P. Oakley Jr.
Diabetes Care February 2012; 35(2): 327–333.  http://dx.doi.org/10.2337/dc11-1582

OBJECTIVE To describe the prevalence of biochemical B12 deficiency in adults with type 2 diabetes taking metformin compared with those not taking metformin and those without diabetes, and explore whether this relationship is modified by vitamin B12 supplements.

RESEARCH DESIGN AND METHODS Analysis of data on U.S. adults ≥50 years of age with (n = 1,621) or without type 2 diabetes (n = 6,867) from the National Health and Nutrition Examination Survey (NHANES), 1999–2006. Type 2 diabetes was defined as clinical diagnosis after age 30 without initiation of insulin therapy within 1 year. Those with diabetes were classified according to their current metformin use. Biochemical B12 deficiency was defined as serum B12 concentrations ≤148 pmol/L and borderline deficiency was defined as >148 to ≤221 pmol/L.
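
For concreteness, the serum B12 cut-points defined above translate directly into a small classification rule. The sketch below uses the thresholds quoted in the study; the function name and example values are ours, added only for illustration.

```python
# Classify serum B12 using the study's cut-points:
# deficient <= 148 pmol/L; borderline > 148 to <= 221 pmol/L; normal > 221 pmol/L.
def classify_b12(serum_b12_pmol_per_l: float) -> str:
    if serum_b12_pmol_per_l <= 148:
        return "biochemical B12 deficiency"
    elif serum_b12_pmol_per_l <= 221:
        return "borderline deficiency"
    return "normal"

for value in (120, 180, 300):   # example concentrations in pmol/L
    print(value, "pmol/L ->", classify_b12(value))
```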

RESULTS Biochemical B12 deficiency was present in 5.8% of those with diabetes using metformin compared with 2.4% of those not using metformin (P = 0.0026) and 3.3% of those without diabetes (P = 0.0002). Among those with diabetes, metformin use was associated with biochemical B12 deficiency (adjusted odds ratio 2.92; 95% CI 1.26–6.78). Consumption of any supplement containing B12 was not associated with a reduction in the prevalence of biochemical B12 deficiency among those with diabetes, whereas consumption of any supplement containing B12 was associated with a two-thirds reduction among those without diabetes.

CONCLUSIONS Metformin therapy is associated with a higher prevalence of biochemical B12 deficiency. The amount of B12 recommended by the Institute of Medicine (IOM) (2.4 μg/day) and the amount available in general multivitamins (6 μg) may not be enough to correct this deficiency among those with diabetes.

It is well known that the risks of both type 2 diabetes and B12 deficiency increase with age (1,2). Recent national data estimate a 21.2% prevalence of diagnosed diabetes among adults ≥65 years of age, and a 6 and 20% prevalence of biochemical B12 deficiency (serum B12 <148 pmol/L) and borderline deficiency (serum B12 ≥148–221 pmol/L), respectively, among adults ≥60 years of age (3,4).

The diabetes drug metformin has been reported to cause a decrease in serum B12 concentrations. In the first efficacy trial, DeFronzo and Goodman (5) demonstrated that although metformin offers superior control of glycosylated hemoglobin levels and fasting plasma glucose levels compared with glyburide, serum B12 concentrations were lowered by 22% compared with placebo, and 29% compared with glyburide therapy after 29 weeks of treatment. A recent randomized control trial designed to examine the temporal relationship between metformin and serum B12 found a 19% reduction in serum B12 levels compared with placebo after 4 years (6). Several other randomized control trials and cross-sectional surveys reported reductions in B12 ranging from 9 to 52% (7–16). Although classical B12 deficiency presents with clinical symptoms such as anemia, peripheral neuropathy, depression, and cognitive impairment, these symptoms are usually absent in those with biochemical B12 deficiency (17).

Several researchers have made recommendations to screen those with type 2 diabetes on metformin for serum B12 levels (6,7,14–16,18–21). However, no formal recommendations have been provided by the medical community or the U.S. Preventive Services Task Force. High-dose B12 injection therapy has been successfully used to correct the metformin-induced decline in serum B12 (15,21,22). The use of B12 supplements among those with type 2 diabetes on metformin in a nationally representative sample and their potentially protective effect against biochemical B12 deficiency has not been reported. It is therefore the aim of the current study to use the nationally representative National Health and Nutrition Examination Survey (NHANES) population to determine the prevalence of biochemical B12 deficiency among those with type 2 diabetes ≥50 years of age taking metformin compared with those with type 2 diabetes not taking metformin and those without diabetes, and to explore how these relationships are modified by B12 supplement consumption.

Design overview

NHANES is a nationally representative sample of the noninstitutionalized U.S. population with targeted oversampling of U.S. adults ≥60 years of age, African Americans, and Hispanics. Details of these surveys have been described elsewhere (23). All participants gave written informed consent, and the survey protocol was approved by a human subjects review board.

Setting and participants

Our study included adults ≥50 years of age from NHANES 1999–2006. Participants with positive HIV antibody test results, high creatinine levels (>1.7 mg/dL for men and >1.5 mg/dL for women), and prescription B12 injections were excluded from the analysis. Participants who reported having prediabetes or borderline diabetes (n = 226) were removed because they could not be definitively grouped as having or not having type 2 diabetes. We also excluded pregnant women, those with type 1 diabetes, and those without diabetes taking metformin. Based on clinical aspects described by the American Diabetes Association and previous work in NHANES, those who were diagnosed before the age of 30 and began insulin therapy within 1 year of diagnosis were classified as having type 1 diabetes (24,25). Type 2 diabetes status in adults was dichotomized as yes/no. Participants who reported receiving a physician’s diagnosis after age 30 (excluding gestational diabetes) and did not initiate insulin therapy within 1 year of diagnosis were classified as having type 2 diabetes.

Outcomes and follow-up

The primary outcome was biochemical B12 deficiency determined by serum B12 concentrations. Serum B12 levels were quantified using the Quantaphase II folate/vitamin B12 radioassay kit from Bio-Rad Laboratories (Hercules, CA). We defined biochemical B12 deficiency as serum levels ≤148 pmol/L, borderline deficiency as serum B12 >148 to ≤221 pmol/L, and normal as >221 pmol/L (26).

The main exposure of interest was metformin use. Using data collected in the prescription medicine questionnaire, those with type 2 diabetes were classified as currently using metformin therapy (alone or in combination therapy) versus those not currently using metformin. Length of metformin therapy was used to assess the relationship between duration of metformin therapy and biochemical B12 deficiency. In the final analysis, two control groups were used to allow the comparison of those with type 2 diabetes taking metformin with those with type 2 diabetes not taking metformin and those without diabetes.

To determine whether the association between metformin and biochemical B12 deficiency is modified by supplemental B12 intake, data from the dietary supplement questionnaire were used. Information regarding the dose and frequency was used to calculate average daily supplemental B12 intake. We categorized supplemental B12 intake as 0 μg (no B12 containing supplement), >0–6 μg, >6–25 μg, and >25 μg. The lower intake group, >0–6 μg, includes 6 μg, the amount of vitamin B12 typically found in over-the-counter multivitamins, and 2.4 μg, the daily amount the IOM recommends for all adults ≥50 years of age to consume through supplements or fortified food (1). The next group, >6–25 μg, includes 25 μg, the amount available in many multivitamins marketed toward senior adults. The highest group contains the amount found in high-dose B-vitamin supplements.
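
The supplement categories above amount to simple dose bins applied to the computed average daily intake. The helper below is a minimal sketch of that binning; the function name and example values are illustrative assumptions, not drawn from the NHANES files.

```python
# Bin average daily supplemental B12 intake (micrograms/day) into the study's categories.
def b12_intake_category(avg_daily_ug: float) -> str:
    if avg_daily_ug == 0:
        return "no B12-containing supplement"
    elif avg_daily_ug <= 6:
        return ">0-6 ug/day (range covering the IOM recommendation and typical multivitamins)"
    elif avg_daily_ug <= 25:
        return ">6-25 ug/day (senior-formulation multivitamins)"
    return ">25 ug/day (high-dose B-vitamin supplements)"

print(b12_intake_category(2.4))    # IOM-recommended daily amount for adults >= 50
print(b12_intake_category(25.0))   # common dose in senior multivitamins
```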

 

In the final analysis, there were 575 U.S. adults ≥50 years of age with type 2 diabetes using metformin, 1,046 with type 2 diabetes not using metformin, and 6,867 without diabetes. The demographic and biological characteristics of the groups are shown in Table 1. Among metformin users, mean age was 63.4 ± 0.5 years, 50.3% were male, 66.7% were non-Hispanic white, and 40.7% used a supplement containing B12. The median duration of metformin use was 5 years. Compared with those with type 2 diabetes not taking metformin, metformin users were younger (P < 0.0001), reported a lower prevalence of insulin use (P < 0.001), and had a shorter duration of diabetes (P = 0.0207). Compared with those without diabetes, metformin users had a higher proportion of nonwhite racial groups (P < 0.0001), a higher proportion of obesity (P < 0.0001), a lower prevalence of macrocytosis (P = 0.0017), a lower prevalence of supplemental folic acid use (P = 0.0069), a lower prevalence of supplemental vitamin B12 use (P = 0.0180), and a lower prevalence of calcium supplement use (P = 0.0002). There was a twofold difference in the prevalence of anemia among those with type 2 diabetes versus those without, and no difference between the groups with diabetes.

Table 1. Demographic and biological characteristics of U.S. adults ≥50 years of age: NHANES 1999–2006.

The geometric mean serum B12 concentration among those with type 2 diabetes taking metformin was 317.5 pmol/L. This was significantly lower than the geometric mean concentration in those with type 2 diabetes not taking metformin (386.7 pmol/L; P = 0.0116) and those without diabetes (350.8 pmol/L; P = 0.0011). As seen in Fig. 1, the weighted prevalence of biochemical B12 deficiency adjusted for age, race, and sex was 5.8% for those with type 2 diabetes taking metformin, 2.2% for those with type 2 diabetes not taking metformin (P = 0.0002), and 3.3% for those without diabetes (P = 0.0026). Among the three aforementioned groups, borderline deficiency was present in 16.2, 5.5, and 8.8%, respectively (P < 0.0001). Applying the Fleiss formula for calculating attributable risk from cross-sectional data (27), among all of the cases of biochemical B12 deficiency, 3.5% of the cases were attributable to metformin use; and among those with diabetes, 41% of the deficient cases were attributable to metformin use. When the prevalence of biochemical B12 deficiency among those with diabetes taking metformin was analyzed by duration of metformin therapy, there was no notable increase in the prevalence of biochemical B12 deficiency as the duration of metformin use increased. The prevalence of biochemical B12 deficiency was 4.1% among those taking metformin <1 year, 6.3% among those taking metformin ≥1–3 years, 4.1% among those taking metformin >3–10 years, and 8.1% among those taking metformin >10 years (P = 0.3219 for <1 year vs. >10 years). Similarly, there was no clear increase in the prevalence of borderline deficiency as the duration of metformin use increased (15.9% among those taking metformin >10 years vs. 11.4% among those taking metformin <1 year; P = 0.4365).

Figure 1. Weighted prevalence of biochemical B12 deficiency and borderline deficiency adjusted for age, race, and sex in U.S. adults ≥50 years of age: NHANES 1999–2006. Black bars are those with type 2 diabetes on metformin, gray bars are those with type 2 diabetes not on metformin, and white bars are those without diabetes. *P = 0.0002 vs. type 2 diabetes on metformin. †P < 0.0001 vs. type 2 diabetes on metformin. ‡P = 0.0026 vs. type 2 diabetes on metformin.
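
To make the attributable-risk figures concrete, the sketch below applies a standard attributable-fraction calculation to the crude prevalences quoted above (5.8% vs. 2.2%) and the unweighted group sizes (575 metformin users of 1,621 adults with diabetes). Because it ignores survey weights and covariate adjustment, it will not reproduce the published 3.5% and 41% values exactly; it is shown only to illustrate the arithmetic.

```python
# Crude population attributable fraction (PAF) from cross-sectional prevalences,
# using the prevalence ratio as the risk-ratio term: PAF = p_e*(RR-1) / (1 + p_e*(RR-1)).
def population_attributable_fraction(p_exposed, prev_exposed, prev_unexposed):
    rr = prev_exposed / prev_unexposed
    return p_exposed * (rr - 1.0) / (1.0 + p_exposed * (rr - 1.0))

p_metformin = 575 / (575 + 1046)            # proportion of the diabetes group on metformin
paf = population_attributable_fraction(p_metformin, 0.058, 0.022)
print(f"Crude PAF for metformin among adults with diabetes: {paf:.0%}")
```
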
Table 2 presents a stratified analysis of the weighted prevalence of biochemical B12 deficiency and borderline deficiency by B12 supplement use. For those without diabetes, B12 supplement use was associated with an ∼66.7% lower prevalence of both biochemical B12 deficiency (4.8 vs. 1.6%; P < 0.0001) and borderline deficiency (16.6 vs. 5.5%; P < 0.0001). A decrease in the prevalence of biochemical B12 deficiency was seen at all levels of supplemental B12 intake compared with nonusers of supplements. Among those with type 2 diabetes taking metformin, supplement use was not associated with a decrease in the prevalence of either biochemical B12 deficiency (5.6 vs. 5.3%; P = 0.9137) or borderline deficiency (15.5 vs. 8.8%; P = 0.0826). Among the metformin users who also used supplements, those who consumed >0–6 μg of B12 had a prevalence of biochemical B12 deficiency of 14.1%. However, consumption of a supplement containing >6 μg of B12 was associated with a prevalence of biochemical B12 deficiency of 1.8% (P = 0.0273 for linear trend). Similar trends were seen in the association of supplemental B12 intake and the prevalence of borderline deficiency. For those with type 2 diabetes not taking metformin, supplement use was also not associated with a decrease in the prevalence of biochemical B12 deficiency (2.1 vs. 2.0%; P = 0.9568) but was associated with a 54% reduction in the prevalence of borderline deficiency (7.8 vs. 3.4%; P = 0.0057 for linear trend).

Table 2. Comparison of average daily B12 supplement intake by weighted prevalence of biochemical B12 deficiency (serum B12 ≤148 pmol/L) and borderline deficiency (serum B12 >148 to ≤221 pmol/L) among U.S. adults ≥50 years of age: NHANES 1999–2006.

Table 3 demonstrates the association of various risk factors with biochemical B12 deficiency. Metformin therapy was associated with biochemical B12 deficiency (odds ratio [OR] 2.89; 95% CI 1.33–6.28) and borderline deficiency (OR 2.32; 95% CI 1.31–4.12) in a crude model (results not shown). After adjusting for age, BMI, and insulin and supplement use, metformin maintained a significant association with biochemical B12 deficiency (OR 2.92; 95% CI 1.28–6.66) and borderline deficiency (OR 2.16; 95% CI 1.22–3.85). Similar to Table 2, B12 supplements were protective against borderline (OR 0.43; 95% CI 0.23–0.81), but not biochemical, B12 deficiency (OR 0.76; 95% CI 0.34–1.70) among those with type 2 diabetes. Among those without diabetes, B12 supplement use was ∼70% protective against biochemical B12 deficiency (OR 0.26; 95% CI 0.17–0.38) and borderline deficiency (OR 0.27; 95% CI 0.21–0.35).

Table 3. Polytomous logistic regression for potential risk factors of biochemical B12 deficiency and borderline deficiency among U.S. adults ≥50 years of age: NHANES 1999–2006; OR (95% CI).
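
As a brief numerical aside, an adjusted odds ratio and its 95% CI, such as the OR 2.92 (95% CI 1.28–6.66) reported above, correspond to a logistic-regression coefficient and standard error on the log-odds scale. The back-calculation below is illustrative only and is not taken from the paper's model output.

```python
# Recover the implied log-odds coefficient and standard error from a published OR and 95% CI,
# then reconstruct the interval: CI = exp(beta +/- 1.96 * SE).
import math

or_reported, ci_low_reported, ci_high_reported = 2.92, 1.28, 6.66
beta = math.log(or_reported)
se = (math.log(ci_high_reported) - math.log(ci_low_reported)) / (2 * 1.96)
ci_low, ci_high = math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se)
print(f"beta = {beta:.3f}, SE = {se:.3f}, OR = {math.exp(beta):.2f} ({ci_low:.2f}-{ci_high:.2f})")
```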

The IOM has highlighted the detection and diagnosis of B12 deficiency as a high-priority topic for research (1). Our results suggest several findings that add to the complexity and importance of B12 research and its relation to diabetes, and offer new insight into the benefits of B12 supplements. Our data confirm the relationship between metformin and reduced serum B12 levels beyond the background prevalence of biochemical B12 deficiency. Our data demonstrate that an intake of >0–6 μg of B12, which includes the dose most commonly found in over-the-counter multivitamins, was associated with a two-thirds reduction of biochemical B12 deficiency and borderline deficiency among adults without diabetes. This relationship has been previously reported with NHANES and Framingham population data (4,29). In contrast, we did not find that >0–6 μg of B12 was associated with a decrease in the prevalence of biochemical B12 deficiency or borderline deficiency among adults with type 2 diabetes taking metformin. This observation suggests that metformin reduces serum B12 by a mechanism that is additive to or different from the mechanism in older adults. It is also possible that metformin may exacerbate the deficiency among older adults with low serum B12. Our sample size was too small to determine which amount >6 μg was associated with maximum protection, but we did find a dose-response trend.

We were surprised to find that those with type 2 diabetes not using metformin had the lowest prevalence of biochemical B12 deficiency. It is possible that these individuals may seek medical care more frequently than the general population and therefore are being treated for their biochemical B12 deficiency. Or perhaps, because this population had a longer duration of diabetes and a higher proportion of insulin users compared with metformin users, they have been switched from metformin to other diabetic treatments due to low serum B12 concentrations or uncontrolled glucose levels and these new treatments may increase serum B12 concentrations. Despite the observed effects of metformin on serum B12 levels, it remains unclear whether or not this reduction is a public health concern. With lifetime risks of diabetes estimated to be one in three and with metformin being a first-line intervention, it is important to increase our understanding of the effects of oral vitamin B12 on metformin-associated biochemical deficiency (20,21).

The strengths of this study include its nationally representative, population-based sample, its detailed information on supplement usage, and its relevant biochemical markers. This is the first study to use a nationally representative sample to examine the association between serum B12 concentration, diabetes status, and metformin use as well as examine how this relationship may be modified by vitamin B12 supplementation. The data available regarding supplement usage provided specific information regarding dose and frequency. This aspect of NHANES allowed us to observe the dose-response relationship in Table 2 and to compare it within our three study groups.

This study is also subject to limitations. First, NHANES is a cross-sectional survey and it cannot assess time as a factor, and therefore the results are associations and not causal relationships. A second limitation arises in our definition of biochemical B12 deficiency. There is no general consensus on how to define normal versus low serum B12 levels. Some researchers include the functional biomarker methylmalonic acid (MMA) in the definition, but this has yet to be agreed upon (30–34). Recently, an NHANES roundtable discussion suggested that definitions of biochemical B12 deficiency should incorporate one biomarker (serum B12 or holotranscobalamin) and one functional biomarker (MMA or total homocysteine) to address problems with sensitivity and specificity of the individual biomarkers. However, they also cited a need for more research on how the biomarkers are related in the general population to prevent misclassification (34). MMA was only measured for six of our survey years; one-third of participants in our final analysis were missing serum MMA levels. Moreover, it has recently been reported that MMA values are significantly greater among the elderly with diabetes as compared with the elderly without diabetes even when controlling for serum B12 concentrations and age, suggesting that having diabetes may independently increase the levels of MMA (35). This unique property of MMA in elderly adults with diabetes makes it unsuitable as part of a definition of biochemical B12 deficiency in our specific population groups. Our study may also be subject to misclassification bias. NHANES does not differentiate between diabetes types 1 and 2 in the surveys; our definition may not capture adults with type 2 diabetes exclusively. Additionally, we used responses to the question “Have you received a physician’s diagnosis of diabetes” to categorize participants as having or not having diabetes. Therefore, we failed to capture undiagnosed diabetes. Finally, we could only assess current metformin use. We cannot determine if nonmetformin users have ever used metformin or if they were not using it at the time of the survey.

Our data demonstrate several important conclusions. First, there is a clear association between metformin and biochemical B12 deficiency among adults with type 2 diabetes. This analysis shows that the 6 μg of B12 offered in most multivitamins is associated with a two-thirds reduction in biochemical B12 deficiency in the general population, and that this same dose is not associated with protection against biochemical B12 deficiency among those with type 2 diabetes taking metformin. Our results have public health and clinical implications by suggesting that neither 2.4 μg, the current IOM recommendation for daily B12 intake, nor 6 μg, the amount found in most multivitamins, is sufficient for those with type 2 diabetes taking metformin.

This analysis suggests a need for further research. One research design would be to identify those with biochemical B12 deficiency and randomize them to receive various doses of supplemental B12 chronically and then evaluate any improvement in serum B12 concentrations and/or clinical outcomes. Another design would use existing cohorts to determine clinical outcomes associated with biochemical B12 deficiency and how they are affected by B12 supplements at various doses. Given that a significant proportion of the population ≥50 years of age have biochemical B12 deficiency and that those with diabetes taking metformin have an even higher proportion of biochemical B12 deficiency, we suggest that support for further research is a reasonable priority.

 

Discussion:
One research design would be to identify those with biochemical B12 deficiency and randomize them to receive various doses of supplemental B12 chronically and then evaluate any improvement in serum B12 concentrations and/or clinical outcomes. Another design would use existing cohorts to determine clinical outcomes associated with biochemical B12 deficiency and how they are affected by B12 supplements at various doses.
This is of considerable interest. As far as I can see, the data presented are insufficient to disentangle all of the variables involved. In a study of 8,000 hemograms several years ago, it was notable that a large percentage of patients over age 75 had an MCV of 94–100 fL, which is not considered indicative of macrocytic anemia. It would have been interesting to explore that subset of the data further.
UPDATED 3/17/2020
Nutrients. 2019 May 7;11(5). pii: E1020. doi: 10.3390/nu11051020.

Monitoring Vitamin B12 in Women Treated with Metformin for Primary Prevention of Breast Cancer and Age-Related Chronic Diseases.

Abstract

Metformin (MET) is currently being used in several trials for cancer prevention or treatment in non-diabetics. However, long-term MET use in diabetics is associated with lower serum levels of total vitamin B12. In a pilot randomized controlled trial of the Mediterranean diet (MedDiet) and MET, whose participants were characterized by different components of metabolic syndrome, we tested the effect of MET on serum levels of B12, holo-transcobalamin II (holo-TC-II), and methylmalonic acid (MMA). The study was conducted on 165 women receiving MET or placebo for three years. The results indicate a significant overall reduction in both serum total B12 and holo-TC-II levels with MET treatment. In particular, 26 of 81 patients in the MET group and 10 of 84 placebo-treated subjects had B12 below the normal threshold (<221 pmol/L) at the end of the study. Considering B12, holo-TC-II, and MMA jointly, 13 of the 165 subjects (10 MET- and 3 placebo-treated) had at least two deficits in the biochemical parameters at the end of the study, without reporting clinical signs. Although our results do not affect whether women remain in the trial, B12 monitoring for MET-treated individuals should be implemented.
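
The abstract's key comparison, 26 of 81 MET-treated versus 10 of 84 placebo-treated women with B12 below 221 pmol/L, can be summarized with a simple contingency-table test. The paper does not report this particular statistic; the sketch below is only meant to make the size of the difference concrete.

```python
# Chi-square test on the 2x2 table of low-B12 counts quoted in the abstract (illustrative).
from scipy.stats import chi2_contingency

table = [[26, 81 - 26],   # MET group: B12 below threshold, not below
         [10, 84 - 10]]   # placebo group
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"MET: {26/81:.0%} low B12 vs placebo: {10/84:.0%}; chi-square p = {p_value:.4f}")
```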

Introduction

Metformin (MET) is the first-line treatment for type-2 diabetes and has been used for decades to treat this chronic condition [1]. Given its favorable effects on glycemic control, weight patterns, insulin requirements, and cardiovascular outcomes, MET has been recently proposed in addition to lifestyle interventions to reduce metabolic syndrome (MS) and age-related chronic diseases [2]. Observational studies have also suggested that diabetic patients treated with MET had a significantly lower risk of developing cancer or lower cancer mortality than those untreated or treated with other drugs [3,4]. For this reason, a number of clinical trials are in progress in different solid cancers.
One of the limitations in implementing long-term use of MET to prevent chronic conditions in healthy subjects relates to its potential lowering effect on vitamin B12 (B12). The aim of the present study was to assess the effect of three years of MET treatment in a randomized, controlled trial considering both B12 levels and biomarkers of its metabolism and biological effectiveness.
Cobalamin, also known as B12, is a water-soluble, cobalt-containing vitamin. All forms of B12 are converted intracellularly into adenosyl-Cbl and methylcobalamin, the biologically active forms at the cellular level [5]. Vitamin B12 is a vital cofactor of two enzymes, methionine synthase and L-methylmalonyl-coenzyme A mutase, in intracellular enzymatic reactions related to DNA synthesis as well as to amino and fatty acid metabolism. Under the catalysis of L-methylmalonyl-CoA mutase, vitamin B12 supports the synthesis of succinyl-CoA from methylmalonyl-CoA in the mitochondria. Deficiency of B12 thus results in elevated methylmalonic acid (MMA) levels.
Dietary B12 is normally bound to proteins. Food-bound B12 is released in the stomach under the effect of gastric acid and pepsin. The free vitamin is then bound to an R-binder, a glycoprotein in gastric fluid and saliva that protects B12 from the highly acidic stomach environment. Pancreatic proteases degrade R-binder in the duodenum and liberate B12; finally, the free vitamin is then bound by the intrinsic factor (IF)—a glycosylated protein secreted by gastric parietal cells—forming an IF-B12 complex [6]. The IF resists proteolysis and serves as a carrier for B12 to the terminal ileum where the IF-B12 complex undergoes receptor (cubilin)-mediated endocytosis [7]. The vitamin then appears in circulation bound to holo-transcobalamin-I (holo-TC-I), holo-transcobalamin-II (holo-TC-II), and holo-transcobalamin-III (holo-TC-III). It is estimated that 20–30% of the total circulating B12 is bound to holo-TC-II and only this form is available to the cells [7]. Holo-TC-I binds 70–80% of circulating B12, preventing the loss of the free unneeded portion [6]. Vitamin B12 is stored mainly in the liver and kidneys.
Many mechanisms have been proposed to explain how MET interferes with the absorption of B12: diminished absorption due to changes in bacterial flora, interference with intestinal absorption of the IF–B12 complex, and/or alterations in IF levels. The most widely accepted current mechanism suggests that MET antagonizes the calcium cation and interferes with the calcium-dependent binding of the IF–B12 complex to the ileal cubilin receptor [8,9]. The recognition and treatment of B12 deficiency is important because it is a cause of bone marrow failure, macrocytic anemia, and irreversible neuropathy [10].
In general, previous studies on diabetics have observed a reduction in serum levels of B12 after both short- and long-term MET treatment [1]. A recent review of observational studies showed significantly lower levels of B12 and an increased risk of borderline or frank B12 deficiency in patients on MET than in those not on MET [1]. The meta-analysis of four trials (only one double-blind) found a significant overall mean B12-reducing effect of MET after six weeks to three months of use [1]. A secondary analysis (13 years after randomization) of the Diabetes Prevention Program Outcomes Study, which randomized over 3000 persons at high risk for type 2 diabetes to MET or placebo, showed a 13% increase in the risk of B12 deficiency per year of total MET use [3]. In this study, B12 levels were measured from samples obtained in years 1 and 9. Stored serum samples from other time points, including baseline, were not available, and potentially informative red blood cell indices that might have demonstrated the macrocytic anemia typical of B12 deficiency were not recorded [3]. The HOME (Hyperinsulinaemia: the Outcome of its Metabolic Effects) study, a large randomized controlled trial investigating the long-term effects of MET versus placebo in patients with type 2 diabetes treated with insulin, showed that the addition of MET improved glycemic control, reduced insulin requirements, and prevented weight gain, but lowered serum B12 over time and raised serum homocysteine, suggesting tissue B12 deficiency [4]. A recent analysis of 277 diabetics from the same trial showed that serum levels of MMA, the specific biomarker for tissue B12 deficiency [5], were significantly higher in people treated with MET than in those receiving placebo after four years (on average) [4].
The risk of MET-associated B12 deficiency may be higher in older individuals and those with poor dietary habits. Prospective studies have found negative associations between obesity and B12 in numerous ethnicities [11,12]. An energy-dense but micronutrient-insufficient diet consumed by individuals who are overweight or obese might explain this [12]. Furthermore, obesity is associated with low-grade inflammation and these physiological changes have been shown to be associated, in several studies, with elevated C-reactive protein and homocysteine and with low concentrations of B12 and other vitamins [13,14].
As part of a pilot randomized controlled trial of the Mediterranean diet (MedDiet) and MET for primary prevention of breast cancer and other chronic age-related diseases in healthy women with traits of MS [15], we tested the effect of MET on serum levels of B12, holo-TC-II, and MMA.

Other articles of note on the Mediterranean Diet in this Online Open Access Scientific Journal Include

Read Full Post »
