
Lonely Receptors: RXR – Jensen, Chambon, and Evans

Larry H. Bernstein, MD, FCAP, Curator

Leaders in Pharmaceutical Intelligence

Series E. 2; 7.2

 

Nuclear receptors provoke RNA production in response to steroid hormones

Albert Lasker Basic Medical Research Award

Pierre Chambon, Ronald Evans and Elwood Jensen

For the discovery of the superfamily of nuclear hormone receptors and elucidation of a unifying mechanism that regulates embryonic development and diverse metabolic pathways.

Hormones control a vast array of biological processes, including embryonic development, growth rate, and body weight. Scientists had known since the early 1900s that tiny hormone doses dramatically alter physiology, but they had no idea that these signaling molecules did so by prodding genes. The 1950s, when Jensen began his work, was the great era of enzymology. Conventional wisdom held that estradiol—the female sex hormone that instigates growth of immature reproductive tissue such as the uterus—entered the cell and underwent a series of chemical reactions that produced a particular compound as a byproduct. This compound—NADPH—is essential for many enzymes’ operations but its small quantities normally limit their productivity. A spike in NADPH concentrations would stimulate growth or other activities by unleashing the enzymes, the reasoning went.

In 1956, Jensen (at the University of Chicago) decided to scrutinize what happened to estradiol within its target tissues, but he had a problem: The hormone is physiologically active in minute quantities, so he needed an extremely sensitive way to track it. He devised an apparatus that tagged it with tritium—a radioactive form of hydrogen—at an efficiency level that had not previously been achieved. This innovation allowed him to detect a trillionth of a gram of estradiol.

When he injected this radioactive substance into immature rats, he noticed that most tissues—skeletal muscle, kidneys and liver, for example—started expelling it within 15 minutes. In contrast, tissues known to respond to the hormone—those of the reproductive tract—held onto it tightly. Furthermore, the hormone showed up in the nuclei of cells, where genes reside. Something there was apparently grabbing the estradiol.

Jensen subsequently showed that his radioactive hormone remained chemically unchanged once inside the cell. Estrogen did not act by being metabolized and producing NADPH, but presumably by performing some job in the nucleus. Subsequent work by Jensen and Jack Gorski established that estradiol converts a protein in the cytoplasm, its receptor, into a form that can migrate to the nucleus, embrace DNA, and turn on specific genes.

From 1962 to 1980, molecular endocrinologists built on Jensen’s work to discover the receptors for the other major steroid hormones—testosterone, progesterone, glucocorticoids, aldosterone, and the steroid-like vitamin D. In addition to Jensen and Gorski, many scientists—notably Bert O’Malley, Jan-Ake Gustafsson, Keith Yamamoto, and the late Gordon Tompkins—made crucial observations during the early days of steroid receptor research.

Clinical Applications of Estrogen-Receptor Detection

Clinicians knew that removing the ovaries or adrenal glands of women with breast cancer would stop tumor growth in one out of three patients, but the molecular basis for this phenomenon was mysterious. Jensen showed that breast cancers with low estrogen-receptor content do not respond to surgical treatment. Receptor status could therefore indicate who would benefit from the procedure and who should skip an unnecessary operation. In the mid-1970s, Jensen and his colleague Craig Jordan found that women with cancers that contain large amounts of estrogen receptor are also likely to benefit from tamoxifen, an anti-estrogen compound that mimics the effect of removing the ovaries or adrenal glands. The other patients—those with small numbers of receptors—could immediately move on to chemotherapy that might combat their disease rather than waiting months to find out that the tumors were growing despite tamoxifen treatment. By 1980, Jensen’s test had become a standard part of care for breast cancer patients.

In the meantime, Jensen set about generating antibodies that bound the receptor—a tool that provided a more reliable way to measure receptor quantities in excised breast tumor specimens. His work has transformed the treatment of breast cancer patients and saves or prolongs more than 100,000 lives annually.

Long-Lost Relatives

By the early 1980s, interest in molecular endocrinology had shifted toward the rapidly developing area of gene control. Chambon and Evans had long wondered how genes turn on and off, and recognized nuclear hormone signaling as the best system for studying regulated gene transcription. They wanted to know exactly how nuclear receptors provoke RNA production in response to steroid hormones. To manipulate and analyze the receptors, they would need to isolate the genes for them.

By late 1985 and early 1986, Evans (at the Salk Institute in La Jolla) and Chambon (at the Institute of Genetics and Molecular and Cellular Biology in Strasbourg, France) had pieced together the glucocorticoid and estrogen receptor genes, respectively. They noticed that the sequences resembled that of v-erbA, a miscreant viral protein that fosters uncontrolled cell growth. This observation raised the possibility that v-erbA and its well-behaved cellular counterpart, c-erbA, would also bind DNA and control gene activity in response to some chemical activator, or ligand. In 1986, Evans and Björn Vennström simultaneously reported that c-erbA was a thyroid hormone receptor that was related to the steroid hormone receptors, thus uniting the fields of thyroid and steroid biology.

Chambon and Evans set to work deconstructing the glucocorticoid and estrogen receptors. By creating mutations at different spots and probing which activities the resulting proteins lost, they dissected the receptor into three domains: one bound hormone, one bound DNA, and one activated target genes. The structure of each domain strongly resembled the analogous one in the other receptor.

Chambon and Evans wanted to match other members of the growing receptor gene family with their chemical triggers. Because the DNA- and ligand-binding regions functioned independently, it was possible to hook the DNA-binding domain of, say, the glucocorticoid receptor to the ligand-binding domain of another receptor whose ligand was unknown. The ligand for that receptor would then activate a glucocorticoid-responsive test gene.

Evans would use this method to identify ligands for several novel members of the nuclear receptor family, and both he and Chambon exploited it to discover a physiologically crucial receptor. In the late 1970s, scientists had suggested that the physiologically active derivative of vitamin A, retinoic acid, could exert its effects by binding to a nuclear receptor. This nutrient is essential from fertilization through adulthood, and researchers were eager to understand its activities on a molecular level. During embryonic development, deficiency of retinoic acid impairs formation of most organs, and the compound can hinder cancer cell proliferation. So Chambon set out to find a receptor that responded to retinoic acid. He isolated a member of the nuclear receptor gene family whose production increased in breast cancer cells that slowed their growth upon exposure to the chemical. Simultaneously, Evans identified the same protein. He tested whether more than a dozen compounds activated an unknown receptor and one passed: retinoic acid.

Remarkably, in 1986, the two scientists had independently—and unbeknownst to each other—identified the same retinoic acid receptor, a molecule of tremendous significance. The discovery of this molecule provided an entry point for detailing vitamin A biology.

Rx for Lonely Receptors: RXR

The list of presumptive nuclear receptors was growing quickly as scientists realized that the common DNA sequences provided a handle with which to grab these molecules from the genome. Because their chemical activators weren’t known, they were called “orphan” receptors, and researchers were keen on “adopting” them to ligands. Some of these ligands, they reasoned, would represent previously unknown classes of gene activators. The test system that Chambon and Evans used to match up retinoic acid with its receptor, in which they stitched an unknown ligand-binding domain to a DNA-binding domain for a receptor with known target sequences, could be harnessed to accomplish this task.

Evans had identified some potential nuclear receptors from fruit flies. He decided to pursue a human orphan receptor that closely resembled one of these receptor genes, reasoning that a protein that functioned in both flies and mammals was likely to perform an important job.

This receptor responded to retinoic acid in intact cells but did not bind it in the test tube, so Evans called it the Retinoid X Receptor (RXR), thinking that its ligand was some retinoic acid derivative. In cells, enzymes convert retinoic acid to metabolites and it seemed possible that one of these compounds was RXR’s ligand. In 1992, Evans’s group and one at Hoffmann-La Roche discovered that 9-cis-retinoic acid, a stereoisomer of retinoic acid, could activate RXR, identifying the first new receptor ligand in 25 years. This finding launched the orphan receptor field because it provided strong evidence that the strategy could unearth previously unknown ligands.

In the meantime, Chambon had found that the purified retinoic acid receptor, in contrast to the estrogen receptor, did not bind efficiently to its target DNA. Other nuclear receptors, too, needed help grasping genes. In the test tube, the retinoic acid, thyroid hormone, and vitamin D3 receptors could attach well to their target DNA only when supplemented with cellular material, which presumably contained some crucial substance. Chambon and Michael Rosenfeld independently purified a single protein that performed this feat, and it turned out to be none other than RXR. This ability of RXR to pair with other receptors—forming so-called heterodimers—would turn out to be key for switching on many orphan receptors. These heterodimeric couplings yield large numbers of distinct gene-controlling entities.

Chambon revealed the power of mixing and matching in these molecular duos through his thorough and extensive genetic manipulations in mice. He has shown that vitamin A exerts its wide-ranging effects on organ development in the embryo through the action of eight different forms of the retinoic acid receptor and six different forms of RXR, interacting with each other in a multitude of combinations.

Clinical Applications of the Superfamily Work

The concept of RXR as a promiscuous heterodimeric partner for certain nuclear receptors led to the unexpected identification of a number of clinically relevant receptors. These proteins include the peroxisome proliferator-activated receptor PPARγ, which stimulates fat-cell maturation and sits at the center of Type 2 diabetes and a number of lipid-related disorders; the liver X receptors (LXRs) and bile acid receptor (FXR), which help manage cholesterol homeostasis; and the steroid and xenobiotic receptor (PXR), which turns on enzymes that dispose of chemicals that need to be detoxified, such as drugs.

Because the nuclear receptors wield such physiological power, they have provided excellent targets for disease treatment. The anti-diabetes compounds glitazones, for example, work by stimulating PPARγ, and the clinically used lipid-lowering medications called fibrates work by binding a closely related receptor, PPARα. Retinoic acid therapy has dramatically altered the prognosis of people with acute promyelocytic leukemia by triggering specialization of the immature white blood cells that accumulate in these individuals. The three-dimensional structure of nuclear receptors with and without their ligands, which Chambon and his colleagues first solved, promises to accelerate drug discovery in the whole field.

Nuclear hormone receptors have touched on human health in other ways as well. Genetic perturbations in the genes for these proteins cause a variety of illnesses. For example, certain forms of rickets arise from mutations in the vitamin D receptor and several disorders of male sexual differentiation stem from defects in the androgen receptor.

The discoveries of Jensen, Chambon, and Evans revealed an unimagined superfamily of proteins. At the start of this work almost 50 years ago, no one would have anticipated that steroids, thyroid hormone, retinoids, vitamin D, fatty acids, bile acids, and many lipid-based drugs transmit their signal through similar pathways. Four dozen human nuclear receptors are now known, and scientists are working out the roles of these proteins in normal and aberrant physiology. These discoveries have revolutionized the fields of endocrinology and metabolism, and pointed toward new tactics for drug discovery.

by Evelyn Strauss, Ph.D.

 

The 2004 Lasker Award for Basic Medical Research will be presented to Elwood Jensen, Ph.D., the Charles B. Huggins Distinguished Service Professor Emeritus in the Ben May Institute for Cancer Research at the University of Chicago, one of three scientists whose discoveries “revolutionized the fields of endocrinology and metabolism,” according to the award citation. Jensen’s work had a rapid, direct and lasting impact on treatment and prevention of breast cancer.

The Lasker Awards are the nation’s most distinguished honor for outstanding contributions to basic and clinical medical research. Often called “America’s Nobels,” the Lasker Awards have been given to 68 scientists who subsequently went on to receive the Nobel Prize, including 15 in the last 10 years.

Jensen will share the basic medical research award with two colleagues, Pierre Chambon, of the Institute of Genetics and Molecular and Cellular Biology (Strasbourg, France), and Ronald M. Evans of the Salk Institute for Biological Studies (La Jolla, California) and the Howard Hughes Medical Institute.

They were selected for their discovery of the “superfamily of nuclear hormone receptors and the elucidation of a unifying mechanism that regulates embryonic development and diverse metabolic pathways.” The implications of this research for understanding human disease and accelerating drug discovery “have been profound and hold much promise for the future,” notes the announcement from the Lasker Foundation.

Jensen is being honored for his pioneering research on how steroid hormones, such as estrogen, exert their influence. His discoveries explained how these hormones work, which has led to the development of drugs that can enhance or inhibit the process.

Hormones control a vast array of biological processes, including embryonic development, growth rate and body weight. Before Jensen, however, the way in which hormones cause these effects was “a complete mystery,” recalled Gene DeSombre, Ph.D., professor emeritus at the University of Chicago, who worked with Jensen in the Ben May Institute as a post-doctoral fellow and then as a colleague.

In the 1950s, biochemists thought a hormone entered a cell, where a series of oxidation and reduction reactions involving the estrogen provided the energy needed for the growth stimulation and other specific actions shown by estrogens.

From the late 1950s to the 1970s Jensen entirely overturned that notion. Working with estrogen, he proved that hormones do not undergo chemical change. Instead, they bind to a receptor protein within the cell. This hormone-receptor complex then travels to the cell nucleus, where it regulates gene expression.

At the time, this idea was heresy. “That really got him into some hot water,” recalled DeSombre. “Jensen struggled quite a lot,” echoes Shutsung Liao, Ph.D., another Ben May colleague, who subsequently found a similar system for testosterone action. But for Jensen, just getting into hot water was a struggle. When he first presented preliminary data at a 1958 meeting in Vienna, only five people attended, three of whom were the other speakers. More than 1,000 attended a simultaneous symposium on the metabolic processing of estrogen.

In the next 20 years, Jensen convinced his colleagues by publishing a series of major and highly original discoveries in four related areas of hormone research:

  • Jensen discovered the estrogen receptor, the first receptor found for any hormone. In 1958, using a radioactive marker, he showed that only the tissues that respond to estrogen, such as those of the female reproductive tract, were able to concentrate injected estrogen from the blood. This specific uptake suggested that these cells must contain binding proteins, which he called “estrogen receptors.”
  • In 1967, Jensen and Jack Gorski of the University of Wisconsin showed that these putative receptors were macromolecules that could be extracted from these tissues. With this method, Jensen showed that when estrogen bound to this receptor, the hormone-receptor complex then migrated to the nucleus, where it bound avidly to DNA and activated specific genes, stimulating new RNA synthesis.
  • By 1968, Jensen had devised a reliable test for the presence of estrogen receptors in breast cancer cells. It had been known for decades that about one-third of premenopausal women who had advanced breast cancer would respond to estrogen blockade brought about by removing their ovaries, the source of estrogen, but there was no way to predict which women would respond. In 1971, Jensen showed that women with receptor-rich breast cancers often have remissions following removal of the sources of estrogen, but cancers that contain few or no estrogen receptors do not respond to estrogen-blocking therapy.
  • By 1977, Jensen and Geoffrey Greene, Ph.D., also in the University of Chicago’s Ben May Institute, had developed monoclonal antibodies directed against estrogen receptors, which enabled them to quickly and accurately detect and count estrogen receptors in breast and other tumors. By 1980, this test had become a standard part of care for breast cancer patients.

This work “transformed the treatment of breast cancer patients,” notes the Lasker Foundation, “and saves or prolongs more than 100,000 lives annually.”

“Jensen’s revolutionary discovery of estrogen receptors is beyond doubt one of the major achievements in biochemical endocrinology of our time,” said DeSombre. “His work is hallmarked by great technical ingenuity and conceptual novelty. His promulgation of simple yet profound ideas concerning the role of receptors in estrogen action has been of the greatest importance for research on the basic and clinical physiology not only of estrogens but also of all other categories of steroid hormones.”

By the early 1970s, Jensen was searching for chemical, rather than surgical, ways to shield estrogen-dependent tumors from circulating hormones. He and colleague Craig Jordan (then at the Worcester Foundation for Experimental Biology in Massachusetts) subsequently found that women with cancers that contain large amounts of estrogen receptor are also likely to benefit from tamoxifen, a compound that blocks some of the effects of estrogen. Patients with few or no receptors could immediately move on to chemotherapy rather than waiting months to find out that the tumors were growing despite tamoxifen treatment.

Following Jensen’s lead, researchers soon found that the receptors for the other major steroid hormones, such as testosterone, progesterone, and cortisone, worked essentially the same way.

In 1986, Pierre Chambon and Ronald Evans separately but simultaneously discovered that the steroid hormone receptors were merely the tip of the iceberg of what would turn out to be a large family of structurally related nuclear receptors, now known to consist of 48 members. Evans and Chambon unearthed a number of these receptors, which revealed new regulatory systems that control the body’s response to essential nutrients (such as Vitamin A), fat-soluble signaling molecules (such as fatty acids and bile acids), and drugs (such as the glitazones used to treat Type 2 diabetes and retinoic acid for certain forms of acute leukemia).

These three individuals “created the field of nuclear hormone receptor research, which now occupies a large area of biological and medical investigation,” said Dr. Joseph L. Goldstein, chairman of the international jury of researchers that selects recipients of the Lasker Awards, and recipient of the Lasker Award for Basic Medical Research and the Nobel Prize in Medicine in 1985.

They revealed the “unexpected and unifying mechanism by which many signaling molecules regulate a plethora of key physiological pathways that operate from embryonic development through adulthood. They discovered a family of proteins that allows chemicals as diverse as steroid hormones, Vitamin A, and thyroid hormone to perform in the body.”

Jensen, known for concluding his lectures in verse, neatly summed up what his extraordinary series of discoveries might mean to a woman who has been diagnosed with breast cancer:

“A lady with growth neoplastic
Thought surgical ablation too drastic.
She preferred that her ill
Could be cured with a pill,
Which today is no longer fantastic.”

JBC THEMATIC MINIREVIEW SERIES 2011

Nuclear Receptors in Biology and Diseases

Thematic Minireview Series on Nuclear Receptors in Biology and Diseases

Sohaib Khan and Jerry B. Lingrel

Although a connection between breast cancer and the ovary was made by Sir George Beatson in 1896 and estrogen was purified in 1920, it remained puzzling as to how the hormone exerted its biological effects. In the late 1950s, when Elwood Jensen delved into this problem by asking, essentially, “What does tissue do with this hormone?” little did he know that his quest would lead to the establishment of the nuclear receptor field. The late 1950s was the era of intermediary metabolism and enzymology, when steroid hormones were considered likely substrates in the formation of metabolites that functioned as cofactors in an essential metabolic pathway. The biological responses to estrogens and other steroids were thought to be mediated by enzymes. Against this background and prevailing dogma, Jensen and colleagues defined the biochemical mechanisms by which steroid hormones exert their effects. While working at the University of Chicago’s Ben May Institute for Cancer Research, they synthesized tritium-labeled estradiol and concurrently developed a new method to measure its uptake in biological material. These tools enabled them to determine the biochemical fate of physiological amounts of hormone. They discovered that the reproductive tissues of the immature rat contain characteristic hormone-binding components with which estradiol reacts to induce uterine growth without itself being chemically changed. From the close correlation between the inhibition of binding and inhibition of growth response, Jensen established that the binding substances were receptors. Thus, we saw the birth of the first member of the nuclear receptor family (known as the estrogen receptor).

These findings stimulated the search for other physiological receptors, and the pioneering works by Pierre Chambon, Ronald Evans, Jan-Åke Gustafsson, Bert W. O’Malley, and Keith Yamamoto led to the discoveries of the glucocorticoid receptor (GR), progesterone receptor, retinoic acid receptor, and orphan receptors. In a rather short span of time, the nuclear receptor family has grown into a 49-member-strong “superfamily.” This is a family whose members, functioning as sequence-specific transcription factors, have defined the many intricacies of the mechanism of transcription. These ligand-dependent transcription factors generally possess similar “domain organizations,” of which the DNA-binding domain and the ligand-binding domain are critical in amplifying the hormonal signals via the receptor target genes.

The nuclear receptor family is divided into four groups: (i) Group 1 is composed of steroid hormone receptors that control target gene transcription by binding as homodimers to response element (RE) palindromes; (ii) in Group 2, the nuclear receptors heterodimerize with retinoid X receptor and generally bind to direct repeat REs; (iii) Group 3 consists of those orphan receptors that function as homodimers and bind to direct repeat REs; and (iv) orphan receptors in Group 4 function as monomers and bind to single REs.
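The four-group classification above is essentially a small lookup table: each group is defined by how its members assemble on DNA and what kind of response element they recognize. As a purely illustrative sketch (the names `ReceptorGroup`, `NR_GROUPS`, and `describe` are my own, not from the source), it can be encoded like this:

```python
# Illustrative encoding of the four-group nuclear receptor classification
# described in the text. Names and structure are hypothetical, for clarity only.
from dataclasses import dataclass


@dataclass(frozen=True)
class ReceptorGroup:
    binds_as: str           # oligomeric state on DNA
    response_element: str   # type of DNA response element (RE) recognized


NR_GROUPS = {
    1: ReceptorGroup("homodimer", "palindromic RE"),            # steroid receptors
    2: ReceptorGroup("heterodimer with RXR", "direct-repeat RE"),
    3: ReceptorGroup("homodimer (orphan)", "direct-repeat RE"),
    4: ReceptorGroup("monomer (orphan)", "single RE"),
}


def describe(group: int) -> str:
    """Summarize one group's DNA-binding mode in a sentence fragment."""
    g = NR_GROUPS[group]
    return f"Group {group}: binds as {g.binds_as} to {g.response_element}"
```

Note that Group 2 is the one in which RXR acts as the shared heterodimeric partner, which is why RXR recurs throughout the superfamily story told above.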

Since the early demonstration by Jack Gorski and Jensen that the estrogen receptor (ER) activates transcription, the nuclear receptor field has come a long way. In addition to the first cloning of the polymerase II transcription factors (GR and ER cDNAs), of note is the discovery of steroid receptor coactivators (SRCs), a truly major piece of the transcriptional jigsaw puzzle, described by the laboratories of O’Malley and Myles Brown. The induction of coactivators and corepressors in the transcriptional machinery has expanded tremendously our understanding of this complex process. We now know that ligand binding to the respective receptors triggers a fascinating chain of events, including the translocation of the receptors to the nucleus, ligand-induced changes in the receptor conformations, receptor dimerization, interaction with the target gene promoter elements, recruitment of coactivators (or corepressors), chromatin remodeling, and subsequent interaction with the polymerase II complex to initiate transcription.

By virtue of their abilities to regulate a myriad of human developmental and physiological functions (reproduction, development, metabolism), nuclear receptors have been implicated in a wide range of diseases, such as cancer, diabetes, obesity, etc. Not surprisingly, drug companies are spending billions of dollars to develop medicines for cancer and metabolic disorders that involve nuclear receptors. More than 50 years after the discovery of the ER, the scientific community owes Jensen and other founding members of the nuclear receptor family much gratitude, for they have taken us through a remarkable expedition filled with eureka moments to understand how hormones and other ligands function!

This thematic minireview series will cover a range of topics in the nuclear receptor field. The minireviews include current studies identifying subtypes of the GR. Different receptors arise from alternative mRNA splicing and from the use of different promoter start sites and post-translational modifications, such as phosphorylation. The series covers the physiological roles of the different GRs. The field of orphan nuclear receptors and the search for possible ligands also are reviewed. One minireview concentrates largely on the following nuclear receptors: peroxisome proliferator-activated receptor α (PPARα), PPARγ, Rev-erbα, and retinoic acid receptor-related orphan receptor α. ERα was the first identified and has been studied the most, whereas ERβ has not been studied in the same detail. ERβ is very important, and one of the minireviews provides a summary of the new biological functions that are being ascribed to it. Also, the development of small molecule inhibitors for the ER will be considered. An important aspect of nuclear receptor function is how these receptors function in transcription. The role of transcriptional coactivators in nuclear receptor gene regulation will be reviewed, as well as how signal amplification and interaction are involved in transcription regulation by steroids. The SRC/p160 family of coregulators includes SRC-1, SRC-2, and SRC-3, and the latter has been shown to act as an oncogene, particularly in breast cancer. Molecular analysis of its role in breast cancer progression and metastasis will be the focus of one of the minireviews. In addition, interactions of nuclear receptors with the genome will be reviewed, as will the role of the homeodomain protein HoxB13 in specifying the cellular response to androgens. Mining nuclear receptor cistromes and how nuclear receptors reset metabolism also will be considered. The association of nuclear receptors (e.g. PPARδ) with physiological functions, such as circadian rhythm and muscle functions, will also be addressed. Finally, the role of nuclear receptors in disease, using retinoid X receptor α/β knock-out and transgenic mouse models of skin syndromes and asthma, will be reviewed. These are diverse and important topics that are critical in understanding the regulation of nuclear receptors and the biological roles they play in normal function and disease.

The Nuclear Receptor Superfamily: A Rosetta Stone for Physiology

Ronald M. Evans
Howard Hughes Medical Institute, Gene Expression Laboratory, The Salk Institute for Biological Studies, La Jolla, California 92037
Molecular Endocrinology 19(6):1429–143   http://dx.doi.org/10.1210/me.2005-0046

In the December 1985 issue of Nature, we described the cloning of the first nuclear receptor cDNA encoding the human glucocorticoid receptor (GR) (1). In the 20 yr since that event, our field has witnessed a quantum leap by the subsequent discovery and functional elaboration of the nuclear receptor superfamily (2)—a family whose history is linked to the evolution of the entire animal kingdom and whose actions, by decoding the genome, span the vast diversity of biological functions from development to physiology, pathology, and treatment. A messenger is an envoy or courier charged with transmitting a communication or message. In one sense, the cloning of that first messenger (the GR) represented the completion of a prediction that began with Elwood Jensen’s characterization of the first steroid receptor protein (3) and continued with the pioneering work of others in the steroid receptor field (including Gorski, O’Malley, Gustafsson, and Yamamoto). Yet, like the discovery of the Rosetta stone in 1799, the revelation of the GR sequence heralded a completely unpredictable demarcation in the field, helping to solve mysteries unearthed nearly 100 yr ago as well as opening a portal to the future. The beginnings of the adventure lie in disciplines such as medicine and nutrition, which gave rise to the emergent field of endocrinology in the first half of the last century. The purification of chemical messengers ultimately known as hormones from organs and vitamins from foods spurred the study of these compounds and their physiologic effects on the body. At about the same time, the field of molecular biology was emerging from the disciplines of chemistry, physics, and their application to biological problems such as the structure of DNA and the molecular events surrounding its replication and transcription. 
It would not be until the late 1960s and 1970s that endocrinology and molecular biology would begin to intersect, as the link between receptors and transcriptional control was being laid down. During this time, the work of Jensen (4) and Gorski (5) identified a high-affinity estrogen receptor (ER) that suggested an action in the nucleus. Gordon Tomkins and his associates (J. Baxter, G. Ringold, E. B. Thompson, H. Samuels, H. Bourne, and others) formed one of the most creative forces studying glucocorticoid action (6). Concurrent work by O’Malley, Gustafsson, and Yamamoto provided further, important evidence supporting a link between steroid receptor action and transcription (see accompanying perspective articles in this issue of Molecular Endocrinology). But whereas the steroid hormone field continued to evolve in this direction, it is of interest to note that the mechanism of action of thyroid hormone and retinoids remained clouded and controversial until the eventual cloning of their receptors in the late 1980s. Likewise, no one had foreseen the possibility that other lipophilic molecules (like oxysterols, bile acids, and fatty acids) would also function through a similar mechanism, or that other steroid receptor-like proteins existed that would play an important role in transcriptional regulation of so many diverse pathways. Thus, the GR isolation in 1985 led to the concept of a hidden superfamily of receptors that in a very real way provided the needed molecular code to unravel the puzzle of physiologic homeostasis.

Unconventional Gene-Ology

The study of RNA tumor viruses was ascendant, and the concept that they evolved by pirating key signaling pathways greatly influenced my future studies. With this training, I went on to work with Jim Darnell at the Rockefeller University on adenovirus transcription, a model brought to the lab by Lennart Philipson. At the time, adenovirus was one of the best tools to study programmed gene expression in an animal cell. My sole focus was to localize the elusive major late promoter, work that provided my first Nature paper (7). Ed Ziff, a newly hired assistant professor from Cambridge, brought innovative unpublished DNA and RNA sequencing techniques that, after much technical angst, allowed us to sequence the major late promoter and derive the structure of the first eukaryotic polymerase II promoter (8). This thrilling result convinced me that the problem of gene control could be solved at the molecular level. Our next goal, which I shared with Michael Harpold in the Darnell lab, was to translate the concepts developed around adenovirus into cellular systems. My model was to analyze the glucocorticoid and thyroid hormone regulation of the GH gene. Under the strict federal guidelines for newly approved recombinant DNA research, we cloned the GH cDNA in 1977 and the first genomic clones in 1978 (9), after I had moved on to The Salk Institute. However, to fully address the hormone signaling problem, I realized that it would be necessary to clone the GR and thyroid hormone receptors (TRs), an effort that began in earnest in 1981. Up until that time, the purification and cloning of any polymerase II transcription factor had eluded researchers because of their low abundance. Four years later, the GR would become the first transcription factor for a defined response element to be cloned, sequenced, and functionally identified.

A Rock and A Hard Place

A key question was whether the GR protein encoded by the cloned receptor cDNA was sufficient, when expressed in a heterologous cell, to convey the hormonal message. Before the publication, a new postdoc, Vincent Giguere, began tinkering with the isolated GR to address this question. The rate of development of any field is limited by the existing techniques and depends on the development of new ones. Vincent devised a revolutionary technique—the cotransfection assay—that required two plasmids to be taken up by the same cell, the expression vector to be transcribed, the encoded protein to be functional, and an inducible promoter linked to a chloramphenicol acetyltransferase reporter in the nucleus ready to flicker on (10, 12). With so many variables and unknowns, I was stunned and expressionless when it worked the very first time. Cotransfection was an easy, fast, and quantitative technique. It would become (and still remains) the dominant assay to characterize receptor function. It would also become the mainstay for drug discovery in the pharmaceutical industry. The development of this technique proved a great advantage because the existing technology involved creating stable cell lines, a tedious process prone to integration artifacts that ultimately could not match the explosive pace of the field. Indeed, within 4 months Stan and Vincent had fully characterized 27 insertional mutants, delineating the DNA-binding domain (DBD), the ligand-binding domain (LBD), and two activation domains (12). The route to understanding the signaling mechanism now had a solid structural foundation. A serendipitous gift to my retroviral origins was the homology of the GR sequence to the v-erbA oncogene product of the avian erythroblastosis virus genome (13). With this discovery, erbA advanced to a candidate nuclear transcription factor potentially involved in a signal transduction pathway. Thus, while Stan concentrated on the GR, Cary began to delve into the erbA discovery.
Within months of the GR publication, the human c-erbA gene was in hand (14). Unbeknownst to us, Bjorn Vennstrom, one of the first to characterize the avian erythroblastosis virus genome, had also isolated c-erbA and was searching for a function. Based on the low homology of the LBD region to the GR and ER, both groups deduced that the imaginary erbA ligand would be nonsteroidal.

The work of our two groups (15, 16), published in December of 1986, broadened the principles of the signal transduction pathway by demonstrating that thyroid and steroid hormone receptor signaling had a common evolutionary origin, and it provided an entree to understanding how mutations within a receptor could activate it to an oncogene. Although we did not know it at the time, this work would also lead us to the concept of the corepressor. In the meantime, my student, Catherine Thompson, zeroed in on an erbA-related gene and soon identified a second TR expressed at high levels in the central nervous system (17). Thus came into existence the α and β forms of the TR. Jeff Arriza, the third graduate student in the lab, purified a genomic fragment that had weakly hybridized to the GR, resulting in the isolation of the human mineralocorticoid receptor (MR) (18). MR proved to have at least a 10-fold higher affinity for glucocorticoids than the GR itself and was further distinguished by its ability to bind and be activated by aldosterone. This enabled the development of GR- and MR-selective drugs such as the recent MR antagonist eplerenone. Thus, in a 2-yr span our lab had progressed on three distinct, albeit related, receptor systems, and in doing so molecular biology and endocrinology were irrevocably linked. The field of molecular endocrinology (and coincidentally the eponymous journal) was born.

Ligands From Stone

I have often been asked how we could handle so many divergent systems. Indeed, from a medical perspective, these systems seem widely unrelated. Studies of the ER, progesterone receptor, and androgen receptor (AR) fall under reproductive physiology, vitamin D under bone and mineral metabolism, and vitamin A under nutritional science. Medical fields are naturally idiosyncratic because of the specialized knowledge required to conduct experiments. To someone with my training as a molecular biologist, however, physiology was the complex output of genes, and thus control of gene expression was the overriding problem. This conceptual approach had a great unifying effect because all receptors transduce their signaling through the gene. As an “outsider,” my goal was to exploit multiple receptor systems to seek general principles. This philosophical approach afforded us the freedom to redefine the signaling problem from the nucleus outward, and thus even poorly characterized, or entirely unknown, physiologic systems fell into the crosshairs of our molecular gun.

Fig. 1. Models of Nuclear Receptor Structure. Top, original hand-shaped wire model (circa 1992) of the nuclear receptor DBD. Bottom, schematic representation of the GR DBD, with conserved residues in the zinc fingers, P-box, and D-box indicated.

Vincent, while screening a testis cDNA library, isolated what would become the vitamin A or retinoic acid receptor (RAR) (19). Initially, Vincent thought he had isolated the AR, although this later proved not to be the case. By that stage, the lab had perfected a new technique—the domain swap—by which the GR DBD could be introduced into any receptor, conferring on the chimeric protein the ability to activate a mouse mammary tumor virus reporter. This clever technique, independently developed in the Chambon lab, would prove to be essential. Effectively, the domain swap would enable us to screen for ligands without any knowledge of their physiologic function. Activation of a target gene was all that was needed! By creating this GR chimera, Vincent was able to screen the new receptor against a ligand cocktail including androgens, steroids, thyroid hormone, cholesterol, and the vitamin A metabolite retinoic acid. From the first assay, it was clear that he had isolated a high-affinity, selective RAR that had no response to any other test ligand. Thus, without knowing any true direct target gene for retinoic acid, we were nonetheless able to isolate and characterize its receptor. Remarkably, Martin Petkovich in the Chambon lab isolated the same gene. Once again, this is an example where a new technique offered an entirely new approach to a problem. Both papers were published in the December 1987 issue of Nature (19, 20). With the combination of steroids, thyroid hormones, and vitamin A, the three elemental components of the nuclear receptor superfamily were in hand.
By the time the RAR papers were published, Vincent, with Na Yang, had already isolated two estrogen-related receptors, termed ERR1 and ERR2, which would represent the first true orphan receptors in the evolving superfamily (21). A third receptor (ERR3) would be isolated 10 yr later (22). The three ERRs are distinguished by their ability to activate through ER response elements while requiring no ligand. Of potential major medical relevance, however, estrogen antagonists such as 4-hydroxytamoxifen silence ERR constitutive activity (23). The superfamily was growing exponentially, transforming into a new field driven by a new breed of exceptional students and fellows attracted by the mechanics of transcription and its emerging link to physiology. For example, the RAR and TR offered an unprecedented look at the action of vitamin A as a morphogen and the role of thyroxine in setting the basal metabolic rate of the body. We were a relatively small group, and our decision to work on multiple different receptor systems created a unique environment. Because there was so little overlap between projects, postdocs and students readily discussed all results, exchanged reagents, and freely collaborated, resulting in a tremendous acceleration of progress. The high level of camaraderie was powered by the joie de vivre of the exciting discoveries and the ability of our family of students and postdocs to each adopt their own receptors. We all felt we were in a golden age, and even more was to come.

In 1989, Jan Sap in Vennstrom’s group and Klaus Damm in our group demonstrated that the TR becomes oncogenic by mutation in the LBD (24, 25). Although we expected ligand-independent activation, the oncogenic receptor was clearly a constitutive repressor, becoming the first example of a dominant-negative oncogene. The concept of the dominant-negative oncogene had been proposed one year earlier by Ira Herskowitz (26). This discovery changed our thinking on hormone action, and repression would soon be shown to be a common feature of receptor antagonists. David Mangelsdorf, who had arrived in the lab the year before, was captivated by the glow of weakly hybridizing DNA bands and, by 1989, had amassed his own collection of orphan receptors, among which was the future retinoid X receptor (RXR) (27). In the search for biological activity, a candidate ligand was found in lipid extracts from outdated human blood. However, the key test came from demonstrating that addition of all-trans retinoic acid to cultured cells led to its rapid metabolism, coupled with the release of an inducing activity for RXR, which we termed retinoid X. David and his benchmate, Rich Heyman, began working on the chemistry of this inducer along with Gregor Eichele and Christine Thaller, then at Baylor College of Medicine (Houston, TX). This work led to the identification of 9-cis retinoic acid by our lab and by a group at Hoffmann-La Roche (Nutley, NJ) (28, 29). Like the retinal molecule in rhodopsin, 9-cis retinoic acid represents nature’s exploitation of retinoid isomerization to control a key signaling pathway. More importantly, in the 39 yr since the discovery of aldosterone in 1953, this revelation would reawaken and reinvent the single most defining but dormant tool of endocrinology—ligand discovery. Indeed, the discovery that new receptors could lead to new ligands opened up an entirely new avenue of research.
Like the puzzle of the structure of the benzene ring, which Friedrich August Kekulé famously solved after dreaming of a snake biting its own tail, the physiologic head of the “endocrine snake” and the molecular biology tail had come full circle. The era of reverse endocrinology was now upon us.

Response Elements: Deciphering The Scripts

One problem in addressing the downstream effects of our newly discovered receptors was that their response elements and target genes were by definition unknown. Kaz Umesono delved into this mystery and would produce a paradigm shift that would both solve the problem and further unify the field. Working from the view that the DBD functioned as a molecular receptor for its cognate hormone response element, Kaz’s meticulous mutational studies revealed two key DBD sequences, termed the P-box and D-box, that controlled the process (30).

The D-box was shown to direct dimerization, a feature previously thought to be unique to the LBD. One perplexing point was that the P-boxes of the nonsteroidal receptors were conserved, leading to the improbable prediction that many different receptors would recognize the same target sequence. By manual compilation and comparison of all known response elements, Kaz proposed a core hexamer—AGGTCA—as the prototypic common target sequence. By requiring the half-site to be an obligate hexamer, an underlying pattern—the direct repeat—emerged. In the direct repeat paradigm, we proposed that half-site spacing, not sequence difference, was the key ingredient distinguishing the response elements. The metric was referred to as the 3-4-5 rule (31). According to the rule, direct repeats of AGGTCA spaced by three nucleotides would be a vitamin D response element (DR-3), the same repeat spaced by four nucleotides a thyroid hormone response element (DR-4), and the same repeat spaced by five nucleotides a vitamin A response element (DR-5). Eventually, all steps from 0–5 on the DR ladder would be filled (Fig. 2). The validity of this paradigm was confirmed by a crystal structure solved in collaboration with Paul Sigler’s group at Yale (32). Indeed, of the remaining 40 nonsteroidal receptors, all but three can be demonstrated to have a preferred binding site within some component of the direct repeat ladder. Exceptions include SHP and DAX, which lack DBDs, and the farnesoid X receptor (FXR), which binds to the ecdysone response element as a palindrome with zero spacing. Kaz’s insight, drawing commonality from diversity, came to solve a problem that impacted virtually every receptor. Remarkably, each new receptor in the superfamily could immediately be assigned a place on the ladder. The ladder also provided a simple means to conduct a ligand screening assay in the absence of any knowledge of an endogenous target gene. Kaz’s ladder was a turbocharge for the field.
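The 3-4-5 rule lends itself to a small illustration. The sketch below is hypothetical code, not from the paper: it scans an idealized sequence for two exact copies of the AGGTCA consensus half-site and classifies the element by spacer length. Real response elements tolerate considerable half-site degeneracy, which this toy matcher ignores.

```python
from typing import Optional

# Consensus half-site and the part of the DR ladder named in the 3-4-5 rule.
HALF_SITE = "AGGTCA"
DR_LADDER = {3: "DR-3 (vitamin D response element)",
             4: "DR-4 (thyroid hormone response element)",
             5: "DR-5 (retinoic acid response element)"}

def classify_direct_repeat(seq: str) -> Optional[str]:
    """Return the DR classification for the first exact AGGTCA direct repeat,
    or None if no repeat with a 0-5 nucleotide spacer is found."""
    seq = seq.upper()
    first = seq.find(HALF_SITE)
    if first == -1:
        return None
    # Look for a second half-site downstream, trying spacers of 0-5 nucleotides.
    for spacer in range(0, 6):
        start = first + len(HALF_SITE) + spacer
        if seq[start:start + len(HALF_SITE)] == HALF_SITE:
            return DR_LADDER.get(spacer, f"DR-{spacer}")
    return None

# Spacer "cag" is 3 nucleotides, so this idealized element reads as a DR-3.
print(classify_direct_repeat("ttAGGTCAcagAGGTCAtt"))  # → DR-3 (vitamin D response element)
```

The function names and sequences here are invented for the example; only the AGGTCA hexamer and the spacing-to-receptor mapping come from the text.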
The next major advance in the field was the discovery of the RXR heterodimer. Although we knew that retinoid and thyroid receptors required a nuclear competence factor for DNA binding, its identity was unknown. We tested RXR, but our initial experiments were flawed. Of the first four papers describing the discovery, that from Chambon’s lab was the most elegant, because they simply purified an activity to homogeneity and found RXR (33)! Rosenfeld was first to publish, and Ozato, Pfahl, and Kliewer all concurred (34–37). Tony Oro and Pang Yao in our lab soon published that the ecdysone receptor functions as a heterodimer with ultraspiracle, the insect homolog of RXR (38, 39), revealing the ancient origins of the heterodimer, which arose before the divergence of vertebrates and invertebrates.

Reverse Endocrinology: Decoding Physiology

The orphan receptors would transform our view of endocrine physiology, with unexpected links to toxicology, nutrition, and cholesterol and triglyceride metabolism, as well as to a myriad of diseases including atherosclerosis, diabetes, and cancer. The three RXR isoforms formed the core, with 14 heterodimer partners including the vitamin D receptor (VDR), TRα/β, and RARα/β/γ. The initial adopters of orphan receptors included Giguere, Mangelsdorf, Weinberger, Bruce Blumberg, Steve Kliewer, and Barry Forman. Barry unlocked the first secret for peroxisome proliferator-activated receptor γ (PPARγ) by identifying prostaglandin J2 (PGJ2) as a high-affinity ligand (40). The second step, in collaboration with Peter Tontonoz in Bruce Spiegelman’s lab, revealed that PGJ2 was adipogenic in cell lines and, perhaps more importantly, that the synthetic antidiabetic drug troglitazone was a potent PPARγ agonist (41). Similar work was conducted and published by Kliewer, who had by then moved to Glaxo (42). By acquiring a ligand, a physiologic response, and a drug, PPARγ was suddenly transported to the center of a physiologic cyclone that would spin into its own specialty field. Since 1995, more than 1000 papers (see PubMed) have been published on PPARγ and its natural and synthetic ligands. This early work illuminated the molecular strategy of reverse endocrinology and the emerging importance of the orphan receptors in human disease and drug discovery. Cary returned to the lab for a sabbatical and, with Barry, demonstrated that FXR was responsive to farnesoids and other molecules in the mevalonate pathway. The findings by Mangelsdorf that the liver X receptors (LXRs) bind oxysterols (43), and by Kliewer, Mangelsdorf, and Forman that FXR is a bile acid receptor (44–46), provided a whole new conceptual approach to cholesterol and triglyceride homeostasis.
The steroid and xenobiotic receptor (SXR)/pregnane X receptor (PXR) (47–49) and the constitutive androstane receptor (CAR) (50) respond to xenobiotics to activate genes for P450 enzymes and the conjugation and transport systems that detoxify drugs, foreign chemicals, and endogenous steroids. RXR and its associated heterodimeric partners quickly established a new branch of physiology, shedding its dependence on endocrine glands and allowing the body to decode signals from environmental toxins, dietary nutrients, and common metabolites of intermediary metabolism.

Fig. 2. Examples of receptor heterodimer combinations that fill the direct repeat (DR) response element ladder from DR1 to DR5.

Continued…

Rock of Ages

The human body is, after all, a living machine, a complex device that consumes and uses energy to sustain itself, defend against predators, and ultimately reproduce. One might reasonably ask, “If the superfamily acts through a common molecular template, can the family as a whole be viewed as a functional entity?” In other words, is there some overarching principle that we have yet to grasp, and might this imaginary principle lie at the heart of systems physiology? Simply stated, what led to the evolution of integrated physiology as the functional output of the superfamily? One obvious speculation is survival. To survive, all organisms must be able to acquire, absorb, distribute, store, and use energy. The receptors are exquisitely evolved to manage fuel—everything from dietary and endogenous fats (PPARs), cholesterol (LXR, FXR), and sugar mobilization (GR) to salt (MR) and calcium (VDR) balance and maintenance of basal metabolic rate (TR). Because only a fraction of the material we voluntarily place in our bodies is nutritional, the xenobiotic receptors (PXR, CAR) are specialized to defend against the innumerable toxins in our environment. Survival also means reproduction, which is controlled by the gonadal steroid receptors (progesterone receptor, ER, AR). However, fertility depends on nutritional status, implying communication between these two branches of the family. The third key component managed by the nuclear receptor family is inflammation. During viral, bacterial, or fungal infection, the inflammatory response defends the body while suppressing appetite, conserving fuel, and encouraging sleep (associated with cytokine release). However, if needed, even an ill body is capable of defending itself by releasing adrenal steroids, mobilizing massive amounts of fuel, and transiently suppressing inflammation. In fact, clinically (with the exception of hormone replacement), glucocorticoids are used only as antiinflammatory agents.
Other receptors, including the RARs, the LXRs, PPARα and γ, and the vitamin D receptor, protect against inflammation. Thus, nature evolved within the structure of the receptor the combined ability to manage energy and inflammation, indicating the important duality between these two systems. In aggregate, this commonality between distinct physiologic branches suggests that the superfamily might be approached as an intact, functionally dynamic entity.

Historically, endocrinologists and geneticists rarely saw eye to eye. As I have indicated in this perspective article, the disciplines have now become united in a new subject—transcriptional physiology. With this in mind, we might expect the existence of larger organizational principles that establish how the various evolutionary branches of the superfamily integrate to form whole body physiology. The existence of molecular rules governing the function and evolution of a megagenetic entity like the nuclear receptor superfamily, if correct, may be useful in understanding complex human disease and provide a conceptual basis to create more effective pharmacology. With so much accomplished in the last 20 yr (see Fig. 3), there are glimpses of clarity—enough to see the enormity and wonder of the problem and enough to know there is still a long and challenging voyage ahead. But who knows, the next breakthrough may only be a stone’s throw away.

http://press.endocrine.org/doi/pdf/10.1210/me.2005-0046

 

Pierre Chambon MD

Recipient of the Canada Gairdner International Award, 2010
“For the elucidation of fundamental mechanisms of transcription in animal cells and to the discovery of the nuclear receptor superfamily.”

Institut de Génétique et de Biologie Moléculaire et Cellulaire (IGBMC), Illkirch-Graffenstaden, France

Dr. Pierre Chambon is Honorary Professor at the College de France (Paris), and Emeritus Professor at the Faculté de Médecine of the Strasbourg University. He was the Founder and former Director of the IGBMC, and also the Founder and former Director of the Institut Clinique de la Souris (ICS/MCI), in Strasbourg.

Dr. Pierre Chambon is a world expert in the fields of gene structure and transcriptional control of gene expression. During the last 25 years, his studies on the structure and function of nuclear receptors have changed our concept of signal transduction and endocrinology. By cloning the estrogen and progesterone receptors, and by discovering the retinoic acid receptor family, he contributed markedly to the discovery of the superfamily of nuclear receptors and to the elucidation of their universal mechanism of action, which links transcription, physiology, and pathology. Through extensive site-directed mutagenesis and genetic studies in the mouse, Pierre Chambon has unveiled the paramount importance of nuclear receptor signaling in embryonic development and in homeostasis at the adult stage. The discoveries of Pierre Chambon have revolutionized the fields of development, endocrinology, and metabolism and their disorders, pointing to new tactics for drug discovery and finding important applications in biotechnology and modern medicine.

These scientific achievements are the logical continuation of an uninterrupted series of discoveries made by Pierre Chambon over the last 45 years in the field of transcriptional control of gene expression in higher eukaryotes: the discovery of poly(ADP-ribose) (1963), the discovery of multiple RNA polymerases differentially sensitive to α-amanitin (1969), contributions to the elucidation of chromatin structure and the nucleosome (1974), the discovery of animal split genes (1977), the discovery of enhancer elements (1981), and the discovery of multiple promoter elements and their cognate factors (1980-1993).

Pierre Chambon has received numerous international awards, including the 2004 Lasker Basic Medical Research Award for the discovery of the superfamily of nuclear hormone receptors and the elucidation of a unifying mechanism that regulates embryonic development and diverse metabolic pathways. He is a member of the French Académie des Sciences and a Foreign Member of the National Academy of Sciences (USA) and of the Royal Swedish Academy of Sciences. Pierre Chambon serves on a number of editorial boards, including those of Cell and Molecular Cell. He is the author of more than 900 publications and has been ranked fourth among the most prominent life scientists for the 1983-2002 period.

An Interview with Pierre Chambon
2004 Albert Lasker Basic Medical Research Award
http://www.laskerfoundation.org/media/v_chambon.htm

Pierre Chambon, MD

​Honorary Professor at the Collège de France and Professor of Molecular Biology and Genetics, Institute for Advanced Study, University of Strasbourg; Group Leader, Institut de Génétique et de Biologie Moléculaire et Cellulaire (IGBMC), Illkirch-Graffenstaden, Strasbourg, France

A pioneer in the fields of gene structure and transcriptional control of gene expression, Dr. Chambon has fundamentally changed our understanding of signal transduction, which has led to revolutionary new tactics for drug discovery. His work elucidated how molecules that promote gene transcription are organized and regulated in eukaryotic organisms and, independently of Dr. Ronald Evans, he discovered in 1987 the retinoid receptor families, which led to the discovery and characterization of the superfamily of nuclear hormone receptors, including steroid and retinoid receptors.

Dr. Chambon’s previous research led to the discovery of poly(ADP-ribose) and of multiple RNA polymerases differentially sensitive to α-amanitin, and it contributed markedly to the elucidation of the nucleosome and chromatin structure, as well as to the discovery of animal split genes, the DNA sequences called enhancer elements, and multiple promoter elements and their cognate factors. These discoveries have greatly enhanced understanding of embryonic development and cell differentiation. To further studies of various nuclear receptors, Dr. Chambon developed a method that allows the generation in the mouse of somatic mutations of any gene, at any time, and in any specific cell type, a tool valuable in generating mouse models of cancer.

In 1994, Dr. Chambon took on the role of founding a major research institute in France. As the first director of IGBMC, he built the institute to encompass hundreds of top researchers and multiple research programs funded by public agencies and private industry. In 2002, he founded and was the first director of the Institut Clinique de la Souris in Strasbourg. In these positions, he has succeeded in supporting and influencing a generation of scientists.

Career Highlights

​2010  Canada Gairdner International Award

2004  Albert Lasker Basic Medical Research Award

2003  Alfred P. Sloan, Jr., Prize, General Motors Cancer Foundation

1999  Louisa Gross Horwitz Prize, Columbia University

1998  Robert A. Welch Award in Chemistry

1991  Prix Louis-Jeantet de médecine, Fondation Louis-Jeantet

1990  Sir Hans Krebs Medal, Federation of European Biochemical Societies

1988  King Faisal International Prize for Science, King Faisal Foundation

1987  Harvey Prize, Technicon-Israel Institute of Technology

more…

 

Minireviews In This Series:

Thematic Minireview Series on Nuclear Receptors in Biology and Diseases

Sohaib Khan and Jerry B. Lingrel

Steroid Receptor Coactivator (SRC) Family: Masters of Systems Biology

Brian York and Bert W. O’Malley

Estrogen Signaling via Estrogen Receptor β

Chunyan Zhao, Karin Dahlman-Wright, and Jan-Åke Gustafsson

Small Molecule Inhibitors as Probes for Estrogen and Androgen Receptor Action

David J. Shapiro, Chengjian Mao, and Milu T. Cherian

Cellular Processing of the Glucocorticoid Receptor Gene and Protein: New Mechanisms for Generating Tissue Specific Actions of Glucocorticoids

Robert H. Oakley and John A. Cidlowski

Endogenous Ligands for Nuclear Receptors: Digging Deeper

Michael Schupp and Mitchell A. Lazar


Read Full Post »

Obesity

Larry H. Bernstein, MD, FCAP, Curator

Leaders in Pharmaceutical Intervention

2010: Douglas L. Coleman and Jeffrey M. Friedman

Shaw Laureates 2009: Life Science and Medicine

Douglas L. Coleman (6 October 1931 – 16 April 2014) was a scientist and professor at The Jackson Laboratory in Bar Harbor, Maine. His work predicted that the ob gene encoded the hormone leptin,[1] later co-discovered in 1994 by Jeffrey Friedman, Rudolph Leibel, and their research teams at Rockefeller University.[2][3][4][5][6][7][8] This work has played a major role in our understanding of the mechanisms that regulate body weight and the causes of human obesity.[9]

Coleman was born in Stratford, Ontario. He obtained his BS degree from McMaster University in 1954 and his PhD in Biochemistry from the University of Wisconsin in 1958. He was elected a member of the US National Academy of Sciences in 1998. He won the Shaw Prize in 2009,[10] the Albert Lasker Award for Basic Medical Research in 2010, the 2012 BBVA Foundation Frontiers of Knowledge Award in the Biomedicine category and the 2013 King Faisal International Prize for Medicine[11] jointly with Jeffrey M. Friedman[9] for the discovery of leptin.

http://www.nytimes.com/2014/04/26/us/douglas-l-coleman-82-dies-found-a-genetic-cause-of-obesity.html

The Genetics of Obesity

Winner of the 2013 KFIP Prize for Medicine

Professor Douglas Coleman was born on October 6, 1931, in Stratford, Ontario, Canada. He obtained a B.Sc. in Chemistry in 1954 from McMaster University in Hamilton, Ontario, then went to the University of Wisconsin in Madison, WI, U.S.A., where he obtained M.S. and Ph.D. degrees in Biochemistry in 1956 and 1958, respectively. He served as a Research Assistant at the University of Wisconsin from 1954-1957 and as an E.I. du Pont de Nemours Fellow from 1957-1958. He then joined The Jackson Laboratory in Bar Harbor, ME, where he spent his entire career, rising from Associate Staff Scientist in 1958 to Senior Staff Scientist in 1968. He also served as Assistant Director for Research from 1969-1970 and as Interim Director from 1975-1976. Upon his retirement in 1991, he was appointed Senior Staff Scientist Emeritus at Jackson. He was also a consultant to the National Institutes of Health, serving on the Metabolism Study Section from 1972-1974, and was frequently consulted on various other special study sections involving genetic diabetes, obesity, and nutrition. He also served as Visiting Professor at the University of Geneva (1979-1980).

Professor Coleman’s research interests focused on biochemical genetics, regulation of metabolism, obesity, diabetes, and hormone action. He is best known for his studies on the obesity-diabetes syndrome. He discovered the db gene, one of the two genes responsible for the genetic events regulating appetite control. He carried out a series of fundamental experiments with parabiotic mice that demonstrated the hormone-hormone receptor axis of leptin and the leptin receptor long before their discovery. The discoveries of Coleman and Friedman represent one of the most important biological breakthroughs in recent decades.

Professor Coleman received several prestigious awards and honors, including the Claude Bernard Medal of the European Diabetes Foundation in 1977, the Distinguished Alumni Award in Science from McMaster University in 1999, the Gairdner International Award in 2005, the Shaw Prize in Life Science and Medicine in 2009 (jointly with Jeffrey M. Friedman), the Albert Lasker Basic Medical Research Award in 2010 (jointly with Jeffrey M. Friedman), and the Outstanding Forest Stewardship Award (Maine Forest Service). He was elected to the US National Academy of Sciences in 1998 and was awarded honorary D.Sc. degrees from Louisiana State University in 2005 and from McMaster University in 2006. He was a member of the American Association of Biological Chemists.

Professor Douglas Leonard Coleman was awarded the prize because the research findings of Professors Coleman and Friedman led to the identification and characterization of the leptin pathway. This seminal discovery has had a major impact on our understanding of the biology of obesity, describing some of the key afferent pathways of body weight regulation active in man. Their fundamental discoveries have also yielded a more illuminating view of the endocrine system. For their major contribution to the field of the genetics of obesity, they were awarded the King Faisal International Prize in Medicine for the year 2013.

Leaping for leptin: the 2010 Albert Lasker Basic Medical Research Award goes to Douglas Coleman and Jeffrey M. Friedman

Ushma S. Neill

J Clin Invest. 2010 Oct 1; 120(10): 3413–3418.
Published online 2010 Sep 21. doi: 10.1172/JCI45094

Douglas Coleman never intended to study diabetes or obesity. Jeffrey M. Friedman had childhood dreams of being a veterinarian. But together, the two scientists have opened the field of obesity research to molecular exploration. On September 21, the Albert and Mary Lasker Foundation announced that it would award Coleman and Friedman (Figure 1) the 2010 Albert Lasker Basic Medical Research Award in recognition of their contributions toward the discovery of leptin, a hormone that regulates appetite and body weight. This hormone provides a key means by which changes in nutritional state are sensed and in turn modulate the function of many other physiologic processes. The story of the discovery of the first molecular target of obesity is one of tenacity and determination.

Figure 1

Douglas Coleman (left) and Jeffrey M. Friedman (right) share the 2010 Albert Lasker Basic Medical Research Award for the discovery of leptin, a breakthrough that opened obesity research to molecular exploration.

From Canada to Maine

Douglas Coleman was raised in Ontario, Canada, the only child of English immigrant parents, who encouraged him to excel in school; he recalled, “Although my parents never had the luxury of completing high school, they always encouraged me to pursue a higher education, and in high school, I developed a keen interest in chemistry and biology.” Coleman pursued his interest in chemistry at McMaster University. It was there he met his future wife, Beverly Benallick, “the only girl to graduate in Chemistry in the Class of 1954.” During his time at McMaster University, Coleman began to focus on organic chemistry and had the fortune of working with “a very dynamic professor, Sam Kirkwood, who not only taught me the rudiments of biochemistry, but also instilled an appreciation of the scientific method.” Kirkwood encouraged Coleman to continue his biochemistry studies at the University of Wisconsin, where he received his PhD in 1958.

In those days, postdoctoral fellowships were rare, and graduates had two options: academia or industry. Coleman took a third option, as an associate staff scientist at what was then known as the Roscoe B. Jackson Memorial Laboratory in Bar Harbor, Maine. Coleman has noted, “My intention was to stay one or two years, expanding my skills in multiple fields, especially genetics and immunology. To my great pleasure, The Jackson Laboratory provided a rich environment, including world-class animal models of disease, interactive colleagues, and a backyard that included the stunning beauty of Acadia National Park.” The Coleman family put down roots, raising their three sons there as Coleman rose through the ranks to senior staff scientist and served terms as assistant director of research and interim director (Figure 2). He noted, “Without a doubt, I was lucky in my choice of starting my career at The Jackson Laboratory. It was a wonderful place in which to work, and I never pursued another position.”

Figure 2

Coleman at the bench at The Jackson Laboratory in 1960.

Making magic from a mutant

His early work involved muscular dystrophy and the development of a new field, mammalian biochemical genetics, establishing that genes control enzyme turnover as well as structure. However, his focus changed when a colleague asked for his help characterizing a mutant (Figure 3) that had spontaneously arisen at the laboratory. He recalled, “Initially, I had no intention of studying the diabetes/obesity syndrome, but in 1965, a spontaneous mouse mutation was discovered, and I began research that would consume much of my scientific thought for the better part of three decades.” The new mutant was polydipsic and polyuric as well as being massively obese and hyperphagic. His colleague, Katherine Hummel, was studying diabetes insipidus and asked if he could determine whether the new mutant had diabetes insipidus or mellitus. He reported back that it was diabetes mellitus: “Her initial response was that she was not interested, but I convinced her that with a little further work we could produce a solid manuscript announcing this potentially valuable mutant to the world.” This mouse owed its phenotype to two defective copies of a gene that researchers dubbed diabetes (db) (1).

Figure 3

Wild-type and obese mice.

When Coleman and his colleagues began characterizing the db/db mouse, they began to ponder whether some circulating factor might regulate the severity of diabetes: perhaps a factor in the normal mouse could inhibit the development of the obesity and diabetes found in the db/db mutant. Conversely, perhaps a circulating factor present in the db/db mouse might cause the diabetes-like syndrome in the normal mouse. If the hypothetical factor was carried through the blood, Coleman reasoned, they could test for its presence by linking the blood supplies of the various mouse strains — an experimental setup called parabiosis. Fortunately, others at The Jackson Laboratory were using parabiosis to assess whether any circulating factors were involved in anemic mutants, and they were able to show Coleman how to do it successfully.

When Coleman hooked the wild-type mice and the db/db mice together, rather than overeating, as the db/db mice did, the wild-type mice stopped eating and died from starvation (Figure 4 and ref. 2). His hypothesis was correct: the db/db mice indeed must have released a factor that inhibited the wild-type animals’ drive to eat, but the mutant animals could not respond to it.

Figure 4

Summary of parabiosis experiments performed by Coleman.

Coleman needed more proof of this mystery circulating factor regulating food intake. He turned to another overweight mouse that also had arisen by chance at The Jackson Laboratory, this one called “obese,” whose aberrant physiology arises from two defective copies of a different gene (ob) (3). Unfortunately, the ob/ob mouse was on a different genetic background, and due to immune-mediated rejection, parabiosis could only be performed successfully on mice with the same strain background. Coleman described his need for resolve, “Since the obese and diabetic mutants were on different genetic backgrounds, it took years for me to be able to perform all of the desired pairings.”

Coleman persevered and finally got the strains to match so he could successfully hook them together in a parabiosis experiment. When joined to a db/db mouse, the ob/ob mouse stopped eating and starved to death, just as the normal mice had in the previous experiment, while the db/db mouse remained obese. In contrast, attaching wild-type mice to ob/ob animals did nothing to the wild-type mice and caused the ob/ob mice to limit their food consumption and gain less weight (Figure 4). Coleman concluded that the ob/ob mice failed to produce a hormone that inhibits eating, while the db/db mice overproduced it but lacked the receptor to transmit the hormonal signal (4).
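The logic of these pairings can be restated as a toy model: each genotype either produces the satiety factor, can sense it, or both, and each outcome follows once the circulations are shared. The sketch below encodes Coleman’s conclusions as two boolean properties per genotype; the function names and outcome strings are illustrative inventions, not terminology from the original work.

```python
# Toy model of Coleman's parabiosis inference.
# Each genotype: does it produce the satiety hormone, and can it respond to it?
GENOTYPES = {
    "wild-type": {"produces": True,  "responds": True},   # normal hormone levels
    "ob/ob":     {"produces": False, "responds": True},   # no hormone, intact receptor
    "db/db":     {"produces": True,  "responds": False},  # excess hormone, no receptor
}

def pair_outcome(a, b):
    """Predict each partner's feeding behavior when circulations are joined.
    A responder flooded by a db/db partner's excess hormone stops eating."""
    def fate(self_name, other_name):
        self_g = GENOTYPES[self_name]
        if not self_g["responds"]:
            return "eats normally (cannot sense hormone)"
        if other_name == "db/db":   # partner supplies excess circulating hormone
            return "stops eating"
        if not self_g["produces"]:  # gains the partner's normal hormone levels
            return "eats less (gains partner's hormone)"
        return "eats normally"
    return fate(a, b), fate(b, a)

print(pair_outcome("wild-type", "db/db"))  # wild-type starves; db/db unaffected
print(pair_outcome("ob/ob", "db/db"))      # ob/ob starves; db/db unaffected
print(pair_outcome("wild-type", "ob/ob"))  # wild-type normal; ob/ob eats less
```

Each printed pair reproduces the corresponding experimental outcome described above, which is the point of the model: the full set of results is consistent with a single hormone-plus-receptor axis.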

Coleman faced some skepticism for his conclusion that obesity was not just about willpower and eating habits but also involved chemical and genetic factors. In this regard, he said, “When I published these findings, the long-standing dogma was that obesity was a behavioral problem (a lack of willpower) and not a physiological problem (a hormonal imbalance). I had to deal with this behavioral dogma most of my career.”

To validate his hypothesis, Coleman would need to identify the db and ob genes and protein products, a task that proved to be an insurmountable challenge at the time. He noted, “Definitive proof of my conclusions required isolating the satiety factor — a feat that resisted rigorous experimentation.” That is, until Jeffrey Friedman set his sights upon the task.

A fat chance

After his third year of internal medicine residency at Albany Medical Center Hospital, Friedman had no concrete plans for the following year, as he was not scheduled to begin a fellowship at the Brigham and Women’s Hospital in Boston until a year later. Friedman recalled, “I had no particular plans for the gap year, and John Balint, one of my professors, thought I might like research — why he thought I might have some particular aptitude, I can’t really tell. He said, ‘I have this friend at Rockefeller [Mary Jeanne Kreek], why don’t you go spend a year with her and see if you like research?’ I didn’t know what else I was going to do. My mother thought I should go spend the year as a ship’s doctor.”

Friedman was enraptured by what Kreek studied: how molecules control behavior. “That was 1981 and it was beginning to be evident that molecular biology was going to have a big impact, so instead of going to the Brigham for a fellowship, I abandoned medicine and decided to get a PhD with Jim Darnell [2002 Lasker award winner for his work in RNA processing and cytokine signaling], who was one of the leaders in molecular biology,” he noted. Friedman’s thesis was on the regulation of liver gene expression — how genes are turned on and off as liver regenerates. However, there was something he did on the side that was more impactful: Kreek had asked him to work with Bruce Schneider, another faculty member at Rockefeller University, to make an RIA for β-endorphin. However, Schneider’s primary interest was not in β-endorphin, but rather in cholecystokinin (CCK). In 1979, Rosalyn Yalow had published a paper in which she reported reduced levels of CCK in the brain of ob/ob mice and boldly claimed that CCK was the circulating factor that caused the ob/ob mice to be fat (5). Friedman recalled, “Well, Bruce had the exact opposite data, this was published in the JCI (6), and this started a battle with Yalow over who was correct. To address this, in 1982 Don Powell, Bruce, and I set out to clone the Cck gene so we could map it. We collaborated with Peter D’Eustachio at NYU, who showed that it was on chromosome 9 (7); ob is on 6, db is on 4. I still have Peter’s notebook entry from that time in which he wrote, ‘CCK does not map to chromosome 6, home of ob.’” So the question for Friedman became, if the circulating hormone is not CCK, then what is? When he started his own laboratory in 1986 at Rockefeller, he set out to find it, and as he recalls, “In a way what the ob mouse represented to me was another instance where a molecule was controlling a behavior, the same as in Mary Jeanne’s lab.”

Do these genes make me look fat?

In the mid ’80s, positional cloning was not easy, but Friedman turned to the then-new techniques of physical gene mapping, complemented by conventional genetic mapping in mice. It had long been known that the ob gene resided somewhere on mouse chromosome 6, but narrowing down the region was arduous, as the trait is recessive, necessitating the breeding of several generations. Friedman and his laboratory first determined which DNA markers were inherited along with the obese phenotype in over 1,600 mice crossbred from obese and nonobese strains. He remembers, “It was a mind-numbing exercise you hoped someday would lead somewhere.” Since the genetic and physical maps are colinear, DNA markers that were linked to ob in genetic crosses could be used to clone the surrounding DNA. Using this approach, they eventually identified the portion of the genome in which all markers were always coinherited with ob among the progeny of the crosses, defining the chromosomal region in which the ob gene resided. As they had predicted when the crosses were set up, this region corresponded to an approximately 300,000–base pair stretch of chromosome 6. They then screened recombinant clones across this region for exon-intron boundaries, which indicate the presence of genes. One of the first three genes they isolated was expressed exclusively in adipose tissue, and its expression was 20 times greater in one of the ob/ob mutants than in controls. In a second mutant, the gene was not expressed at all, providing clear evidence that this gene was ob. When they looked in the human genome, they found a homolog that was 84% identical to the mouse ob gene, establishing ob as a highly conserved, biologically important gene (8).
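The coinheritance screen behind this mapping can be sketched abstractly: for each marker, check whether its strain of origin agrees with the obese phenotype in every progeny mouse; markers with zero recombinants define the candidate interval. A minimal illustration follows, with hypothetical marker names and genotypes invented for the example (not Friedman’s actual data):

```python
# Sketch of the coinheritance test behind positional cloning.
# Each progeny mouse records its phenotype (True = obese) and, per marker,
# whether that marker allele came from the obese parental strain.

def candidate_markers(progeny):
    """Return markers perfectly coinherited with the obese phenotype,
    i.e. markers with zero recombinants among all progeny."""
    markers = progeny[0]["markers"].keys()
    linked = []
    for m in markers:
        if all(mouse["markers"][m] == mouse["obese"] for mouse in progeny):
            linked.append(m)
    return linked

# Hypothetical cross: D6M3 is separated from ob by a recombination
# event in the first mouse, so it falls outside the candidate interval.
progeny = [
    {"obese": True,  "markers": {"D6M1": True,  "D6M2": True,  "D6M3": False}},
    {"obese": False, "markers": {"D6M1": False, "D6M2": False, "D6M3": False}},
    {"obese": True,  "markers": {"D6M1": True,  "D6M2": True,  "D6M3": True}},
]

print(candidate_markers(progeny))  # ['D6M1', 'D6M2']
```

With three mice this is trivial; with over 1,600 progeny, each additional mouse is another chance for a recombination event to exclude a marker, which is how the interval shrinks to a cloneable size.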

Once a fat-specific gene was found in the vicinity of ob, he remembered being almost numb with excitement as a set of confirmatory experiments unfolded. “I went in late on a Saturday night, and I found a radioactive probe for this gene, and I found a blot with RNA from fat tissue of normal and mutant mice. I hybridized the blot that evening and washed it at 1 in the morning. I couldn’t sleep, and I woke up at 5 or 6 and developed the blot. When I looked at the data, I immediately knew that we had cloned ob. When I saw it, I was in the darkroom, and I pulled up the film and looked at it under the light and got weak-kneed. I sort of fell backwards against the wall. This gene was in the right region of the chromosome, it was fat specific, and its expression was altered in two independent strains of ob mice. Before this, we didn’t know where ob would be expressed — and while fat was one of the tissues I considered, in principle the gene could have been expressed in any specialized cell type anywhere that had no obvious relationship to fat. But on the other hand, seeing a gene in the right region expressed exclusively in the fat . . . that gets your attention.” When he found out at 6 in the morning, he called his wife and said, “We did it!” A few hours later, he called his former PhD advisor Jim Darnell: “I told him but I wasn’t sure he believed me.” That afternoon, he met some friends at Pete’s Tavern, “and we opened a bottle of champagne, and I told them, ‘I think this is going to be pretty big.’”

Next Friedman set his sights on actually identifying the product secreted by the ob gene and validating Coleman’s circulating hormone hypothesis. Together with Stephen Burley, his laboratory engineered E. coli to fabricate the secreted protein, generated antibodies that would bind it, and showed that humans and rodents secrete it. In the last sentence of the 1995 Science paper describing these findings, Friedman “propose[d] that this 16-KD protein be called leptin, derived from the Greek root leptos, meaning thin” (9). The paper also showed that db/db mice made excess quantities of leptin, as predicted by Coleman, and that its levels in plasma decreased in normal animals and obese humans after weight loss. He remembered, “It was an unbelievable time in the lab. The idea that there was this hormone that regulated body weight, and that we had found it, was just unimaginable. I’d wake up in the middle of the night just smiling.”

As for the name leptin, it has not only a Greek root, but a French one too. At a meeting, Friedman met Frenchman Roger Guillemin, who won a Nobel Prize for his work on peptide hormone production by the brain. A few weeks after the meeting, Friedman got a letter from him that he recalls saying, “I really liked what you had to say, but I have one quibble: you refer to these as obesity genes, but I think they are lean genes because the normal allele keeps you thin. But calling them lean genes sounds awkward. The nicest sounding root for thin is from Greek, so I propose you call ob and db ‘lepto-genes.’” So when it came time to name it, Friedman remembered Guillemin’s suggestion, and thus the name leptin was coined.

Leptin’s legacy

Later in 1995, another group described the leptin receptor (10), and then subsequently, Friedman and another group showed that this leptin receptor is encoded by the db gene and has multiple forms, one of which is defective in Coleman’s originally described db/db mice (11, 12). Friedman also showed that the leptin receptor is especially abundant in the hypothalamus in which leptin can activate signal transduction and phosphorylation of the Stat3 transcription factor (13).

Over the years, numerous laboratories have studied leptin’s mechanism of action. Leptin acts on receptors expressed in groups of neurons in the hypothalamus, where it inhibits appetite in part by counteracting the effects of neuropeptide Y, a potent feeding stimulant secreted by cells in the gut and in the hypothalamus; by thwarting the effects of anandamide, another potent feeding stimulant; and by promoting the synthesis of α-MSH (melanocyte-stimulating hormone), an appetite suppressant (14). Leptin is produced in large amounts by white adipose tissue but can also be produced in lesser amounts by brown adipose tissue, syncytiotrophoblasts, ovaries, skeletal muscle, stomach, mammary epithelial cells, bone marrow, pituitary, and liver. Leptin’s actions are also not limited to regulating food intake, as it has been shown to have roles in fertility, immunity, angiogenesis, and surfactant production. Friedman adds that the hormone “has effects on many physiological systems, including the immune system where it modulates T cells, macrophages, and platelets. It now appears that leptin provides a key means by which nutritional state can regulate a host of other physiological systems.” While most of these actions are mediated by effects on the CNS, a key open question is which of leptin’s effects on peripheral systems are direct and which are indirect via the brain.

A magic bullet?

The first proof that leptin was important in humans came in 1997 when Stephen O’Rahilly and colleagues found two morbidly obese children who carried a mutation in the leptin gene (15). These researchers went on to show that leptin-replacement therapy could be useful in individuals with leptin mutations (16). Injection of leptin into these children led to rapid weight loss and markedly reduced food intake (Figure 5). Leptin-replacement therapy also has potent effects in other clinical settings, including lipodystrophy, a disease state in which animals and humans have little white fat and develop severe diabetes, with profound insulin resistance and high plasma lipid levels. Because this syndrome is associated with low circulating levels of leptin, Shimomura and colleagues tested the effects of leptin-replacement therapy in mice and showed that it was highly effective (17); similar efficacy was later shown in humans (18). More recently, leptin treatment has shown a profound anti-diabetic effect in type 1 diabetic animals (19). Leptin replacement has also been shown to be of clinical benefit in other states of leptin deficiency, including hypothalamic amenorrhea (20).

Figure 5

Effects of r-metHuLeptin on the weight of a child with congenital leptin deficiency.

Excited by leptin’s potential for the treatment of obesity, the biotech company Amgen paid $20 million to Rockefeller to license the hormone. With so much of the world’s population overweight or obese, a treatment or cure would be a major advance in public health and would likely be very lucrative. Amgen sponsored a large clinical trial, giving leptin to overweight adults, but while a subset of obese patients lost significant amounts of weight on leptin, the average magnitude of the effect was minimal, dampening hopes that leptin was the magic bullet in the obesity fight (21). After the trial, Amgen announced that they had suspended studies of the effects of leptin for the treatment of human obesity.

Friedman says he understands why the trials failed: “Even before leptin was tested in obese patients, we knew from animal studies that this hormone was not likely to be a panacea for every obese patient and that the response seen in ob/ob mice wasn’t going to be the typical case for obese humans. Leptin levels are elevated in obese humans, suggesting that obesity is often associated with leptin resistance and raising the possibility that increasing already high levels was going to be of arguable benefit.” The key to making leptin work may be in coaxing the brain to respond to leptin: some people are simply not sensitive enough or they develop resistance. Friedman predicts that through personalized medicine, doctors may at some point be able to identify which obese people will respond to leptin. In the meantime, there is some clinical evidence that leptin’s ability to reduce weight among obese patients can be restored by combining it with other agents (22).

The thrill of discovery

For all the social implications, potential profits, and medical possibilities, Friedman is circumspect but proud about the discovery of leptin, saying, “whether it finds its way into general usage as an antiobesity drug, the use of modern methods to identify and target the components of the leptin-signaling pathway will, I believe, form the basis for new pharmacological approaches to the treatment of obesity and other nutritional disorders.” Coleman agrees, stating that “with the discovery of leptin and the subsequent cloning of the leptin receptor, the field exploded. With these findings, two long-standing misconceptions were definitively laid to rest: obesity was not merely a behavioral problem but rather had a significant physiological component; and adipose tissue was not merely a fat-storage site but rather an important endocrine organ.”

Both Coleman and Friedman (Figure 6) were overwhelmed and humbled by the news that they would receive the 2010 Lasker Award for Basic Medical Research. Coleman notes, “I have always viewed this award as one of the most esteemed of the several truly prestigious biomedical research awards, and it is with great pride and humility that I accept this prestigious prize. I was also especially delighted to learn that I would be sharing this award with Jeffrey Friedman, who always acknowledged my earlier contributions to our field.” Friedman added, “It is an honor to join a group of other winners who really are at the highest level of science. To be placed among them is just hard to fathom.”

Figure 6

Coleman and Friedman, together at The Jackson Laboratory, in 1995.

Coleman retired from his scientific career in 1991. He has said that at his retirement ceremony “someone commented that my career was characterized by the ability to use the simplest technique to answer the most complex biological questions.” Friedman, however, is still at the bench and active as ever in his hunt to determine exactly how leptin regulates food intake. Through their determination and persistence, the two have provided a molecular framework for understanding obesity, but they have different opinions about how much luck played into their findings. Coleman has noted that he favors the Louis Pasteur quote, “Luck favors the prepared mind.” But Friedman has a different perspective, stating “my story suggests that in many cases, the prepared mind is favored by chance.”

Acknowledgments

As Coleman was away and unavailable for comment during the preparation of this article, his quotations were taken from an autobiography he wrote when accepting the Shaw prize in 2009, from his acceptance remarks for the Lasker prize, and from a profile written by Luther Young posted on the Bangor Daily News in 2009 ( http://www.bangordailynews.com/story/Hancock/Scientists-work-at-Jackson-Lab-lauded,118612?print=1).

References
1. Hummel KP, Dickie MM, Coleman DL. Diabetes, a new mutation in the mouse. Science. 1966;153(3740):1127–1128. doi:10.1126/science.153.3740.1127.
2. Coleman DL, Hummel KP. Effects of parabiosis of normal with genetically diabetic mice. Am J Physiol. 1969;217(5):1298–1304.
3. Ingalls AM, Dickie MM, Snell GD. Obese, a new mutation in the house mouse. J Hered. 1950;41(12):317–318.
4. Coleman DL. Effects of parabiosis of obese with diabetes and normal mice. Diabetologia. 1973;9(4):294–298. doi:10.1007/BF01221857.
5. Straus E, Yalow RS. Gastrointestinal peptides in the brain. Fed Proc. 1979;38(9):2320–2324.
6. Schneider BS, Monahan JW, Hirsch J. Brain cholecystokinin and nutritional status in rats and mice. J Clin Invest. 1979;64(5):1348–1356. doi:10.1172/JCI109591.
7. Friedman JM, Schneider BS, Barton DE, Francke U. Level of expression and chromosome mapping of the mouse cholecystokinin gene: implications for murine models of genetic obesity. Genomics. 1989;5(3):463–469. doi:10.1016/0888-7543(89)90010-4.
8. Zhang Y, Proenca R, Maffei M, Barone M, Leopold L, Friedman JM. Positional cloning of the mouse obese gene and its human homologue. Nature. 1994;372(6505):425–432. doi:10.1038/372425a0.
9. Halaas JL, et al. Weight-reducing effects of the plasma protein encoded by the obese gene. Science. 1995;269(5223):543–546. doi:10.1126/science.7624777.
10. Tartaglia LA, et al. Identification and expression cloning of a leptin receptor, OB-R. Cell. 1995;83(7):1263–1271. doi:10.1016/0092-8674(95)90151-5.
11. Chen H, et al. Evidence that the diabetes gene encodes the leptin receptor: identification of a mutation in the leptin receptor gene in db/db mice. Cell. 1996;84(3):491–495. doi:10.1016/S0092-8674(00)81294-5.
12. Lee GH, et al. Abnormal splicing of the leptin receptor in diabetic mice. Nature. 1996;379(6566):632–635. doi:10.1038/379632a0.
13. Vaisse C, Halaas JL, Horvath CM, Darnell JE Jr, Stoffel M, Friedman JM. Leptin activation of Stat3 in the hypothalamus of wild-type and ob/ob mice but not db/db mice. Nat Genet. 1996;14(1):95–97.
14. Friedman JM, Halaas JL. Leptin and the regulation of body weight in mammals. Nature. 1998;395(6704):763–770. doi:10.1038/27376.
15. Montague CT, et al. Congenital leptin deficiency is associated with severe early-onset obesity in humans. Nature. 1997;387(6636):903–908. doi:10.1038/43185.
16. Farooqi IS, et al. Beneficial effects of leptin on obesity, T cell hyporesponsiveness, and neuroendocrine/metabolic dysfunction of human congenital leptin deficiency. J Clin Invest. 2002;110(8):1093–1103.
17. Shimomura I, Hammer RE, Ikemoto S, Brown MS, Goldstein JL. Leptin reverses insulin resistance and diabetes mellitus in mice with congenital lipodystrophy. Nature. 1999;401(6748):73–76. doi:10.1038/43448.
18. Oral EA, et al. Leptin-replacement therapy for lipodystrophy. N Engl J Med. 2002;346(8):570–578. doi:10.1056/NEJMoa012437.
19. Wang MY, et al. Leptin therapy in insulin-deficient type I diabetes. Proc Natl Acad Sci U S A. 2010;107(11):4813–4819. doi:10.1073/pnas.0909422107.
20. Welt CK, et al. Recombinant human leptin in women with hypothalamic amenorrhea. N Engl J Med. 2004;351(10):987–997. doi:10.1056/NEJMoa040388.
21. Heymsfield SB, et al. Recombinant leptin for weight loss in obese and lean adults: a randomized, controlled, dose-escalation trial. JAMA. 1999;282(16):1568–1575. doi:10.1001/jama.282.16.1568.
22. Roth JD, et al. Leptin responsiveness restored by amylin agonism in diet-induced obesity: evidence from nonclinical and clinical studies. Proc Natl Acad Sci U S A. 2008;105(20):7257–7262. doi:10.1073/pnas.0706473105.
Autobiography of Jeffrey M Friedman

My laboratory identified leptin, a hormone that is produced by fat tissue. Leptin acts on the brain to modulate food intake and functions as an afferent signal in a feedback loop that regulates weight. My route to this hormone is filled with a number of chance events and turns of fate that were in no way predictable at the time that I started my career.

I grew up in the suburbs of New York City in a village where children had enormous freedom. I recall from an early age riding my bicycle everywhere without my parents, or anyone else for that matter, knowing my whereabouts. My father was a radiologist and my mother was a teacher. No one in my family or community had pursued an academic career, and at the time I was completely unaware of the possibility that one could make a career in science. In my family, the highest level of achievement was to become a doctor and, despite my earliest dreams of a career as a professional athlete (made unlikely by a notable lack of talent) and a later wish to become a veterinarian, I became a doctor.

I was originally trained in internal medicine with some subspecialty training in gastroenterology. In medical school and as a medical resident, I participated in some modest research studies. The first piece of work I completed related to the effects of dietary salt on the regulation of blood pressure. After completing this project, I excitedly submitted a paper for publication. I remember one of the reviews verbatim: “This paper should not be published in the Journal of Clinical Investigation or anywhere else.” Fortunately, one of my mentors in medical school still thought I might have some aptitude for research. He suggested that I go to The Rockefeller University to work in a basic science research laboratory. I joined the laboratory of Dr Mary-Jeanne Kreek to study the effects of endorphins in the development of narcotic addiction. I was fascinated by the idea that endogenous molecules could alter behaviour and emotional state.
At The Rockefeller University, I met another scientist, Bruce Schneider. Bruce was studying cholecystokinin (CCK), a peptide hormone that is secreted by intestinal cells. CCK aids digestion by stimulating the secretion of enzymes from the pancreas and bile from the gallbladder. CCK had also been found in neurons of the brain, although its function there was less clear. In the late 1970s, it was shown that injections of CCK reduce food intake. This finding appealed to me as another example of how a single molecule can change behavior. One other fact also piqued my interest: There were indications that the levels of CCK were decreased in a genetically obese ob/ob mouse. These mutant mice are massively obese as a consequence of a defect in a single gene. The mice eat excessively and weigh 3 to 5 times as much as normal mice. It was thus hypothesized that CCK functions as an endogenous appetite suppressant and that a deficiency of CCK caused the obesity evident in ob/ob mice. Fascinated by this possibility, I set out to establish the possible role of CCK in the pathogenesis of obesity in these animals. To do this I was going to need additional training in basic research, so I abandoned my plans to continue medical training in gastroenterology and instead entered the PhD program at The Rockefeller University.

As a PhD student I worked in the laboratory of Jim Darnell, studying the regulation of gene expression in liver and learning the basic tools of molecular biology. But I carried my interest in the ob gene with me. At the end of my graduate studies, two colleagues and I successfully isolated the CCK gene from mouse. One of the first studies we performed after isolating the gene was to determine its chromosomal position. We found that the CCK gene was not on chromosome 6, where the ob mutation had been localized, which thus excluded defective CCK as the cause of the obesity. The question thus remained: What is the nature of the defective gene in ob/ob mice?

After receiving my PhD in 1986, I became an assistant professor at The Rockefeller University and set out to answer this question. The culmination of what proved to be an 8-year odyssey was the identification of the ob gene in 1994. We now know that the ob gene encodes the hormone leptin. The discovery of this hormone, a singular event in my life, was absolutely exhilarating. The realization that nature had happened upon such a simple and elegant solution for regulating weight was the closest thing I have ever had to a religious experience. Subsequent studies revealed that injections of leptin dramatically decrease the food intake of mice and other mammals. My current studies now focus on several questions, including the one that originally aroused my interest in this mutation: How is it that a single molecule – leptin – profoundly influences feeding behavior? An esteemed colleague of mine remarked recently that I had searched for the ob gene primarily so that I could approach the question I had started with. It is as yet unclear whether I will succeed in understanding how a single molecule can influence a complex behaviour.

  1. Coleman DL (1978). “Obese and diabetes: two mutant genes causing diabetes-obesity syndromes in mice”. Diabetologia 14: 141–148. doi:10.1007/bf00429772.
  2. Green ED, Maffei M, Braden VV, Proenca R, DeSilva U, Zhang Y, Chua SC Jr, Leibel RL, Weissenbach J, Friedman JM (August 1995). “The human obese (OB) gene: RNA expression pattern and mapping on the physical, cytogenetic, and genetic maps of chromosome 7”. Genome Research 5 (1): 5–12. doi:10.1101/gr.5.1.5. PMID 8717050.
  3. Shell E (January 1, 2002). “Chapter 4: On the Cutting Edge”. The Hungry Gene: The Inside Story of the Obesity Industry. Atlantic Monthly Press. ISBN 978-1-4223-5243-4.
  4. Shell E (January 1, 2002). “Chapter 5: Hunger”. The Hungry Gene: The Inside Story of the Obesity Industry. Atlantic Monthly Press. ISBN 978-1-4223-5243-4.
  5. Zhang Y, Proenca R, Maffei M, Barone M, Leopold L, Friedman JM (December 1994). “Positional cloning of the mouse obese gene and its human homologue”. Nature 372 (6505): 425–432. doi:10.1038/372425a0. PMID 7984236.
  6. Rosenbaum M (1998). “Leptin”. The Scientist Magazine.
  7. Okie S (February 11, 2005). “Chapter 2: Obese Twins and Thrifty Genes”. Fed Up!: Winning the War Against Childhood Obesity. Joseph Henry Press, an imprint of the National Academies Press. ISBN 978-0-309-09310-1.
  8. Friedman J (2014). “Douglas Coleman (1931–2014): biochemist who revealed biology behind obesity”. Nature 509 (7502): 564. doi:10.1038/509564a. PMID 24870535.
  9. Shaw Prize 2009.
  10. King Faisal Prize 2013 for Medicine.

A Metabolic Master Switch Underlying Human Obesity

Researchers find pathway that controls metabolism by prompting fat cells to store or burn fat

Aug 21, 2015  http://www.technologynetworks.com/Metabolomics/news.aspx?ID=182195

Obesity is one of the biggest public health challenges of the 21st century. Affecting more than 500 million people worldwide, obesity costs at least $200 billion each year in the United States alone, and contributes to potentially fatal disorders such as cardiovascular disease, type 2 diabetes, and cancer.

But there may now be a new approach to prevent and even cure obesity, thanks to a study led by researchers at MIT and Harvard Medical School. By analyzing the cellular circuitry underlying the strongest genetic association with obesity, the researchers have unveiled a new pathway that controls human metabolism by prompting our adipocytes, or fat cells, to store fat or burn it away.

“Obesity has traditionally been seen as the result of an imbalance between the amount of food we eat and how much we exercise, but this view ignores the contribution of genetics to each individual’s metabolism,” says senior author Manolis Kellis, a professor of computer science and a member of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and of the Broad Institute.

New mechanism found

The strongest association with obesity resides in a gene region known as “FTO,” which has been the focus of intense scrutiny since its discovery in 2007. However, previous studies have failed to find a mechanism to explain how genetic differences in the region lead to obesity.

“Many studies attempted to link the FTO region with brain circuits that control appetite or propensity to exercise,” says first author Melina Claussnitzer, a visiting professor at CSAIL and instructor in medicine at Beth Israel Deaconess Medical Center and Harvard Medical School. “Our results indicate that the obesity-associated region acts primarily in adipocyte progenitor cells in a brain-independent way.”

To recognize the cell types where the obesity-associated region may act, the researchers used annotations of genomic control switches across more than 100 tissues and cell types. They found evidence of a major control switchboard in human adipocyte progenitor cells, suggesting that genetic differences may affect the functioning of human fat stores.

To study the effects of genetic differences in adipocytes, the researchers gathered adipose samples from healthy Europeans carrying either the risk or the non-risk version of the region. They found that the risk version activated a major control region in adipocyte progenitor cells, which turned on two distant genes, IRX3 and IRX5.

Control of thermogenesis

Follow-up experiments showed that IRX3 and IRX5 act as master controllers of a process known as thermogenesis, whereby adipocytes dissipate energy as heat, instead of storing it as fat. Thermogenesis can be triggered by exercise, diet, or exposure to cold, and occurs both in mitochondria-rich brown adipocytes that are developmentally related to muscle, and in beige adipocytes that are instead related to energy-storing white adipocytes.

“Early studies of thermogenesis focused primarily on brown fat, which plays a major role in mice, but is virtually nonexistent in human adults,” Claussnitzer says. “This new pathway controls thermogenesis in the more abundant white fat stores instead, and its genetic association with obesity indicates it affects global energy balance in humans.”

The researchers predicted that a genetic difference of only one nucleotide is responsible for the obesity association. In risk individuals, a thymine (T) is replaced by a cytosine (C) nucleobase, which disrupts repression of the control region and turns on IRX3 and IRX5. This then turns off thermogenesis, leading to lipid accumulation and ultimately obesity.

By editing a single nucleotide position using the CRISPR/Cas9 system — a technology that allows researchers to make precise changes to a DNA sequence — the researchers could switch between lean and obese signatures in human pre-adipocytes. Switching the C to a T in risk individuals turned off IRX3 and IRX5, restored thermogenesis to non-risk levels, and switched off lipid storage genes.
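The single-base substitution described above can be pictured with a short bookkeeping sketch. This is purely illustrative: the sequence and position below are made up, not the actual FTO-region locus.

```python
# Toy model of a single-nucleotide substitution (hypothetical sequence).
def substitute(seq: str, pos: int, base: str) -> str:
    """Return seq with the nucleotide at 0-based position pos replaced by base."""
    if base not in "ACGT" or not 0 <= pos < len(seq):
        raise ValueError("invalid edit")
    return seq[:pos] + base + seq[pos + 1:]

risk = "GATTC"                         # ends in C: stands in for the risk allele
protective = substitute(risk, 4, "T")  # C -> T, the protective (non-risk) base
assert protective == "GATTT"
```

In the study itself the edit was made in living pre-adipocytes with CRISPR/Cas9; the string manipulation here only mirrors the accounting of a T-to-C (or C-to-T) swap.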

“Knowing the causal variant underlying the obesity association may allow somatic genome editing as a therapeutic avenue for individuals carrying the risk allele,” Kellis says. “But more importantly, the uncovered cellular circuits may allow us to dial a metabolic master switch for both risk and non-risk individuals, as a means to counter environmental, lifestyle, or genetic contributors to obesity.”

Success in human and mouse cells

The researchers showed that they could indeed manipulate this new pathway to reverse the signatures of obesity in both human cells and mice.

In primary adipose cells from either risk or non-risk individuals, altering the expression of either IRX3 or IRX5 switched between energy-storing white adipocyte functions and energy-burning beige adipocyte functions.

Similarly, repression of IRX3 in mouse adipocytes led to dramatic changes in whole-body energy balance, resulting in a reduction of body weight and all major fat stores, and complete resistance to a high-fat diet.

“By manipulating this new pathway, we could switch between energy storage and energy dissipation programs at both the cellular and the organismal level, providing new hope for a cure against obesity,” Kellis says.

The researchers are currently establishing collaborations in academia and industry to translate their findings into obesity therapeutics. They are also using their approach as a model to understand the circuitry of other disease-associated regions in the human genome.

Flipping a Genetic Switch on Obesity?

When weight loss is the goal, the equation seems simple enough: consume fewer calories and burn more of them exercising. But for some people, losing and keeping off the weight is much more difficult for reasons that can include a genetic component. While there are rare genetic causes of extreme obesity, the strongest common genetic contributor discovered so far is a variant found in an intron of the FTO gene. Variations in this untranslated region of the gene have been tied to differences in body mass and a risk of obesity [1]. For the one in six people of European descent born with two copies of the risk variant, the consequence is carrying around an average of an extra 7 pounds [2].

Now, NIH-funded researchers reporting in The New England Journal of Medicine [3] have figured out how this gene influences body weight. The answer is not, as many had suspected, in regions of the brain that control appetite, but in the progenitor cells that produce white and beige fat. The researchers found that the risk variant is part of a larger genetic circuit that determines whether our bodies burn or store fat. This discovery may yield new approaches to intervene in obesity with treatments designed to change the way fat cells handle calories.

The team—led by Melina Claussnitzer of Beth Israel Deaconess Medical Center, Boston, and Manolis Kellis of the Massachusetts Institute of Technology (MIT), Cambridge—started with a basic question: where in the body does this variant act to influence weight? For the answer, the team turned to the NIH-funded Roadmap Epigenomics Project. There, they found comprehensive data on 127 human cell types and the occurrence of common chemical modifications that act like volume knobs to turn gene activity “up” or “down” based on changes in the way DNA is packaged. While the FTO gene is active in the human brain, the team couldn’t connect any differences there with obesity.

They began to wonder whether this obesity-risk variant affected FTO at all (and prior studies had suggested this [4]). Maybe it operated at a distance to change the expression of other protein-coding genes? Sure enough, further study in fat collected from patients showed that the obesity risk variant works in those progenitor cells to control the activity of two other genes, IRX3 and IRX5, both found quite a distance away.

The fat in people with the obesity risk variant and greater expression of the IRX3 and IRX5 genes contains fewer beige cells than normal. Beige cells, which were discovered just three years ago [5], are sometimes produced by fat cell progenitors to burn rather than stockpile energy. This new evidence suggests that beige fat may play an unexpectedly important role in protecting against obesity.

Using a method they developed last year [6], the researchers traced the effects of the obesity risk variant to a single nucleotide change—a small typo in the DNA sequence that changes a “T” to a “C.” They then used the nifty CRISPR-Cas genome editing system (see Copy-Editing the Genome) to switch between this obesity risk variant and the protective variant in human cells. As the researchers did this, they saw fat cells turn energy-burning heat production off and back on again. In other words, the obesity signature in the cells could be turned on and off at the flip of this genetic switch!

They also showed in mice that the shift toward energy-burning beige cells led to weight loss. Animals engineered in a way that blocked Irx3 expression in adipose tissue became significantly thinner with no change in their eating or exercise habits. This new collection of evidence suggests that treatments designed to program fat cells to burn more energy (such as antagonists against the IRX3 or IRX5 proteins) might have similar benefits in people, and the researchers are working with collaborators in academia and industry to pursue this line of investigation.

This is a great example of how discoveries about genetic factors in common disease, uncovered by applying the genome-wide association study (GWAS) approach to large numbers of affected and unaffected individuals, are revealing critical and previously unknown pathways in human biology and medicine. This case also points out how our terminology may need attention, however: for the last several years, this genetic variant for obesity has been called “the FTO variant”; perhaps it should now be called “the IRX3/5 variant.”

Genes, of course, are only part of the story. It’s still important to eat healthy, limit your portions, and maintain a regular exercise program. Leading an active lifestyle both keeps weight down and improves the overall sense of well being.

References:

[1] FTO genotype is associated with phenotypic variability of body mass index. Yang J, Loos RJ, Powell JE, Frayling TM, Hirschhorn JN, Goddard ME, Visscher PM, et al. Nature. 2012 Oct 11;490(7419):267-72.

[2] A common variant in the FTO gene is associated with body mass index and predisposes to childhood and adult obesity. Frayling TM, Timpson NJ, Weedon MN, Morris AD, Smith GD, Hattersley AT, McCarthy MI, et al. Science. 2007 May 11;316(5826):889-94.

[3] FTO Obesity Variant Circuitry and Adipocyte Browning in Humans. Claussnitzer M, Dankel SN, Kim KH, Quon G, Meuleman W, Haugen C, Glunk V, Sousa IS, Beaudry JL, Puviindran V, Abdennur NA, Liu J, Svensson PA, Hsu YH, Drucker DJ, Mellgren G, Hui CC, Hauner H, Kellis M. N Engl J Med. 2015 Aug 19. [Epub ahead of print]

[4] Obesity-associated variants within FTO form long-range functional connections with IRX3. Smemo S, Tena JJ, Kim KH, Hui CC, Gomez-Skarmeta JL, Nobrega MA, et al. Nature 2014 Mar 20; 507(7492):371-375.

[5] Beige adipocytes are a distinct type of thermogenic fat cell in mouse and human. Wu J, Boström P, Sparks LM, Schrauwen P, Spiegelman BM. Cell. 2012 Jul 20;150(2):366-376.

[6] Leveraging cross-species transcription factor binding site patterns: from diabetes risk loci to disease mechanisms. Claussnitzer M, Dankel SN, Klocke Mellgren G, Hauner H, Laumen H, et al. Cell. 2014 Jan 16;156(1-2):343-58.

Links:

Manolis Kellis (Massachusetts Institute of Technology, Cambridge)

What are overweight and obesity? (National Heart, Lung, and Blood Institute/NIH)

NIH Roadmap Epigenomics Project

NIH Support: National Human Genome Research Institute; National Institute of General Medical Sciences

MiR-93 Controls Adiposity via Inhibition of Sirt7 and Tbx3

CELL REPORTS · AUGUST 2015
Impact Factor: 8.36 · DOI: 10.1016/j.celrep.2015.08.006 

https://www.researchgate.net/publication/281394525_MiR-93_Controls_Adiposity_via_Inhibition_of_Sirt7_and_Tbx3

Conquering obesity has become a major socioeconomic challenge. Here, we show that reduced expression of the miR-25-93-106b cluster, or miR-93 alone, increases fat mass and, subsequently, insulin resistance. Mechanistically, we discovered an intricate interplay between enhanced adipocyte precursor turnover and increased adipogenesis. First, miR-93 controls Tbx3, thereby limiting self-renewal in early adipocyte precursors. Second, miR-93 inhibits the metabolic target Sirt7, which we identified as a major driver of in vivo adipogenesis via induction of differentiation and maturation of early adipocyte precursors. Using mouse parabiosis, obesity in mir-25-93-106b(-/-) mice could be rescued by restoring levels of circulating miRNA and subsequent inhibition of Tbx3 and Sirt7. Downregulation of miR-93 also occurred in obese ob/ob mice, and this phenocopy of mir-25-93-106b(-/-) was partially reversible with injection of miR-93 mimics. Our data establish miR-93 as a negative regulator of adipogenesis and a potential therapeutic option for obesity and the metabolic syndrome.


George A. Miller, a Pioneer in Cognitive Psychology, Is Dead at 92

Larry H. Bernstein, MD, FCAP, Curator

Leaders in Pharmaceutical Intelligence

Series E. 2; 5.10

5.10 George A. Miller, a Pioneer in Cognitive Psychology, Is Dead at 92

By PAUL VITELLO, AUG. 1, 2012

http://www.nytimes.com/2012/08/02/us/george-a-miller-cognitive-psychology-pioneer-dies-at-92.html?_r=0

Miller started his education focusing on speech and language and published papers on these topics, covering mathematical, computational, and psychological aspects of the field. He started his career at a time when the reigning theory in psychology was behaviorism, which eschewed any attempt to study mental processes and focused only on observable behavior. Working mostly at Harvard University, MIT, and Princeton University, Miller introduced experimental techniques to study the psychology of mental processes, by linking the new field of cognitive psychology to the broader area of cognitive science, including computation theory and linguistics. He collaborated and co-authored work with other figures in cognitive science and psycholinguistics, such as Noam Chomsky. For moving psychology into the realm of mental processes and for aligning that move with information theory, computation theory, and linguistics, Miller is considered one of the great twentieth-century psychologists. A Review of General Psychology survey, published in 2002, ranked Miller as the 20th most cited psychologist of that era.[2]

Remembering George A. Miller

The human mind works a lot like a computer: It collects, saves, modifies, and retrieves information. George A. Miller, one of the founders of cognitive psychology, was a pioneer who recognized that the human mind can be understood using an information-processing model. His insights helped move psychological research beyond behaviorist methods that dominated the field through the 1950s. In 1991, he was awarded the National Medal of Science for his significant contributions to our understanding of the human mind.

http://www.psychologicalscience.org/index.php/publications/observer/2012/october-12/remembering-george-a-miller.html

Working memory

From the days of William James, psychologists had held the idea that memory consisted of short-term and long-term memory. While short-term memory was expected to be limited, its exact limits were not known. In 1956, Miller quantified its capacity limit in the paper “The magical number seven, plus or minus two”. He tested immediate memory via tasks such as asking a person to repeat a set of digits just presented; absolute judgment, by presenting a stimulus and a label and asking them to recall the label later; and span of attention, by asking them to count things in a group of more than a few items quickly. In all three cases, Miller found the average limit to be seven items. He had mixed feelings about the focus on the exact number seven for quantifying short-term memory, and felt it had often been misquoted. He stated, introducing the paper on the research for the first time, that he was being persecuted by an integer.[1] Miller also found humans remembered chunks of information, interrelating bits using some scheme, and that the limit applied to chunks. Miller himself saw no relationship among the disparate tasks of immediate memory and absolute judgment, but lumped them together to fill a one-hour presentation. The results influenced the budding field of cognitive psychology.[15]
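Miller's central point, that the 7 ± 2 limit applies to chunks rather than raw items, can be illustrated with a toy recoding sketch (not from Miller's paper; the digit string, the squares 1 through 81, is an arbitrary example):

```python
def chunk(items, size):
    """Group a flat sequence into chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

digits = "149162536496481"        # 15 raw digits: well beyond a 7-item span
chunks = chunk(digits, 3)         # recoded as five 3-digit chunks
assert len(digits) > 7 + 2        # the raw list exceeds the span
assert len(chunks) <= 7           # the chunked list fits within it
print(chunks)                     # ['149', '162', '536', '496', '481']
```

The recoding scheme carries the real work: a reader who notices the chunks are 1², 4²=16... can remember one rule instead of fifteen digits.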

WordNet

For many years starting from 1986, Miller directed the development of WordNet, a large computer-readable electronic reference usable in applications such as search engines.[12] WordNet is a dictionary of words showing their linkages by meaning. Its fundamental building block is a synset, which is a collection of synonyms representing a concept or idea. Words can be in multiple synsets. The entire class of synsets is grouped into nouns, verbs, adjectives and adverbs separately, with links existing only within these four major groups but not between them. Going beyond a thesaurus, WordNet also included inter-word relationships such as part/whole relationships and hierarchies of inclusion.[16] Miller and colleagues had planned the tool to test psycholinguistic theories on how humans use and understand words.[17] Miller also later worked closely with the developers at Simpli.com Inc., on a meaning-based keyword search engine based on WordNet.[18]
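The data model described above (synsets, multi-synset membership, and typed links such as hypernymy and part/whole) can be mocked up in a few lines. This is a toy structure with invented entries, not the real WordNet database or its API:

```python
class Synset:
    """A set of synonyms expressing one concept: WordNet's building block."""
    def __init__(self, pos, lemmas, gloss):
        self.pos = pos           # one of the four groups: 'n', 'v', 'a', 'r'
        self.lemmas = lemmas     # words that can express this concept
        self.gloss = gloss
        self.hypernyms = []      # "is-a" links, kept within the same POS group
        self.part_meronyms = []  # part/whole links

canid = Synset('n', ['canid', 'canine'], 'a mammal of the dog family')
dog = Synset('n', ['dog', 'domestic dog'], 'a domesticated canid')
paw = Synset('n', ['paw'], 'a clawed foot of an animal')
dog.hypernyms.append(canid)        # a dog is a kind of canid
dog.part_meronyms.append(paw)      # a paw is a part of a dog

# A single word may belong to several synsets; an index maps each lemma
# to every synset that contains it.
index = {}
for s in (canid, dog, paw):
    for lemma in s.lemmas:
        index.setdefault(lemma, []).append(s)

assert canid in index['dog'][0].hypernyms
```

The real WordNet is typically queried through libraries such as NLTK's wordnet corpus reader, which exposes the same notions (synsets, hypernyms, meronyms) at full scale.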

Language psychology and computation

Miller is considered one of the founders of psycholinguistics, which links language and cognition in psychology, to analyze how people use and create language.[1] His 1951 book Language and Communication is considered seminal in the field.[5] His later book, The Science of Words (1991) also focused on language psychology.[19] He published papers along with Noam Chomsky on the mathematics and computational aspects of language and its syntax, two new areas of study.[20][21][22] Miller also researched how people understood words and sentences, the same problem faced by artificial speech-recognition technology. The book Plans and the Structure of Behavior (1960), written with Eugene Galanter and Karl H. Pribram, explored how humans plan and act, trying to extrapolate this to how a robot could be programmed to plan and do things.[1] Miller is also known for coining Miller’s Law: “In order to understand what another person is saying, you must assume it is true and try to imagine what it could be true of”.[23]

Language and Communication, 1951

Miller’s Language and Communication was one of the first significant texts in the study of language behavior. The book was a scientific study of language, emphasizing quantitative data, and was based on the mathematical model of Claude Shannon’s information theory.[24] It used a probabilistic model imposed on a learning-by-association scheme borrowed from behaviorism, with Miller not yet attached to a pure cognitive perspective.[25] The first part of the book reviewed information theory, the physiology and acoustics of phonetics, speech recognition and comprehension, and statistical techniques to analyze language.[24] The focus was more on speech generation than recognition.[25] The second part covered the psychology: idiosyncratic differences across people in language use; developmental linguistics; the structure of word associations in people; use of symbolism in language; and social aspects of language use.[24]

Reviewing the book, Charles E. Osgood classified the book as a graduate-level text based more on objective facts than on theoretical constructs. He thought the book was verbose on some topics and too brief on others not directly related to the author’s expertise area. He was also critical of Miller’s use of simple, Skinnerian single-stage stimulus-response learning to explain human language acquisition and use. This approach, per Osgood, made it impossible to analyze the concept of meaning, and the idea of language consisting of representational signs. He did find the book objective in its emphasis on facts over theory, and depicting clearly application of information theory to psychology.[24]

Plans and the Structure of Behavior, 1960

In Plans and the Structure of Behavior, Miller and his co-authors tried to explain, from an artificial-intelligence computational perspective, how animals plan and act.[26] This was a radical break from behaviorism, which explained behavior as a set or sequence of stimulus-response actions. The authors introduced a planning element controlling such actions.[27] They saw all plans as being executed based on input, using stored or inherited information about the environment (called the image), and using a strategy called test-operate-test-exit (TOTE). The image was essentially a stored memory of all past context, akin to Tolman’s cognitive map. The TOTE strategy, in its initial test phase, compared the input against the image; if there was incongruity, the operate function attempted to reduce it. This cycle would be repeated until the incongruity vanished, and then the exit function would be invoked, passing control to another TOTE unit in a hierarchically arranged scheme.[26]
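The TOTE cycle described above maps directly onto a control loop. A minimal sketch follows, using the book's well-known nail-hammering illustration; the specific numbers and the `max_cycles` safety valve are invented for the sketch:

```python
def tote(test, operate, max_cycles=100):
    """Run one Test-Operate-Test-Exit unit: operate until the test passes."""
    cycles = 0
    while not test():             # Test: compare input against the image
        operate()                 # Operate: act to reduce the incongruity
        cycles += 1
        if cycles >= max_cycles:  # safety valve, not part of the model
            break
    return cycles                 # Exit: congruity reached; control passes on

# Illustration: hammer a nail until it is flush with the surface.
nail = {"height_mm": 10}

def nail_is_flush():
    return nail["height_mm"] <= 0

def strike():
    nail["height_mm"] -= 3        # each blow drives the nail in 3 mm

strikes = tote(nail_is_flush, strike)
assert nail_is_flush() and strikes == 4   # 10 -> 7 -> 4 -> 1 -> -2
```

In the hierarchical scheme the book describes, `strike` would itself be another TOTE unit (lift hammer, test position, swing), nested inside this one.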

Peter Milner, in a review in the Canadian Journal of Psychology, noted the book was short on concrete details on implementing the TOTE strategy. He also critically viewed the book as not being able to tie its model to details from neurophysiology at a molecular level. Per him, the book covered only the brain at the gross level of lesion studies, showing that some of its regions could possibly implement some TOTE strategies, without giving a reader an indication as to how the region could implement the strategy.[26]

The Psychology of Communication, 1967

Miller’s 1967 work, The Psychology of Communication, was a collection of seven previously published articles. The first, “Information and Memory,” dealt with chunking, presenting the idea of separating physical length (the number of items presented to be learned) from psychological length (the number of ideas the recipient manages to categorize and summarize the items with). Capacity of short-term memory was measured in units of psychological length, arguing against a pure behaviorist interpretation, since the meaning of items, beyond reinforcement and punishment, was central to psychological length.[28]

The second essay was the paper on the magical number seven. The third, ‘The human link in communication systems,’ used information theory and its idea of channel capacity to analyze human perception bandwidth. The essay concluded that how much of what impinges on us we can absorb as knowledge is limited, for each property of the stimulus, to a handful of items.[28] The paper on “Psycholinguists” described how the effort of both speaking and understanding a sentence is related to how much self-embedding of similar structures the sentence contains when broken down into clauses and phrases.[29] The book, in general, used the Chomskian view of seeing language rules of grammar as having a biological basis—disproving the simple behaviorist idea that language performance improved with reinforcement—and using the tools of information and computation to place hypotheses on a sound theoretical framework and to analyze data practically and efficiently. Miller specifically addressed experimental data refuting the behaviorist framework at the concept level in the field of language and cognition. He noted this only qualified behaviorism at the level of cognition, and did not overthrow it in other spheres of psychology.[28]

https://en.wikipedia.org/wiki/George_Armitage_Miller


The structure of our visual and auditory system

Larry H. Bernstein, MD, FCAP, Curator

Leaders in Pharmaceutical Intelligence

Series E. 2; 5.8

Revised 9/30/2015

Torsten N. Wiesel (1921— )
President Emeritus
Vincent and Brooke Astor Professor Emeritus – Rockefeller University
1981 Nobel Prize in Physiology or Medicine

Torsten N. Wiesel

The structure of our visual system, beginning at the eyes and ending at the primary visual cortex at the back of the brain, is a little like a maze, intricately constructed to send visual signals through myriad portals and passageways to reach just the right neurons at the end of the path. In the 1950s, H. Keffer Hartline, a member of The Rockefeller Institute for Medical Research, charted the first avenues of that maze when he revealed how the visual stimulus received by the retina is divided, altered and sharpened by the optic nerve network in order to send a more useful picture to the brain. Former Rockefeller president Torsten N. Wiesel, along with his colleague David H. Hubel, continued Dr. Hartline’s exploration at their Harvard Medical School laboratory by delving further back, into the brain, and described for the first time how the system develops innately, how experience shapes it further and how it analyzes visual signals. For this work, Drs. Wiesel and Hubel shared the 1981 Nobel Prize in Physiology or Medicine.

The complex array of stimuli in our visual field passes first through several distinct layers of cells known collectively as the retina. Next they are analyzed by the optic nerve and make their way to the lateral geniculate nucleus (LGN), the first visual processing center in the brain, located in the thalamus of each brain hemisphere. From the LGN, the signals are sent to the primary visual cortex, also known as the striate cortex. Working with cats and rhesus macaque monkeys, Drs. Wiesel and Hubel recorded the electrical impulses of cortical cells in response to various patterns flashed before the eyes. They coined the terms “simple” and “complex” for cells that respond to only one type of stimulus and those that respond to multiple and opposite stimuli.

To understand the differentiation, the scientists conducted a series of experiments to observe the brain’s response when one eye is kept closed for different periods of time. They discovered that animals with one eye closed for the first three months of life become blind in that eye. Examinations revealed no change in the eye itself or in the retina; the LGN cells devoted to that eye had shrunken but still responded to stimulation of the deprived eye as efficiently as those for the normal eye. The difference, they concluded, must therefore be in the striate cortex.

Awards: Nobel Prize in Physiology or Medicine, Louisa Gross Horwitz Prize, National Medal of Science for Biological Sciences

Swedish-born American neurobiologist Torsten N. Wiesel was raised at Beckomberga Hospital, the mental institute where his father was chief psychiatrist. He described himself as a lazy student until his late teens, before embarking on a career of research into the physiology of vision. Wiesel was awarded the Nobel Prize for Medicine in 1981, along with his long-time collaborator David H. Hubel, for mapping the visual or striate cortex, the posterior section of the cerebral cortex. Roger W. Sperry shared that year’s Nobel honors, for work conducted at CalTech. Wiesel also demonstrated the importance of early diagnosis of childhood visual problems.

When a reporter informed him he had won the Nobel Prize, Wiesel’s first response was, “Oh, no, I was afraid of that”, explaining that he feared the hubbub might prove a distraction from his work. In the 1990s he was President of Rockefeller University, and since 2000, he has been Secretary-General of the Human Frontier Science Program, a group which supports collaboration across different scientific fields. He also served more than a decade as Chair of the Human Rights Committee for the National Academy of Sciences.

In 2001, he was named to a high-level post at the National Institutes of Health, but his nomination was scuttled by then-Secretary of Health and Human Services Tommy Thompson, with the official explanation that he had “signed too many full-page letters in the New York Times critical of President Bush.” Wiesel responded, “I have not signed a statement against Bush, but nonetheless for some reason I am on the administration’s blacklist. Perhaps [it is because of] my human rights activities and being contrary in general.”

Structure and evolution of hearing

9.2015 The Scientist

Inner Ear Cartography

Scientists map the position of cells within the organ of Corti.  by Ruth Williams

Age-related hearing loss caused by damage to the sensory hair cells within the cochlea is extremely common, but studying the inner ear is tough. “It’s in the densest bone in the body, so you don’t have access,” says John Brigande of Oregon Health and Science University in Portland. Even if you can extract cells, he says, “there are so darn few of them.” Despite these technical difficulties, researchers have gleaned gene-expression information about different cell types within the organ of Corti—home to the sensory cells within the cochlea. But “it’s not only important to know what a cell expresses,” says Robert Durruthy-Durruthy, a postdoc in the Stanford University lab of Stefan Heller. “It’s also important to know where it can be found within a tissue.” To this end, Durruthy-Durruthy, Heller, and postdoc Jörg Waldhaus have derived a 2-D map of organ of Corti cells from neonatal mice. First, the team sorted all cell types across the medial-to-lateral axis (or width) of the organ based on marker gene expression. The approximately 900 sorted cells, representing nine cell types, were then each quantitatively analyzed for the expression of 192 selected genes. Computational analysis of these expression data then enabled reconstruction of the cells’ positions along the organ’s apical-to-basal (length) and medial-to-lateral axes. In principle, the technique, which harnesses gene-expression information to determine cells’ spatial organization, could be applied to generate 2-D maps of any complex tissue, says Durruthy-Durruthy. Within the mammalian cochlea, apical cells retain regenerative capacity for a few weeks after birth, but basal cells do not. “Spatial mapping allows us to get at the differences [between these cells],” says Brigande, and that could ultimately highlight possible ways to reinstate regeneration in the adult ear. 
(Cell Reports, 11:1385-99, 2015)

To build a map of cells within the organ of Corti—where sound is translated to neural activity—scientists divide the cochlea in two. Each half of the organ of Corti is then broken up into its constituent cells, which comprise nine cell types (represented by the nine colors) spanning the organ’s medial-to-lateral axis.

FROM CELLS TO GENE-EXPRESSION: Each cell is analyzed for the expression of 192 selected genes. Based on the pattern of expression, a cell is given a position within the organ of Corti along both the basal-apical and the medial-lateral axes. Each column represents one of the nine cell types.
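The reconstruction step described above, inferring each cell's position from its expression profile, can be illustrated with a toy sketch. This is not the authors' actual pipeline (the study used 192 genes and dedicated analysis software); it is a minimal demonstration, on synthetic data, that a principal-component projection can recover the order of cells along a spatial axis when marker genes vary monotonically along it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the real data: 90 cells and 8 genes whose
# expression rises or falls monotonically along a hidden spatial axis.
true_position = rng.uniform(0.0, 1.0, size=90)   # unknown to the method
slopes = rng.uniform(-2.0, 2.0, size=8)          # spatial gradient of each gene
expression = np.outer(true_position, slopes) + rng.normal(0, 0.1, (90, 8))

# Project each cell onto the dominant axis of variation; for data with a
# single strong spatial gradient, this recovers the cells' ordering.
centered = expression - expression.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
inferred = centered @ vt[0]

# The inferred coordinate should correlate strongly (up to sign)
# with the hidden position along the axis.
r = np.corrcoef(inferred, true_position)[0, 1]
print(f"correlation with true position: {abs(r):.2f}")
```

Here the hidden coordinate stands in for a single axis such as apical-to-basal position; in the real study, two axes were reconstructed jointly.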

Human Hearing: A Primer

Figure labels: semicircular canals of the vestibular system; ossicles; ear canal; tympanic membrane (eardrum).

When sound enters the ear canal, it vibrates the tympanic membrane, or eardrum. These vibrations are passed through the middle ear via three small bones called ossicles: the malleus, the incus, and the stapes. Finally, vibrations of the stapes stimulate the movement of a fluid called perilymph within the bony labyrinth of the inner ear.

Cochlea – How the human ear translates sound waves into nervous impulses.

Perilymph fills both the vestibular and tympanic ducts of the cochlea. Between these two channels lies the cochlear duct, which is home to the organ of Corti. There, the sound-induced movement of perilymph in the cochlea is translated to an electrical signal that is sent to the brain for processing.

The organ of Corti sits on the basilar membrane, which separates the cochlear duct from the tympanic duct. As the basilar membrane vibrates in response to fluid movement, it pushes the organ along the tectorial membrane, which shifts laterally over the hair cells. This shift bends projections at the tips of the cells, called stereocilia, resulting in the generation of electrical signals.

The bending of the stereocilia results in the depolarization of the inner hair cell and initiates a nerve impulse through the spiral ganglion neuron at the base of the cell. A series of outer hair cells serves to mechanically amplify the vibrations that trigger the inner hair cells to fire. High-frequency sounds stimulate hair cells at the base of the cochlea, while low-frequency sounds stimulate hair cells at the apex.


Aural History

The form and function of the ears of modern land vertebrates cannot be understood without knowing how they evolved.

http://www.the-scientist.com//?articles.view/articleNo/43806/title/Aural-History/

By Geoffrey A. Manley | September 1, 2015

PHOTO CREDITS SEE END OF ARTICLE
http://www.the-scientist.com/Sept2015/feature1.jpg

Unlike eyes, which are generally instantly recognizable, ears differ greatly in their appearance throughout the animal kingdom. Some hearing structures may not be visible at all. For example, camouflaged in the barn owl’s facial ruff—a rim of short, brown feathers surrounding the bird’s white face—are clusters of stiff feathers that act as external ears on either side of its head. These feather structures funnel sound collected by two concave facial disks to the ear canal openings, increasing the bird’s hearing sensitivity by 20 decibels—approximately the difference between normal conversation and shouting. Similar increases in sensitivity result from the large and often mobile external structures, or pinnae, of many mammals, such as cats and bats. Internally, the differences among hearing organs are even more dramatic.

Although fish can hear, only amphibians and true land vertebrates—including the aquatic species that descended from them, such as whales and pinnipeds—have dedicated hearing organs. In land vertebrates belonging to the group Amniota, including lizards, birds, and mammals, sound usually enters through an external canal and impinges on an eardrum that is connected through middle-ear bones to the inner ear. There, hundreds or thousands of sensory hair cells are spread along an elongated membrane that acts as a spectral analyzer, with the result that each local group of hair cells responds best to a certain range of pitches, or sound frequencies. The hair cells then feed this information into afferent nerve fibers that carry the information to the brain. (See “Human Hearing: A Primer.”)


Together, these hair cells and nerve fibers encode a wide range of sounds that enter the ear on that side of the head. Two ears complete the picture, allowing animals’ brains to localize the source of the sounds they hear by comparing the two inputs. Although it seems obvious that the ability to process nearby sounds would be enormously useful, modern amniote ears in fact arose quite late in evolutionary history, and to a large extent independently in different lineages. As a result, external, middle, and inner ears of various amniotes are characteristically different.1 New paleontological studies and comparative research on hearing organs have revealed the remarkable history of this unexpected diversity of ears.

Divergence from a common origin

Amniote vertebrates comprise three lineages of extant groups that diverged roughly 300 million years ago: the lepidosaurs, which include lizards and snakes; the archosaurs, which include crocodilians and birds; and mammals, which include egg-laying, pouched, and placental mammals. By comparing the skulls of the extinct common ancestors of these three lineages, as well as the ears of the most basal modern amniotes, researchers have concluded that ancestral amniotes had a small (perhaps less than 1 millimeter in length) but dedicated hearing organ: a sensory epithelium called a basilar papilla, with perhaps a few hundred sensory hair cells supported by a thin basilar membrane that is freely suspended in fluid. These rudimentary structures evolved from the hair cells of vestibular organs, which help organisms maintain their balance by responding to physical input, such as head rotation or gravity. Initially, the hearing organ only responded to low-frequency sounds.

On their apical surface, all hair cells have tight tufts or bundles of large, hairlike villi known as stereovilli (or, more commonly stereocilia, even though they are not true cilia), which give hair cells their name. Between these stereovilli are proteinaceous links, most of which are closely coupled to sensory transduction channels that respond to a tilting of the stereovilli bundles caused by sound waves.

The amniote hearing organ evolved as a separate group of hair cells that lay between two existing vestibular epithelia. Low-frequency vestibular hair cells became specialized to transduce higher frequencies, requiring much faster response rates. This change is attributable in part to modifications in the ion channels of the cell membrane, such that each cell is “electrically tuned” to a particular frequency, a phenomenon still observed in some modern amniote ears. Moreover, the early evolution of these dedicated auditory organs in land vertebrates led to the loss of the heavy otolithic membrane that overlies the hair-cell bundles of vestibular organs and is responsible for their slow responses. What remains is the watery macromolecular gel known as the tectorial membrane, which assures that local groups of hair cells move synchronously, resulting in greater sensitivity.

Good high-frequency hearing did not exist from the start, however. For a period of at least 50 million years after amniotes arose, the three main lineages were most likely quite hard of hearing. They had not yet evolved any mechanism for absorbing sound energy from air; they lacked the middle ear and eardrum that are vital for the function of modern hearing organs. As such, ancestral amniotes most likely perceived only sounds of relatively low frequency and high amplitude that reached the inner ear via the limbs or, if the skull were rested on the ground, through the tissues of the head. It is unclear what kind of stimuli could have existed that would have led to the retention of such hearing organs for such a long time.

The magnificent middle ear

http://www.the-scientist.com/Sept2015/Manleyonline.jpg

CONVERGING ON THE EAR: Starting around 250 million years ago, the three amniote lineages—lepidosaurs (lizards and snakes), archosaurs (crocodilians and birds), and mammals—separately evolved a tympanic middle ear, followed by evolution of the inner ear, both of which served to increase hearing sensitivity. Despite the independent origin of hearing structures in the three lineages, the outcomes were functionally quite similar, serving as a remarkable example of convergent evolution.

ILLUSTRATIONS: PHEBE LI FOR THE SCIENTIST. ICONS: ISTOCK.COM

During the Triassic period, some 250 to 200 million years ago, a truly remarkable thing happened. Independently, but within just 20 million to 30 million years of one another, all three amniote lineages evolved a tympanic middle ear from parts of the skull and the jaws.2

The tympanic middle ear is the assemblage of tiny bones that connects at one end to an eardrum and at the other end to the oval window, an aperture in the bone of the inner ear. Despite the temporal coincidence in the evolution of these structures in the three amniote lineages and the functional similarities of the adaptations, the groups were by this time so far separated that the middle ears evolved from different structures into two different configurations. The single middle-ear bone, the columella, of archosaurs and lepidosaurs derived from the hyomandibular, a bone that earlier had formed a large strut connecting the braincase to the outer skull. In modern representatives, the columella is long and thin, with several, usually cartilaginous extensions known as the extracolumella. One of these, the “inferior process,” connects the inner surface of the eardrum and the columella, which then connects to the footplate that covers the oval window of the inner ear. This two-part system forms a lever that, together with the pressure increase incurred by transmitting from the much larger eardrum to the footplate, greatly magnifies sound entering the inner ear.
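The amplification described in this paragraph, an area ratio multiplied by a lever ratio, is easy to put in rough numbers. The values below are textbook approximations for the human middle ear, not figures from this article:

```python
import math

# Illustrative textbook values for the human middle ear (assumptions,
# not measurements from this article): effective eardrum area, stapes
# footplate area, and the ossicular lever ratio.
eardrum_area_mm2 = 55.0
footplate_area_mm2 = 3.2
lever_ratio = 1.3

# Pressure gain = (area ratio) x (lever ratio). The decibel gain uses
# 20*log10 because sound pressure is an amplitude quantity.
pressure_gain = (eardrum_area_mm2 / footplate_area_mm2) * lever_ratio
gain_db = 20 * math.log10(pressure_gain)

print(f"pressure gain ~{pressure_gain:.0f}x, ~{gain_db:.0f} dB")
```

With these assumed values the chain yields roughly a 22-fold pressure gain, about 27 dB, which conveys why losing the eardrum and ossicles leaves an ear so insensitive.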

In the mammals of the Triassic, the equivalent events were more complex, but the functional result was remarkably similar. Mammal ancestors reduced the number of bones in the lower jaw from seven to one and, in the process, formed a new jaw joint. Initially, the old and new jaw structures existed in parallel, but over time the old joint moved towards the rear of the head. This event, which at any other time would likely have led to the complete loss of the old joint bones, occurred simultaneously with the origin of the mammalian tympanic middle ear. Older paleontological and newer developmental evidence from Shigeru Kuratani’s lab at RIKEN in Japan indicate that the mammalian eardrum evolved at a lower position on the skull relative to that of the other amniotes, a position outside the old jaw joint.3 In time, the bones of this old joint, together with the hyomandibula, became the three bony ossicles (malleus, incus, and stapes) of the new middle ear. Like the middle ear of archosaurs and lepidosaurs, these ossicles form a lever system that, along with the large area difference between eardrum and footplate, greatly magnifies sound input.

Thus, remarkably, these complex events led independently to all modern amniotes possessing a middle ear that, at frequencies below 10 kHz, works equally effectively despite the diverse structures and origins. There is also evidence that the three-ossicle mammalian middle ear itself evolved at least twice—in egg-laying mammals such as the platypus, and in therians, which include marsupials and placentals—with similar outcomes.

Inner-ear evolution

http://www.the-scientist.com/Sept2015/piano_med.jpg

PITCH PERFECT: The hearing organs of amniotes are organized tonotopically, with hair cells sensitive to high frequencies at the basal end of the papilla, grading into low-frequency hair cells at the apical end. BASED ON MED-EL WWW.MEDEL.COM

The evolution of tympanic middle ears kick-started the evolution of modern inner ears, where sound waves are converted into the electrical signals that are sent to the brain. The inner ear is least developed in the lepidosaurs, most of which retained a relatively small auditory papilla, in some just a few hundred micrometers long. Many lepidosaurs, predominantly diurnal species, also lost their eardrum. Snakes reduced their middle ear, limiting their hearing to frequencies less than 1 kHz, about two octaves above middle C. (For comparison, humans can hear sounds up to about 15 or 16 kHz.) Clearly, hearing was not under strong selective pressure in this group. There are a few exceptions, however. In geckos, for example, which are largely nocturnal, the papillar structure shows unique specializations, accompanied by high sensitivity and strong frequency selectivity. Indeed, the frequency selectivity of gecko auditory nerve fibers exceeds that of many mammals.

One part of the inner ear that did improve in lizards (but not in snakes) is the hair cells, with the papillae developing different areas occupied by two structural types of these sound-responsive cells. One of these hair cell groups responds to sounds below 1 kHz and perhaps corresponds to the ancestral version. The higher-frequency hair cells have a more specialized structure, particularly with regard to the size and height of the stereovilli, with bundle heights and stereovillus numbers varying consistently along the papilla’s length. Taller bundles with fewer stereovilli, which are much less stiff and therefore respond best to low frequencies, are found at one end of the membrane, while shorter, thicker bundles with more stereovilli that respond best to higher frequencies are found at the other end—a frequency distribution known as a tonotopic organization. Still, with the exception of one group of geckos, lizard hearing is limited to below 5 to 8 kHz.
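The tonotopic organization described above is often summarized quantitatively. For the human cochlea, one widely used empirical description is Greenwood's frequency-position function; the constants below are Greenwood's published human fits, quoted as an illustration rather than taken from this article:

```python
def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Greenwood's frequency-position function for the human cochlea.

    x is the fractional distance along the basilar membrane from the
    apex (0.0) to the base (1.0). The constants A, a, and k are
    Greenwood's human-cochlea fits; they are assumptions here, not
    values given in the article.
    """
    return A * (10 ** (a * x) - k)

# Apex responds to low frequencies, base to high, as described above.
for x in (0.0, 0.5, 1.0):
    print(f"x = {x:.1f}: ~{greenwood_frequency(x):.0f} Hz")
```

Because the map is exponential, equal distances along the papilla cover equal numbers of octaves, so most of its length is devoted to frequencies far below the upper hearing limit.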

In contrast to the relatively rudimentary lepidosaur inner ear, the auditory papilla of archosaurs (birds, crocodiles, and their relatives) evolved much greater length. Owls, highly proficient nocturnal hunters, boast the longest archosaur papilla, measuring more than 10 millimeters and containing many thousands of hair cells. As in lizards, archosaur hair cells show strong tonotopic organization, with a gradual change in the diameter and height of the stereovillar bundles contributing to the gradually changing frequency sensitivity along the papilla. In addition, the hair cells are divided along and across the basilar membrane, with tall hair cells (THCs) resting on the inner side and the apical end, most distant from the middle ear, grading into short hair cells (SHCs) on the outer side and at the basal end. Interestingly, many SHCs completely lack afferent innervation, which is the only known case of sensory cells lacking a connection to the brain. Instead of transmitting sensory information to the brain, these hair cells likely amplify the signal received by the inner ear. Despite the more complex anatomy, however, bird hearing is also generally limited to between 5 and 8 kHz, with the exception of some owls, which can hear up to 12 kHz.

The mammalian papilla, called the organ of Corti, also evolved to be larger—generally, but not always, longer than those of birds—but the extension in length varies in different lineages.4 Mammalian papillae also have a unique cellular arrangement. The papillae of modern egg-laying monotremes, which likely resemble those of the earliest mammals, include two groups of hair cells separated by numerous supporting pillar cells that form the tunnel of Corti. In any given cross section, there are approximately five inner hair cells (IHCs) on the inner side of the pillar cells, closer to the auditory nerve, and eight outer hair cells (OHCs) on the outer side. In therian mammals (marsupials and placentals), the numbers of each cell group have been much reduced, with only two pillar cells forming the tunnel in any given cross-section, and generally just a single IHC and three or four OHCs, though the functional consequences of this reduction remain unclear. About 90 percent of afferent fibers innervate IHCs, while only 10 percent or fewer innervate OHCs, despite the fact that OHCs account for some 80 percent of all hair cells. As with bird SHCs that lack afferent innervation, there are indications that the main function of OHCs is to amplify the physical sound signal at very low sound-pressure levels.

Therian mammals also evolved another key hearing adaptation: the cochlea. Shortly before marsupial and placental lineages diverged, the elongating hearing organ, which had always been curved, reached full circle. The only way to further increase its length was to form more than one full coil, a state that was reached roughly 120 million years ago. The result is hearing organs with 1.5 to 4 coils and lengths from 7 millimeters (mouse) to 75 millimeters (blue whale). Hearing ranges also diverged, partly depending on the size of the animal (larger mammals tend to have lower upper-frequency limits), but with a number of remarkable specializations, as expected in a lineage that radiated greatly during several evolutionary episodes.

As a result of these adaptations, most mammals have an upper frequency-response limit that well exceeds those of lepidosaurs and archosaurs. Human hearing extends to frequencies of about 15 kHz; a guinea pig can hear sounds up to about 45 kHz; and in the extreme cases of many bats and toothed whales, hearing extends into ultrasonic frequencies, sometimes as high as 180 kHz, allowing these animals to echolocate in air and water. This impressive increase in frequency limits is due to an extremely stiff middle ear, as well as a stiff cochlea. During early therian evolution, the bone of the canal surrounding the soft tissues invaded the supporting ridges of the basilar membrane, creating stiff laminae. Such bony ridges were retained in species perceiving ultrasonic frequencies, but tended to be reduced and replaced by softer connective-tissue supports in those with lower-frequency limits, such as humans.

Amplification within the ear

http://www.the-scientist.com/Sept2015/ear_hair.jpg

HAIRS OF THE EAR: Rows of inner-ear hair cells have villous bundles (blue) on their apical surface that convert sound waves to nervous signals sent to the brain. © STEVE GSCHMEISSNER/SCIENCE SOURCE

In addition to the specialized structures of the middle and inner ears of amniotes that served to greatly increase hearing sensitivity, the hair cells themselves can produce active movements that further amplify sound stimuli. The evolutionarily oldest such active mechanism was discovered in the late 1980s by Jim Hudspeth’s group, then at the University of California, San Francisco, School of Medicine, working with frogs,5 and Andrew Crawford and Robert Fettiplace, then at the University of Cambridge, working with turtles.6 The amplification mechanism, called the active bundle mechanism, probably evolved in the ancestors of vertebrates and helped overcome the viscous forces of the surrounding fluids, which resist movement. When sound stimuli move the hair-cell bundle and thus open transduction channels to admit potassium ions, some calcium ions also enter the cell. These calcium ions bind to and influence the open transduction channels, increasing the speed with which these channels close. Such closing forces are exerted in phase with the incoming sound waves, increasing the distance that the hair cells move in response, and thereby increasing their sensitivity. It is likely that this mechanism operates in all vertebrate hair cells.5 In lizards, my group provided evidence that this bundle mechanism really does operate in the living animal.7

In 1986, a second mechanism of hair cell–driven sound amplification was discovered in mammalian OHCs by Bill Brownell’s group, then at the University of Florida School of Medicine. Brownell and his colleagues showed that mammalian OHCs, but not IHCs, changed their length very rapidly in phase with the signal if exposed to an alternating electrical field.8 Such fields occur when hair cells respond to sound. Subsequent experiments showed that the change in cell length is due to changes in the molecular configuration of a protein, later named prestin, which occurs in high density along the lateral cell membrane of OHCs. In mammals, the force produced by the OHCs is so strong that the entire organ of Corti, which includes all cell types that surround the hair cells and the basilar membrane itself, is driven in an up-and-down motion. This movement can amplify sounds by at least 40 dB, allowing very quiet noises to be detected. There is evidence for the independent evolution of specific molecular configurations of prestins that allow for the amplification of very high ultrasonic frequencies in bats and whales.9

Bird ears also appear to produce active forces that amplify sound. The SHCs have bundles comprising up to 300 stereovilli (about three times as many as the bundles of mammalian OHCs),10 and the movement of these bundles probably drives the movement of THCs indirectly via the tectorial membrane. Also, very recent data from the lab of Fettiplace, now at the University of Wisconsin–Madison, suggests that in birds, prestin (albeit in a different molecular form) may work in the plane across the hearing organ (i.e., not up and down as in mammals), perhaps reinforcing the influence of the bundle active mechanism on the THCs via the tectorial membrane.11


In addition to amplifying hair-cell activity, these active mechanisms manifest as spontaneous movements of the hearing organ, oscillating even in the absence of sound stimuli. Such spontaneous movements actually produce sound that is emitted through the middle ear to the outside world and can be measured in the ear canal. These spontaneous otoacoustic emissions (SOAEs) enable remote sensing of what is going on within the inner ear and have permitted increasingly important research on inner-ear mechanisms and new clinical diagnostic methods to monitor the health of the ear’s sensory epithelium. We recently showed that spectral patterns of SOAEs in lizards, birds, and mammals are remarkably similar, despite up to 70-fold differences in the size of the hearing organs, suggesting that there are profound commonalities among the inner ears of amniotes that we still do not really understand.12

Remarkable convergence

Three hundred million years of evolution have resulted in a fascinating variety of ear configurations that, despite their structural diversity, show remarkably similar physiological responses. There are hardly any differences in sensitivity between the hearing of endothermal birds and mammals, and the frequency selectivity of responses is essentially the same in most lizards, birds, and mammals. The combined research efforts of paleontologists, anatomists, physiologists, and developmental biologists over several decades have clarified the major evolutionary steps in all lineages that modified the malleable middle and inner ears into their present-day kaleidoscopic variety of forms, which nonetheless show a surprising consensus in function.

Geoffrey A. Manley is a retired professor from the Institute of Zoology at the Technical University in Munich, Germany. He is currently a guest scientist in the laboratory of his wife, Christine Köppl, at Oldenburg University in Germany.

References

  1. G.A. Manley, C. Köppl, “Phylogenetic development of the cochlea and its innervation,” Curr Opin Neurobiol, 8:468-74, 1998.
  2. J.A. Clack, “Patterns and processes in the early evolution of the tetrapod ear,” J Neurobiol, 53:251-64, 2002.
  3. T. Kitazawa et al., “Developmental genetic bases behind the independent origin of the tympanic membrane in mammals and diapsids,” Nat Commun, 6:6853, 2015.
  4. G.A. Manley, “Evolutionary paths to mammalian cochleae,” JARO, 13:733-43, 2012.
  5. A.J. Hudspeth, “How the ear’s works work: Mechanoelectrical transduction and amplification by hair cells,” C R Biol, 328:155-62, 2005.
  6. A.C. Crawford, R. Fettiplace, “The mechanical properties of ciliary bundles of turtle cochlear hair cells,” J Physiol, 364:359-79, 1985.
  7. G.A. Manley et al., “In vivo evidence for a cochlear amplifier in the hair-cell bundle of lizards,” PNAS, 98:2826-31, 2001.
  8. B. Kachar et al., “Electrokinetic shape changes of cochlear outer hair cells,” Nature, 322:365-68, 1986.
  9. Y. Liu et al., “Convergent sequence evolution between echolocating bats and dolphins,” Curr Biol, 20:R53-R54, 2010.
  10. C. Köppl et al., “Big and powerful: A model of the contribution of bundle motility to mechanical amplification in hair cells of the bird basilar papilla,” in Concepts and Challenges in the Biophysics of Hearing, ed. N.P. Cooper, D.T. Kemp (Singapore: World Scientific, 2009), 444-50.
  11. M. Beurg et al., “A prestin motor in chicken auditory hair cells: Active force generation in a nonmammalian species,” Neuron, 79:69-81, 2013.
  12. C. Bergevin et al., “Salient features of otoacoustic emissions are common across tetrapod groups and suggest shared properties of generation mechanisms,” PNAS, 112:3362-67, 2015.


Correction (September 15, 2015): Citation #8 of this story has been updated to accurately reflect the research referenced in the text. The Scientist regrets the error.


Early Hominin Hearing

http://www.the-scientist.com//?articles.view/articleNo/44119/title/Early-Hominin-Hearing/

Based on the structure of fossilized skulls and ear bones, researchers learn that early hominins heard sounds best between the frequencies that humans and chimpanzees do.

By Karen Zusi | September 29, 2015

Australopithecus africanus skull: WIKIMEDIA, JOSÉ BRAGA

Early hominin species Australopithecus africanus and Paranthropus robustus, which lived around 2 million years ago, possessed hearing capabilities largely similar to modern-day chimpanzees but with a few differences that made their sense more akin to that of humans, according to a recent study. The results were reported last week (September 25) in Science Advances.

“We know that the hearing patterns, or audiograms, in chimpanzees and humans are distinct because their hearing abilities have been measured in the laboratory in living subjects,” study coauthor Rolf Quam of Binghamton University in New York said in a press release. “So we were interested in finding out when this human-like hearing pattern first emerged during our evolutionary history.”

Quam and an international team of researchers studied the anatomy of the ear in three complete fossilized specimens, as well as several partial specimens, from South Africa. The team reconstructed the size and relative proportions of up to six different structures—such as the stapes, a middle ear bone—using 3-D CT scans. The researchers then used a published model to predict how the early hominins may have heard, based on these measurements.

Both species of early hominin evolved an anatomy that allowed them to hear sounds at slightly higher frequencies than chimpanzees, best in the 1.0 kHz to 3.5 kHz range. In comparison, chimpanzees can hear sounds best between 1.0 kHz and 3.0 kHz. Humans can typically hear sounds best between 1.0 kHz and 4.5 kHz; this range encompasses most sounds formed in spoken language.

“[The early hominins] didn’t hear as well as humans, and they are more like chimps,” Quam told The New York Times. But the researchers speculated that the changes in hearing anatomy over time were driven by a lifestyle spent on the open savanna, where short-range communication would have been favored.

“Hearing abilities are closely tied with verbal communication,” Quam wrote at The Conversation. “By figuring out when certain hearing capacities emerged during our evolutionary history, we might be able to shed some light on when spoken language started to evolve.”


Hearing Explained

Observe the ins and outs of how our ears perceive sound.

By The Scientist Staff | September 1, 2015

http://www.the-scientist.com//?articles.view/articleNo/43884/title/Hearing-Explained/

Human Hearing: A Primer

How the human ear translates sound waves into nervous impulses

By The Scientist Staff | September 1, 2015

https://youtu.be/46aNGGNPm7s

When sound enters the ear canal, it vibrates the tympanic membrane, or eardrum. These vibrations are passed through the middle ear via three small bones called ossicles: the malleus, the incus, and the stapes. Finally, vibrations of the stapes stimulate the movement of a fluid called perilymph within the bony labyrinth of the inner ear.

See labeled infographic (© CATHERINE DELPHIA)

Perilymph fills both the vestibular and tympanic ducts of the cochlea. Between these two channels lies the cochlear duct, which is home to the organ of Corti. There, the sound-induced movement of perilymph in the cochlea is translated into an electrical signal that is sent to the brain for processing.

An electrical signal is generated by inner hair cells that sit above the basilar membrane, which separates the cochlear duct from the tympanic duct. As the basilar membrane vibrates in response to fluid movement, it pushes the hair cells along another membrane, known as the tectorial membrane, which shifts laterally to bend projections at the tips of the cells, called stereocilia.

The bending of the stereocilia results in the depolarization of the inner hair cell and initiates a nerve impulse through the spiral ganglion neuron at the base of the cell. A series of outer hair cells serves to mechanically amplify the vibrations that trigger the inner hair cells to fire. High-frequency sounds stimulate hair cells at the base of the cochlea, while low-frequency sounds stimulate hair cells at the apex.
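The base-to-apex frequency map described above (tonotopy) is commonly approximated by the Greenwood function. A minimal sketch using the often-quoted human parameter values (an illustration under those assumptions; it is not part of this primer):

```python
def greenwood_frequency(x: float, A: float = 165.4, a: float = 2.1, k: float = 0.88) -> float:
    """Approximate characteristic frequency (Hz) at fractional position x
    along the human cochlea, from apex (x = 0) to base (x = 1), using
    Greenwood's function f = A * (10**(a*x) - k) with commonly quoted
    human parameters."""
    return A * (10 ** (a * x) - k)

# Low frequencies map to the apex, high frequencies to the base:
print(round(greenwood_frequency(0.0)))  # ~20 Hz at the apex
print(round(greenwood_frequency(1.0)))  # ~20,000 Hz at the base
```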

See labeled infographic: Hairs of the Ear (© CATHERINE DELPHIA)


Author of books:
Brain Mechanisms of Vision (1991, with David H. Hubel)
Colloquium on Vision: From Photon to Perception (200, with John Dowling and Lubert Stryer)
Brain and Visual Perception: The Story of a 25-Year Collaboration (2005, with David H. Hubel)

Professor: Physiology, Harvard University (1964-74)
Professor: Neurobiology, Harvard University (1974-84)
Professor: Neurobiology, Rockefeller University (1984-98)
Administrator: President, Rockefeller University (1991-98)

The interplay of light and life

Lubert Stryer (born March 2, 1938, in Tianjin, China) is the Mrs. George A. Winzer Professor of Cell Biology, Emeritus, at the Stanford University School of Medicine.[1][2] His research over more than four decades has been centered on the interplay of light and life. In 2007 he received the National Medal of Science for elucidating the biochemical basis of signal amplification in vision, pioneering the development of high-density microarrays for genetic analysis, and authoring a widely used biochemistry textbook.[3]

Stryer received his B.S. degree from the University of Chicago in 1957 and his M.D. degree from Harvard Medical School. He was a Helen Hay Whitney Research Fellow[4] in the Department of Physics at Harvard and then at the MRC Laboratory of Molecular Biology[5] in Cambridge, England, before joining the faculty of the Department of Biochemistry at Stanford in 1963. In 1969 he moved to Yale to become Professor of Molecular Biophysics and Biochemistry, and in 1976, he returned to Stanford to head a new Department of Structural Biology.[6]

Stryer and coworkers pioneered the use of fluorescence spectroscopy, particularly Förster resonance energy transfer (FRET), to monitor the structure and dynamics of biological macromolecules.[7][8] In 1967, Stryer and Haugland showed that the efficiency of energy transfer depends on the inverse sixth power of the distance between the donor and acceptor,[9][10] as predicted by Förster’s theory. They proposed that energy transfer can serve as a spectroscopic ruler to reveal proximity relationships in biological macromolecules.
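The inverse-sixth-power relationship that makes energy transfer a "spectroscopic ruler" is easy to state explicitly. A minimal sketch of Förster's efficiency formula (illustrative only; actual Förster radii depend on the donor-acceptor dye pair):

```python
def fret_efficiency(r: float, r0: float) -> float:
    """Förster transfer efficiency for donor-acceptor distance r and
    Förster radius r0 (same units): E = 1 / (1 + (r / r0)**6)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

# At r = r0 the efficiency is exactly 50%; it falls off steeply with
# distance, which is what makes FRET useful as a molecular ruler.
print(fret_efficiency(5.0, 5.0))   # 0.5
print(fret_efficiency(10.0, 5.0))  # ~0.015 (i.e., 1/65)
```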

A second contribution was Stryer’s discovery of the primary stage of amplification in visual excitation.[11][12] Stryer, together with Fung and Hurley, showed that a single photoexcited rhodopsin molecule activates many molecules of transducin, which in turn activate many molecules of a cyclic GMP phosphodiesterase. Stryer’s laboratory has also contributed to our understanding of the role of calcium in visual recovery and adaptation.[13][14][15]

Stryer participated in developing light-directed, spatially addressable parallel chemical synthesis for the synthesis of peptides and polynucleotides.[16][17][18] Light-directed combinatorial synthesis has been used by Stephen Fodor and coworkers at Affymetrix to make DNA arrays containing millions of different sequences for genetic analyses.


The Neurogenetics of Language – Patricia Kuhl

Larry H. Bernstein, MD, FCAP, Curator

Leaders in Pharmaceutical Innovation


WordCloud Image Produced by Adam Tubman

Series E. 2; 5.7

2015 George A. Miller Award

In neuroimaging studies using structural (diffusion weighted magnetic resonance imaging or DW-MRI) and functional (magnetoencephalography or MEG) imaging, my laboratory has produced data on the neural connectivity that underlies language processing, as well as electrophysiological measures of language functioning during various levels of language processing (e.g., phonemic, lexical, or sentential). Taken early in development, electrophysiological measures or “biomarkers” have been shown to predict future language performance in neurotypical children as well as children with autism spectrum disorders (ASD). Work in my laboratory is now combining these neuroimaging approaches with genetic sequencing, allowing us to understand the genetic contributions to language learning.

http://www.youtube.com/watch?v=G2XBIkHW954

http://www.youtube.com/watch?v=M-ymanHajN8

Patricia Kuhl shares astonishing findings about how babies learn one language over another — by listening to the humans around them

Kuhl Constructs: How Babies Form Foundations for Language

MAY 3, 2013

by Sarah Andrews Roehrich, M.S., CCC-SLP

Years ago, I was captivated by an adorable baby on the front cover of a book, The Scientist in the Crib: What Early Learning Tells Us About the Mind, written by a trio of research scientists: Alison Gopnik, PhD, Andrew Meltzoff, PhD, and Patricia Kuhl, PhD.

At the time, I was simply interested in how babies learn about their worlds, how they conduct experiments, and how this learning could impact early brain development.  I did not realize the extent to which interactions with family, caretakers, society, and culture could shape the direction of a young child’s future.

Now, as a speech-language pathologist in Early Intervention in Massachusetts, more cognizant of the myriad factors that shape a child’s cognitive, social-emotional, language, and literacy development, I have been absolutely delighted to discover more of the work of Dr. Kuhl, a distinguished professor of speech and hearing sciences at the University of Washington. So, last spring, when I read that Dr. Kuhl was going to present “Babies’ Language Skills” as one part of a 2-part seminar series sponsored by the Mind, Brain, and Behavior Annual Distinguished Lecture Series at Harvard University1, I was thrilled to have the opportunity to attend. Below are some highlights from that experience and the questions it has since sparked for me:

Lip ‘Reading’ Babies
In their study “Bimodal Perception of Speech in Infancy” (Science, 1982), cited in Paula Bock’s 2005 Seattle Times article “Infant Science: How Do Babies Learn to Talk?,” Drs. Patricia Kuhl and Andrew Meltzoff showed that babies as young as 18 weeks of age could listen to “Ah ah ah” or “Ee ee ee” vowel sounds and gaze at the correct, corresponding lip shape on a video monitor.
This image from Kuhl’s 2011 TED talk shows how a baby is trained to turn his head in response to a change in such vowel sounds, and is immediately rewarded by watching a black box light up while a panda bear inside pounds a drum.  Images provided courtesy of Dr. Patricia Kuhl’s Lab at the University of Washington.

Who is Dr. Patricia Kuhl and how has her work re-shaped our knowledge about how babies learn language?

Dr. Kuhl, who is co-director of the Institute for Learning and Brain Sciences at The University of Washington, has been internationally recognized for her research on early language and brain development, and for her studies on how young children learn.  In her most recent research experiments, she’s been using magnetoencephalography (MEG)–a relatively new neuroscience technology that measures magnetic fields generated by the activity of brain cells–to investigate how, where, and with what frequency babies from around the world process speech sounds in the brain when they are listening to adults speak in their native and non-native languages.

A 6-month-old baby sits in a magnetoencephalography machine, which maps brain activity, while listening to various languages in earphones and playing with a toy. Image originally printed in “Brain Mechanisms in Early Language Acquisition” (Neuron review, Cell Press, 2010) and provided courtesy of Dr. Patricia Kuhl’s Lab at the University of Washington.

Not only does Kuhl’s research point us in the direction of how babies learn to process phonemes, the sound units upon which many languages are built, but it is part of a larger body of studies looking at infants across languages and cultures that has revolutionized our understanding of language development over the last half of the 20th century—leading to, as Kuhl puts it, “a new view of language acquisition, that accounts for both the initial state of linguistic knowledge in infants, and infants’ extraordinary ability to learn simply by listening to their native language.”2

What is neuroplasticity and how does it underlie child development?

Babies are born with 100 billion neurons, about the same as the number of stars in the Milky Way.3 In The Whole-Brain Child, Daniel Siegel, MD, and Tina Payne Bryson, PhD, explain that when we undergo an experience, these brain cells respond through changes in patterns of electrical activity—in other words, they “fire” electrical signals called “action potentials.”4

In a child’s first years of life, the brain exhibits extraordinary neuroplasticity, refining its circuits in response to environmental experiences. Synapses—the sites of communication between neurons—are built, strengthened, weakened and pruned away as needed. Two short videos from the Center on the Developing Child at Harvard, “Experiences Build Brain Architecture” and “Serve and Return Interaction Shapes Brain Circuitry”, nicely depict how some of this early brain development happens.5

Since brain circuits organize and reorganize themselves in response to an infant’s interactions with his or her environment, exposing babies to a variety of positive experiences (such as talking, cuddling, reading, singing, and playing in different environments) not only helps tune babies in to the language of their culture, but it also builds a foundation for developing the attention, cognition, memory, social-emotional, language and literacy, and sensory and motor skills that will help them reach their potential later on.

When and how do babies become “language-bound” listeners?

In her 2011 TED talk, “The Linguistic Genius of Babies,” Dr. Kuhl discusses how babies under 8 months of age from different cultures can detect sounds in any language from around the world, but adults cannot do this.6 So when exactly do babies go from being “citizens of the world,” as Kuhl puts it, to becoming “language-bound” listeners, specifically focused on the language of their own culture?

Between 8-10 months of age, when babies are trying to master the sounds used in their native language, they enter a critical period for sound development.1  Kuhl explains that in one set of experiments, she compared a group of babies in America learning to differentiate the sounds “/Ra/” and “/La/,” with a group of babies in Japan.  Between 6-8 months, the babies in both cultures recognized these sounds with the same frequency.  However, by 10-12 months, after multiple training sessions, the babies in Seattle, Washington, were much better at detecting the “/Ra/-/La/” shift than were the Japanese babies.

Kuhl explains these results by suggesting that babies “take statistics” on how frequently they hear sounds in their native and non-native languages.  Because “/Ra/” and “/La/” occur more frequently in the English language, the American babies recognized these sounds far more frequently in their native language than the Japanese babies.  Kuhl believes that the results in this study indicate a shift in brain development, during which babies from each culture are preparing for their own languages and becoming “language-bound” listeners.
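Kuhl's "taking statistics" idea can be illustrated with a toy sketch that tallies the relative frequency of sound tokens an infant hears. The counts and phoneme labels below are made up for illustration; nothing here comes from the actual studies:

```python
from collections import Counter

def sound_salience(tokens):
    """Relative frequency of each phoneme token in the input speech --
    a toy stand-in for the distributional statistics infants track."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {phoneme: n / total for phoneme, n in counts.items()}

# Hypothetical input: English keeps /r/ and /l/ frequent and distinct,
# while the Japanese input uses a single flap-like category instead.
english_input = ["r"] * 40 + ["l"] * 35 + ["a"] * 25
japanese_input = ["flap"] * 70 + ["a"] * 30

print(sound_salience(english_input)["r"])     # 0.4
print("r" in sound_salience(japanese_input))  # False
```

A baby tracking such frequencies would find the /r/-/l/ contrast highly informative in the English input and essentially absent from the Japanese input, consistent with the divergence Kuhl observed by 10-12 months.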

In what ways are nurturing interactions with caregivers more valuable to babies’ early language development than interfacing with technology?

If parents, caretakers, and other children can help mold babies’ language development simply by talking to them, it is tempting to ask whether young babies can learn language by listening to the radio, watching television, or playing on their parents’ mobile devices. I mean, what could be more engaging than the brightly-colored screens of the latest and greatest smart phones, iPads, iPods, and computers? They’re perfect for entertaining babies.  In fact, some babies and toddlers can operate their parents’ devices before even having learned how to talk.

However, based on her research, Kuhl states that young babies cannot learn language from television and that lots of face-to-face interaction is necessary for babies to learn how to talk.1 In one interesting study, Kuhl’s team exposed 9-month-old American babies to Mandarin in various forms–in-person interactions with native Mandarin speakers vs. audiovisual or audio recordings of these speakers–and then looked at the impact of this exposure on the babies’ ability to make Mandarin phonetic contrasts (not found in English) at 10-12 months of age. Strikingly, twelve laboratory visits featuring in-person interactions with the native Mandarin speakers were sufficient to teach the American babies to distinguish the Mandarin sounds as well as Taiwanese babies of the same age. However, the same number of lab visits featuring the audiovisual or audio recordings made no impact: American babies exposed to Mandarin through these technologies performed the same as a control group of American babies exposed to native English speakers during their lab visits.

This diagram depicts the results of a Kuhl study on American infants exposed to Mandarin in various forms–in-person interactions with native speakers versus television or audio recordings of these speakers. As the top blue triangle shows, the American infants exposed in person to native Mandarin speakers performed just as well on a Mandarin phoneme distinction task as age-matched Taiwanese counterparts. However, those American infants exposed to television or audio recordings of the Mandarin speakers performed the same as a control group of American babies exposed to native English speakers during their lab visits. Diagram displayed in Kuhl’s TED Talk6, provided courtesy of Dr. Patricia Kuhl’s Lab at the University of Washington.

Kuhl believes that this is primarily because a baby’s interactions with others engage the social brain, a critical element in helping children learn to communicate in their native and non-native languages.6 In other words, learning language is not simply a technical skill that can be learned by listening to a recording or watching a show on a screen. Instead, it is a special gift that is handed down from one generation to the next.

Language is learned through talking, singing, storytelling, reading, and many other nurturing experiences shared between caretaker and child.  Babies are naturally curious; they watch every movement and listen to every sound they hear around them.  When parents talk, babies look up and watch their mouth movements with intense wonder.  Parents respond in turn, speaking in “motherese,” a special variant of language designed to bathe babies in the sound patterns and speech sounds of their native language. Motherese helps babies hear the “edges” of sound, the very thing that is difficult for babies who exhibit symptoms of dyslexia and auditory processing issues later on.

Over time, by listening to and engaging with the speakers around them, babies build sound maps that set the stage for them to be able to say words and learn to read later on. In fact, based on years of research, Kuhl has discovered that a baby’s ability to discriminate phonemes at 7 months old is a predictor of that child’s future reading skills at age 5.7

I believe that educating families about brain development, nurturing interactions, and the benefits and limits of technology is absolutely critical to helping families focus on what is most important in developing their children’s communication skills.  I also believe that Kuhl’s work is invaluable in this regard.  Not only has it focused my attention on how babies form foundations for language, but it has illuminated my understanding of how caretaker-child interactions help set the stage for babies to become language-bound learners.

Sources

(1) Kuhl, P. (April 3, 2012.) Talk on “Babies’ Language Skills.” Mind, Brain, and Behavior Annual Distinguished Lecture Series, Harvard University.

(2) Kuhl, P. (2000). “A New View of Language Acquisition.” This paper was presented at the National Academy of Sciences colloquium “Auditory Neuroscience: Development, Transduction, and Integration,” held May 19–21, 2000, at the Arnold and Mabel Beckman Center in Irvine, CA. Published by the National Academy of Sciences.

(3) Bock, P. (2005.)  “The Baby Brain.  Infant Science: How do Babies Learn to Talk?” Pacific Northwest: The Seattle Times Magazine.

(4) Siegel, D., Bryson, T. (2011.)  The Whole-Brain Child: 12 Revolutionary Strategies to Nurture Your Child’s Developing Mind. New York, NY:  Delacorte Press, a division of Random House, Inc.

(5) Center on the Developing Child at Harvard University. “Experiences Build Brain Architecture” and “Serve and Return Interaction Shapes Brain Circuitry” videos, two parts in the three-part series “Three Core Concepts in Early Development.”

http://developingchild.harvard.edu/resources/multimedia/videos

(6) Kuhl, P.  (February 18, 2011.) “The Linguistic Genius of Babies,” video talk on TED.com, a TEDxRainier event.

www.ted.com/talks/patricia_kuhl_the_linguistic_genius_of_babies.html

(7) Lerer, J. (2012.) “Professor Discusses Babies’ Language Skills.”  The Harvard Crimson.

Andrew Meltzoff & Patricia Kuhl: Joint attention to mind

Sarah DeWeerdt  11 Feb 2013

Power couple: In addition to a dizzying array of peer-reviewed publications, Andrew Meltzoff and Patricia Kuhl have written a popular book on brain development, given TED talks and lobbied political leaders.

Andrew Meltzoff shares many things with his wife — research dollars, authorship, a keen interest in the young brain — but he does not keep his wife’s schedule.

“It’s one of the agreements we have,” he says, laying out the rule with a twinkle in his eye that conveys both the delights and the complications of working with one’s spouse.

Meltzoff, professor of psychology at the University of Washington in Seattle, and his wife, speech and hearing sciences professor Patricia Kuhl, are co-directors of the university’s Institute for Learning and Brain Sciences, which focuses on the development of the brain and mind during the first five years of life.

Between them, they have shown that learning is a fundamentally social process, and that babies begin this social learning when they are just weeks or even days old.

You could say the couple is attached at the cerebral cortex, but not at the hip: They take equal roles in running the institute, but they each have their own daily rhythms and distinct, if overlapping, scientific interests.

Kuhl studies how infants “crack the language code,” as she puts it — how they figure out sounds and meanings and eventually learn to produce speech. Meltzoff’s work focuses on social building blocks such as imitation and joint attention, or a shared focus on an object or activity. Meltzoff says these basic behaviors help children develop theory of mind, a sophisticated awareness and understanding of others’ thoughts and feelings.

All of these abilities are impaired in children with autism. Most of the couple’s studies have focused on typically developing infants, because, they say, it’s essential to understand typical development in order to appreciate the irregularities in autism.

Both also study autism, which can in turn help explain typical development.

In addition to a dizzying array of peer-reviewed publications, the duo have written a popular book on developmental psychology, The Scientist in the Crib, and promote their ideas through TED talks and by lobbying political leaders.

Geraldine Dawson, chief science officer of the autism science and advocacy organization Autism Speaks and a longtime collaborator, calls Meltzoff and Kuhl “the dynamic duo.” “They’re sort of bigger-than-life type people, who fill the room when they walk into it,” she says.

Making a match:

Meltzoff and Kuhl’s story began with a scientific twist on a standard rom-com meet cute.

It was the early 1980s, and Kuhl, who had recently joined the faculty at the University of Washington, wanted to understand how infants hear and see vowels. But she was having trouble designing an effective experiment.

“I kept running into Andy’s office,” which was near hers, to talk it through, Kuhl recalls.

Meltzoff had done some research on how babies integrate what they see with what they touch, a process called cross-modal matching1. Soon he and Kuhl realized that they could adapt his experimental design to her question, and decided to collaborate.

They showed babies two video screens, each featuring a person mouthing a different vowel sound – “ahhh” or “eeee.” A speaker placed between the two screens played one of those two vowel sounds.

They found that babies as young as 18 to 20 weeks look longer at the face that matches the sound they hear, integrating faces with voices2.

But that wasn’t the only significant result from those experiments.

“Speaking only for myself, I will say I became very interested in the very attractive, smart blonde that I was collaborating with,” Meltzoff says. “Criticizing each other’s scientific writing at the same time the relationship was building was… interesting.”

And effective: Their paper appeared in Science in 1982, and the couple married three years later.

Listening to Meltzoff tell that story, it’s easy to understand why some colleagues say he is funny but they can’t quite explain why. His humor is subtle and wry. More obvious is his passion, not just for science, but for working out the theory underlying empirical results. Even his wife describes his personality as “cerebral.”

“He just has this laser vision for homing in on what is the heart of the issue,” says Rechele Brooks, research assistant professor of psychiatry and behavioral sciences at the University of Washington, who collaborates with Meltzoff on studies of gaze.

For example, in one of his earliest papers, Meltzoff wanted to investigate how babies learn to imitate. He found that infants just 12 to 21 days old can imitate both facial expressions and hand gestures, much earlier than previously thought3.

“It really turned the scientific community on its head,” Brooks says.

Early insights:

Face to face: Meltzoff and Kuhl are developing a method to simultaneously record the brain activity of two people as they interact.

Meltzoff continued to study infants, tracing back the components of theory of mind to their earliest developmental source. That sparked the interest of Dawson, who had gotten to know Meltzoff as a student at the University of Washington in the 1970s, and became the first director of the university’s autism center in 1996.

Meltzoff and Dawson together applied his techniques to study young, often nonverbal, children with autism. In one study, they found that children with autism have more trouble imitating others than do either typically developing children or those with Down syndrome4.

In another study, they found that children with autism are less interested in social sounds such as clapping or hearing their name called than are their typically developing peers5.  They also found that how children with autism imitate and play with toys when they are 3 or 4 years old predicts their communication skills two years later6.

Most previous studies of autism had focused on older children, Dawson says, and this work helped paint a picture of the disorder earlier in childhood.

Kuhl began her career with studies showing that monkeys [7] and even chinchillas [8] can distinguish speech sounds, or phonemes, such as “ba” and “pa,” just as human infants can.

“The bottom line was that animals were sharing this aspect of perception,” Kuhl says.

So why are people so much better than animals at learning language? Kuhl has been trying to answer that question ever since, first through behavioral studies and then by measuring brain activity using imaging techniques.

Kuhl is soft-spoken, but a listener wants to lean in to catch every word. Scientists who have worked with her describe her as poised and perfectly put together, a master of gentle yet effective diplomacy.

“She has her sort of magnetic power to pull people together,” says Yang Zhang, associate professor of speech-language-hearing sciences at the University of Minnesota in Rochester, who was a graduate student and postdoctoral researcher in Kuhl’s lab beginning in the late 1990s.

Listen and learn:

At one point, Kuhl turned her considerable powers of persuasion on a famously smooth negotiator, then-President Bill Clinton.

Kuhl had shown that newborns hear virtually all speech sounds, but by 6 months of age they lose the ability to distinguish sounds that aren’t part of their native language [9].

At the White House Conference on Early Childhood Development and Learning in 1997, she described how infants learn by listening, long before they can speak.

Clinton, ever the policy wonk, asked her how much babies need to hear in order to learn. Kuhl said she didn’t know — but if Clinton gave her the funds, she would find out. “Even the president could see that research on the effects of language input on the young brain had impact on society,” she says.

Kuhl used the funds Clinton gave her to design a study in which 9-month-old babies in the U.S. received 12 short Mandarin Chinese ‘lessons.’ The babies quickly learned to distinguish speech sounds in the second language, her team found — but only if the speaker was live, not in a video [10].

Those results contributed to Kuhl’s ‘social gating’ hypothesis, which holds that social interaction is necessary for picking up on the sounds and patterns of language. “We’re saying that social interaction is a kind of gate to an interest in learning, the kind that humans are completely masters of,” she says.

Her results also suggest that the language problems in children with autism may be the result of their social deficits.

“Children with autism will have a very difficult time acquiring language if language requires the social gate to be open,” she says.

Over the years, Kuhl and Meltzoff have had largely independent research programs, but her recent focus on the social roots of language dovetails with his long-time focus on social interaction.

These days, they are trying to develop ‘face-to-face neuroscience,’ which involves simultaneously recording brain activity from two people as they interact with each other.

This approach would allow researchers to observe, for example, what happens in an infant’s brain when she hears her mother’s voice, and what happens in the mother’s brain as she sees her infant respond to her. “It’s going to be very special to do,” Meltzoff says enthusiastically, even though the effort is more directly related to Kuhl’s work than to his own.

It’s clear that this fervor for each other’s work goes both ways.

“That’s one of the great things about being married to a scientist,” Meltzoff says. “When you come home and think, ‘God, I really nailed this methodologically,’ your wife, instead of yawning, leans forward and says, ‘You did? Tell me about the method, that’s so exciting.’”

News and Opinion articles on SFARI.org are editorially independent of the Simons Foundation.

References:

1. Meltzoff A.N. and R.W. Borton. Nature 282, 403-404 (1979)

2. Kuhl P.K. and A.N. Meltzoff. Science 218, 1138-1141 (1982)

3. Meltzoff A.N. and M.K. Moore. Science 198, 75-78 (1977)

4. Dawson G. et al. Child Dev. 69, 1276-1285 (1998)

5. Dawson G. et al. J. Autism Dev. Disord. 28, 479-485 (1998)

6. Toth K. et al. J. Autism Dev. Disord. 36, 993-1005 (2006)

7. Kuhl P.K. and D.M. Padden. Percept. Psychophys. 32, 542-550 (1982)

8. Kuhl P.K. and J.D. Miller. Science 190, 69-72 (1975)

9. Kuhl P.K. et al. Science 255, 606-608 (1992)

10. Kuhl P.K. et al. Proc. Natl. Acad. Sci. U.S.A. 100, 9096-9101 (2003)

Using genetic data in cognitive neuroscience: from growing pains to genuine insights

Adam E. Green, Marcus R. Munafò, Colin G. DeYoung, John A. Fossella, Jin Fan & Jeremy R. Gray
Nature Reviews Neuroscience 9, 710-720 (2008)
http://dx.doi.org/10.1038/nrn2461

Research that combines genetic and cognitive neuroscience data aims to elucidate the mechanisms that underlie human behaviour and experience by way of ‘intermediate phenotypes’: variations in brain function. Using neuroimaging and other methods, this approach is poised to make the transition from health-focused investigations to inquiries into cognitive, affective and social functions, including ones that do not readily lend themselves to animal models. The growing pains of this emerging field are evident, yet there are also reasons for a measured optimism.

NSF – Cognitive Neuroscience Award

The cross-disciplinary integration and exploitation of new techniques in cognitive neuroscience has generated a rapid growth in significant scientific advances. Research topics have included sensory processes (including olfaction, thirst, multi-sensory integration), higher perceptual processes (for faces, music, etc.), higher cognitive functions (e.g., decision-making, reasoning, mathematics, mental imagery, awareness), language (e.g., syntax, multi-lingualism, discourse), sleep, affect, social processes, learning, memory, attention, motor, and executive functions. Cognitive neuroscientists further clarify their findings by examining developmental and transformational aspects of such phenomena across the span of life, from infancy to late adulthood, and through time.

New frontiers in cognitive neuroscience research have emerged from investigations that integrate data from a variety of techniques. One very useful technique has been neuroimaging, including positron emission tomography (PET), functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), optical imaging (near infrared spectroscopy or NIRS), anatomical MRI, and diffusion tensor imaging (DTI). A second class of techniques includes physiological recording such as subdural and deep brain electrode recording, electroencephalography (EEG), event-related electrical potentials (ERPs), and galvanic skin responses (GSRs). In addition, stimulation methods have been employed, including transcranial magnetic stimulation (TMS), subdural and deep brain electrode stimulation, and drug stimulation. A fourth approach involves cognitive and behavioral methods, such as lesion-deficit neuropsychology and experimental psychology. Other techniques have included genetic analysis, molecular modeling, and computational modeling. The foregoing variety of methods is used with individuals in healthy, neurological, psychiatric, and cognitively-impaired conditions. The data from such varied sources can be further clarified by comparison with invasive neurophysiological recordings in non-human primates and other mammals.

Findings from cognitive neuroscience can elucidate functional brain organization, such as the operations performed by a particular brain area and the system of distributed, discrete neural areas supporting a specific cognitive, perceptual, motor, or affective operation or representation. Moreover, these findings can reveal the effect on brain organization of individual differences (including genetic variation), plasticity, and recovery of function following damage to the nervous system.

Read Full Post »

Metabolic Genomics and Pharmaceutics, Vol. 1 of BioMed Series D available on Amazon Kindle


Reporter: Stephen J Williams, PhD

Article ID #180: Metabolic Genomics and Pharmaceutics, Vol. 1 of BioMed Series D available on Amazon Kindle. Published on 8/15/2015

WordCloud Image Produced by Adam Tubman

Leaders in Pharmaceutical Business Intelligence would like to announce the First volume of their BioMedical E-Book Series D:

Metabolic Genomics & Pharmaceutics, Vol. I

which is now available on Amazon Kindle at

http://www.amazon.com/dp/B012BB0ZF0.

This e-Book is a comprehensive review of recent original research on metabolomics and related opportunities for targeted therapy, written by expert authors and writers. It is the first volume of Series D: e-Books on BioMedicine – Metabolomics, Immunology, Infectious Diseases. The book is written for comprehension at the third-year medical student level, or as a reference for licensing board exams, but also for the education of a first-time baccalaureate-degree reader in the biological sciences; it may equally interest an undergraduate student who is undecided in the choice of a career. The results of original research gain added value for the e-Reader through the methodology of curation. The e-Book’s articles have been published in the Open Access Online Scientific Journal since April 2012, and new articles on this subject will continue to be incorporated as published, with periodic updates.

We invite e-Readers to write an Article Review for this e-Book on Amazon.

All forthcoming BioMed e-Book Titles can be viewed at:

http://pharmaceuticalintelligence.com/biomed-e-books/

Leaders in Pharmaceutical Business Intelligence launched its Open Access Online Scientific Journal in April 2012 as a scientific, medical, and business multi-expert authoring environment in several domains of the life sciences, pharmaceutical, healthcare, and medicine industries. The venture operates as an online scientific intellectual exchange at its website, http://pharmaceuticalintelligence.com, curating and reporting on frontiers in biomedical and biological sciences, healthcare economics, pharmacology, pharmaceuticals, and medicine. In addition, the venture publishes a Medical E-book Series available on Amazon’s Kindle platform.

Analyzing and sharing the vast and rapidly expanding volume of scientific knowledge has never been so crucial to innovation in the medical field. We are addressing the need to overcome this scientific information overload by:

  • delivering curation and summary interpretations of the latest findings and innovations on an open-access, Web 2.0 platform, with the future goal of providing primarily concept-driven search
  • providing a social platform for scientists and clinicians to enter into discussion using social media
  • compiling recent discoveries and issues in yearly-updated Medical E-book Series on Amazon’s mobile Kindle platform

Through these hybrid networks, this curation offers better organization and visibility of the critical information useful for the next innovations in academic, clinical, and industrial research.

Table of Contents for Metabolic Genomics & Pharmaceutics, Vol. I

Chapter 1: Metabolic Pathways

Chapter 2: Lipid Metabolism

Chapter 3: Cell Signaling

Chapter 4: Protein Synthesis and Degradation

Chapter 5: Sub-cellular Structure

Chapter 6: Proteomics

Chapter 7: Metabolomics

Chapter 8: Impairments in Pathological States: Endocrine Disorders; Stress Hypermetabolism and Cancer

Chapter 9: Genomic Expression in Health and Disease 

 

Summary 

Epilogue

 

 

Read Full Post »

Cancer Biology and Genomics for Disease Diagnosis (Vol. I) Now Available for Amazon Kindle


Reporter: Stephen J Williams, PhD

Article ID #179: Cancer Biology and Genomics for Disease Diagnosis (Vol. I) Now Available for Amazon Kindle. Published on 8/14/2015

WordCloud Image Produced by Adam Tubman

Leaders in Pharmaceutical Business Intelligence would like to announce the First volume of their BioMedical E-Book Series C: e-Books on Cancer & Oncology

Volume One: Cancer Biology and Genomics for Disease Diagnosis

which is now available on Amazon Kindle at http://www.amazon.com/dp/B013RVYR2K.

This e-Book is a comprehensive review of recent original research on cancer and genomics, including related opportunities for targeted therapy, written by expert authors and writers. It highlights some of the recent trends and discoveries in cancer research and cancer treatment, with particular attention to how new technological and informatics advancements have ushered in paradigm shifts in how we think about, diagnose, and treat cancer. The results of original research gain added value for the e-Reader through the methodology of curation. The e-Book’s articles have been published in the Open Access Online Scientific Journal since April 2012, and new articles on this subject will continue to be incorporated as published, with periodic updates.

We invite e-Readers to write an Article Review for this e-Book on Amazon. All forthcoming BioMed e-Book titles can be viewed at:

http://pharmaceuticalintelligence.com/biomed-e-books/

Leaders in Pharmaceutical Business Intelligence launched its Open Access Online Scientific Journal in April 2012 as a scientific, medical, and business multi-expert authoring environment in several domains of the life sciences, pharmaceutical, healthcare, and medicine industries. The venture operates as an online scientific intellectual exchange at its website, http://pharmaceuticalintelligence.com, curating and reporting on frontiers in biomedical and biological sciences, healthcare economics, pharmacology, pharmaceuticals, and medicine. In addition, the venture publishes a Medical E-book Series available on Amazon’s Kindle platform.

Analyzing and sharing the vast and rapidly expanding volume of scientific knowledge has never been so crucial to innovation in the medical field. We are addressing the need to overcome this scientific information overload by:

  • delivering curation and summary interpretations of the latest findings and innovations on an open-access, Web 2.0 platform, with the future goal of providing primarily concept-driven search
  • providing a social platform for scientists and clinicians to enter into discussion using social media
  • compiling recent discoveries and issues in yearly-updated Medical E-book Series on Amazon’s mobile Kindle platform

Through these hybrid networks, this curation offers better organization and visibility of the critical information useful for the next innovations in academic, clinical, and industrial research.

Table of Contents for Cancer Biology and Genomics for Disease Diagnosis

Preface

Introduction: The evolution of cancer therapy and cancer research: how did we get here?

Part I. Historical Perspective of Cancer Demographics, Etiology, and Progress in Research

Chapter 1:  The Occurrence of Cancer in World Populations

Chapter 2: Rapid Scientific Advances Change Our View on How Cancer Forms

Chapter 3:  A Genetic Basis and Genetic Complexity of Cancer Emerge

Chapter 4: How Epigenetic and Metabolic Factors Affect Tumor Growth

Chapter 5: Advances in Breast and Gastrointestinal Cancer Research Support Hope for Cure

Part II. Advent of Translational Medicine, “omics”, and Personalized Medicine Ushers in New Paradigms in Cancer Treatment and Advances in Drug Development

Chapter 6:  Treatment Strategies

Chapter 7:  Personalized Medicine and Targeted Therapy

Part III. Translational Medicine, Genomics, and New Technologies Converge to Improve Early Detection

Chapter 8:  Diagnosis                                     

Chapter 9:  Detection

Chapter 10:  Biomarkers

Chapter 11:  Imaging In Cancer

Chapter 12: Nanotechnology Imparts New Advances in Cancer Treatment, Detection, &  Imaging                                 

Epilogue by Larry H. Bernstein, MD, FCAP: Envisioning New Insights in Cancer Translational Biology

 

Read Full Post »

Treatment of Lymphomas [2.4.4C]

Larry H. Bernstein, MD, FCAP, Author, Curator, Editor

http://pharmaceuticalinnovation.com/2015/8/11/larryhbern/Treatment-of-Lymphomas-[2.4.4C]

 

Lymphoma treatment

Overview

http://www.emedicinehealth.com/lymphoma/page8_em.htm#lymphoma_treatment

The most widely used therapies are combinations of chemotherapy and radiation therapy.

  • Biological therapy, which targets key features of the lymphoma cells, is now used in many cases.

The goal of medical therapy in lymphoma is complete remission. This means that all signs of the disease have disappeared after treatment. Remission is not the same as cure. In remission, one may still have lymphoma cells in the body, but they are undetectable and cause no symptoms.

  • When in remission, the lymphoma may come back. This is called recurrence.
  • The duration of remission depends on the type, stage, and grade of the lymphoma. A remission may last a few months, a few years, or may continue throughout one’s life.
  • Remission that lasts a long time is called durable remission, and this is the goal of therapy.
  • The duration of remission is a good indicator of the aggressiveness of the lymphoma and of the prognosis. A longer remission generally indicates a better prognosis.

Remission can also be partial. This means that the tumor shrinks after treatment to less than half its size before treatment.

The following terms are used to describe the lymphoma’s response to treatment:

  • Improvement: The lymphoma shrinks but is still greater than half its original size.
  • Stable disease: The lymphoma stays the same.
  • Progression: The lymphoma worsens during treatment.
  • Refractory disease: The lymphoma is resistant to treatment.
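Aside from complete remission (all signs of disease gone) and refractory disease (no response to treatment), the categories above reduce to a simple size-ratio rule. Here is a minimal sketch in Python; the function name is hypothetical, the handling of the exact-half boundary is an illustrative choice, and real clinical response assessment uses imaging, symptoms, and formal criteria rather than a single measurement:

```python
def classify_response(baseline_size, current_size):
    """Map a change in tumor size to the response terms defined above.

    Sizes may be in any consistent unit (e.g., sum of lesion diameters).
    Illustrative only -- not a clinical response criterion.
    """
    if current_size == 0:
        return "complete remission"  # no detectable disease
    ratio = current_size / baseline_size
    if ratio < 0.5:
        return "partial remission"   # shrunk to less than half original size
    if ratio < 1.0:
        return "improvement"         # shrunk, but still more than half
    if ratio == 1.0:
        return "stable disease"      # unchanged
    return "progression"             # grew during treatment
```

For example, under this rule a mass that shrinks from 10 cm to 4 cm counts as a partial remission, while one that shrinks only to 7 cm counts as improvement.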

The following terms are used to refer to therapy:

  • Induction therapy is designed to induce a remission.
  • If this treatment does not induce a complete remission, new or different therapy will be initiated. This is usually referred to as salvage therapy.
  • Once in remission, one may be given yet another treatment to prevent recurrence. This is called maintenance therapy.

Chemotherapy

Many different types of chemotherapy may be used for Hodgkin lymphoma. The most commonly used combination of drugs in the United States is called ABVD. Another combination of drugs, known as BEACOPP, is now widely used in Europe and is being used more often in the United States. There are other combinations that are less commonly used and not listed here. The drugs that make up these two more common combinations of chemotherapy are listed below.

ABVD: Doxorubicin (Adriamycin), bleomycin (Blenoxane), vinblastine (Velban, Velsar), and dacarbazine (DTIC-Dome). ABVD chemotherapy is usually given every two weeks for two to eight months.

BEACOPP: Bleomycin, etoposide (Toposar, VePesid), doxorubicin, cyclophosphamide (Cytoxan, Neosar), vincristine (Vincasar PFS, Oncovin), procarbazine (Matulane), and prednisone (multiple brand names). There are several different treatment schedules, but different drugs are usually given every two weeks.

The type of chemotherapy, number of cycles of chemotherapy, and the additional use of radiation therapy are based on the stage of the Hodgkin lymphoma and the type and number of prognostic factors.
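The two regimens above are fixed drug combinations, so they can be represented as a small lookup table. A sketch, using only the generic drug names listed in the text (the `REGIMENS` table and `drugs_in` helper are illustrative names; schedules and dosing are deliberately omitted):

```python
# Hodgkin lymphoma combination regimens named above, generic names only.
REGIMENS = {
    "ABVD": ["doxorubicin", "bleomycin", "vinblastine", "dacarbazine"],
    "BEACOPP": ["bleomycin", "etoposide", "doxorubicin", "cyclophosphamide",
                "vincristine", "procarbazine", "prednisone"],
}

def drugs_in(regimen):
    """Return the generic drug names for a named combination regimen."""
    return REGIMENS[regimen.upper()]
```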

Adult Non-Hodgkin Lymphoma Treatment (PDQ®)

http://www.cancer.gov/cancertopics/pdq/treatment/adult-non-hodgkins/Patient/page1

Key Points for This Section

Adult non-Hodgkin lymphoma is a disease in which malignant (cancer) cells form in the lymph system.

Because lymph tissue is found throughout the body, adult non-Hodgkin lymphoma can begin in almost any part of the body. Cancer can spread to the liver and many other organs and tissues.

Non-Hodgkin lymphoma in pregnant women is the same as the disease in nonpregnant women of childbearing age. However, treatment is different for pregnant women. This summary includes information on the treatment of non-Hodgkin lymphoma during pregnancy.

Non-Hodgkin lymphoma can occur in both adults and children. Treatment for children, however, is different from treatment for adults. (See the PDQ summary on Childhood Non-Hodgkin Lymphoma Treatment for more information.)

There are many different types of lymphoma.

Lymphomas are divided into two general types: Hodgkin lymphoma and non-Hodgkin lymphoma. This summary is about the treatment of adult non-Hodgkin lymphoma. For information about other types of lymphoma, see the following PDQ summaries:

Age, gender, and a weakened immune system can affect the risk of adult non-Hodgkin lymphoma.

If cancer is found, the following tests may be done to study the cancer cells:

  • Immunohistochemistry: A test that uses antibodies to check for certain antigens in a sample of tissue. The antibody is usually linked to a radioactive substance or a dye that causes the tissue to light up under a microscope. This type of test may be used to tell the difference between different types of cancer.
  • Cytogenetic analysis: A laboratory test in which cells in a sample of tissue are viewed under a microscope to look for certain changes in the chromosomes.
  • Immunophenotyping: A process used to identify cells, based on the types of antigens or markers on the surface of the cell. This process is used to diagnose specific types of leukemia and lymphoma by comparing the cancer cells to normal cells of the immune system.

Certain factors affect prognosis (chance of recovery) and treatment options.

The prognosis (chance of recovery) and treatment options depend on the following:

  • The stage of the cancer.
  • The type of non-Hodgkin lymphoma.
  • The amount of lactate dehydrogenase (LDH) in the blood.
  • The amount of beta-2-microglobulin in the blood (for Waldenström macroglobulinemia).
  • The patient’s age and general health.
  • Whether the lymphoma has just been diagnosed or has recurred (come back).

Stage descriptions of adult non-Hodgkin lymphoma may include the modifiers E and S.

Adult non-Hodgkin lymphoma may be described as follows:

E: “E” stands for extranodal and means the cancer is found in an area or organ other than the lymph nodes or has spread to tissues beyond, but near, the major lymphatic areas.

S: “S” stands for spleen and means the cancer is found in the spleen.

Stage I adult non-Hodgkin lymphoma is divided into stage I and stage IE.

  • Stage I: Cancer is found in one lymphatic area (lymph node group, tonsils and nearby tissue, thymus, or spleen).
  • Stage IE: Cancer is found in one organ or area outside the lymph nodes.

Stage II adult non-Hodgkin lymphoma is divided into stage II and stage IIE.

  • Stage II: Cancer is found in two or more lymph node groups either above or below the diaphragm (the thin muscle below the lungs that helps breathing and separates the chest from the abdomen).
  • Stage IIE: Cancer is found in one or more lymph node groups either above or below the diaphragm. Cancer is also found outside the lymph nodes in one organ or area on the same side of the diaphragm as the affected lymph nodes.

Stage III adult non-Hodgkin lymphoma is divided into stage III, stage IIIE, stage IIIS, and stage IIIE+S.

  • Stage III: Cancer is found in lymph node groups above and below the diaphragm (the thin muscle below the lungs that helps breathing and separates the chest from the abdomen).
  • Stage IIIE: Cancer is found in lymph node groups above and below the diaphragm and outside the lymph nodes in a nearby organ or area.
  • Stage IIIS: Cancer is found in lymph node groups above and below the diaphragm, and in the spleen.
  • Stage IIIE+S: Cancer is found in lymph node groups above and below the diaphragm, outside the lymph nodes in a nearby organ or area, and in the spleen.

In stage IV adult non-Hodgkin lymphoma, the cancer:

  • is found throughout one or more organs that are not part of a lymphatic area (lymph node group, tonsils and nearby tissue, thymus, or spleen), and may be in lymph nodes near those organs; or
  • is found in one organ that is not part of a lymphatic area and has spread to organs or lymph nodes far away from that organ; or
  • is found in the liver, bone marrow, cerebrospinal fluid (CSF), or lungs (other than cancer that has spread to the lungs from nearby areas).
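The E and S modifiers compose with the base stage in a purely mechanical way, which the following sketch illustrates (the function name is hypothetical, and assigning the base stage itself is a clinical judgment based on imaging and biopsy, not a lookup):

```python
def stage_label(base, extranodal=False, spleen=False):
    """Attach the E/S modifiers described above to a base stage ("I".."IV").

    "E" marks extranodal disease beyond, but near, the major lymphatic
    areas; "S" marks involvement of the spleen. Notation only.
    """
    if extranodal and spleen:
        return base + "E+S"  # e.g., stage IIIE+S
    if extranodal:
        return base + "E"    # e.g., stage IIE
    if spleen:
        return base + "S"    # e.g., stage IIIS
    return base
```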

Adult non-Hodgkin lymphomas are also described based on how fast they grow and where the affected lymph nodes are in the body: indolent (slow-growing) or aggressive (fast-growing).

The treatment plan depends mainly on the following:

  • The type of non-Hodgkin lymphoma
  • Its stage (where the lymphoma is found)
  • How quickly the cancer is growing
  • The patient’s age
  • Whether the patient has other health problems
  • If there are symptoms present such as fever and night sweats (see above)

Read Full Post »

Treatment for Chronic Leukemias [2.4.4B]

Larry H. Bernstein, MD, FCAP, Author, Curator, Editor

http://pharmaceuticalintelligence.com/2015/8/11/larryhbern/Treatment-for-Chronic-Leukemias-[2.4.4B]

2.4.4B1 Treatment for CML

Chronic Myelogenous Leukemia Treatment (PDQ®)

http://www.cancer.gov/cancertopics/pdq/treatment/CML/Patient/page4

Treatment Option Overview

Key Points for This Section

There are different types of treatment for patients with chronic myelogenous leukemia.

Six types of standard treatment are used:

  1. Targeted therapy
  2. Chemotherapy
  3. Biologic therapy
  4. High-dose chemotherapy with stem cell transplant
  5. Donor lymphocyte infusion (DLI)
  6. Surgery

New types of treatment are being tested in clinical trials.

Patients may want to think about taking part in a clinical trial.

Patients can enter clinical trials before, during, or after starting their cancer treatment.

Follow-up tests may be needed.

There are different types of treatment for patients with chronic myelogenous leukemia.

Different types of treatment are available for patients with chronic myelogenous leukemia (CML). Some treatments are standard (the currently used treatment), and some are being tested in clinical trials. A treatment clinical trial is a research study meant to help improve current treatments or obtain information about new treatments for patients with cancer. When clinical trials show that a new treatment is better than the standard treatment, the new treatment may become the standard treatment. Patients may want to think about taking part in a clinical trial. Some clinical trials are open only to patients who have not started treatment.

Six types of standard treatment are used:

Targeted therapy

Targeted therapy is a type of treatment that uses drugs or other substances to identify and attack specific cancer cells without harming normal cells. Tyrosine kinase inhibitors are targeted therapy drugs used to treat chronic myelogenous leukemia.

Imatinib mesylate, nilotinib, dasatinib, and ponatinib are tyrosine kinase inhibitors that are used to treat CML.

See Drugs Approved for Chronic Myelogenous Leukemia for more information.

Chemotherapy

Chemotherapy is a cancer treatment that uses drugs to stop the growth of cancer cells, either by killing the cells or by stopping them from dividing. When chemotherapy is taken by mouth or injected into a vein or muscle, the drugs enter the bloodstream and can reach cancer cells throughout the body (systemic chemotherapy). When chemotherapy is placed directly into the cerebrospinal fluid, an organ, or a body cavity such as the abdomen, the drugs mainly affect cancer cells in those areas (regional chemotherapy). The way the chemotherapy is given depends on the type and stage of the cancer being treated.

See Drugs Approved for Chronic Myelogenous Leukemia for more information.

Biologic therapy

Biologic therapy is a treatment that uses the patient’s immune system to fight cancer. Substances made by the body or made in a laboratory are used to boost, direct, or restore the body’s natural defenses against cancer. This type of cancer treatment is also called biotherapy or immunotherapy.

See Drugs Approved for Chronic Myelogenous Leukemia for more information.

High-dose chemotherapy with stem cell transplant

High-dose chemotherapy with stem cell transplant is a method of giving high doses of chemotherapy and replacing blood-forming cells destroyed by the cancer treatment. Stem cells (immature blood cells) are removed from the blood or bone marrow of the patient or a donor and are frozen and stored. After the chemotherapy is completed, the stored stem cells are thawed and given back to the patient through an infusion. These reinfused stem cells grow into (and restore) the body’s blood cells.

See Drugs Approved for Chronic Myelogenous Leukemia for more information.

Donor lymphocyte infusion (DLI)

Donor lymphocyte infusion (DLI) is a cancer treatment that may be used after stem cell transplant. Lymphocytes (a type of white blood cell) from the stem cell transplant donor are removed from the donor’s blood and may be frozen for storage. The donor’s lymphocytes are thawed if they were frozen and then given to the patient through one or more infusions. The lymphocytes see the patient’s cancer cells as not belonging to the body and attack them.

Surgery

Splenectomy

What’s new in chronic myeloid leukemia research and treatment?

http://www.cancer.org/cancer/leukemia-chronicmyeloidcml/detailedguide/leukemia-chronic-myeloid-myelogenous-new-research

Combining the targeted drugs with other treatments

Imatinib and other drugs that target the BCR-ABL protein have proven to be very effective, but by themselves these drugs don’t help everyone. Studies are now in progress to see if combining these drugs with other treatments, such as chemotherapy, interferon, or cancer vaccines (see below) might be better than either one alone. One study showed that giving interferon with imatinib worked better than giving imatinib alone. The 2 drugs together had more side effects, though. It is also not clear if this combination is better than treatment with other tyrosine kinase inhibitors (TKIs), such as dasatinib and nilotinib. A study going on now is looking at combining interferon with nilotinib.

Other studies are looking at combining other drugs, such as cyclosporine or hydroxychloroquine, with a TKI.

New drugs for CML

Because researchers now know the main cause of CML (the BCR-ABL gene and its protein), they have been able to develop many new drugs that might work against it.

In some cases, CML cells develop a change in the BCR-ABL oncogene known as a T315I mutation, which makes them resistant to many of the current targeted therapies (imatinib, dasatinib, and nilotinib). Ponatinib is the only TKI that can work against T315I mutant cells. More drugs aimed at this mutation are now being tested.

Other drugs called farnesyl transferase inhibitors, such as lonafarnib and tipifarnib, seem to have some activity against CML and patients may respond when these drugs are combined with imatinib. These drugs are being studied further.

Other drugs being studied in CML include the histone deacetylase inhibitor panobinostat and the proteasome inhibitor bortezomib (Velcade).

Several vaccines are now being studied for use against CML.

2.4.4B2 Chronic Lymphocytic Leukemia

Chronic Lymphocytic Leukemia Treatment (PDQ®)

General Information About Chronic Lymphocytic Leukemia

Key Points for This Section

  1. Chronic lymphocytic leukemia is a type of cancer in which the bone marrow makes too many lymphocytes (a type of white blood cell).
  2. Leukemia may affect red blood cells, white blood cells, and platelets.
  3. Older age can affect the risk of developing chronic lymphocytic leukemia.
  4. Signs and symptoms of chronic lymphocytic leukemia include swollen lymph nodes and tiredness.
  5. Tests that examine the blood, bone marrow, and lymph nodes are used to detect (find) and diagnose chronic lymphocytic leukemia.
  6. Certain factors affect treatment options and prognosis (chance of recovery).

Chronic lymphocytic leukemia (also called CLL) is a blood and bone marrow disease that usually gets worse slowly. CLL is one of the most common types of leukemia in adults. It often occurs during or after middle age; it rarely occurs in children.

http://www.cancer.gov/images/cdr/live/CDR755927-750.jpg

Anatomy of the bone; drawing shows spongy bone, red marrow, and yellow marrow. A cross section of the bone shows compact bone and blood vessels in the bone marrow. Also shown are red blood cells, white blood cells, platelets, and a blood stem cell.

Anatomy of the bone. The bone is made up of compact bone, spongy bone, and bone marrow. Compact bone makes up the outer layer of the bone. Spongy bone is found mostly at the ends of bones and contains red marrow. Bone marrow is found in the center of most bones and has many blood vessels. There are two types of bone marrow: red and yellow. Red marrow contains blood stem cells that can become red blood cells, white blood cells, or platelets. Yellow marrow is made mostly of fat.

Leukemia may affect red blood cells, white blood cells, and platelets.

Normally, the body makes blood stem cells (immature cells) that become mature blood cells over time. A blood stem cell may become a myeloid stem cell or a lymphoid stem cell.

A myeloid stem cell becomes one of three types of mature blood cells:

  1. Red blood cells that carry oxygen and other substances to all tissues of the body.
  2. White blood cells that fight infection and disease.
  3. Platelets that form blood clots to stop bleeding.

A lymphoid stem cell becomes a lymphoblast cell and then one of three types of lymphocytes (white blood cells):

  1. B lymphocytes that make antibodies to help fight infection.
  2. T lymphocytes that help B lymphocytes make antibodies to fight infection.
  3. Natural killer cells that attack cancer cells and viruses.

http://www.cancer.gov/images/cdr/live/CDR526538-750.jpg

Blood cell development; drawing shows the steps a blood stem cell goes through to become a red blood cell, platelet, or white blood cell. A myeloid stem cell becomes a red blood cell, a platelet, or a myeloblast, which then becomes a granulocyte (the types of granulocytes are eosinophils, basophils, and neutrophils). A lymphoid stem cell becomes a lymphoblast and then becomes a B-lymphocyte, T-lymphocyte, or natural killer cell.

Blood cell development. A blood stem cell goes through several steps to become a red blood cell, platelet, or white blood cell.

In CLL, too many blood stem cells become abnormal lymphocytes and do not become healthy white blood cells. The abnormal lymphocytes may also be called leukemia cells. The lymphocytes are not able to fight infection very well. Also, as the number of lymphocytes increases in the blood and bone marrow, there is less room for healthy white blood cells, red blood cells, and platelets. This may cause infection, anemia, and easy bleeding.

This summary is about chronic lymphocytic leukemia. See the following PDQ summaries for more information about leukemia:

  • Adult Acute Lymphoblastic Leukemia Treatment.
  • Childhood Acute Lymphoblastic Leukemia Treatment.
  • Adult Acute Myeloid Leukemia Treatment.
  • Childhood Acute Myeloid Leukemia/Other Myeloid Malignancies Treatment.
  • Chronic Myelogenous Leukemia Treatment.
  • Hairy Cell Leukemia Treatment.

Older age can affect the risk of developing chronic lymphocytic leukemia.

Anything that increases your risk of getting a disease is called a risk factor. Having a risk factor does not mean that you will get cancer; not having risk factors doesn’t mean that you will not get cancer. Talk with your doctor if you think you may be at risk. Risk factors for CLL include the following:

  • Being middle-aged or older, male, or white.
  • A family history of CLL or cancer of the lymph system.
  • Having relatives who are Russian Jews or Eastern European Jews.

Signs and symptoms of chronic lymphocytic leukemia include swollen lymph nodes and tiredness.

Usually CLL does not cause any signs or symptoms and is found during a routine blood test. Signs and symptoms may be caused by CLL or by other conditions. Check with your doctor if you have any of the following:

  • Painless swelling of the lymph nodes in the neck, underarm, stomach, or groin.
  • Feeling very tired.
  • Pain or fullness below the ribs.
  • Fever and infection.
  • Weight loss for no known reason.

Tests that examine the blood, bone marrow, and lymph nodes are used to detect (find) and diagnose chronic lymphocytic leukemia.

The following tests and procedures may be used:

Physical exam and history : An exam of the body to check general signs of health, including checking for signs of disease, such as lumps or anything else that seems unusual. A history of the patient’s health habits and past illnesses and treatments will also be taken.

Complete blood count (CBC) with differential : A procedure in which a sample of blood is drawn and checked for the following:

The number of red blood cells and platelets.

The number and type of white blood cells.

The amount of hemoglobin (the protein that carries oxygen) in the red blood cells.

The portion of the blood sample made up of red blood cells (the hematocrit).

Results from the Phase 3 RESONATE™ Trial

Significantly improved progression free survival (PFS) vs ofatumumab in patients with previously treated CLL

  • Patients taking IMBRUVICA® had a 78% statistically significant reduction in the risk of disease progression or death compared with patients who received ofatumumab.[1]
  • In patients with previously treated del 17p CLL, median PFS was not yet reached with IMBRUVICA® vs 5.8 months with ofatumumab (HR 0.25; 95% CI: 0.14, 0.45).[1]

Significantly prolonged overall survival (OS) with IMBRUVICA® vs ofatumumab in patients with previously treated CLL

  • In patients with previously treated CLL, those taking IMBRUVICA® had a 57% statistically significant reduction in the risk of death compared with those who received ofatumumab (HR 0.43; 95% CI: 0.24, 0.79; P<0.05).[1]
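As a rough reading aid (not part of the trial report), a hazard ratio translates into the quoted relative risk reduction as (1 − HR) × 100%; note the 78% PFS figure above reflects the trial's overall PFS hazard ratio, not the del 17p subgroup HR of 0.25. A minimal sketch:

```python
def risk_reduction_pct(hazard_ratio: float) -> float:
    """Relative risk reduction implied by a hazard ratio: (1 - HR) x 100%."""
    return (1.0 - hazard_ratio) * 100.0

# HR 0.43 for overall survival corresponds to the quoted 57% reduction
print(round(risk_reduction_pct(0.43)))  # 57
```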

Typical treatment of chronic lymphocytic leukemia

http://www.cancer.org/cancer/leukemia-chroniclymphocyticcll/detailedguide/leukemia-chronic-lymphocytic-treating-treatment-by-risk-group

Treatment options for chronic lymphocytic leukemia (CLL) vary greatly, depending on the person’s age, the disease risk group, and the reason for treating (for example, which symptoms it is causing). Many people live a long time with CLL, but in general it is very difficult to cure, and early treatment hasn’t been shown to help people live longer. Because of this and because treatment can cause side effects, doctors often advise waiting until the disease is progressing or bothersome symptoms appear, before starting treatment.

If treatment is needed, factors that should be taken into account include the patient’s age, general health, and prognostic factors such as the presence of chromosome 17 or chromosome 11 deletions or high levels of ZAP-70 and CD38.

Initial treatment

Patients who might not be able to tolerate the side effects of strong chemotherapy (chemo) are often treated with chlorambucil alone or with a monoclonal antibody targeting CD20, such as rituximab (Rituxan) or obinutuzumab (Gazyva). Other options include rituximab alone or a corticosteroid such as prednisone.

In stronger and healthier patients, there are many options for treatment. Commonly used treatments include:

  • FCR: fludarabine (Fludara), cyclophosphamide (Cytoxan), and rituximab
  • Bendamustine (sometimes with rituximab)
  • FR: fludarabine and rituximab
  • CVP: cyclophosphamide, vincristine, and prednisone (sometimes with rituximab)
  • CHOP: cyclophosphamide, doxorubicin, vincristine (Oncovin), and prednisone
  • Chlorambucil combined with prednisone, rituximab, obinutuzumab, or ofatumumab
  • PCR: pentostatin (Nipent), cyclophosphamide, and rituximab
  • Alemtuzumab (Campath)
  • Fludarabine (alone)

Other drugs or combinations of drugs may also be used.

If the only problem is an enlarged spleen or swollen lymph nodes in one region of the body, localized treatment with low-dose radiation therapy may be used. Splenectomy (surgery to remove the spleen) is another option if the enlarged spleen is causing symptoms.

Sometimes very high numbers of leukemia cells in the blood cause problems with normal circulation. This is called leukostasis. Chemo may not lower the number of cells until a few days after the first dose, so before the chemo is given, some of the cells may be removed from the blood with a procedure called leukapheresis. This treatment lowers blood counts right away. The effect lasts only for a short time, but it may help until the chemo has a chance to work. Leukapheresis is also sometimes used before chemo if there are very high numbers of leukemia cells (even when they aren’t causing problems) to prevent tumor lysis syndrome (this was discussed in the chemotherapy section).

Some people who have very high-risk disease (based on prognostic factors) may be referred for possible stem cell transplant (SCT) early in treatment.

Second-line treatment of CLL

If the initial treatment is no longer working or the disease comes back, another type of treatment may help. If the initial response to the treatment lasted a long time (usually at least a few years), the same treatment can often be used again. If the initial response wasn’t long-lasting, using the same treatment again isn’t as likely to be helpful. The options will depend on what the first-line treatment was and how well it worked, as well as the person’s health.

Many of the drugs and combinations listed above may be options as second-line treatments. For many people who have already had fludarabine, alemtuzumab seems to be helpful as second-line treatment, but it carries an increased risk of infections. Other purine analog drugs, such as pentostatin or cladribine (2-CdA), may also be tried. Newer drugs such as ofatumumab, ibrutinib (Imbruvica), and idelalisib (Zydelig) may be other options.

If the leukemia responds, stem cell transplant may be an option for some patients.

Some people may have a good response to first-line treatment (such as fludarabine) but may still have some evidence of a small number of leukemia cells in the blood, bone marrow, or lymph nodes. This is known as minimal residual disease. CLL can’t be cured, so doctors aren’t sure if further treatment right away will be helpful. Some small studies have shown that alemtuzumab can sometimes help get rid of these remaining cells, but it’s not yet clear if this improves survival.

Treating complications of CLL

One of the most serious complications of CLL is a change (transformation) of the leukemia to a high-grade or aggressive type of non-Hodgkin lymphoma called diffuse large cell lymphoma. This happens in about 5% of CLL cases, and is known as Richter syndrome. Treatment is often the same as it would be for lymphoma (see our document called Non-Hodgkin Lymphoma for more information), and may include stem cell transplant, as these cases are often hard to treat.

Less often, CLL may transform to prolymphocytic leukemia. As with Richter syndrome, these cases can be hard to treat. Some studies have suggested that certain drugs such as cladribine (2-CdA) and alemtuzumab may be helpful.

In rare cases, patients with CLL may have their leukemia transform into acute lymphocytic leukemia (ALL). If this happens, treatment is likely to be similar to that used for patients with ALL (see our document called Leukemia: Acute Lymphocytic).

Acute myeloid leukemia (AML) is another rare complication in patients who have been treated for CLL. Drugs such as chlorambucil and cyclophosphamide can damage the DNA of blood-forming cells. These damaged cells may go on to become cancerous, leading to AML, which is very aggressive and often hard to treat (see our document called Leukemia: Acute Myeloid).

CLL can cause problems with low blood counts and infections. Treatment of these problems was discussed in the section “Supportive care in chronic lymphocytic leukemia.”

Read Full Post »

Treatments other than Chemotherapy for Leukemias and Lymphomas

Author, Curator, Editor: Larry H. Bernstein, MD, FCAP

2.5.1 Radiation Therapy 

http://www.lls.org/treatment/types-of-treatment/radiation-therapy

Radiation therapy, also called radiotherapy or irradiation, can be used to treat leukemia, lymphoma, myeloma and myelodysplastic syndromes. The type of radiation used for radiotherapy (ionizing radiation) is the same that’s used for diagnostic x-rays. Radiotherapy, however, is given in higher doses.

Radiotherapy works by damaging the genetic material (DNA) within cells, which prevents them from growing and reproducing. Although radiotherapy is directed at cancer cells, it can also damage nearby healthy cells. However, current methods of radiotherapy have improved, minimizing “scatter” to nearby tissues, so its benefit (destroying the cancer cells) outweighs its risk (harming healthy cells).

When radiotherapy is used for blood cancer treatment, it’s usually part of a treatment plan that includes drug therapy. Radiotherapy can also be used to relieve pain or discomfort caused by an enlarged liver, lymph node(s) or spleen.

Radiotherapy, either alone or with chemotherapy, is sometimes given as conditioning treatment to prepare a patient for a blood or marrow stem cell transplant. The most common types used to treat blood cancer are external beam radiation (see below) and radioimmunotherapy.
External Beam Radiation

External beam radiation is the type of radiotherapy used most often for people with blood cancers. A focused radiation beam is delivered outside the body by a machine called a linear accelerator, or linac for short. The linear accelerator moves around the body to deliver radiation from various angles. Linear accelerators make it possible to decrease or avoid skin reactions and deliver targeted radiation to lessen “scatter” of radiation to nearby tissues.

The dose (total amount) of radiation used during treatment depends on various factors regarding the patient, disease and reason for treatment, and is established by a radiation oncologist. You may receive radiotherapy during a series of visits, spread over several weeks (from two to 10 weeks, on average). This approach, called dose fractionation, lessens side effects. External beam radiation does not make you radioactive.

2.5.2  Bone marrow (BM) transplantation

http://www.nlm.nih.gov/medlineplus/ency/article/003009.htm

There are three kinds of bone marrow transplants:

Autologous bone marrow transplant: The term auto means self. Stem cells are removed from you before you receive high-dose chemotherapy or radiation treatment. The stem cells are stored in a freezer (cryopreservation). After high-dose chemotherapy or radiation treatments, your stem cells are put back in your body to make (regenerate) normal blood cells. This is called a rescue transplant.

Allogeneic bone marrow transplant: The term allo means other. Stem cells are removed from another person, called a donor. Most times, the donor’s genes must at least partly match your genes. Special blood tests are done to see if a donor is a good match for you. A brother or sister is most likely to be a good match. Sometimes parents, children, and other relatives are good matches. Donors who are not related to you may be found through national bone marrow registries.

Umbilical cord blood transplant: This is a type of allogeneic transplant. Stem cells are removed from a newborn baby’s umbilical cord right after birth. The stem cells are frozen and stored until they are needed for a transplant. Umbilical cord blood cells are very immature so there is less of a need for matching. But blood counts take much longer to recover.

Before the transplant, chemotherapy, radiation, or both may be given. This may be done in two ways:

Ablative (myeloablative) treatment: High-dose chemotherapy, radiation, or both are given to kill any cancer cells. This also kills all healthy bone marrow that remains, and allows new stem cells to grow in the bone marrow.

Reduced intensity treatment, also called a mini transplant: Patients receive lower doses of chemotherapy and radiation before a transplant. This allows older patients and those with other health problems to have a transplant.

A stem cell transplant is usually done after chemotherapy and radiation are complete. The stem cells are delivered into your bloodstream, usually through a tube called a central venous catheter. The process is similar to getting a blood transfusion. The stem cells travel through the blood into the bone marrow. Most times, no surgery is needed.

Donor stem cells can be collected in two ways:

  • Bone marrow harvest. This minor surgery is done under general anesthesia. This means the donor will be asleep and pain-free during the procedure. The bone marrow is removed from the back of both hip bones. The amount of marrow removed depends on the weight of the person who is receiving it.
  • Leukapheresis. First, the donor is given 5 days of shots to help stem cells move from the bone marrow into the blood. During leukapheresis, blood is removed from the donor through an IV line in a vein. The part of white blood cells that contains stem cells is then separated in a machine and removed to be later given to the recipient. The red blood cells are returned to the donor.

Why the Procedure is Performed

A bone marrow transplant replaces bone marrow that either is not working properly or has been destroyed (ablated) by chemotherapy or radiation. Doctors believe that for many cancers, the donor’s white blood cells can attach to any remaining cancer cells, similar to when white cells attach to bacteria or viruses when fighting an infection.

Your doctor may recommend a bone marrow transplant if you have:

Certain cancers, such as leukemia, lymphoma, and multiple myeloma

A disease that affects the production of bone marrow cells, such as aplastic anemia, congenital neutropenia, severe immunodeficiency syndromes, sickle cell anemia, thalassemia

Had chemotherapy that destroyed your bone marrow

2.5.3 Autologous stem cell transplantation

Phase II trial of 131I-B1 (anti-CD20) antibody therapy with autologous stem cell transplantation for relapsed B cell lymphomas

O.W Press,  F Appelbaum,  P.J Martin, et al.
http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(95)92225-3/abstract

25 patients with relapsed B-cell lymphomas were evaluated with trace-labelled doses (2.5 mg/kg, 185-370 MBq [5-10 mCi]) of 131I-labelled anti-CD20 (B1) antibody in a phase II trial. 22 patients achieved 131I-B1 biodistributions delivering higher doses of radiation to tumor sites than to normal organs, and 21 of these were treated with therapeutic infusions of 131I-B1 (12.765-29.045 GBq) followed by autologous hemopoietic stem cell reinfusion. 18 of the 21 treated patients had objective responses, including 16 complete remissions. One patient died of progressive lymphoma and one died of sepsis. Analysis of our phase I and II trials with 131I-labelled B1 reveals a progression-free survival of 62% and an overall survival of 93% with a median follow-up of 2 years. 131I-anti-CD20 (B1) antibody therapy produces complete responses of long duration in most patients with relapsed B-cell lymphomas when given at maximally tolerated doses with autologous stem cell rescue.
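The abstract quotes radioactivity in both MBq and mCi; the two units are related by the exact definition 1 mCi = 37 MBq, which is how the 5-10 mCi trace-labelled dose maps to 185-370 MBq. A quick conversion sketch (not from the source):

```python
MBQ_PER_MCI = 37.0  # 1 millicurie = 37 megabecquerels, by definition

def mci_to_mbq(mci: float) -> float:
    """Convert millicuries to megabecquerels."""
    return mci * MBQ_PER_MCI

# The trace-labelled dose range quoted as 5-10 mCi:
print(mci_to_mbq(5), mci_to_mbq(10))  # 185.0 370.0
```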

Autologous (Self) Transplants

http://www.leukaemia.org.au/treatments/stem-cell-transplants/autologous-self-transplants

An autologous transplant (or rescue) is a type of transplant that uses the person’s own stem cells. These cells are collected in advance and returned at a later stage. They are used to replace stem cells that have been damaged by high doses of chemotherapy, used to treat the person’s underlying disease.

In most cases, stem cells are collected directly from the bloodstream. While stem cells normally live in your marrow, a combination of chemotherapy and a growth factor (a drug that stimulates stem cells) called Granulocyte Colony Stimulating Factor (G-CSF) is used to expand the number of stem cells in the marrow and cause them to spill out into the circulating blood. From here they can be collected from a vein by passing the blood through a special machine called a cell separator, in a process similar to dialysis.

Most of the side effects of an autologous transplant are caused by the conditioning therapy used. Although they can be very unpleasant at times it is important to remember that most of them are temporary and reversible.

Procedure of Hematopoietic Stem Cell Transplantation

Hematopoietic stem cell transplantation (HSCT) is the transplantation of multipotent hematopoietic stem cells, usually derived from bone marrow, peripheral blood, or umbilical cord blood. It may be autologous (the patient’s own stem cells are used) or allogeneic (the stem cells come from a donor).

Hematopoietic Stem Cell Transplantation

Author: Ajay Perumbeti, MD, FAAP; Chief Editor: Emmanuel C Besa, MD
http://emedicine.medscape.com/article/208954-overview

Hematopoietic stem cell transplantation (HSCT) involves the intravenous (IV) infusion of autologous or allogeneic stem cells to reestablish hematopoietic function in patients whose bone marrow or immune system is damaged or defective.

The image below illustrates an algorithm for typically preferred hematopoietic stem cell transplantation cell source for treatment of malignancy.

An algorithm for typically preferred hematopoietic stem cell transplantation cell source for treatment of malignancy: If a matched sibling donor is not available, then a MUD is selected; if a MUD is not available, then choices include a mismatched unrelated donor, umbilical cord donor(s), and a haploidentical donor.

Supportive Therapies

2.5.4  Blood transfusions – risks and complications of a blood transfusion

  • Allogeneic transfusion reaction (acute or delayed hemolytic reaction)
  • Allergic reaction
  • Infectious diseases (viruses)

The risk of catching a virus from a blood transfusion is very low.

HIV. Your risk of getting HIV from a blood transfusion is lower than your risk of getting killed by lightning. Only about 1 in 2 million donations might carry HIV and transmit HIV if given to a patient.

Hepatitis B and C. The risk of having a donation that carries hepatitis B is about 1 in 205,000. The risk for hepatitis C is 1 in 2 million. If you receive blood during a transfusion that contains hepatitis, you’ll likely develop the virus.
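For comparison, the "1 in N" risks above can be restated as expected cases per million transfused units; a small sketch (the arithmetic, not the figures, is added here):

```python
def per_million(one_in_n: float) -> float:
    """Expected cases per million units for a '1 in N' risk."""
    return 1_000_000 / one_in_n

print(per_million(2_000_000))          # 0.5 per million (HIV, hepatitis C)
print(round(per_million(205_000), 2))  # ~4.88 per million (hepatitis B)
```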

Variant Creutzfeldt-Jakob disease (vCJD). This disease is the human version of Mad Cow Disease. It’s a very rare, yet fatal brain disorder. There is a possible risk of getting vCJD from a blood transfusion, although the risk is very low. Because of this, people who may have been exposed to vCJD aren’t eligible blood donors.

  • Fever
  • Iron Overload
  • Lung Injury
  • Graft-Versus-Host Disease

Graft-versus-host disease (GVHD) is a condition in which white blood cells in the new blood attack your tissues.

2.5.5 Erythropoietin

Erythropoietin, also known as EPO, is a glycoprotein hormone that controls erythropoiesis, or red blood cell production. It is a cytokine (protein signaling molecule) for erythrocyte (red blood cell) precursors in the bone marrow. Human EPO has a molecular weight of 34 kDa.

Also called hematopoietin or hemopoietin, it is produced by interstitial fibroblasts in the kidney, in close association with the peritubular capillaries and proximal convoluted tubules. It is also produced in perisinusoidal cells in the liver. While liver production predominates in the fetal and perinatal period, renal production predominates during adulthood. In addition to erythropoiesis, erythropoietin also has other known biological functions. For example, it plays an important role in the brain’s response to neuronal injury.[1] EPO is also involved in the wound healing process.[2]

Exogenous erythropoietin is produced by recombinant DNA technology in cell culture. Several different pharmaceutical agents are available with a variety of glycosylation patterns and are collectively called erythropoiesis-stimulating agents (ESAs). The specific details for labelled use vary between the package inserts, but ESAs have been used in the treatment of anemia in chronic kidney disease, anemia in myelodysplasia, and anemia from cancer chemotherapy. Boxed warnings include a risk of death, myocardial infarction, stroke, venous thromboembolism, and tumor recurrence.[3]

2.5.6  G-CSF (granulocyte-colony stimulating factor)

Granulocyte-colony stimulating factor (G-CSF or GCSF), also known as colony-stimulating factor 3 (CSF 3), is a glycoprotein that stimulates the bone marrow to produce granulocytes and stem cells and release them into the bloodstream.

There are different types, including

  • Lenograstim (Granocyte)
  • Filgrastim (Neupogen, Zarzio, Nivestim, Ratiograstim)
  • Long-acting (pegylated) filgrastim (pegfilgrastim, Neulasta) and lipegfilgrastim (Lonquex)

Pegylated G-CSF stays in the body for longer so you have treatment less often than with the other types of G-CSF.

2.5.7  Plasma Exchange (plasmapheresis)

http://emedicine.medscape.com/article/1895577-overview

Plasmapheresis is a term used to refer to a broad range of procedures in which extracorporeal separation of blood components results in a filtered plasma product.[1, 2] The filtering of plasma from whole blood can be accomplished via centrifugation or semipermeable membranes.[3] Centrifugation takes advantage of the different specific gravities inherent to various blood products such as red cells, white cells, platelets, and plasma.[4] Membrane plasma separation uses differences in particle size to filter plasma from the cellular components of blood.[3]

Traditionally, in the United States, most plasmapheresis takes place using automated centrifuge-based technology.[5] In certain instances, in particular in patients already undergoing hemodialysis, plasmapheresis can be carried out using semipermeable membranes to filter plasma.[4]

In therapeutic plasma exchange, using an automated centrifuge, filtered plasma is discarded and red blood cells, along with replacement colloid such as donor plasma or albumin, are returned to the patient. In membrane plasma filtration, secondary membrane plasma fractionation can selectively remove undesired macromolecules, which then allows for return of the processed plasma to the patient instead of donor plasma or albumin. Examples of secondary membrane plasma fractionation include cascade filtration,[6] thermofiltration, cryofiltration,[7] and low-density lipoprotein pheresis.

The Apheresis Applications Committee of the American Society for Apheresis periodically evaluates potential indications for apheresis and categorizes them from I to IV based on the available medical literature. The following are some of the indications, and their categorization, from the society’s 2010 guidelines.[2]

  • The only Category I indication for hemopoietic malignancy is hyperviscosity in monoclonal gammopathies.

2.5.8  Platelet Transfusions

Indications for platelet transfusion in children with acute leukemia

Scott Murphy, Samuel Litwin, Leonard M. Herring, Penelope Koch, et al.
Am J Hematol Jun 1982; 12(4): 347–356
http://onlinelibrary.wiley.com/doi/10.1002/ajh.2830120406/abstract
http://dx.doi.org/10.1002/ajh.2830120406

In an attempt to determine the indications for platelet transfusion in thrombocytopenic patients, we randomized 56 children with acute leukemia to one of two regimens of platelet transfusion. The prophylactic group received platelets when the platelet count fell below 20,000 per mm3 irrespective of clinical events. The therapeutic group was transfused only when significant bleeding occurred and not for thrombocytopenia alone. The time to first bleeding episode was significantly longer and the number of bleeding episodes were significantly reduced in the prophylactic group. The survival curves of the two groups could not be distinguished from each other. Prior to the last month of life, the total number of days on which bleeding was present was significantly reduced by prophylactic therapy. However, in the terminal phase (last month of life), the duration of bleeding episodes was significantly longer in the prophylactic group. This may have been due to a higher incidence of immunologic refractoriness to platelet transfusion. Because of this terminal bleeding, comparison of the two groups for total number of days on which bleeding was present did not show a significant difference over the entire study period.

Clinical and Laboratory Aspects of Platelet Transfusion Therapy
Yuan S, Goldfinger D
http://www.uptodate.com/contents/clinical-and-laboratory-aspects-of-platelet-transfusion-therapy

INTRODUCTION — Hemostasis depends on an adequate number of functional platelets, together with an intact coagulation (clotting factor) system. This topic covers the logistics of platelet use and the indications for platelet transfusion in adults. The approach to the bleeding patient, refractoriness to platelet transfusion, and platelet transfusion in neonates are discussed elsewhere.

Pooled Platelets – A single unit of platelets can be isolated from every unit of donated blood, by centrifuging the blood within the closed collection system to separate the platelets from the red blood cells (RBCs). The number of platelets per unit varies according to the platelet count of the donor; a yield of 7 × 10^10 platelets is typical [1]. Since this number is inadequate to raise the platelet count in an adult recipient, four to six units are pooled to allow transfusion of 3 to 4 × 10^11 platelets per transfusion [2]. These are called whole blood-derived or random donor pooled platelets.
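The pooling arithmetic can be checked directly: at a typical yield of 7 × 10^10 platelets per whole-blood-derived unit, pooling four to six units lands around the 3 to 4 × 10^11 platelet therapeutic dose. A quick sketch (added for illustration):

```python
YIELD_PER_UNIT = 7e10  # typical platelets per whole-blood-derived unit

for units in (4, 5, 6):
    # Total platelets delivered by a pool of this many units
    print(units, units * YIELD_PER_UNIT)

# 4 units -> 2.8e11 and 6 units -> 4.2e11, bracketing the 3-4 x 10^11 target
```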

Advantages of pooled platelets include lower cost and ease of collection and processing (a separate donation procedure and pheresis equipment are not required). The major disadvantage is recipient exposure to multiple donors in a single transfusion and logistic issues related to bacterial testing.

Apheresis (single donor) Platelets – Platelets can also be collected from volunteer donors in the blood bank, in a one- to two-hour pheresis procedure. Platelets and some white blood cells are removed, and red blood cells and plasma are returned to the donor. A typical apheresis platelet unit provides the equivalent of six or more units of platelets from whole blood (ie, 3 to 6 × 10^11 platelets) [2]. In larger donors with high platelet counts, up to three units can be collected in one session. These are called apheresis or single donor platelets.

Advantages of single donor platelets are exposure of the recipient to a single donor rather than multiple donors, and the ability to match donor and recipient characteristics such as HLA type, cytomegalovirus (CMV) status, and blood type for certain recipients.

Both pooled and apheresis platelets contain some white blood cells (WBC) that were collected along with the platelets. These WBC can cause febrile non-hemolytic transfusion reactions (FNHTR), alloimmunization, and transfusion-associated graft-versus-host disease (ta-GVHD) in some patients.

Platelet products also contain plasma, which can be implicated in adverse reactions including transfusion-related acute lung injury (TRALI) and anaphylaxis. (See ‘Complications of platelet transfusion’.)

Read Full Post »

« Newer Posts - Older Posts »