Archive for the ‘Statistical Methods for Research Evaluation’ Category

Thriving at the Survival Calls during Careers in the Digital Age – An AGE like no Other, also known as, DIGITAL

Author and Curator: Aviva Lev-Ari, PhD, RN


The source for the inspiration to write this curation is described in

Survival Calls during Careers in the Digital Age



In this curation, I present the following concepts in three parts:

  1. Part 1: Authenticity of Careers in the Digital Age: In Focus, the BioTechnology Sector
  2. Part 2: Top 10 books to help you survive the Digital Age

  3. Part 3: A case study on Thriving at the Survival Calls during Careers in the Digital Age: Aviva Lev-Ari, UCB, PhD’83; HUJI, MA’76 


Part 1: Authenticity of Careers in the Digital Age: 

In Focus, the BioTechnology Sector


Lisa LaMotta, Senior Editor at BioPharma Dive, wrote in the Conference edition, June 11, 2018:

Unlike that little cancer conference in Chicago last week, the BIO International convention is not about data, but about the people who make up the biopharma industry.

The meeting brings together scientists, board members, business development heads and salespeople, from the smallest virtual biotechs to the largest of pharmas. It allows executives at fledgling biotechs to sit at the same tables as major decision-makers in the industry — even if it does look a little bit like speed dating.

But it’s not just a partnering meeting.

This year’s BIO also sought to shine a light on pressing issues facing the industry. Among those tackled were elevating the discussion on gender diversity and how to bring more women to the board level; raising awareness around suicide and the need for more mental health treatments; giving a voice to patient advocacy groups; and highlighting the need for access to treatments in developing nations.

Four days of meetings and panel discussions are unlikely to move the needle for many of these challenges, but debate can be the first step toward progress.

I attended the meetings on June 4–6, 2018 and covered the sessions I attended in real time. At the link below, Tweets, Retweets and Likes mirror the feelings and opinions of the attendees as expressed in real time on the Twitter.com platform. This BioTechnology event manifested the AUTHENTICITY of Careers in the Digital Age – An AGE like no Other, also known as, DIGITAL.

The entire event is covered on Twitter.com under the following hashtag and two handles:


I covered the events on two tracks via two Twitter handles, each with its own followers:

The official LPBI Group Twitter.com account

The Aviva Lev-Ari, PhD, RN Twitter.com account

Track A:

  • Original Tweets by @Pharma_BI and by @AVIVA1950 for #BIO2018 @IAmBiotech @BIOConvention – BIO 2018, Boston, June 4-7, 2018, BCEC

Curator: Aviva Lev-Ari, PhD, RN



  • Reactions to Original Tweets by @Pharma_BI and by @AVIVA1950 from #BIO2018

Curator: Aviva Lev-Ari, PhD, RN


Track B:

  • Re-Tweets and Likes by @Pharma_BI and by @AVIVA1950 from #BIO2018 @IAmBiotech @BIOConvention – BIO 2018, Boston, June 4-7, 2018, BCEC

Curator: Aviva Lev-Ari, PhD, RN


Part 2: Top 10 books to help you survive the digital age

From Philip K Dick’s obtuse robots to Mark O’Connell’s guide to transhumanism, novelist Julian Gough picks essential reading for a helter skelter world

Here are 10 of the books that did help me [novelist Julian Gough]: they might also help you understand, and survive, our complicated, stressful, digital age.

  1. Marshall McLuhan Unbound by Marshall McLuhan (2005)
    The visionary Canadian media analyst predicted the internet, and coined the phrase the Global Village, in the early 1960s. His dense, complex, intriguing books explore how changes in technology change us. This book presents his most important essays as 20 slim pamphlets in a handsome, profoundly physical, defiantly non-digital slipcase.
  2. Ubik by Philip K Dick (1969)
    Pure pulp SF pleasure; a deep book disguised as a dumb one. Dick shows us, not a dystopia, but a believably shabby, amusingly human future. The everyman hero, Joe Chip, wakes up and argues with his robot toaster, which refuses to toast until he sticks a coin in the slot. Joe can’t do this, because he’s broke. He then has a stand-up row with his robot front door, which won’t open, because he owes it money too … Technology changes: being human, and broke, doesn’t. Warning: Dick wrote Ubik at speed, on speed. But embedded in the pulpy prose are diamonds of imagery that will stay with you for ever.
  3. The Singularity Is Near by Ray Kurzweil (2005)
    This book is what Silicon Valley has instead of a bible. It’s a visionary work that predicts a technological transformation of the world in our lifetime. Kurzweil argues that computer intelligence will soon outperform human thought. We will then encode our minds, upload them, and become one with our technology, achieving the Singularity. At which point, the curve of technological progress starts to go straight up. Ultimately – omnipotent, no longer mortal, no longer flesh – we transform all the matter in the universe into consciousness; into us.
  4. To Be a Machine by Mark O’Connell (2017)
    This response to Kurzweil won this year’s Wellcome prize. It’s a short, punchy tour of transhumanism: the attempt to meld our minds with machines, to transcend biology and escape death. He meets some of the main players, and many on the fringes, and listens to them, quizzically. It is a deliberately, defiantly human book, operating in that very modern zone between sarcasm and irony, where humans thrive and computers crash.
  5. A Visit from the Goon Squad by Jennifer Egan (2011)
    This intricately structured, incredibly clever novel moves from the 60s right through to a future maybe 15 years from now. It steps so lightly into that future you hardly notice the transition. It has sex and drugs and rock’n’roll, solar farms, social media scams and a stunningly moving chapter written as a PowerPoint presentation. It’s a masterpiece. Life will be like this.
  6. What Technology Wants by Kevin Kelly (2010)
    Kelly argues that we scruffy biological humans are no longer driving technological progress. Instead, the technium, “the greater, global, massively interconnected system of technology vibrating around us”, is now driving its own progress, faster and faster, and we are just caught up in its slipstream. As we accelerate down the technological waterslide, there is no stopping now … Kelly’s vision of the future is scary, but it’s fun, and there is still a place for us in it.
  7. The Meme Machine by Susan Blackmore (1999)
    Blackmore expands powerfully and convincingly on Richard Dawkins’s original concept of the meme. She makes a forceful case that technology, religion, fashion, art and even our personalities are made of memes – ideas that replicate, mutate and thus evolve over time. We are their replicators (if you buy my novel, you’ve replicated its memes); but memes drive our behaviour just as we drive theirs. It’s a fascinating book that will flip your world upside down.
  8. Neuromancer by William Gibson (1984)
    In the early 1980s, Gibson watched kids leaning into the screens as they played arcade games. They wanted to be inside the machines, he realised, and they preferred the games to reality. In this novel, Gibson invented the term cyberspace; sparked the cyberpunk movement (to his chagrin); and vividly imagined the jittery, multi-screened, anxious, technological reality that his book would help call into being.
  9. You Are Not a Gadget: A Manifesto by Jaron Lanier (2010)
    Lanier, an intense, brilliant, dreadlocked artist, musician and computer scientist, helped to develop virtual reality. His influential essay Digital Maoism described early the downsides of online collective action. And he is deeply aware that design choices made by (mainly white, young, male) software engineers can shape human behaviour globally. He argues, urgently, that we need to question those choices, now, because once they are locked in, all of humanity must move along those tracks, and we may not like where they take us. Events since 2010 have proved him right. His manifesto is a passionate argument in favour of the individual voice, the individual gesture.
  10. All About Love: New Visions by bell hooks (2000)
    Not, perhaps, an immediately obvious influence on a near-future techno-thriller in which military drones chase a woman and her son through Las Vegas. But hooks’s magnificent exploration and celebration of love, first published 18 years ago, will be far more useful to us, in our alienated digital future, than the 10,000 books of technobabble published this year. All About Love is an intensely practical roadmap, from where we are now to where we could be. When Naomi and Colt find themselves on the run through a militarised American wilderness of spirit, when GPS fails them, bell hooks is their secret guide.



Part 3: A case study on Thriving at the Survival Calls during Careers in the Digital Age:  Aviva Lev-Ari, UCB, PhD’83; HUJI, MA’76


On June 10, 2018


The following is a case study of an alumna of HUJI and UC Berkeley as an inspirational role model: an alumna’s profile in the context of dynamic careers in the digital age. It is timely and relevant to PhD-level graduate students at UC Berkeley and beyond, and at other top-tier universities in the US and Europe, as presented in the following curations:

Professional Self Re-Invention: From Academia to Industry – Opportunities for PhDs in the Business Sector of the Economy



Pioneering implementations of analytics to business decision making: contributions to domain knowledge conceptualization, research design, methodology development, data modeling and statistical data analysis: Aviva Lev-Ari, UCB, PhD’83; HUJI, MA’76 



This alumna is Editor-in-Chief of a Journal that has another 173 articles on Scientist: Career Considerations



In a 5/22/2018 article, Ways to Pursue Science Careers in Business After a PhD, which includes unemployment figures for PhDs by field of science, Ankita Gurao identifies the following four alternative careers for PhDs in the non-academic world:

  1. Science Writer/Journalist/Communicator
  2. Science Management
  3. Science Administration
  4. Science Entrepreneurship

My career, as presented in Reflections on a Four-phase Career: Aviva Lev-Ari, PhD, RN, March 2018


has the following phases:

  • Phase 1: Research, 1973 – 1983
  • Phase 2: Corporate Applied Research in the US, 1985 – 2005
  • Phase 3: Career Reinvention in Health Care, 2005 – 2012
  • Phase 4: Electronic Scientific Publishing, 4/2012 to present

These four phases map readily onto the four alternative careers for PhDs in the non-academic world. One can draw parallels between the four career opportunities above (labeled A, B, C, D below) and each of the four phases of my own career.

Namely, I identified A, B, C, D as early as 1985, and pursued each of them in several institutional settings, as follows:

A. Science Writer/Journalist/Communicator – see link above for Phase 4: Electronic Scientific Publishing, 4/2012 to present 

B. Science Management – see link above for Phase 2: Corporate Applied Research in the US, 1985 – 2005 and Phase 3: Career Reinvention in Health Care, 2005 – 2012 

C. Science Administration – see link above for Phase 2: Corporate Applied Research in the US, 1985 – 2005 and Phase 4: Electronic Scientific Publishing, 4/2012 to present 

D. Science Entrepreneurship – see link above for Phase 4: Electronic Scientific Publishing, 4/2012 to present  

Impressions of My Days at Berkeley in Recollections: Part 1 and 2, below.

  • Recollections: Part 1 – My days at Berkeley, 9/1978 – 12/1983 – About my doctoral advisor, Allan Pred, other professors and other peers


  • Recollections: Part 2 – “While Rolling” is preceded by “While Enrolling” Autobiographical Alumna Recollections of Berkeley – Aviva Lev-Ari, PhD’83


The topic of Careers in the Digital Age is closely related to my profile, see chiefly: Four-phase Career, Reflections, Recollections Parts 1 & 2 and information from other biographical sources, below.

Other sources for my biography


Read Full Post »

Decline in Sperm Count – Epigenetics, Well-being and the Significance for Population Evolution and Demography


Dr. Marc Feldman, Expert Opinion on the significance of Sperm Count Decline on the Future of Population Evolution and Demography

Dr. Sudipta Saha, Effects of Sperm Quality and Quantity on Human Reproduction

Dr. Aviva Lev-Ari, Psycho-Social Effects of Poverty, Unemployment and Epigenetics on Male Well-being, Physiological Conditions affecting Sperm Quality and Quantity


UPDATED on 2/3/2018

Nobody Really Knows What Is Causing the Overdose Epidemic, But Here Are A Few Theories



A recent rigorous and comprehensive analysis found that Sperm Count (SC) declined 52.4% between 1973 and 2011 among unselected men from western countries, with no evidence of a ‘leveling off’ in recent years. A declining mean SC implies that an increasing proportion of men have sperm counts below any given threshold for sub-fertility or infertility. The high proportion of men from western countries with concentrations below 40 million/ml is particularly concerning, given the evidence that SC below this threshold is associated with a decreased monthly probability of conception.

1. Temporal trends in sperm count: a systematic review and meta-regression analysis 

Hagai Levine, Niels Jørgensen, Anderson Martino‐Andrade, Jaime Mendiola, Dan Weksler-Derri, Irina Mindlis, Rachel Pinotti, Shanna H Swan. Human Reproduction Update, July 25, 2017, doi:10.1093/humupd/dmx022.

Link: https://academic.oup.com/humupd/article-lookup/doi/10.1093/humupd/dmx022.

2. Sperm Counts Are Declining Among Western Men – Interview with Dr. Hagai Levine


3. Trends in Sperm Count – Biological Reproduction Observations

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

4. Long, mysterious strips of RNA contribute to low sperm count – Long non-coding RNAs can be added to the group of possible non-structural effects, possibly epigenetic, that might regulate sperm counts.



Dynamic expression of long non-coding RNAs reveals their potential roles in spermatogenesis and fertility

Published: 29 July 2017
Thus, we postulated that some lncRNAs may also impact mammalian spermatogenesis and fertility. In this study, we identified a dynamic expression pattern of lncRNAs during murine spermatogenesis. Importantly, we identified a subset of lncRNAs and very few mRNAs that appear to escape meiotic sex chromosome inactivation (MSCI), an epigenetic process that leads to the silencing of the X- and Y-chromosomes at the pachytene stage of meiosis. Further, some of these lncRNAs and mRNAs show strong testis expression pattern suggesting that they may play key roles in spermatogenesis. Lastly, we generated a mouse knock out of one X-linked lncRNA, Tslrn1 (testis-specific long non-coding RNA 1), and found that males carrying a Tslrn1 deletion displayed normal fertility but a significant reduction in spermatozoa. Our findings demonstrate that dysregulation of specific mammalian lncRNAs is a novel mechanism of low sperm count or infertility, thus potentially providing new biomarkers and therapeutic strategies.

This article presents two perspectives on the potential effects of Sperm Count decline.

One Perspective identifies Epigenetics and male well-being conditions

  1. as a potential explanation for the Sperm Count decline, and
  2. as evidence for the decline in White male longevity in certain geographies in the US since the mid-1980s.

The other Perspective evaluates whether Sperm Count decline would have significant long-term effects on Population Evolution and Demography.

The Voice of Prof. Marc Feldman, Stanford University – Long term significance of Sperm Count Decline on Population Evolution and Demography

Poor sperm count appears to be associated with such demographic statistics as life expectancy (1), infertility (2), and morbidity (3,4). The meta-analysis by Levine et al. (5) focuses on the change in sperm count of men from North America, Europe, Australia, and New Zealand, and shows a more than 50% decline between 1973 and 2011. Although there is no analysis of potential environmental or lifestyle factors that could contribute to the estimated decline in sperm count, Levine et al. speculate that this decline could be a signal for other negative changes in men’s health.

Because this study focuses mainly on Western men, this remarkable decline in sperm count is difficult to associate with any change in actual fertility, that is, number of children born per woman. The total fertility rate in Europe, especially Italy, Spain, and Germany, has slowly declined, but age at first marriage has increased at the same time, and this increase may be more due to economic factors than physiological changes.

Included in Levine et al.’s analysis was a set of data from “Other” countries from South America, Asia, and Africa. Sperm count in men from these countries did not show significant trends, which is interesting because there have been strong fertility declines in Asia and Africa over the same period, with corresponding increases in life expectancy (once HIV is accounted for).

What can we say about the evolutionary consequences for humans of this decrease? The answer depends on the minimal number of sperm/ml/year that would be required to maintain fertility (per woman) at replacement level, say 2.1 children, over a woman’s lifetime. Given the smaller number of ova produced per woman, a change in the ovulation statistics of women would be likely to play a larger role in the total fertility rate than the number of sperm/ejaculate/man. In other words, sperm count alone, absent other effects on mortality during male reproductive years, is unlikely to tell us much about human evolution.

Further, the major declines in fertility over the 38-year period covered by Levine et al. occurred in China, India, and Japan. Chinese fertility has declined to less than 1.5 children per woman, and in Japan it has also been well below 1.5 for some time. These declines have been due to national policies and economic changes, and are therefore unlikely to signal genetic changes that would have evolutionary ramifications. It is more likely that cultural changes will continue to be the main drivers of fertility change.

The fastest growing human populations are in the Muslim world, where fertility control is not nearly as widely practiced as in the West or Asia. If this pattern were to continue for a few more generations, the cultural evolutionary impact would swamp any effects of potentially declining sperm count.

On the other hand, if the decline in sperm count were to be discovered to be associated with genetic and/or epigenetic phenotypic effects on fetuses, newborns, or pre-reproductive humans, for example, due to stress or obesity, then there would be cause to worry about long-term evolutionary problems. As Levine et al. remark, “decline in sperm count might be considered as a ‘canary in the coal mine’ for male health across the lifespan”. But to date, there is little evidence that the evolutionary trajectory of humans constitutes such a “coal mine”.


  1. Jensen TK, Jacobsen R, Christensen K, Nielsen NC, Bostofte E. 2009. Good semen quality and life expectancy: a cohort study of 43,277 men. Am J Epidemiol 170: 559-565.
  2. Eisenberg ML, Li S, Behr B, Cullen MR, Galusha D, Lamb DJ, Lipshultz LI. 2014. Semen quality, infertility and mortality in the USA. Hum Reprod 29: 1567-1574.
  3. Eisenberg ML, Li S, Cullen MR, Baker LC. 2016. Increased risk of incident chronic medical conditions in infertile men: analysis of United States claims data. Fertil Steril 105: 629-636.
  4. Latif T, Kold Jensen T, Mehlsen J, Holmboe SA, Brinth L, Pors K, Skouby SO, Jorgensen N, Lindahl-Jacobsen R. Semen quality is a predictor of subsequent morbidity. A Danish cohort study of 4,712 men with long-term follow-up. Am J Epidemiol. doi: 10.1093/aje/kwx067. [Epub ahead of print]
  5. Levine H, Jorgensen N, Martino-Andrade A, Mendiola J, Weksler-Derri D, Mindlis I, Pinotti R, Swan SH. 2017. Temporal trends in sperm count: a systematic review and meta-regression analysis. Hum Reprod Update pp. 1-14. doi: 10.1093/humupd/dmx022.


From: Marcus W Feldman <mfeldman@stanford.edu>

Date: Monday, July 31, 2017 at 8:10 PM

To: Aviva Lev-Ari <aviva.lev-ari@comcast.net>

Subject: Fwd: text of sperm count essay

Psycho-Social Effects of Poverty, Unemployment and Epigenetics on Male Well-being, Physiological Conditions as POTENTIAL effects on Sperm Quality and Quantity and Evidence of its effects on Male Longevity

The Voice of Carol Graham, Sergio Pinto, and John Juneau II, Monday, July 24, 2017, Report from the Brookings Institution

  1. The IMPACT of Well-being, Stress induced by Worry, Pain, and Perception of Hope related to Employment and Lack of Employment on the deterioration of Physiological Conditions, as evidenced by Decreased Longevity

  2. Epigenetics and Environmental Factors

The geography of desperation in America

Carol Graham, Sergio Pinto, and John Juneau II, Monday, July 24, 2017, Report from the Brookings Institution

In recent work based on our well-being metrics in the Gallup polls and on the mortality data from the Centers for Disease Control and Prevention, we find a robust association between lack of hope (and high levels of worry) among poor whites and the premature mortality rates, both at the individual and metropolitan statistical area (MSA) levels. Yet we also find important differences across places. Places come with different economic structures and identities, community traits, physical environments and much more. In the maps below, we provide a visual picture of the differences in hope for the future, worry, and pain across race-income cohorts across U.S. states. We attempted to isolate the specific role of place, controlling for economic, socio-demographic, and other variables.

One surprise is the low level of optimism and high level of worry in the minority dense and generally “blue” state of California, and high levels of pain and worry in the equally minority dense and “blue” states of New York and Massachusetts. High levels of income inequality in these states may explain these patterns, as may the nature of jobs that poor minorities hold.

We cannot answer many questions at this point. What is it about the state of Washington, for example, that is so bad for minorities across the board? Why is Florida so much better for poor whites than it is for poor minorities? Why is Nevada “good” for poor white optimism but terrible for worry for the same group? One potential issue—which will enter into our future analysis—is racial segregation across places. We hope that the differences that we have found will provoke future exploration. Readers of this piece may have some contributions of their own as they click through the various maps, and we welcome their input. Better understanding the role of place in the “crisis” of despair facing our country is essential to finding viable solutions, as economic explanations, while important, alone are not enough.
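The “robust association … controlling for economic, socio-demographic, and other variables” described above is, in essence, a multiple regression with control covariates. A minimal sketch of that idea on simulated data (all numbers and variable names here are hypothetical, not the Gallup/CDC data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical individual-level data: worry is correlated with low income,
# and both independently affect a mortality-risk score.
income = rng.normal(size=n)                       # control: economic status
worry = -0.5 * income + rng.normal(size=n)        # worry rises as income falls
risk = 0.3 * worry - 0.4 * income + rng.normal(scale=0.5, size=n)

# OLS with a control: regress risk on worry AND income, not worry alone,
# so the worry coefficient is net of the economic confounder.
X = np.column_stack([np.ones(n), worry, income])
beta = np.linalg.lstsq(X, risk, rcond=None)[0]
print(beta[1], beta[2])  # worry effect net of income; income effect
```

The point of the design: without the income column, the worry coefficient would absorb part of income’s effect and overstate the association.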



Read Full Post »

Trends in Sperm Count

Reporter and Curator: Dr. Sudipta Saha, Ph.D.


There has been a genuine decline in semen quality over the past 50 years. There is a lot of controversy about this, as there are limitations in the studies that have attempted to address it. Sperm count is of considerable public health importance for several reasons. First, sperm count is closely linked to male fecundity and is a crucial component of semen analysis, the first step in identifying male factor infertility.

Reduced sperm count is associated with cryptorchidism, hypospadias and testicular cancer. It may be associated with multiple environmental influences, including endocrine disrupting chemicals, pesticides, heat and lifestyle factors, including diet, stress, smoking and BMI. Therefore, sperm count may sensitively reflect the impacts of the modern environment on male health throughout the life span.

This study provided a systematic review and meta-regression analysis of recent trends in sperm counts as measured by sperm concentration (SC) and total sperm count (TSC), and their modification by fertility and geographic group. Analyzing trends by birth cohorts instead of year of sample collection may aid in assessing the causes of the decline (prenatal or in adult life) but was not feasible owing to lack of information.

This rigorous and comprehensive analysis found that SC declined 52.4% between 1973 and 2011 among unselected men from western countries, with no evidence of a ‘leveling off’ in recent years. Declining mean SC implies that an increasing proportion of men have sperm counts below any given threshold for sub-fertility or infertility. The high proportion of men from western countries with concentration below 40 million/ml is particularly concerning given the evidence that SC below this threshold is associated with a decreased monthly probability of conception.

Declines in sperm count have implications beyond fertility and reproduction. The decline reported in this study is consistent with reported trends in other male reproductive health indicators, such as testicular germ cell tumors, cryptorchidism, onset of male puberty and total testosterone levels. The public health implications are even wider. Recent studies have shown that poor sperm count is associated with overall morbidity and mortality. While the current study is not designed to provide direct information on the causes of the observed declines, sperm count has been plausibly associated with multiple environmental and lifestyle influences, both prenatally and in adult life. In particular, endocrine disruption from chemical exposures or maternal smoking during critical windows of male reproductive development may play a role in prenatal life, while lifestyle changes and exposure to pesticides may play a role in adult life.

These findings strongly suggest a significant decline in male reproductive health, which has serious implications beyond fertility concerns. Research on causes and implications of this decline is urgently needed.
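The 52.4% figure comes from a meta-regression: fitting study-level mean sperm concentrations against sampling year, weighting larger studies more heavily. A rough illustration of that mechanic on hypothetical study-level data (this is not the Levine et al. dataset or their actual model, which also adjusts for fertility status, geography, and other covariates):

```python
import numpy as np

# Hypothetical study-level data: sampling year, mean sperm concentration
# (million/ml), and the study's sample size used as the regression weight.
years = np.array([1975.0, 1982.0, 1990.0, 1998.0, 2005.0, 2011.0])
mean_sc = np.array([99.0, 92.0, 78.0, 68.0, 55.0, 47.0])
n_men = np.array([120.0, 200.0, 150.0, 310.0, 260.0, 400.0])

# Weighted least squares: scale each row by sqrt(weight), then solve by lstsq.
X = np.column_stack([np.ones_like(years), years])
w = np.sqrt(n_men)
intercept, slope = np.linalg.lstsq(X * w[:, None], mean_sc * w, rcond=None)[0]

# Fitted decline between 1973 and 2011, as a percentage of the 1973 level.
pred_1973 = intercept + slope * 1973
pred_2011 = intercept + slope * 2011
decline_pct = 100.0 * (pred_1973 - pred_2011) / pred_1973
print(round(slope, 2), round(decline_pct, 1))
```

With these made-up inputs the slope is negative (a loss of roughly 1.5 million/ml per year) and the fitted 1973–2011 decline lands in the same ballpark as the published estimate, which is the point of the exercise, not a reproduction of it.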



Temporal trends in sperm count: a systematic review and meta-regression analysis 

Hagai Levine, Niels Jørgensen, Anderson Martino‐Andrade, Jaime Mendiola, Dan Weksler-Derri, Irina Mindlis, Rachel Pinotti, Shanna H Swan. Human Reproduction Update, July 25, 2017, doi:10.1093/humupd/dmx022.

Link: https://academic.oup.com/humupd/article-lookup/doi/10.1093/humupd/dmx022.

Sperm Counts Are Declining Among Western Men – Interview with Dr. Hagai Levine


J Urol. 1983 Sep;130(3):467-75.

A critical method of evaluating tests for male infertility.


Hum Reprod. 1993 Jan;8(1):65-70.

Estimating fertility potential via semen analysis data.


Lancet. 1998 Oct 10;352(9135):1172-7.

Relation between semen quality and fertility: a population-based study of 430 first-pregnancy planners.


Hum Reprod Update. 2010 May-Jun;16(3):231-45. doi: 10.1093/humupd/dmp048. Epub 2009 Nov 24.

World Health Organization reference values for human semen characteristics.


J Nutr. 2016 May;146(5):1084-92. doi: 10.3945/jn.115.226563. Epub 2016 Apr 13.

Intake of Fruits and Vegetables with Low-to-Moderate Pesticide Residues Is Positively Associated with Semen-Quality Parameters among Young Healthy Men.


Reprod Toxicol. 2003 Jul-Aug;17(4):451-6.

Semen quality of Indian welders occupationally exposed to nickel and chromium.


Fertil Steril. 1996 May;65(5):1009-14.

Semen analyses in 1,283 men from the United States over a 25-year period: no decline in quality.


Read Full Post »

A New Computational Method illuminates the Heterogeneity and Evolutionary Histories of cells within a Tumor

Reporter: Aviva Lev-Ari, PhD, RN


Start Quote

Numerous computational approaches aimed at inferring tumor phylogenies from single or multi-region bulk sequencing data have recently been proposed. Most of these methods utilize the variant allele fraction or cancer cell fraction for somatic single-nucleotide variants restricted to diploid regions to infer a two-state perfect phylogeny, assuming an infinite-site model such that each site can mutate only once and persists. In practice, convergent evolution could result in the acquisition of the same mutation more than once, thereby violating this assumption. Similarly, mutations could be lost due to loss of heterozygosity. Indeed, both single-nucleotide variants and copy number alterations arise during tumor evolution, and both the variant allele fraction and cancer cell fraction depend on the copy number state whose inference reciprocally relies on the relative ordering of these alterations such that joint analysis can help resolve their ancestral relationship (Figure 1). To tackle this outstanding problem, El-Kebir et al. (2016) formulated the multi-state perfect phylogeny mixture deconvolution problem to infer clonal genotypes, clonal fractions, and phylogenies by simultaneously modeling single-nucleotide variants and copy number alterations from multi-region sequencing of individual tumors. Based on this framework, they present SPRUCE (Somatic Phylogeny Reconstruction Using Combinatorial Enumeration), an algorithm designed for this task. This new approach uses the concept of a ‘‘character’’ to represent the status of a variant in the genome.

Commonly, binary characters have been used to represent single-nucleotide variants— that is, the variant is present or absent. In contrast, El-Kebir et al. use multi-state characters to represent copy number alterations, which may be present in zero, one, two, or more copies in the genome.

SPRUCE outperforms existing methods on simulated data, yielding higher recall rates under a variety of scenarios. Moreover, it is more robust to noise in variant allele frequency estimates, which is a significant feature of tumor genome sequencing data. Importantly, El-Kebir and colleagues demonstrate that there is often an ensemble of phylogenetic trees consistent with the underlying data. This uncertainty calls for caution in deriving definitive conclusions about the evolutionary process from a single solution.”

End Quote
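The quoted passage notes that the variant allele fraction (VAF) of a mutation depends on tumor purity, the cancer cell fraction (CCF) of the clone carrying it, and the local copy number state. A minimal sketch of that relationship, using a standard simplification rather than SPRUCE’s actual model (the function name is hypothetical):

```python
def expected_vaf(purity, ccf, mutated_copies, tumor_copy_number,
                 normal_copy_number=2):
    """Expected variant allele fraction of a somatic SNV.

    Mutant reads come only from tumor cells belonging to the mutated clone;
    total reads at the locus come from tumor cells plus admixed normal cells.
    """
    mutant = purity * ccf * mutated_copies
    total = purity * tumor_copy_number + (1.0 - purity) * normal_copy_number
    return mutant / total

# A clonal heterozygous SNV in a diploid region of a 100%-pure tumor
# sits at the textbook VAF of 0.5:
print(expected_vaf(purity=1.0, ccf=1.0, mutated_copies=1,
                   tumor_copy_number=2))
```

This is why VAF alone cannot separate clonal fraction from copy number: the same observed VAF can arise from one mutated copy in a large clone or two mutated copies in a smaller one, which is the ambiguity joint inference methods such as SPRUCE set out to resolve.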


From Original Paper

Inferring Tumor Phylogenies from Multi-region Sequencing

Zheng Hu (1,2) and Christina Curtis (1,2,*)

1 Departments of Medicine and Genetics

2 Stanford Cancer Institute

Stanford University School of Medicine, Stanford, CA 94305, USA

* Correspondence: cncurtis@stanford.edu


Read Full Post »

Warfarin and Dabigatran, Similarities and Differences

Author and Curator: Danut Dragoi, PhD


What do anticoagulants do?

An anticoagulant helps your body control how fast your blood clots; therefore, it prevents clots from forming inside your arteries, veins or heart during certain medical conditions.

If you have a blood clot, an anticoagulant may prevent the clot from getting larger. It also may prevent a piece of the clot from breaking off and traveling to your lungs, brain or heart. The anticoagulant medication does not dissolve the blood clot. With time, however, this clot may dissolve on its own.

Blood tests you will need

The blood tests for clotting time are called prothrombin time (Protime, PT) and international normalized ratio (INR). These tests help determine if your medication is working. The tests are performed at a laboratory, usually once a week to once a month, as directed by your doctor. Your doctor will help you decide which laboratory you will go to for these tests.

The test results help the doctor decide the dose of warfarin (Coumadin) that you should take to keep a balance between clotting and bleeding.

Important things to keep in mind regarding blood tests include:

  • Have your INR checked when scheduled.
  • Go to the same laboratory each time. (There can be a difference in results between laboratories).
  • If you are planning a trip, talk with your doctor about using another laboratory while traveling.

The dose of medication usually ranges from 1 mg to 10 mg once daily. The doctor will prescribe one strength and change the dose as needed (your dose may be adjusted with each INR).

The tablet is scored and breaks in half easily. For example: if your doctor prescribes a 5 mg tablet and then changes the dose to 2.5 mg (2½ mg), which is half the strength, you should break one of the 5 mg tablets in half and take the half-tablet. If you have any questions about your dose, talk with your doctor or pharmacist.

What warfarin (Coumadin) tablets look like

Warfarin is made by several different drug manufacturers and is available in many different shapes. Each color represents a different strength, measured in milligrams (mg). Each tablet has the strength imprinted on one side, and is scored so you can break it in half easily to adjust your dose as your doctor instructed.


Today, on the basis of 4 clinical trials involving over 9,000 patients, PRADAXA is approved to treat blood clots in the veins of your legs (deep vein thrombosis, or DVT) or lungs (pulmonary embolism, or PE) in patients who have been treated with blood thinner injections, and to reduce the risk of these clots occurring again.

In these trials, PRADAXA was compared to warfarin or to placebo (sugar pills) for the treatment of DVT and PE patients.


Warfarin (NB: sold under the brand name Coumadin, see link in here) reduces the risk of stroke in patients with atrial fibrillation (NB: atrial fibrillation, also called AFib or AF, is a quivering or irregular heartbeat (arrhythmia) that can lead to blood clots, stroke, heart failure and other heart-related complications; some people refer to AF as a quivering heart, see link here), but it increases the risk of hemorrhage and is difficult to use.

Dabigatran is a new oral direct thrombin inhibitor (NB: direct thrombin inhibitors are a class of anticoagulants that act by directly inhibiting the enzyme thrombin; some are in clinical use, while others are undergoing clinical development), see link in here.

Large international clinical trials (see link in here) show that, in patients with atrial fibrillation, dabigatran given at a dose of 110 mg was associated with rates of stroke and systemic embolism similar to those associated with warfarin, as well as lower rates of major hemorrhage. Dabigatran administered at a dose of 150 mg, as compared with warfarin, was associated with lower rates of stroke and systemic embolism but similar rates of major hemorrhage.

The picture below shows a deep vein thrombosis, which is a blood clot that forms inside a vein, usually deep within the leg. About half a million Americans get one every year, and up to 100,000 die because of it. The danger is that part of the clot can break off and travel through your bloodstream. It could get stuck in your lungs and block blood flow, causing organ damage or death; see link in here.

Blood Clot

Image SOURCE: http://www.webmd.com/heart-disease/guide/warfarin-other-blood-thinners

The behaviour of blood thinning drugs depends on their physico-chemical properties, and since a significant proportion of drugs contain ionisable centers, a knowledge of their pKa is essential (NB: pKa was introduced as an index to express the acidity of weak acids. For example, the Ka constant for acetic acid (CH3COOH) is 0.0000158 (= 10^-4.8), but the pKa constant is 4.8, which is a simpler expression. In addition, the smaller the pKa value, the stronger the acid, see link in here), see link in here. The pKa is defined as the negative log of the dissociation constant, see link in here:

pKa = -log10(Ka)              (1)

where the dissociation constant, for a weak acid HA dissociating into H+ and A-, is defined thus:

Ka = [H+][A-] / [HA]              (2)
Most drugs have pKa values in the range 0–12, and whilst it is possible to calculate pKa, it is desirable to measure the value experimentally for representative examples. A number of instruments are capable of measuring pKa, such as the Sirius T3 instrument; see link in here.
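Equation (1) is straightforward to check numerically. Below is a minimal Python sketch (standard library only) that converts Ka to pKa and back, using the acetic acid value quoted above and the warfarin pKa from Table 1:

```python
import math

def pka_from_ka(ka: float) -> float:
    """pKa = -log10(Ka), as in Eq. (1)."""
    return -math.log10(ka)

def ka_from_pka(pka: float) -> float:
    """Inverse relation: Ka = 10**(-pKa)."""
    return 10.0 ** (-pka)

# Acetic acid: Ka ~ 1.58e-5 (i.e. 10^-4.8), so pKa ~ 4.8
print(round(pka_from_ka(1.58e-5), 1))   # -> 4.8
# Warfarin (pKa 4.99) back to its Ka
print(f"{ka_from_pka(4.99):.2e}")       # -> 1.02e-05
```

The round trip confirms the simple log relationship: a one-unit drop in pKa corresponds to a tenfold stronger acid.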

Table 1 below shows the pKa values for warfarin (see link in here) and dabigatran (see link in here).

Table 1

Anticoagulant      pKa (acidic)      pKa (basic)
warfarin           4.99              n/a
dabigatran         4.24              11.51*

* Dabigatran possesses both acidic and basic functionality. Both groups are ionized at blood pH, and the molecule exists as a zwitterionic structure; see link in here.

Considering the physico-chemical properties of the anticoagulants used to "dissolve" blood clots is important for a better understanding of how these drugs unblock the veins.
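The zwitterion footnote to Table 1 can be made quantitative with the Henderson–Hasselbalch relation, which gives the ionized fraction of each group at a given pH. A minimal sketch (plain Python; blood pH is taken as 7.4, a standard physiological value not stated in the text):

```python
BLOOD_PH = 7.4

def frac_ionized_acid(pka: float, ph: float = BLOOD_PH) -> float:
    """Fraction of an acidic group present as A- (Henderson-Hasselbalch)."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

def frac_ionized_base(pka: float, ph: float = BLOOD_PH) -> float:
    """Fraction of a basic group protonated (pKa of the conjugate acid)."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

# Dabigatran (Table 1): acidic pKa 4.24, basic pKa 11.51
print(f"acidic group ionized: {frac_ionized_acid(4.24):.4f}")   # -> 0.9993
print(f"basic group ionized:  {frac_ionized_base(11.51):.4f}")  # -> 0.9999
# Warfarin (pKa 4.99) is likewise almost fully ionized at blood pH
print(f"warfarin ionized:     {frac_ionized_acid(4.99):.3f}")   # -> 0.996
```

Both of dabigatran's groups sit several pH units away from blood pH on the ionizing side, which is why it exists as a zwitterion in circulation.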











Other related articles published in this Open Access Online Scientific Journal, include the following:

  • Coagulation (N = 69)
  • Peripheral Arterial Disease (N = 43)
  • Antiarrhythmic drugs
  • Electrophysiology (N = 80)



Read Full Post »

Huge Data Network Bites into Cancer Genomics

Larry H. Bernstein, MD, FCAP, Curator



Closer to a Cure for Gastrointestinal Cancer

Suzanne Tracy, Editor-in-Chief, Scientific Computing and HPC Source

In order to streamline workflows and keep pace with data-intensive discovery demands, CCS integrated its HPC environment with data capture and analytics capabilities, allowing data to move transparently between research steps, and driving discoveries such as a link between certain viruses and gastrointestinal cancers.


SANTA CLARA, CA — At the University of Miami’s Center for Computational Science (CCS), more than 2,000 internal researchers and a dozen expert collaborators across academic and industry sectors worldwide are working together in workflow management, data management, data mining, decision support, visualization and cloud computing. CCS maintains one of the largest centralized academic cyberinfrastructures in the country, which fuels vital and critical discoveries in Alzheimer’s, Parkinson’s, gastrointestinal cancer, paralysis and climate modeling, as well as marine and atmospheric science research.

In order to streamline workflows and keep pace with data-intensive discovery demands, CCS integrated its high performance computing (HPC) environment with data capture and analytics capabilities, allowing data to move transparently between research steps. To speed scientific discoveries and boost collaboration with researchers around the world, the center deployed high-performance DataDirect Networks (DDN) GS12K scale-out file storage. CCS now relies on GS12K storage to handle bandwidth-driven workloads while serving very high IOPS demand resulting from intense user interaction, which simplifies data capture and analysis. As a result, the center is able to capture, store and distribute massive amounts of data generated from multiple scientific models running different simulations on 15 Illumina HiSeq sequencers simultaneously on DDN storage. Moreover, number-crunching time for genome mapping and SNP calling has been reduced from 72 to 17 hours.

“DDN enabled us to analyze thousands of samples for the Cancer Genome Atlas, which amounts to nearly a petabyte of data,” explained Dr. Nicholas Tsinoremas, director of the Center for Computational Sciences at the University of Miami. “Having a robust storage platform like DDN is essential to driving discoveries, such as our recent study that revealed a link between certain viruses and gastrointestinal cancers. Previously, we couldn’t have done that level of computation.”

In addition to providing significant storage processing power to meet both high I/O and interactive processing requirements, CCS needed a flexible file system that could support large parallel and short serial jobs. The center also needed to address “data in flight” challenges that result from major data surges during analysis, and which often cause a 10x spike in storage. The system’s performance for genomics assembly, alignment and mapping is enabling CCS to support all its application needs, including the use of BWA and Bowtie for initial mapping, as well as SamTools and GATK for variant analysis and SNP calling.
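The tools named above are typically chained into a single mapping and variant-calling workflow. The sketch below only assembles one conventional ordering of the commands (it does not execute them); all file names and the thread count are hypothetical, and the actual CCS pipeline may differ:

```python
# Build (but do not run) a conventional BWA -> SamTools -> GATK workflow.
# All file names here are hypothetical placeholders.
ref, r1, r2, sample = "ref.fa", "s_R1.fq.gz", "s_R2.fq.gz", "sample1"

steps = [
    ["bwa", "mem", "-t", "16", ref, r1, r2],                   # map paired reads
    ["samtools", "sort", "-o", f"{sample}.sorted.bam", "-"],   # sort alignments
    ["samtools", "index", f"{sample}.sorted.bam"],             # index the BAM
    ["gatk", "HaplotypeCaller", "-R", ref,                     # variant/SNP calling
     "-I", f"{sample}.sorted.bam", "-O", f"{sample}.vcf.gz"],
]

for cmd in steps:
    print(" ".join(cmd))
```

In production the alignment output would be piped directly into the sort step; each of these stages is the kind of bandwidth- and IOPS-heavy job the article describes running across 15 sequencers at once.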

“Our arrangement is to share data or make it available to anyone asking, anywhere in the world,” added Tsinoremas. “Now, we have the storage versatility to attract researchers from both within and outside the HPC community … we’re well-positioned to generate, analyze and integrate all types of research data to drive major scientific discoveries and breakthroughs.”

About DDN

DataDirect Networks is a big data storage supplier to data-intensive, global organizations. For more than 15 years, the company has designed, developed, deployed and optimized systems, software and solutions that enable enterprises, service providers, universities and government agencies to generate more value and to accelerate time to insight from their data and information, on premise and in the cloud. Organizations leverage DDN technology and the technical expertise of its team to capture, store, process, analyze, collaborate and distribute data, information and content at the largest scale in the most efficient, reliable and cost-effective manner. DDN customers include financial services firms and banks, healthcare and life science organizations, manufacturing and energy companies, government and research facilities, and web and cloud service providers.


“Where DDN really stood out is in the ability to adapt to whatever we would need. We have both IOPS-centric storage and the deep, slower I/O pool at full bandwidth. No one else could do that.”

Joel P. Zysman

Director of High Performance Computing

Center for Computational Science at the University of Miami

The University of Miami maintains one of the largest centralized, academic, cyber infrastructures in the US, which is integral to addressing and solving major scientific challenges. At its Center for Computational Science (CCS), more than 2,000 researchers, faculty, staff and students across multiple disciplines collaborate on diverse and interdisciplinary projects requiring HPC resources.

With 50% of the center's users coming from the University of Miami's Miller School of Medicine, with ongoing projects at the Hussman Institute for Human Genomics, the explosion of next-generation sequencing has had a major impact on compute and storage demands. At CCS, the heavy I/O required to create four billion reads from one genome in a couple of days only intensifies when the data from those reads must be managed and analyzed.

Aside from providing sufficient storage power to meet both high I/O and interactive processing demands, CCS needed a powerful file system that was flexible enough to handle very large parallel jobs as well as smaller, shorter serial jobs. CCS also needed to address as much as 10X spikes in storage, so it was critical to scale and support petabytes of machine-generated data without adding a layer of complexity or creating inefficiencies.

Read their success story to learn how high-performance DDN® Storage I/O has helped the University of Miami:

Results

  • Establish links between certain viruses and gastrointestinal cancers, discovered with computation that was not possible before
  • Reduce genomics compute and analysis time from 72 to 17 hours

Challenges

  • Diverse, interdisciplinary research projects required massive compute and storage power as well as integrated data lifecycle movement and management
  • Highly demanding I/O and heavy interactivity requirements from next-gen sequencing intensified data generation, analysis and management
  • Handle large parallel jobs and smaller, shorter serial jobs
  • Data surges during analysis created "data-in-flight" challenges

Solution

An end-to-end, high-performance DDN GRIDScaler® solution featuring a GS12K™ scale-out appliance with an embedded IBM® GPFS™ parallel file system

Benefits

  • Centralized storage with an embedded file system makes it easy to add storage where needed (in the high-performance, high-transaction or slower storage pools) and then manage it all through a single pane of glass
  • DDN's transparent data movement enables using one platform for data capture, download, analysis and retention
  • The ability to maintain an active archive of storage lets the center accommodate different types of analytics with varied I/O needs

Read Full Post »

Diet and Exercise

Writer and Curator: Larry H. Bernstein, MD, FCAP 



In the last several decades there has been a transformation in the diet of Americans, and much debate about obesity, type 2 diabetes mellitus, hyperlipidemia, and the shift of medical practice toward a greater emphasis on preventive medicine. This occurs at a time when the Western countries are experiencing a large part of the obesity epidemic, which diverts attention from a larger share of malnutrition in parts of Africa, Asia, and, to a greater extent, India. This does not mean that obesity or malnutrition is exclusive to any part of the world. But there are factors at play that involve social conditions, poverty, education, cognition, anxiety, eating behaviors, food preferences and food balance, and activities of daily living. The epidemic of obesity also involves the development of serious long-term health problems, such as type 2 diabetes mellitus, sarcopenia, fracture risk, pulmonary disease (sleep apnea in particular), and cardiovascular and stroke risk. Nevertheless, this generation of Western society is also experiencing a longer life span than its predecessors. In this article I shall explore the published work on diet and exercise.


“Go4Life” exercise counseling, accelerometer feedback, and activity levels in older people

Warren G. Thompson, CL Kuhle, GA Koepp, SK McCrady-Spitzer, JA Levine
Archives of Gerontology and Geriatrics 58 (2014) 314–319

Older people are more sedentary than other age groups. We sought to determine whether providing an accelerometer with feedback about activity, and counseling older subjects using Go4Life educational material, would increase activity levels. Participants were recruited from independent living areas within assisted living facilities and the general public in the Rochester, MN area. 49 persons aged 65–95 (79.5 ± 7.0 years) who were ambulatory but sedentary and overweight participated in this randomized controlled crossover trial for one year. After a baseline period of 2 weeks, group 1 received an accelerometer and counseling using Go4Life educational material (www.Go4Life.nia.nih.gov) for 24 weeks and the accelerometer alone for the next 24 weeks. Group 2 had no intervention for the first 24 weeks and then received an accelerometer and Go4Life-based counseling for 24 weeks. There were no significant baseline differences between the two groups. The intervention was not associated with a significant change in activity, body weight, % body fat, or blood parameters (p > 0.05). Older (80–93) subjects were less active than younger (65–79) subjects (p = 0.003). Over the course of the 48-week study, an increase in activity level was associated with a decline in % body fat (p = 0.008). Increasing activity levels benefits older patients. However, providing an accelerometer and a Go4Life-based exercise counseling program did not result in a 15% improvement in activity levels in this elderly population. Alternate approaches to exercise counseling may be needed in elderly people of this age range.

It is generally recommended that older adults be moderately or vigorously active for 150 min each week. A systematic review demonstrated that only 20–60% of older people are achieving this goal. These studies determined adherence to physical activity recommendations by questionnaire. Using NHANES data, it has been demonstrated that older people meet activity recommendations 62% of the time using a self-report questionnaire compared to 9.6% of the time when measured by accelerometry. Thus, objective measures suggest that older people are falling even more short of the goal than previously thought. Most studies have measured moderate and vigorous activity. However, light activity or NEAT (non-exercise activity thermogenesis) also has an important effect on health. For example, increased energy expenditure was associated with lower mortality in community-dwelling older adults. More than half of the extra energy expenditure in the high energy expenditure group came from non-exercise (light) activity. In addition to reduced total mortality, increased light and moderate activity has been associated with better cognitive function, reduced fracture rate (Gregg et al., 1998), less cardiovascular disease, and weight loss in older people. A meta-analysis of middle-aged and older adults has demonstrated greater all-cause mortality with increased sitting time. Thus, any strategy which can increase activity (whether light or more vigorous) has the potential to save lives and improve quality of life for older adults. A variety of devices have been used to measure physical activity.

A tri-axial accelerometer measures movement in three dimensions. Studies comparing tri-axial accelerometers with uniaxial accelerometers and pedometers demonstrate that only certain tri-axial accelerometers provide a reliable assessment of energy expenditure. This is usually due to failure to detect light activity. Since light activity accounts for a substantial portion of older people’s energy expenditure, measuring activity with a questionnaire or measuring steps with a pedometer do not provide an accurate reflection of activity in older people.

A recent review concluded that there is only weak evidence that physical activity can be improved. Since increasing both light and moderate activity benefits older people, studies demonstrating that physical activity can be improved are urgently needed. Since accelerometry is the best way to accurately assess light activity, we performed a study to determine whether an activity counseling program, together with an accelerometer that gives feedback on physical activity, can result in an increase in light and moderate activity in older people. We also sought to determine whether counseling and accelerometer feedback would result in weight loss or changes in % body fat, glucose, hemoglobin A1c, insulin, and fasting lipid profile.

The main results of the study are both the experimental and control group lost weight (about 1 kg) at 6months (p = 0.04 and 0.02, respectively). The experimental group was less active at 6 months but not significantly while the control group was significantly less active at 6 months (p = 0.006) than at baseline. The experimental group had a modest decline in cholesterol (p = 0.03) and an improvement in Get Up & go time (p = 0.03) while the control group had a slight improvement in HgbA1c (p = 0.01). However, the main finding of the study was that there were no differences between the two groups on any of these variables. Thus, providing this group of older participants with an accelerometer and Go4Life based counseling resulted in no increase in physical activity, weight loss or change in glucose, lipids, blood pressure, or body fat. There were no differences within either group or between groups from 6 to 12 months on any of the variables (data not shown). While age was correlated with baseline activity, it did not affect activity change indicating that younger participants did not respond to the program better than older participants. Performance on the Get Up and Go test and season of the year did not influence the change in activity. There were no differences in physical activity levels at 3 or 9 months.

There was a significant correlation (r = -0.38, p = 0.006) between change in activity and change in body fat over the course of the study. Those subjects (whether in the experimental or control group) who increased their activity over the course of the year were likely to have a decline in % body fat over the year while those whose activity declined were likely to have increased %body fat. There was no correlation between change in activity and any of the other parameters including weight and waist circumference (data not shown).
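The reported association (r = -0.38, p = 0.006) is a Pearson correlation between change in activity and change in % body fat. A minimal standard-library sketch of the statistic, on invented data chosen only to show the same direction of effect:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Synthetic illustration: change in daily activity vs. change in % body fat.
activity_change = [120, -50, 300, 80, -200, 150, 10, -90]
body_fat_change = [-0.8, 0.4, -1.5, -0.2, 1.1, -0.6, 0.1, 0.7]

r = pearson_r(activity_change, body_fat_change)
print(f"r = {r:.2f}")  # negative: more activity, less body fat
```

On real crossover-trial data the correlation would of course be much weaker than in this toy example; the sign is what matches the study's finding.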

Older adults are the fastest growing segment of the population in the US, but few meet the minimum recommended 30 min of moderate activity on 5 days or more per week (Centers for Disease Control and Prevention, 2002). Our study found that within the geriatric population, activity declines as people age. We saw a 2.4% decline per year cross-sectionally. This finding agrees with a recent cohort study (Bachman et al., 2014). In that study, the annual decline accelerated with increasing age. Thus, there is a need to increase activity particularly in the oldest age groups. The United States Preventive Services Task Force concluded that the evidence that counseling improves physical activity is weak (Moyer and US Preventive Services Task Force, 2012). The American Heart Association reached similar conclusions (Artinian et al., 2010). Thus, new ways of counseling older patients to counter the natural decline in activity with age are urgently needed.

Applying health behavior theory to multiple behavior change: Considerations and approaches

Seth M. Noar, Melissa Chabot, Rick S. Zimmerman
Preventive Medicine 46 (2008) 275–280

Background. There has been a dearth of theorizing in the area of multiple behavior change. The purpose of the current article was to examine how health behavior theory might be applied to the growing research terrain of multiple behavior change. Methods. Three approaches to applying health behavior theory to multiple behavior change are advanced, including searching the literature for potential examples of such applications. Results. These three approaches to multiple behavior change include

(1) a behavior change principles approach;

(2) a global health/behavioral category approach, and

(3) a multiple behavioral approach.

Each approach is discussed and explicated and examples from this emerging literature are provided. Conclusions. Further study in this area has the potential to broaden our understanding of multiple behaviors and multiple behavior change. Implications for additional theory-testing and application of theory to interventions are discussed.

Many of the leading causes of death in the United States are behavior-related and thus preventable. While a number of health behaviors are a concern individually, increasingly the impact of multiple behavioral risks is being appreciated. As newer initiatives funded by the National Institutes of Health and Robert Wood Johnson Foundation begin to stimulate research in this important area, a critical question emerges: How can we understand multiple health behavior change from a theoretical standpoint? While multiple behavior change interventions are beginning to be developed and evaluated, to date there have been few efforts to garner a theory-based understanding of the process of multiple health behavior change. Given that so little theoretical work currently exists in this area, our main purpose is to advance the conversation on how health behavior theory can help us to achieve a greater understanding of multiple behavior change. The approaches discussed have implications for both theory-testing as well as intervention design.

A critical question that must be asked is whether there is a common set of principles of health behavior change that transcend individual health behaviors. This is an area where much data already exist, as health behavior theories have been tested across numerous health behaviors. The integration of findings from studies across diverse behavioral areas is not what it could be. Godin and Kok (1996) reviewed studies of the TPB applied to numerous health-related behaviors. Across seven categories of health behaviors, they found TPB components to offer similar prediction of intention but inconsistent prediction of behavior. They concluded that the nature of differing health behaviors may require additional constructs to be added to the TPB, such as actual (versus perceived) behavioral control. Prochaska et al. (1994) examined decisional balance across stages of change for 12 health-related behaviors. Similar patterns were found across nearly all of these health behaviors, with the "pros" of changing generally increasing across the stages, the "cons" decreasing, and a pro/con crossover occurring in the contemplation or preparation stages of change. Prochaska et al. (1994) concluded that clear commonalities exist across these differing health behaviors, which were examined in differing samples. Finally, Rosen (2000) examined change processes from the TTM across six behavioral categories, examining whether the trajectory of change processes is similar or different across stages of change in those health areas. He found that for smoking cessation, cognitive change processes were used more in earlier stages of change than behavioral processes, while for physical activity and dietary change, both categories of change processes increased together.

A second approach is the following: rather than applying theoretical concepts to specific behaviors, such concepts might be applied at the general or global level. A general orientation toward health may not lead directly to specific health behaviors, but it may increase the chances of particular health-related attitudes, which may in turn lead to specific health behaviors. In fact, although Ajzen and Timko (1986) found general health attitudes to be poor predictors of behavior, such attitudes were significantly related to specific health attitudes and perceived behavioral control over specific behaviors. It is likely that when we consider multiple behaviors we may discover an entire network of health attitudes and beliefs that are interrelated. In fact, studies of single behaviors essentially take those behaviors out of the multi-attitude and multi-behavioral context in which they are embedded. For instance, although attitudes toward walking may be a better predictor of walking behavior than attitudes toward physical activity, walking behavior is part of a larger "physical activity" behavioral category. While predicting that particular behavior may be best served by the specific measure, the larger category is both relevant and of interest. Thus, it may be that there are higher-order constructs to be understood here.

A third approach is a multiple behavioral approach, or one which focuses on the linkages among health behaviors. It shares some similarities to the approach just described, but the focus here is more strictly on the connections among particular behaviors. A review of eHealth interventions for physical activity and dietary behavior change found that the interventions were superior to comparison groups in 21 of 41 (51%) studies (3 physical activity, 7 diet, 11 weight loss/physical activity and diet). Twenty-four studies had indeterminate results, and in four studies the comparison conditions outperformed the eHealth interventions. The review concluded that published studies of eHealth interventions for physical activity and dietary behavior change are in their infancy, with mixed findings on effectiveness, and that interventions featuring interactive technologies need to be refined and more rigorously evaluated to fully determine their potential as tools to facilitate health behavior change.


A prospective evaluation of the Transtheoretical Model of Change applied to exercise in young people 

Patrick Callaghan, Elizabeth Khalil, Ioannis Morres
Intl J Nursing Studies 47 (2010) 3–12

Objectives: To investigate the utility of the Transtheoretical Model of Change in predicting exercise in young people. Design: A prospective study; assessments were done at baseline and at follow-up 6 months later. Method: Using stratified random sampling, 1055 Chinese high school pupils living in Hong Kong, 533 of whom were followed up at 6 months, completed measures of stage of change (SCQ), self-efficacy (SEQ), perceptions of the pros and cons of exercising (DBQ) and processes of change (PCQ). Data were analyzed using one-way ANOVA, repeated measures ANOVA and independent sample t tests.
Results: The utility of the TTM to predict exercise in this population is not strong; increases in self-efficacy and decisional balance discriminated between those remaining active at baseline and follow-up, but not in changing from an inactive state (e.g., Precontemplation or Contemplation) to an active state (e.g., Maintenance), as one would anticipate given the staging algorithm of the TTM.
Conclusion: The TTM is a modest predictor of future stage of change for exercise in young Chinese people. Where there is evidence that TTM variables may shape movement over time, self-efficacy, pros and behavioral processes of change appear to be the strongest predictors.


A retrospective study on changes in residents’ physical activities, social interactions, and neighborhood cohesion after moving to a walkable community

Xuemei Zhu,Chia-Yuan Yu, Chanam Lee, Zhipeng Lu, George Mann
Preventive Medicine 69 (2014) S93–S97

Objective. This study examines changes in residents' physical activities, social interactions, and neighborhood cohesion after they moved to a walkable community in Austin, Texas.
Methods. Retrospective surveys (N = 449) were administered in 2013–2014 to collect pre- and post-move data about the outcome variables and relevant personal, social, and physical environmental factors. Walkability of each resident's pre-move community was measured using the Walk Score. T tests were used to examine the pre–post move differences in the outcomes in the whole sample and across subgroups with different physical activity levels, neighborhood conditions, and neighborhood preferences before the move. Results. After the move, total physical activity increased significantly in the whole sample and in all subgroups except those who were previously sufficiently active; lived in communities with high walkability, social interactions, or neighborhood cohesion; or had a moderate preference for walkable neighborhoods. Walking in the community increased in the whole sample and in all subgroups except those who were previously sufficiently active, moved from high-walkability communities, or had little to no preference for walkable neighborhoods. Social interactions and neighborhood cohesion increased significantly after the move in the whole sample and in all subgroups.
Conclusion. This study explored potential health benefits of a walkable community in promoting physically and socially active lifestyles, especially for populations at higher risk of obesity. The initial result is promising, suggesting the need for more work to further examine the relationships between health and community design using pre–post assessments.
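The pre–post move comparisons above rest on t tests. A minimal sketch of the paired t statistic (standard library only; the weekly walking-minute values are invented for illustration):

```python
import math

def paired_t(pre, post):
    """Paired t statistic: t = mean(d) / (sd(d) / sqrt(n)), with d = post - pre."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance of d
    return mean_d / math.sqrt(var_d / n), n - 1          # t and degrees of freedom

# Hypothetical weekly walking minutes for 6 residents, before and after the move
pre  = [60, 45, 30, 90, 20, 50]
post = [95, 70, 60, 110, 55, 75]

t, df = paired_t(pre, post)
print(f"t = {t:.2f} on {df} df")  # positive t: activity rose after the move
```

A large positive t on the within-person differences is what "increased significantly after the move" amounts to; the study would then compare t against the t distribution with n − 1 degrees of freedom.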


Application of the transtheoretical model to identify psychological constructs influencing exercise behavior: A questionnaire survey

Young-Ho Kim
Intl J Nursing Studies 44 (2007) 936–944

Background: Current research on exercise behavior has largely attempted to identify the relationship between psychological attributes and the initiation of, or adherence to, exercise behavior, based on psychological theories. Limited data are available on the psychological predictors of exercise behavior in public health. Objectives: The present study examined the theorized association of TTM of behavior change constructs by stage of change for exercise behavior. Methods: A total of 228 college students selected from 2 universities in Seoul were surveyed. Four Korean-version questionnaires were used to identify the stage of exercise behavior and psychological attributes of adolescents. Data were analyzed by frequency analysis, MANOVA, correlation analysis, and discriminant function analysis.
Results: The multivariate F test indicated that behavioral and cognitive processes of change, exercise efficacy, and pros differentiated participants across the stages of exercise behavior. Furthermore, exercise behavior was significantly correlated with the TTM constructs, and overall classification accuracy across the stages of change was 61.0%. Conclusions: The present study supports the internal and external validity of the Transtheoretical Model for explaining exercise behavior. As this study highlights, dissemination must not only increase awareness but also influence perceptions regarding theoretically based and practically important exercise strategies for public health professionals.



Does more education lead to better health habits? Evidence from the school reforms in Australia

Jinhu Li, Nattavudh Powdthavee
Social Science & Medicine 127 (2015) 83-91

The current study provides new empirical evidence on the causal effect of education on health-related behaviors by exploiting historical changes in the compulsory schooling laws in Australia. Since World War II, Australian states increased the minimum school leaving age from 14 to 15 in different years. Using differences in the laws regarding minimum school leaving age across different cohorts and across different states as a source of exogenous variation in education, we show that more education improves people’s diets and their tendency to engage in regular exercise and moderate drinking, but not necessarily their tendency to avoid smoking or to engage in more preventive health checks. The improvements in health behaviors are also reflected in the estimated positive effect of education on some health outcomes. Our results are robust to alternative measures of education and different estimation methods.

Read Full Post »

Anorexia Nervosa and Related Eating Disorders

Writer and Curator: Larry H. Bernstein, MD, FCAP 



Anorexia nervosa is a stress-related disorder that occurs mainly in women and is closely related to bulimia. It is related to self-esteem, or to a preoccupation with how the individual would like to see themselves. It is not necessarily driven by conscious motive, but arises from midbrain activities that govern hormonal activity and social behavior.


Eating disorders

Christopher G Fairburn, Paul J Harrison
Lancet 2003; 361: 407–16

Eating disorders are an important cause of physical and psychosocial morbidity in adolescent girls and young adult women. They are much less frequent in men. Eating disorders are divided into three diagnostic categories: anorexia nervosa, bulimia nervosa, and the atypical eating disorders. However, the disorders have many features in common and patients frequently move between them, so for the purposes of this Seminar we have adopted a transdiagnostic perspective. The cause of eating disorders is complex and badly understood. There is a genetic predisposition, and certain specific environmental risk factors have been implicated. Research into treatment has focused on bulimia nervosa, and evidence-based management of this disorder is possible. A specific form of cognitive behavior therapy is the most effective treatment, although few patients seem to receive it in practice. Treatment of anorexia nervosa and atypical eating disorders has received remarkably little research attention.

Eating disorders are of great interest to the public, of perplexity to researchers, and a challenge to clinicians. They feature prominently in the media, often attracting sensational coverage. Their cause is elusive, with social, psychological, and biological processes all seeming to play a major part, and they are difficult to treat, with some patients actively resisting attempts to help them.

Anorexia nervosa and bulimia nervosa are united by a distinctive core psychopathology, which is essentially the same in female and male individuals: patients overevaluate their shape and weight. Whereas most of us assess ourselves on the basis of our perceived performance in various domains—eg, relationships, work, parenting, sporting prowess—patients with anorexia nervosa or bulimia nervosa judge their self-worth largely, or even exclusively, in terms of their shape and weight and their ability to control them. Most of the other features of these disorders seem to be secondary to this psychopathology and to its consequences—for example, self-starvation. Thus, in anorexia nervosa there is a sustained and determined pursuit of weight loss and, to the extent that this pursuit is successful, this behavior is not seen as a problem. Indeed, these patients tend to view their low weight as an accomplishment rather than an affliction. In bulimia nervosa, equivalent attempts to control shape and weight are undermined by frequent episodes of uncontrolled overeating (binge eating), with the result that patients often describe themselves as failed anorexics. The core psychopathology has other manifestations; for example, many patients mislabel certain adverse physical and emotional states as feeling fat, and some repeatedly scrutinize aspects of their shape, which could contribute to them overestimating their size.

Panel 1: Classification and diagnosis of eating disorders

Definition of an eating disorder

  • There is a definite disturbance of eating habits or weight- control behavior
  • Either this disturbance, or associated core eating disorder features, results in a clinically significant impairment of physical health or psychosocial functioning (core eating disorder features comprise the disturbance of eating and any associated over-evaluation of shape or weight)
  • The behavioral disturbance should not be secondary to any general medical disorder or to any other psychiatric condition

Classification of eating disorders

  • Anorexia nervosa
  • Bulimia nervosa
  • Atypical eating disorders (or eating disorder not otherwise specified)

Principal diagnostic criteria

  • Anorexia nervosa
  1. Over-evaluation of shape and weight—ie, judging self-worth largely, or exclusively, in terms of shape and weight
  2. Active maintenance of an unduly low bodyweight—eg, body-mass index ≤17·5 kg/m²
  3. Amenorrhea in post-menarche females who are not taking an oral contraceptive. The value of the amenorrhea criterion can be questioned, since most female patients who meet the other two diagnostic criteria are amenorrheic, and those who menstruate seem to resemble closely those who do not
  • Bulimia nervosa
  1. Over-evaluation of shape and weight—ie, judging self-worth largely, or exclusively, in terms of shape and weight
  2. Recurrent binge eating—i.e., recurrent episodes of uncontrolled overeating
  3. Extreme weight-control behavior—e.g., strict dietary restriction, frequent self-induced vomiting or laxative misuse
  4. Diagnostic criteria for anorexia nervosa are not met

  • Atypical eating disorders

Eating disorders of clinical severity that do not conform to the diagnostic criteria for anorexia nervosa or bulimia nervosa

Research into the pathogenesis of the eating disorders has focused almost exclusively on anorexia nervosa and bulimia nervosa. There is undoubtedly a genetic predisposition and a range of environmental risk factors, and there is some information with respect to the identity and relative importance of these contributions. However, virtually nothing is known about the individual causal processes involved, or about how they interact and vary across the development and maintenance of the disorders.


Panel 3: Main risk factors for anorexia nervosa and bulimia nervosa

  • General factors
  1. Female
  2. Adolescence and early adulthood
  3. Living in a Western society
  • Individual-specific factors

Family history

  • Eating disorder of any type
  • Depression
  • Substance misuse, especially alcoholism (bulimia nervosa)
  • Obesity (bulimia nervosa)

Premorbid experiences

  • Adverse parenting (especially low contact, high expectations, parental discord)
  • Sexual abuse
  • Family dieting
  • Critical comments about eating, shape, or weight from family and others
  • Occupational and recreational pressure to be slim

Premorbid characteristics

  • Low self-esteem

  • Perfectionism (anorexia nervosa and to a lesser extent bulimia nervosa)
  • Anxiety and anxiety disorders
  • Obesity (bulimia nervosa)
  • Early menarche (bulimia nervosa)

There has been extensive research into the neurobiology of eating disorders. This work has focused on neuropeptide and monoamine (especially 5-HT) systems thought to be central to the physiology of eating and weight regulation. Of the various central and peripheral abnormalities reported, many are likely to be secondary to the aberrant eating and associated weight loss. However, some aspects of 5-HT function remain abnormal after recovery, leading to speculation that there is a trait monoamine abnormality that might predispose to the development of eating disorders or to associated characteristics such as perfectionism. Furthermore, normal dieting in healthy women alters central 5-HT function, providing a potential mechanism by which eating disorders might be precipitated in women vulnerable for other reasons.

Specific psychological theories have been proposed to account for the development and maintenance of eating disorders. Most influential in terms of treatment have been cognitive behavioral theories. In brief, these theories propose that the restriction of food intake that characterizes the onset of many eating disorders has two main origins, both of which may operate. The first is a need to feel in control of life, which gets displaced onto controlling eating. The second is over-evaluation of shape and weight in those who have been sensitized to their appearance. In both instances, the resulting dietary restriction is highly reinforcing. Subsequently, other processes begin to operate and serve to maintain the eating disorder.


Depression, coping, hassles, and body dissatisfaction: Factors associated with disordered eating

Rose Marie Ward, M. Cameron Hay
Eating Behaviors 17 (2015) 14–18

The objective was to explore what predicts first-year college women’s disordered eating tendencies when they arrive on campus. The 215 first-year college women completed the surveys within the first 2 weeks of classes. A structural model examined how much the Helplessness, Hopelessness, Haplessness Scale, the Brief COPE, the Brief College Student Hassle Scale, and the Body Shape Questionnaire predicted eating disordered tendencies (as measured by the Eating Attitudes Test). The Body Shape Questionnaire, the Helplessness, Hopelessness, Haplessness Scale (inversely), and the Denial subscale of the Brief COPE significantly predicted eating disorder tendencies in first-year college women. In addition, the Planning and Self-Blame subscales of the Brief COPE and the Helplessness, Hopelessness, Haplessness Scale predicted the Body Shape Questionnaire. In general, higher levels on the Helplessness, Hopelessness, Haplessness Scale and higher levels on the Brief College Student Hassle Scale related to higher levels on the Brief COPE. Coping seems to remove the direct path from stress and depression to disordered eating and body dissatisfaction.

Eating disorders and disordered eating on college campuses are a pervasive problem. Research estimates that approximately 8–13.5% of college women meet the criteria for clinically diagnosed eating disorders such as anorexia nervosa, bulimia nervosa, or eating disorders not otherwise specified. In addition, negative moods and stress seem to relate to eating disorders. Diagnosable eating disorders emerge in the broader context of disordered eating, that is, engaging in practices such as restricting calories, eating less fat, skipping meals, using nonprescription diet pills, using laxatives, or inducing vomiting. Whereas disordered eating is broadly associated with the dynamics of human development in adolescence in the United States and the socio-cultural pressure to be thin, college environments may particularly predispose young women to disordered eating. In a national survey, 57% of female college students reported trying to lose weight, while only 38% of female college students categorized themselves as overweight.

The mean for the overall EAT scale was 8.89 (SD = 9.26, mode = 2, median = 6, range 0 to 60). Over 13% (n = 22) of the sample met the criteria for potential eating disorders with overall scores of 20 or greater. One primary model was tested using the quantitative measurement data. The model fit the data, χ²(72, n = 191) = 89.33, p = .08, CFI > .99, TLI = .99, and RMSEA = .035.
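The RMSEA reported above can be recovered from the chi-square statistic itself. As a hedged sketch (conventions differ across software; some formulas divide by df·(N−1) rather than df·N), one common version is RMSEA = sqrt(max(χ² − df, 0) / (df · N)):

```python
# Recomputing RMSEA from the fit statistics reported above
# (chi2 = 89.33, df = 72, N = 191). One common formula; some
# implementations use N - 1 in the denominator instead of N.
from math import sqrt

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root mean squared error of approximation."""
    return sqrt(max(chi2 - df, 0.0) / (df * n))

print(round(rmsea(89.33, 72, 191), 3))  # → 0.035, matching the reported value
```

Because χ² (89.33) barely exceeds df (72), the RMSEA is small, which is why the model is judged to fit well despite a non-trivial chi-square.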

Note: Only significant paths shown; *p < .05; **p < .01; ***p < .001; HHH = Helplessness, Hopelessness, Haplessness Scale; Hassles = Brief College Student Hassle Scale; EAT = Eating Attitudes Test-26; BSQ = Body Shape Questionnaire; CFI = Comparative Fit Index; TLI = Tucker–Lewis Index; RMSEA = Root Mean Squared Error of Approximation.

Structural modeling predicting eating disorder tendencies [figure not shown]

By identifying the risk factors through research, interventions can be developed that empower people to take control of their own eating behavior. This kind of intervention is supported by the finding that those students with more agentive, active coping styles, or who did not report frequent experiences of helplessness, haplessness, and hopelessness were less likely to have disordered eating behaviors. Whereas active coping has been associated with lower disordered eating in some studies (e.g., Ball & Lee, 2000), others suggest a more complicated relationship between denial or avoidant coping and disordered eating.


The cognitive behavioral model for eating disorders: A direct evaluation in children and adolescents with obesity

Veerle Decaluwe, Caroline Braet
Eating Behaviors 6 (2005) 211–220

Objective: The cognitive behavioural model of bulimia nervosa [Fairburn, C.G., Cooper, Z., & Cooper, P.J. (1986). The clinical features and maintenance of bulimia nervosa. In K.D. Brownell and J.P. Foreyt (Eds.), Handbook of eating disorders: physiology, psychology and treatment of obesity, anorexia and bulimia (pp. 389–404). New York: Basic Books.] provides the theoretical framework for cognitive behavior therapy of Bulimia Nervosa. For a long time it was assumed that the model can also be used to understand the mechanism of binge eating among obese individuals. The present study aimed to test whether the specific hypotheses derived from the cognitive behavioral theory of bulimia nervosa are also valid for children and adolescents with obesity. Method: The prediction of the model was tested using structural equation modeling. Data were collected from 196 children and adolescents. Results: In line with the model, the results suggest that a lower self-esteem predicts concerns about eating, weight and shape, which in turn predict dietary restraint, which then further is predictive of binge eating.
Discussion: The findings suggest that the mechanisms specified in the model of bulimia nervosa are also operational among obese youngsters. The cognitive behavioral model of Bulimia Nervosa (BN), outlined by Fairburn, Cooper, and Cooper (1986), provides the theoretical framework for cognitive behavior therapy of BN (Fairburn, Marcus, & Wilson, 1993; Wilson, Fairburn, & Agras, 1997). According to this model, over-evaluation of eating, weight and shape plays a central role in the maintenance of BN. It is assumed that over-concern in combination with a low self-esteem can lead to dietary restraint (e.g. strict dieting and other weight control behavior). However, the rigid and unrealistic dietary rules are difficult to follow and the eating behavior is seen as a failure. Moreover, minor dietary slips are considered as evidence of lack of control and can lead to an all-or-nothing reaction in which all efforts to control eating are abandoned. This condition makes people vulnerable to binge eating. In order to minimize weight gain as a result of overeating, some patients practice compensatory purging (compensatory vomiting or laxative misuse).

The present study aimed to directly evaluate the model among a population of children and adolescents suffering from obesity. It is justified to study this model in a group at risk. Binge eating is not restricted to adulthood and is recognized among children with obesity as well (Decaluwé & Braet, 2003). Even in childhood, associated eating and shape concerns and comorbid psychopathology are manifest. Until now, little is known about how the risk factors for BED operate. A case-control study by Fairburn et al. (1998) reported a number of adverse factors in childhood carrying a higher risk of developing BED, including negative self-evaluation, parental depression, adverse experiences (sexual or physical abuse and parental problems), overweight, and repeated exposure to negative comments about shape, weight and eating. Moreover, it seems that childhood obesity is not only a risk factor for developing BED, but also one of the risk factors for the development of BN (Fairburn, Welch, Doll, Davies, & O’Connor, 1997). If Fairburn’s model is able to predict binge eating in an obese population, we can discover how the risk factors are related to one another and how they operate to predict disordered eating among obese youngsters.

To conclude, in the present study, we were interested whether the cognitive behavioral theory would predict disordered eating in a young obese population. Because the study focuses on subjects at risk for developing binge-eating problems, BED or BN, we considered the cognitive behavioral theory as a risk factor model for eating disorders rather than a model for the maintenance of eating disorders.

Method

Design

The prediction of the models was evaluated using structural equation modeling (LISREL 8.50; Jöreskog & Sörbom, 2001). The dependent variables were binge eating; over-evaluation of eating, shape and weight; and dietary restraint. The independent variable was self-esteem. Purging behavior was not included in the structural equation modeling since binge eating among children occurs in the absence of compensatory behavior. Next, it is worth noting that the concept of self-esteem is implicit in the original cognitive model of BN. In order to compare the present research with the study of Byrne and McLean (2002), self-esteem was included in the evaluation of the model.

A sample of 196 children and adolescents with obesity (78 boys and 118 girls) between the ages of 10 and 16 participated in the study (M=12.73 years, SD=1.75). All subjects were seeking help for obesity. The sample consisted of children seeking inpatient or outpatient treatment. All children seeking inpatient or outpatient treatment between July 1999 and December 2001 were invited to participate. The response rate was 72%. Children younger than 10 or older than 16 and mentally retarded children were excluded from the study. All participating children obtained a diagnosis of primary obesity. The group had a mean overweight of 172.69% (SD=27.09) with a range of 120–253%. The study was approved by the local research ethics committee. The subjects were visited at their homes before they entered into treatment. Informed consent was obtained from both the children and their parents. Two subjects (1%), both female, met the full diagnostic criteria for BED and 18 subjects (9.2%) experienced at least one binge-eating episode over the previous three months (overeating with loss of control), but did not endorse all of the other DSM-IV criteria that are required for a diagnosis of BED.


A two-step procedure was followed to construct the measurement model. We first conducted a confirmatory factor analysis on the variance–covariance matrix of the items of the exogenous construct (independent latent variable) ‘self-esteem’. The construct ‘self-esteem’ is composed of 5 items of the Global Self-Worth subscale of the SPPA. Goodness-of-fit statistics were generated by the analysis. Items with poor loading (absolute t-value < 1.96) were removed. This resulted in a satisfactory model, χ²(2) = 6.23, p = 0.04, GFI = 0.97, AGFI = 0.87, after omitting 1 item. The parameter estimates between the observed items and the latent variable ranged from 0.49 to 0.88.

Self-esteem was highly negatively correlated with over-evaluation of eating, weight and shape (standardized γ = −0.59, t = −5.05), indicating that higher levels of concerns about eating, weight and shape were associated with a lower self-esteem. Over-evaluation of eating, weight and shape, in turn, was shown to be significantly related with dietary restraint (standardized β = 0.70, t = 2.71), indicating that more concerns about eating, weight or shape were associated with higher levels of dietary restraint. Finally, dietary restraint was significantly associated with binge eating (standardized β = 0.45, t = 2.14), indicating that higher levels of dietary restraint were associated with a higher level of binge eating. The feedback from binge eating to over-evaluation of eating, weight and shape was not significant. Overall, the results appeared to suggest that a lower self-esteem predicts concerns over eating, weight and shape, which in turn predict dietary restraint. This would then be predictive of binge eating.
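The chain of paths described above (self-esteem → over-evaluation → restraint → binge eating) can be illustrated with a toy simulation: generate data along the chain, then recover each link with a simple regression slope. All variable names, coefficients, and data below are synthetic inventions, not the study's estimates.

```python
# Toy illustration of the mediation chain: lower self-esteem -> more
# over-evaluation -> more dietary restraint -> more binge eating.
# Synthetic data; coefficients are NOT the study's estimates.
import numpy as np

rng = np.random.default_rng(42)
n = 196  # same sample size as the study, for flavor

self_esteem = rng.normal(0, 1, n)
over_eval = -0.6 * self_esteem + rng.normal(0, 0.8, n)
restraint = 0.7 * over_eval + rng.normal(0, 0.7, n)
binge = 0.45 * restraint + rng.normal(0, 0.9, n)

def slope(x, y):
    """OLS slope of y on x (single predictor)."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

print(f"self-esteem -> over-evaluation: {slope(self_esteem, over_eval):+.2f}")
print(f"over-evaluation -> restraint:   {slope(over_eval, restraint):+.2f}")
print(f"restraint -> binge eating:      {slope(restraint, binge):+.2f}")
```

The recovered slopes carry the same sign pattern as the study's standardized coefficients: one negative link (self-esteem to over-evaluation) followed by two positive links.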

To our knowledge, this was the first study that directly evaluated the CBT model of BN among children. Overall, the model was found to be a good fit to the data. The main predictions of the model were confirmed. We can conclude that the CBT model provides a relatively valid explanation of the prediction of binge-eating problems in a young obese sample. Three findings supported the model and one finding did not confirm the model.

First, in line with the model, the construct self-esteem was a predictor of the over-evaluation of eating, weight and shape. This finding is also consistent with findings of Byrne and McLean (2002) and previous research in children and adolescents, which also found an association between over-concern with weight and shape and a lower self-esteem.

Second, the over-evaluation of eating, weight and shape, in turn, was a direct predictor of dietary restraint. Our findings were in line with prospective studies that found that thin-ideal internalization and body dissatisfaction (components of the over-evaluation of shape and weight) had a significant effect on dieting. Our findings also support the cross sectional study of Womble et al. (2001), who found a direct association between body dissatisfaction and dietary restraint among obese women. As in adults, children seem to respond in the same manner by dieting to lose weight. To our knowledge, the relationship between over-evaluation and dietary restraint has never been explored before among children with obesity.

Third, in accordance with the CBT model of BN, the key pathway between dietary restraint and binge eating was confirmed: higher levels of dietary restraint were associated with higher rates of binge eating. It seems that the subjects of this study were not able to maintain their dietary restraint.


Transdiagnostic Theory and Application of Family-Based Treatment for Youth With Eating Disorders

Katharine L. Loeb, James Lock, Rebecca Greif, Daniel le Grange
Cognitive and Behavioral Practice 19 (2012) 17-30

This paper describes the transdiagnostic theory and application of family-based treatment (FBT) for children and adolescents with eating disorders. We review the fundamentals of FBT, a transdiagnostic theoretical model of FBT and the literature supporting its clinical application, adaptations across developmental stages and the diagnostic spectrum of eating disorders, and the strengths and challenges of this approach, including its suitability for youth. Finally, we report a case study of an adolescent female with eating disorder not otherwise specified (EDNOS) for whom FBT was effective. We conclude that FBT is a promising outpatient treatment for anorexia nervosa, bulimia nervosa, and their EDNOS variants. The transdiagnostic model of FBT posits that while the etiology of an eating disorder is unknown, the pathology affects the family and home environment in ways that inadvertently allow for symptom maintenance and progression. FBT directly targets and resolves family-level variables, including secrecy, blame, internalization of illness, and extreme active or passive parental responses to the eating disorder. Future research will test these mechanisms, which are currently theoretical.


The Evolution of “Enhanced” Cognitive Behavior Therapy for Eating Disorders: Learning From Treatment Nonresponse

Zafra Cooper and Christopher G. Fairburn
Cognitive and Behavioral Practice 18 (2011) 394–402

In recent years there has been widespread acceptance that cognitive behavior therapy (CBT) is the treatment of choice for bulimia nervosa. The cognitive behavioral treatment of bulimia nervosa (CBT-BN) was first described in 1981. Over the past decades the theory and treatment have evolved in response to a variety of challenges. The treatment has been adapted to make it suitable for all forms of eating disorder—thereby making it “transdiagnostic” in its scope— and treatment procedures have been refined to improve outcome. The new version of the treatment, termed enhanced CBT (CBT-E) also addresses psychopathological processes “external” to the eating disorder, which, in certain subgroups of patients, interact with the disorder itself. In this paper we discuss how the development of this broader theory and treatment arose from focusing on those patients who did not respond well to earlier versions of the treatment.

In recent years there has been widespread acceptance that cognitive behavior therapy (CBT) is the treatment of choice for bulimia nervosa (National Institute for Health and Clinical Excellence, 2004; Wilson, Grilo, & Vitousek, 2007; Shapiro et al., 2007). The cognitive behavioral treatment of bulimia nervosa (CBT-BN) was first described in 1981 (Fairburn). Several years later, Fairburn (1985) described further procedural details along with a more complete exposition of the theory upon which the treatment was based (1986). This theory has since been extensively studied and the treatment derived from it, CBT-BN (Fairburn et al., 1993), has been tested in a series of treatment trials (e.g., Agras, Crow, et al., 2000; Agras, Walsh, et al., 2000; Fairburn, Jones, et al., 1993). A detailed treatment manual was published in 1993 (Fairburn, Jones, et al.). In 1997 a supplement to the manual was published (Wilson, Fairburn, & Agras) and the theory was elaborated in the same year (Fairburn).

According to the cognitive behavioral theory of bulimia nervosa, central to the maintenance of the disorder is the patient’s over-evaluation of shape and weight, the so-called “core psychopathology” [Fig. 1 – not shown – schematic form the core eating disorder maintaining mechanisms (modified from Fairburn, Cooper, & Shafran, 2003 )]. Most other features can be understood as stemming directly from this psychopathology, including the dietary restraint and restriction, the other forms of weight-control behavior, the various forms of body checking and avoidance, and the preoccupation with thoughts about shape, weight, and eating (Fairburn, 2008).

The only feature of bulimia nervosa that is not obviously a direct expression of the core psychopathology is binge eating. The cognitive behavioral theory proposes that binge eating is largely a product of a form of dietary restraint (attempts to restrict eating), which may or may not be accompanied by dietary restriction (actual undereating). Rather than adopting general guidelines about how they should eat, patients try to adhere to multiple demanding, and highly specific, dietary rules and tend to react in an extreme and negative fashion to the (almost inevitable) breaking of these rules.

A substantial body of evidence supports CBT-BN, and the findings indicate that CBT-BN is the leading treatment. However, at best, half the patients who start treatment make a full and lasting response. Between 30% and 50% of patients cease binge eating and purging, and a further proportion show some improvement, while others drop out of treatment or fail to respond. These findings led us to ask the question, “Why aren’t more people getting better?”

In the light of our experience with patients, we proposed that in certain patients one or more of four additional maintaining processes interact with the core eating disorder maintaining mechanisms and that when this occurs they constitute further obstacles to change. The first of these maintaining mechanisms concerns the influence of extreme perfectionism (“clinical perfectionism”). The second concerns difficulty coping with intense mood states (“mood intolerance”). Two other mechanisms concern the impact of unconditional and pervasive low self-esteem (“core low self-esteem”), and marked interpersonal problems (“interpersonal difficulties”).  This new theory represents an extension of the original theory illustrated in Fig. 1. Fig. 2 shows in schematic form both the core maintaining mechanisms and the four hypothesized additional mechanisms.

This program of work illustrates the value of focusing attention on those patients who benefit least from treatment. Doing so resulted in the enhanced form of CBT, which appears to be markedly more effective and more useful (in terms of the full range of patients treated) than its forerunner, CBT-BN.


A novel measure of compulsive food restriction in anorexia nervosa: Validation of the Self-Starvation Scale (SS)

Lauren R. Godier, Rebecca J. Park
Eating Behaviors 17 (2015) 10–13

The characteristic relentless self-starvation behavior seen in Anorexia Nervosa (AN) has been described as evidence of compulsivity, with increasing suggestion of transdiagnostic parallels with addictive behavior. There is a paucity of standardized self-report measures of compulsive behavior in eating disorders (EDs). Measures that index the concept of compulsive self-starvation in AN are needed to explore the suggested parallels with addictions. With this aim, a novel measure of self-starvation was developed (the Self-Starvation Scale, SS). 126 healthy participants, and 78 individuals with experience of AN, completed the new measure along with existing measures of eating disorder symptoms, anxiety and depression. Initial validation in the healthy sample indicated good reliability and construct validity, and incremental validity in predicting eating disorder symptoms. The psychometric properties of the SS scale were replicated in the AN sample. The ability of this scale to predict ED symptoms was particularly strong in individuals currently suffering from AN. These results suggest the SS may be a useful index of compulsive food restriction in AN. The concept of ‘starvation dependence’ in those with eating disorders, as a parallel with addiction, may be of clinical and theoretical importance.

The compulsive nature of Anorexia Nervosa (AN) has increasingly been compared to the maladaptive cycle of compulsive drug-seeking behavior (Barbarich-Marsteller, Foltin, & Walsh, 2011). Individuals with AN engage in persistent weight loss behavior, such as extreme self-starvation and excessive exercise, to modulate anxiety associated with ingestion of food, in a similar way to the use of mood altering drugs in substance dependence. Substance dependence is described as a persistent state in which there is a lack of control over compulsive drug-seeking, and lack of regard for the risk of serious negative consequences, which may parallel the relentlessness with which individuals with AN pursue weight loss despite profoundly negative physiological and psychological consequences.

Considering the parallels suggested between AN and substance dependence, it may be useful to use the concept of ‘dependence’ on starvation when measuring compulsive behaviors in eating disorders (EDs) such as AN. For that reason, a novel measure of self-starvation, the Self-Starvation Scale (SS) was derived, in part by adapting the Yale Food Addiction Scale (YFAS) (Gearhardt, Corbin, & Brownell, 2009) for this construct.

The set of online questionnaires was created using Bristol Online Surveys (BOS; Institute of Learning and Research Technology, University of Bristol, UK). In addition to the new measure described below, ED symptoms were measured using the Eating Disorder Examination-Questionnaire (EDE-Q) (Fairburn & Beglin, 2008), and the Clinical Impairment Assessment (CIA) (Bohn & Fairburn, 2008). Depression symptoms were measured using the Patient Health Questionnaire-9 (PHQ-9) (Kroenke, Spitzer, & Williams, 2001). Anxiety symptoms were measured using the Generalized Anxiety Disorder Assessment-7 (GAD-7) (Spitzer, Kroenke, Williams, & Lowe, 2006). The mirror image concept of ‘food addiction’ was measured using the YFAS (Gearhardt et al., 2009). Excessive exercise was measured using the Compulsive Exercise Test (CET) (Taranis, Touyz, & Meyer, 2011). Impulsivity was measured using the Barratt Impulsivity Scale-11 (BIS-11) (Patton, Stanford, & Barratt, 1995). Substance abuse symptoms were measured using the Leeds Dependence Questionnaire (LDQ) (Raistrick et al., 1994).

The results of this study suggest that using the criteria of dependence in capturing compulsive self-starvation behavior in AN may have some validity. The utility of these criteria in capturing compulsive behavior across disorders, including AN, suggests that compulsivity as a construct of behavior may have transdiagnostic application (Godier & Park, 2014; Robbins, Gillan, Smith, de Wit, & Ersche, 2012), on which disorder-specific themes are superimposed.
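The reliability reported for the SS scale is conventionally indexed by internal consistency, most often Cronbach's alpha. The following is a minimal sketch of that computation; the item responses are made up for illustration and are not data from the study.

```python
# Cronbach's alpha: internal-consistency reliability of a multi-item scale.
# Illustrative sketch with made-up item responses; not data from the study.

def cronbach_alpha(items):
    """items: list of per-item response lists, one inner list per item."""
    k = len(items)                                 # number of items
    n = len(items[0])                              # number of respondents
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = sum(variance(it) for it in items)  # sum of item variances
    totals = [sum(it[i] for it in items) for i in range(n)]  # scale totals
    total_var = variance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 4-item scale, 6 respondents (Likert 1-5):
items = [
    [4, 3, 5, 2, 4, 3],
    [4, 2, 5, 2, 3, 3],
    [3, 3, 4, 1, 4, 2],
    [5, 3, 5, 2, 4, 4],
]
print(round(cronbach_alpha(items), 3))  # 0.945
```

Values above roughly 0.8 are usually read as good internal consistency, which is the kind of "good reliability" the validation study reports.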

Read Full Post »

The Evolution of Clinical Chemistry in the 20th Century

Curator: Larry H. Bernstein, MD, FCAP

This is a subchapter in the series on developments in diagnostics in the period from 1880 to 1980.

Otto Folin: America’s First Clinical Biochemist

(Extracted from Samuel Meites, AACC History Division; Apr 1996)

Forward by Wendell T. Caraway, PhD.

The first introduction to Folin comes with the Folin-Wu protein-free filtrate, a technique for removing proteins from whole blood or plasma that resulted in water-clear solutions suitable for the determination of glucose, creatinine, uric acid, non-protein nitrogen, and chloride. The major active ingredient used in the precipitation of protein was sodium tungstate prepared “according to Folin”. Folin-Wu sugar tubes were used for the determination of glucose. From these and subsequent encounters, we learned that Folin was a pioneer in methods for the chemical analysis of blood.  The determination of uric acid in serum was the Benedict method in which protein-free filtrate was mixed with solutions of sodium cyanide and arsenophosphotungstic acid and then heated in a water bath to develop a blue color.  A thorough review of the literature revealed that Folin and Denis had published, in 1912, a method for uric acid in which they used sodium carbonate, rather than sodium cyanide, which was modified and largely superseded the “cyanide” method.

Notes from the author.

Modern clinical chemistry began with the application of 20th century quantitative analysis and instrumentation to measure constituents of blood and urine, and relating the values obtained to human health and disease. In the United States, the first impetus propelling this new area of biochemistry was provided by the 1912 papers of Otto Folin.  The only precedent for these stimulating findings was his own earlier and certainly classic papers on the quantitative composition of urine, the laws governing its composition, and studies on the catabolic end products of protein, which led to his ingenious concept of endogenous and exogenous metabolism.  He had already determined blood ammonia in 1902.  This work preceded the entry of Stanley Benedict and Donald Van Slyke into biochemistry.  Once all three of them were active contributors, the future of clinical biochemistry was ensured. Those who would consult the early volumes of the Journal of Biological Chemistry will discover the direction that the work of Otto Folin gave to biochemistry.  This modest, unobtrusive man of Harvard was a powerful stimulus and inspiration to others.

Quantitatively, in the years of his scientific productivity, 1897-1934, Otto Folin published 151 (+ 1) journal articles including a chapter in Aberhalden’s handbook and one in Hammarsten’s Festschrift, but excluding his doctoral dissertation, his published abstracts, and several articles in the proceedings of the Association of Life Insurance Directors of America. He also wrote one monograph on food preservatives and produced five editions of his laboratory manual. He published four articles while studying in Europe (1896-98), 28 while at the McLean Hospital (1900-7), and 119 at Harvard (1908-34). In his banner year of 1912 he published 20 papers. His peak period from 1912-15 included 15 papers, the monograph, and most of the work on the first edition of his laboratory manual.

The quality of Otto Folin’s life’s work relates to its impact on biochemistry, particularly clinical biochemistry.  Otto’s two brilliant collaborators, Willey Denis and Hsien Wu, must be acknowledged.  Without Denis, Otto could not have achieved so rapidly the introduction and popularization of modern blood analysis in the U.S. It would be pointless to conjecture how far Otto would have progressed without this pair.

His work provided the basis of the modern approach to the quantitative analysis of blood and urine through improved methods that reduced the body fluid volume required for analysis. He also applied these methods to metabolic studies on tissues as well as body fluids. Because his interests lay in protein metabolism, his major contributions were directed toward measuring nitrogenous waste or end products. His most dramatic achievement is illustrated by the study of blood nitrogen retention in nephritis and gout.

Folin introduced colorimetry, turbidimetry, and the use of color filters into quantitative clinical biochemistry. He initiated and applied ingeniously conceived reagents and chemical reactions that paved the way for a host of studies by his contemporaries. He introduced the use of phosphomolybdate for detecting phenolic compounds, and phosphotungstate for uric acid.  These, in turn, led to the quantitation of epinephrine, and of tyrosine, tryptophan, and cystine in protein. The molybdate suggested to Fiske and SubbaRow the determination of phosphate as phosphomolybdate, and the tungsten led to the use of tungstic acid as a protein precipitant.  Phosphomolybdate became the key reagent in the blood sugar method.  Folin resurrected the abandoned Jaffe reaction and established creatine and creatinine analysis. He also laid the groundwork for the discovery of creatine phosphate. Clinical chemistry owes to him the introduction of Nessler’s reagent, permutit, Lloyd’s reagent, gum ghatti, and preservatives for standards, such as benzoic acid and formaldehyde. Among his distinguished graduate investigators were Bloor, Doisy, Fiske, Shaffer, SubbaRow, Sumner, and Wu.

A Golden Age of Clinical Chemistry: 1948–1960

Louis Rosenfeld
Clinical Chemistry 2000; 46(10): 1705–1714

The 12 years from 1948 to 1960 were notable for introduction of the Vacutainer tube, electrophoresis, radioimmunoassay, and the Auto-Analyzer. Also appearing during this interval were new organizations, publications, programs, and services that established a firm foundation for the professional status of clinical chemists. It was a golden age.

Except for photoelectric colorimeters, the clinical chemistry laboratories in 1948—and in many places even later—were not very different from those of 1925. The basic technology and equipment were essentially unchanged. There was lots of glassware of different kinds—pipettes, burettes, wooden racks of test tubes, funnels, filter paper, cylinders, flasks, and beakers—as well as visual colorimeters, centrifuges, water baths, an exhaust hood for evaporating organic solvents after extractions, a microscope for examining urine sediments, a double-pan analytical beam balance for weighing reagents and standard chemicals, and perhaps a pH meter. The most complicated apparatus was the Van Slyke volumetric gas device—manually operated. The emphasis was on classical chemical and biological techniques that did not require instrumentation.

The unparalleled growth and wide-ranging research that began after World War II and have continued into the new century, often aided by government funding for biomedical research and development as civilian health has become a major national goal, have impacted the operations of the clinical chemistry laboratory. The years from 1948 to 1960 were especially notable for the innovative technology that produced better methods for the investigation of many diseases, in many cases leading to better treatment.

Pierangelo Bonini
Pure & Appl. Chem. 1982; 54(11): 2017–2030

The history of automation in clinical chemistry is the history of how and when technological progress in the field of analytical methodology, as well as in the field of instrumentation, has helped clinical chemists to mechanize their procedures and to control them.

Fig. 1 General steps of a clinical chemistry procedure
Especially in the classic clinical chemistry methods, a preliminary treatment of the sample (in most cases a deproteinization) was an essential step. This was a major constraint on the first tentative steps in automation, and we will see how this problem was faced and which new problems arose from avoiding deproteinization. Mixing samples and reagents is the next step; then there is a more or less long incubation at different temperatures, and finally reading, which means detection of modifications of some physical property of the mixture; in most cases the development of a colour reveals the reaction but, as is well known, many other possibilities exist; finally the result is calculated.

Some 25 years ago, Skeggs (1) presented his paper on continuous flow automation, which was the basis of very successful instruments still used all over the world. In continuous flow automation, reactions take place in a hydraulic route common to all samples.

Standards and samples enter the analytical stream segmented by air bubbles and, as they circulate, specific chemical reactions and physical manipulations continuously take place in the stream. Finally, after the air bubbles are vented, the colour intensity, proportional to the concentration of solute molecules, is monitored in a detector flow cell.
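The detector step rests on the Beer-Lambert relation: absorbance is proportional to solute concentration, so an unknown sample can be read against a standard of known concentration. A minimal sketch of that single-point calibration, with hypothetical values rather than instrument data:

```python
# Beer-Lambert law: absorbance A = epsilon * l * c, so with path length
# and reagent chemistry held constant, A is proportional to concentration
# and an unknown can be read off a standard of known concentration.
# Illustrative values only; not instrument data from the paper.

def concentration_from_standard(a_sample, a_standard, c_standard):
    """Single-point calibration: A proportional to concentration."""
    return c_standard * (a_sample / a_standard)

# Hypothetical glucose determination: a 100 mg/dL standard reads A = 0.40,
# the patient sample reads A = 0.55.
c = concentration_from_standard(0.55, 0.40, 100.0)
print(c)  # 137.5 (mg/dL)
```

This proportionality is also what makes the continuous-flow colorimeter's steady-state signal directly interpretable as a concentration.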

It is evident that the most important aim of automation is to process correctly as many samples as possible in as short a time as possible. This result can be obtained thanks to many technological advances, both from the analytical point of view and from instrument technology.

–  Shorter reaction time
–  No need of deproteinization

The introduction of very active enzymatic reagents for the determination of substrates resulted in shorter reaction times and, in many cases, made it possible to avoid deproteinization. Reaction times are also reduced by using kinetic and fixed-time reactions instead of end points. In this case, the measurement of the sample blank does not need a separate tube with a separate reaction mixture. Deproteinization can also be avoided by using some surfactants in the reagent mixture. An automatic calculation of sample blanks is also possible by using polychromatic analysis. As we can see from this figure, reduction of reaction times and elimination of tedious operations like deproteinization are the main results of this analytical progress.

Many relevant improvements in mechanics and optics over the last twenty years, and the tremendous advance in electronics, have largely contributed to the instrumental improvement of clinical chemistry automation.

A recent interesting innovation in the field of centrifugal analyzers consists in the possibility of adding another reagent to an already mixed sample-reagent solution. This innovation allows a preincubation to be made and sample blanks to be read before the starter reagent is added. The possibility of measuring absorbances in cuvettes positioned longitudinally to the light path, realized in a recent model of centrifugal analyzers, is claimed to be advantageous for reading absorbances in non-homogeneous solutions, for avoiding any influence of reagent volume errors on the absorbance, and for obtaining more suitable calculation factors. Interest in fluorimetric assays is growing, especially in connection with immunofluorimetric assays of drugs. This technology has recently been applied to centrifugal analyzers as well. A xenon lamp generates a high-energy light, reflected by a mirror onto a holographic grating operated by a stepping motor. The selected wavelength of the exciting light passes through a slit and reaches the rotating cuvettes. Fluorescence is then filtered, read by means of a photomultiplier, and compared to the continuously monitored fluorescence of an appropriate reference compound. In this way, any instability due either to the electro-optical devices or to changes in the physicochemical properties of the solution is corrected.


Dr. Yellapragada Subbarow – ATP – Energy for Life

One of the observations Dr. SubbaRow made while testing the phosphorus method seemed to provide a clue to the mystery of what happens to blood sugar when insulin is administered. Biochemists began investigating the problem when Frederick Banting showed that injections of insulin, the pancreatic hormone, keep blood sugar under control and keep diabetics alive.

SubbaRow worked for 18 months on the problem, often dieting and starving along with animals used in experiments. But the initial observations were finally shown to be neither significant nor unique and the project had to be scrapped in September 1926.

Out of the ashes of this project however arose another project that provided the key to the ancient mystery of muscular contraction. Living organisms resist degeneration and destruction with the help of muscles, and biochemists had long believed that a hypothetical inogen provided the energy required for the flexing of muscles at work.

Two researchers at Cambridge University in the United Kingdom confirmed that lactic acid is formed when muscles contract, and Otto Meyerhof of Germany showed that this lactic acid is a breakdown product of glycogen, the animal starch stored all over the body, particularly in the liver, kidneys and muscles. When Professor Archibald Hill of University College London demonstrated that conversion of glycogen to lactic acid partly accounts for the heat produced during muscle contraction, everybody assumed that glycogen was the inogen. And the 1922 Nobel Prize for medicine and physiology was divided between Hill and Meyerhof.

But how is glycogen converted to lactic acid? Embden, another German biochemist, advanced the hypothesis that blood sugar and phosphorus combine to form a hexose phosphoric ester which breaks down glycogen in the muscle to lactic acid.

In the midst of the insulin experiments, it occurred to Fiske and SubbaRow that Embden’s hypothesis would be supported if normal persons were found to have more hexose phosphate in their muscle and liver than diabetics. For diabetes is the failure of the body to use sugar. There would be little reaction between sugar and phosphorus in a diabetic body. If Embden was right, hexose (sugar) phosphate level in the muscle and liver of diabetic animals should rise when insulin is injected.

Fiske and SubbaRow rendered some animals diabetic by removing their pancreas in the spring of 1926, but they could not record any rise in the organic phosphorus content of muscles or livers after insulin was administered to the animals. Sugar phosphates were indeed produced in their animals but they were converted so quickly by enzymes to lactic acid that Fiske and SubbaRow could not detect them with methods then available. This was fortunate for science because, in their mistaken belief that Embden was wrong, they began that summer an extensive study of organic phosphorus compounds in the muscle “to repudiate Meyerhof completely”.

The departmental budget was so poor that SubbaRow often waited on the back streets of Harvard Medical School at night to capture cats he needed for the experiments. When he prepared the cat muscles for estimating their phosphorus content, SubbaRow found he could not get a constant reading in the colorimeter. The intensity of the blue colour went on rising for thirty minutes. Was there something in muscle which delayed the colour reaction? If yes, the time for full colour development should increase with the increase in the quantity of the sample. But the delay was not greater when the sample was 10 c.c. instead of 5 c.c. The only other possibility was that muscle had an organic compound which liberated phosphorus as the reaction in the colorimeter proceeded. This indeed was the case, it turned out. It took a whole year.

The mysterious colour delaying substance was a compound of phosphoric acid and creatine and was named Phosphocreatine. It accounted for two-thirds of the phosphorus in the resting muscle. When they put muscle to work by electric stimulation, the Phosphocreatine level fell and the inorganic phosphorus level rose correspondingly. It completely disappeared when they cut off the blood supply and drove the muscle to the point of “fatigue” by continued electric stimulation. And, presto! It reappeared when the fatigued muscle was allowed a period of rest.

Phosphocreatine created a stir among the scientists present when Fiske unveiled it before the American Society of Biological Chemists at Rochester in April 1927. The Journal of the American Medical Association hailed the discovery in an editorial. The Rockefeller Foundation awarded a fellowship that helped SubbaRow to live comfortably for the first time since his arrival in the United States. All of Harvard Medical School was caught up with an enthusiasm that would be a life-time memory for contemporary students. The students were in awe of the medium-sized, slightly stoop-shouldered, “coloured” man regarded as one of the School’s top research workers.

SubbaRow’s carefully conducted series of experiments disproved Meyerhof’s assumptions about the glycogen-lactic acid cycle. His calculations fully accounted for the heat output during muscle contraction. Hill had not been able to fully account for this in terms of Meyerhof’s theory. Clearly the Nobel Committee was in haste in awarding the 1922 physiology prize, but the biochemistry orthodoxy led by Meyerhof and Hill themselves was not too eager to give up their belief in glycogen as the prime source of muscular energy.

Fiske and SubbaRow were fully upheld and the Meyerhof-Hill­ theory finally rejected in 1930 when a Danish physiologist showed that muscles can work to exhaustion without the aid of glycogen or the stimulation of lactic acid.

Fiske and SubbaRow had meanwhile followed a substance that was formed by the combination of phosphorus, liberated from Phosphocreatine, with an unidentified compound in muscle. SubbaRow isolated it and identified it as a chemical in which adenylic acid was linked to two extra molecules of phosphoric acid. By the time he completed the work to the satisfaction of Fiske, it was August 1929 when Harvard Medical School played host to the 13th International Physiological Congress.

ATP was presented to the gathered scientists before the Congress ended. To the dismay of Fiske and SubbaRow, a German science journal, published 16 days before the Congress opened, arrived in Boston a few days later. It carried a letter from Karl Lohmann of Meyerhof’s laboratory, saying he had isolated from muscle a compound of adenylic acid linked to two molecules of phosphoric acid!

While Archibald Hill never adjusted himself to the idea that the basis of his Nobel Prize work had been demolished, Otto Meyerhof and his associates had seen the importance of the Phosphocreatine discovery and plunged themselves into follow-up studies in competition with Fiske and SubbaRow. Two associates of Hill had in fact stumbled upon Phosphocreatine at about the same time as Fiske and SubbaRow, but their loyalty to the Meyerhof-Hill theory acted as blinkers, and their hasty and premature publications reveal their confusion about both the nature and significance of Phosphocreatine.

The discovery of ATP and its significance helped reveal the full story of muscular contraction: Glycogen arriving in muscle gets converted into lactic acid, which is siphoned off to the liver for re-synthesis of glycogen. This cycle yields three molecules of ATP and is important in delivering usable food energy to the muscle. Glycolysis, or break-up of glycogen, is relatively slow in getting started, and in any case muscle can retain ATP only in small quantities. In the interval between the beginning of muscle activity and the arrival of fresh ATP from glycolysis, Phosphocreatine maintains the ATP supply by re-synthesizing it as fast as its energy terminals are used up by muscle for its activity.

Muscular contraction made possible by ATP helps us not only to move our limbs and lift weights but keeps us alive. The heart is after all a muscle pouch and millions of muscle cells embedded in the walls of arteries keep the life-sustaining blood pumped by the heart coursing through body organs. ATP even helps get new life started by powering the sperm’s motion toward the egg as well as the spectacular transformation of the fertilized egg in the womb.

Archibald Hill for long denied any role for ATP in muscle contraction, saying ATP has not been shown to break down in the intact muscle. This objection was also met in 1962 when University of Pennsylvania scientists showed that muscles can contract and relax normally even when glycogen and Phosphocreatine are kept under check with an inhibitor.

Michael Somogyi

Michael Somogyi was born in Reinsdorf, Austria-Hungary, in 1883. He received a degree in chemical engineering from the University of Budapest, and after spending some time there as a graduate assistant in biochemistry, he immigrated to the United States. From 1906 to 1908 he was an assistant in biochemistry at Cornell University.

Returning to his native land in 1908, he became head of the Municipal Laboratory in Budapest, and in 1914 he was granted his Ph.D. After World War I, the politically unstable situation in his homeland led him to return to the United States where he took a job as an instructor in biochemistry at Washington University in St. Louis, Missouri. While there he assisted Philip A. Shaffer and Edward Adelbert Doisy, Sr., a future Nobel Prize recipient, in developing a new method for the preparation of insulin in sufficiently large amounts and of sufficient purity to make it a viable treatment for diabetes. This early work with insulin helped foster Somogyi’s lifelong interest in the treatment and cure of diabetes. He was the first biochemist appointed to the staff of the newly opened Jewish Hospital, and he remained there as the director of their clinical laboratory until his retirement in 1957.

Arterial Blood Gases.  Van Slyke.

The test is used to determine the pH of the blood, the partial pressures of carbon dioxide and oxygen, and the bicarbonate level. Many blood gas analyzers will also report concentrations of lactate, hemoglobin, several electrolytes, oxyhemoglobin, carboxyhemoglobin and methemoglobin. ABG testing is mainly used in pulmonology and critical care medicine to assess gas exchange across the alveolar-capillary membrane.

DONALD DEXTER VAN SLYKE died on May 4, 1971, after a long and productive career that spanned three generations of biochemists and physicians. He left behind not only a bibliography of 317 journal publications and 5 books, but also more than 100 persons who had worked with him and distinguished themselves in biochemistry and academic medicine. His doctoral thesis, with Gomberg at the University of Michigan, was published in the Journal of the American Chemical Society in 1907.  Van Slyke received an invitation from Dr. Simon Flexner, Director of the Rockefeller Institute, to come to New York for an interview. In 1911 he spent a year in Berlin with Emil Fischer, who was then the leading chemist of the scientific world. He was particularly impressed by Fischer’s performing all laboratory operations quantitatively, a procedure Van followed throughout his life. Prior to going to Berlin, he published the classic nitrous acid method for the quantitative determination of primary aliphatic amino groups, the first of the many gasometric procedures devised by Van Slyke, which made possible the determination of amino acids. It was the primary method used to study the amino acid composition of proteins for years before chromatography. Thus, his first seven postdoctoral years were centered around the development of better methodology for protein composition and amino acid metabolism.

With his colleague G. M. Meyer, he first demonstrated that amino acids, liberated during digestion in the intestine, are absorbed into the bloodstream, that they are removed by the tissues, and that the liver alone possesses the ability to convert the amino acid nitrogen into urea.  From the study of the kinetics of urease action, Van Slyke and Cullen developed equations that depended upon two reactions: (1) the combination of enzyme and substrate in stoichiometric proportions and (2) the reaction of the combination into the end products. Published in 1914, this formulation, involving two velocity constants, was similar to that arrived at contemporaneously by Michaelis and Menten in Germany in 1913.
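The Van Slyke-Cullen formulation, like that of Michaelis and Menten, predicts a hyperbolic dependence of reaction velocity on substrate concentration. A minimal sketch of the Michaelis-Menten form, with arbitrary constants chosen only to illustrate the curve:

```python
# Michaelis-Menten velocity: v = Vmax * [S] / (Km + [S]).
# Vmax and Km here are arbitrary illustrative values, not urease data.

def velocity(s, vmax, km):
    """Initial reaction velocity at substrate concentration s."""
    return vmax * s / (km + s)

vmax, km = 10.0, 2.0  # arbitrary units
for s in (0.5, 2.0, 20.0):
    print(s, velocity(s, vmax, km))
# At [S] = Km the velocity is exactly half of Vmax; at high [S] it
# saturates toward Vmax, reflecting full occupation of the enzyme.
```

The two velocity constants in the Van Slyke-Cullen treatment play the same role as Km and Vmax here: one captures enzyme-substrate combination, the other the breakdown into products.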

He transferred to the Rockefeller Institute’s Hospital in 1914, under Dr. Rufus Cole, where “Men who were studying disease clinically had the right to go as deeply into its fundamental nature as their training allowed, and in the Rockefeller Institute’s Hospital every man who was caring for patients should also be engaged in more fundamental study”.  The study of diabetes was already under way by Dr. F. M. Allen, but patients inevitably died of acidosis.  Van Slyke reasoned that if incomplete oxidation of fatty acids in the body led to the accumulation of acetoacetic and beta-hydroxybutyric acids in the blood, then a reaction would result between these acids and the bicarbonate ions that would lead to a lower-than-normal bicarbonate concentration in blood plasma. The problem thus became one of devising an analytical method that would permit the quantitative determination of bicarbonate concentration in small amounts of blood plasma.  He ingeniously devised a volumetric glass apparatus that was easy to use and required less than ten minutes for the determination of the total carbon dioxide in one cubic centimeter of plasma.  It also was soon found to be an excellent apparatus by which to determine blood oxygen concentrations, thus leading to measurements of the percentage saturation of blood hemoglobin with oxygen. This found extensive application in the study of respiratory diseases, such as pneumonia and tuberculosis. It also led to the quantitative study of cyanosis and a monograph on the subject by C. Lundsgaard and Van Slyke.

In all, Van Slyke and his colleagues published twenty-one papers under the general title “Studies of Acidosis,” beginning in 1917 and ending in 1934. They included not only chemical manifestations of acidosis, but Van Slyke, in No. 17 of the series (1921), elaborated and expanded the subject to describe in chemical terms the normal and abnormal variations in the acid-base balance of the blood. This was a landmark in understanding acid-base balance pathology.  Within seven years after Van moved to the Hospital, he had published a total of fifty-three papers, thirty-three of them coauthored with clinical colleagues.
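The acid-base relationships Van Slyke helped put on a quantitative footing are today usually expressed through the Henderson-Hasselbalch equation for the bicarbonate buffer. A minimal sketch with standard textbook constants; the example values are illustrative, not taken from the "Studies of Acidosis":

```python
# Henderson-Hasselbalch for the bicarbonate system:
#   pH = pKa + log10([HCO3-] / (0.03 * pCO2))
# with the textbook constants pKa = 6.1 and CO2 solubility
# 0.03 mmol/L per mmHg. Example values are illustrative only.
import math

def blood_ph(hco3_mmol_l, pco2_mmhg, pka=6.1, s_co2=0.03):
    return pka + math.log10(hco3_mmol_l / (s_co2 * pco2_mmhg))

print(round(blood_ph(24.0, 40.0), 2))  # 7.4, a normal arterial pH
# Consuming bicarbonate (as ketoacids do) lowers the pH:
print(round(blood_ph(12.0, 40.0), 2))  # 7.1, a metabolic acidosis
```

This is exactly the logic of Van Slyke's reasoning about ketoacids: acids consume bicarbonate, the ratio in the logarithm falls, and so does the plasma pH.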

In 1920, Van Slyke and his colleagues undertook a comprehensive investigation of gas and electrolyte equilibria in blood. McLean and Henderson at Harvard had made preliminary studies of blood as a physico-chemical system, but realized that Van Slyke and his colleagues at the Rockefeller Hospital had superior techniques and the facilities necessary for such an undertaking. A collaboration thereupon began between the two laboratories, which resulted in rapid progress toward an exact physico-chemical description of the role of hemoglobin in the transport of oxygen and carbon dioxide, of the distribution of diffusible ions and water between erythrocytes and plasma, and of factors such as degree of oxygenation of hemoglobin and hydrogen ion concentration that modified these distributions. In this Van Slyke revised his volumetric gas analysis apparatus into a manometric method.  The manometric apparatus proved to give results that were from five to ten times more accurate.

A series of papers on the CO2 titration curves of oxy- and deoxyhemoglobin, of oxygenated and reduced whole blood, and of blood subjected to different degrees of oxygenation and on the distribution of diffusible ions in blood resulted.  These developed equations that predicted the change in distribution of water and diffusible ions between blood plasma and blood cells when there was a change in pH of the oxygenated blood. A significant contribution of Van Slyke and his colleagues was the application of the Gibbs-Donnan Law to the blood—regarded as a two-phase system, in which one phase (the erythrocytes) contained a high concentration of nondiffusible negative ions, i.e., those associated with hemoglobin, and cations, which were not freely exchangeable between cells and plasma. By changing the pH through varying the CO2 tension, the concentration of negative hemoglobin charges changed by a predictable amount. This, in turn, changed the distribution of diffusible anions such as Cl− and HCO3− in order to restore the Gibbs-Donnan equilibrium. Redistribution of water occurred to restore osmotic equilibrium. The experimental results confirmed the predictions of the equations.
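At Gibbs-Donnan equilibrium the diffusible anions distribute between erythrocytes and plasma in equal ratios, so a single measured ratio predicts the others. A minimal sketch with hypothetical concentrations, not Van Slyke's measured values:

```python
# Gibbs-Donnan equilibrium: for diffusible anions distributed between
# erythrocytes (cells) and plasma, the ratios are equal at equilibrium:
#   [Cl-]cells / [Cl-]plasma = [HCO3-]cells / [HCO3-]plasma = r
# All concentrations below are hypothetical, for illustration only.

def donnan_ratio(anion_cells, anion_plasma):
    return anion_cells / anion_plasma

cl_cells, cl_plasma = 52.0, 104.0      # mmol/L, hypothetical
r = donnan_ratio(cl_cells, cl_plasma)  # Donnan ratio r = 0.5

# The same ratio then predicts cell bicarbonate from plasma bicarbonate:
hco3_plasma = 24.0
hco3_cells = r * hco3_plasma
print(r, hco3_cells)  # 0.5 12.0
```

Changing the CO2 tension changes the hemoglobin charge, shifts r, and thereby redistributes chloride, bicarbonate, and water, which is the chloride shift the Rockefeller-Harvard collaboration described quantitatively.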

As a spin-off from the physico-chemical study of the blood, Van undertook, in 1922, to put the concept of buffer value of weak electrolytes on a mathematically exact basis.

This proved to be useful in determining buffer values of mixed, polyvalent, and amphoteric electrolytes, and put the understanding of buffering on a quantitative basis. A monograph in Medicine entitled “Observation on the Courses of Different Types of Bright’s Disease, and on the Resultant Changes in Renal Anatomy,” was a landmark that related the changes occurring at different stages of renal deterioration to the quantitative changes taking place in kidney function. During this period, Van Slyke and R. M. Archibald identified glutamine as the source of urinary ammonia. During World War II, Van and his colleagues documented the effect of shock on renal function and, with R. A. Phillips, developed a simple method, based on specific gravity, suitable for use in the field.
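Van Slyke's buffer value is the slope dB/dpH (equivalents of strong base needed per unit pH change); for a weak monoprotic acid it takes a closed form that is maximal at pH = pKa. A minimal sketch, with illustrative constants:

```python
# Van Slyke's buffer value for a weak monoprotic acid of total
# concentration C:
#   beta = dB/dpH = 2.303 * C * Ka*[H+] / (Ka + [H+])**2
# which peaks at pH = pKa with value 2.303*C/4. Constants below are
# illustrative, not from Van Slyke's 1922 paper.

def buffer_value(c_total, pka, ph):
    ka = 10.0 ** -pka
    h = 10.0 ** -ph
    return 2.303 * c_total * ka * h / (ka + h) ** 2

# A 0.1 mol/L buffer is most effective exactly at its pKa:
print(round(buffer_value(0.1, 6.1, 6.1), 4))  # ~0.0576
print(round(buffer_value(0.1, 6.1, 7.4), 4))  # much lower, 1.3 pH units away
```

Summing such terms over each acid group is what made the concept workable for mixed, polyvalent, and amphoteric electrolytes.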

Over 100 of Van’s 300 publications were devoted to methodology. The importance of Van Slyke’s contribution to clinical chemical methodology cannot be overestimated. These included the blood organic constituents (carbohydrates, fats, proteins, amino acids, urea, nonprotein nitrogen, and phospholipids) and the inorganic constituents (total cations, calcium, chlorides, phosphate, and the gases carbon dioxide, carbon monoxide, and nitrogen). It was said that a Van Slyke manometric apparatus was almost all the special equipment needed to perform most of the clinical chemical analyses customarily performed prior to the introduction of photocolorimeters and spectrophotometers for such determinations.

The progress made in the medical sciences in genetics, immunology, endocrinology, and antibiotics during the second half of the twentieth century obscures at times the progress that was made in basic and necessary biochemical knowledge during the first half. Methods capable of giving accurate quantitative chemical information on biological material had to be painstakingly devised; basic questions on chemical behavior and metabolism had to be answered; and, finally, those factors that adversely modified the normal chemical reactions in the body so that abnormal conditions arise that we characterize as disease states had to be identified.

Viewed in retrospect, he combined in one scientific lifetime (1) basic contributions to the chemistry of body constituents and their chemical behavior in the body, (2) a chemical understanding of physiological functions of certain organ systems (notably the respiratory and renal), and (3) how such information could be exploited in the understanding and treatment of disease. That outstanding additions to knowledge in all three categories were possible was in large measure due to his sound and broadly based chemical preparation, his ingenuity in devising means of accurate measurements of chemical constituents, and the opportunity given him at the Hospital of the Rockefeller Institute to study disease in company with physicians.

In addition, he found time to work collaboratively with Dr. John P. Peters of Yale on the classic, two-volume Quantitative Clinical Chemistry. In 1922, John P. Peters, who had just gone to Yale from Van Slyke’s laboratory as an Associate Professor of Medicine, was asked by a publisher to write a modest handbook for clinicians describing useful chemical methods and discussing their application to clinical problems. It was originally to be called “Quantitative Chemistry in Clinical Medicine.” He soon found that it was going to be a bigger job than he could handle alone and asked Van Slyke to join him in writing it. Van agreed, and the two men proceeded to draw up an outline and divide the writing of the first drafts of the chapters between them. They also agreed to exchange each chapter until it met the satisfaction of both. At the time it was published in 1931, it contained practically all that could be stated with confidence about those aspects of disease that could be and had been studied by chemical means. It was widely accepted throughout the medical world as the “Bible” of quantitative clinical chemistry, and to this day some of the chapters have not become outdated.

Paul Flory

Paul J. Flory was born in Sterling, Illinois, in 1910. He attended Manchester College, an institution for which he retained an abiding affection. He did his graduate work at Ohio State University, earning his Ph.D. in 1934. He was awarded the Nobel Prize in Chemistry in 1974, largely for his work in the area of the physical chemistry of macromolecules.

Flory worked as a newly minted Ph.D. for the DuPont Company in the Central Research Department with Wallace H. Carothers. This early experience with practical research instilled in Flory a lifelong appreciation for the value of industrial application. His work with the Air Force Office of Scientific Research and his later support for the Industrial Affiliates program at Stanford University demonstrated his belief in the need for theory and practice to work hand-in-hand.

Following the death of Carothers in 1937, Flory joined the University of Cincinnati’s Basic Science Research Laboratory. After the war Flory taught at Cornell University from 1948 until 1957, when he became executive director of the Mellon Institute. In 1961 he joined the chemistry faculty at Stanford, where he would remain until his retirement.

Among the high points of Flory’s years at Stanford were his receipt of the National Medal of Science (1974), the Priestley Award (1974), the J. Willard Gibbs Medal (1973), the Peter Debye Award in Physical Chemistry (1969), and the Charles Goodyear Medal (1968). He also traveled extensively, including working tours to the U.S.S.R. and the People’s Republic of China.

Abraham Savitzky

Abraham Savitzky was born on May 29, 1919, in New York City. He received his bachelor’s degree from the New York State College for Teachers in 1941. After serving in the U.S. Air Force during World War II, he obtained a master’s degree in 1947 and a Ph.D. in 1949 in physical chemistry from Columbia University.

In 1950, after working at Columbia for a year, he began a long career with the Perkin-Elmer Corporation. Savitzky started with Perkin-Elmer as a staff scientist who was chiefly concerned with the design and development of infrared instruments. By 1956 he was named Perkin-Elmer’s new product coordinator for the Instrument Division, and as the years passed, he continued to gain more and more recognition for his work in the company. Most of his work with Perkin-Elmer focused on computer-aided analytical chemistry, data reduction, infrared spectroscopy, time-sharing systems, and computer plotting. He retired from Perkin-Elmer in 1985.

Abraham Savitzky holds seven U.S. patents pertaining to computerization and chemical apparatus. During his long career he presented numerous papers and wrote several manuscripts, including “Smoothing and Differentiation of Data by Simplified Least Squares Procedures.” This paper, a collaborative effort of Savitzky and Marcel J. E. Golay, was published in volume 36 of Analytical Chemistry, July 1964. It is one of the most famous, respected, and heavily cited articles in its field. In recognition of his many significant accomplishments in analytical chemistry and computer science, Savitzky received the Society for Applied Spectroscopy Award in 1983 and the Williams-Wright Award from the Coblentz Society in 1986.
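The core insight of that 1964 paper is that least-squares polynomial smoothing over a moving window reduces to a fixed convolution. As an illustration (a minimal sketch, not code from the paper), the classic 5-point quadratic weights (−3, 12, 17, 12, −3)/35 can be applied directly:

```python
def savgol_smooth_5pt(y):
    """Apply the classic Savitzky-Golay 5-point quadratic smoothing
    weights (-3, 12, 17, 12, -3)/35 as a moving convolution.
    The two samples at each end are left unchanged for simplicity."""
    c = (-3, 12, 17, 12, -3)
    out = list(y)
    for i in range(2, len(y) - 2):
        out[i] = sum(c[j + 2] * y[i + j] for j in range(-2, 3)) / 35.0
    return out
```

Because the weights come from an exact quadratic fit, any locally quadratic signal passes through unchanged while high-frequency noise is attenuated, which is why the method preserves peak shapes far better than a plain moving average.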

Samuel Natelson

Samuel Natelson attended City College of New York and received his B.S. in chemistry in 1928. As a graduate student, Natelson attended New York University, receiving a Sc.M. in 1930 and his Ph.D. in 1931. After receiving his Ph.D., he began his career teaching at Girls Commercial High School. While maintaining his teaching position, Natelson joined the Jewish Hospital of Brooklyn in 1933. Working as a clinical chemist for Jewish Hospital, Natelson first conceived of the idea of a society by and for clinical chemists. Natelson worked to organize the nine charter members of the American Association of Clinical Chemists, which formally began in 1948. A pioneer in the field of clinical chemistry, Samuel Natelson became a role model for the clinical chemist, and he pioneered the use of microtechniques in clinical chemistry. He served as a consultant to the National Aeronautics and Space Administration in the 1960s, helping analyze the effect of weightlessness on astronauts’ blood. Natelson spent his later career as chair of the biochemistry department at Michael Reese Hospital and as a lecturer at the Illinois Institute of Technology.

Arnold Beckman

Arnold Orville Beckman (April 10, 1900 – May 18, 2004) was an American chemist, inventor, investor, and philanthropist. While a professor at Caltech, he founded Beckman Instruments based on his 1934 invention of the pH meter, a device for measuring acidity, later considered to have “revolutionized the study of chemistry and biology”.[1] He also developed the DU spectrophotometer, “probably the most important instrument ever developed towards the advancement of bioscience”.[2] Beckman funded the first transistor company, thus giving rise to Silicon Valley.[3]

He earned his bachelor’s degree in chemical engineering in 1922 and his master’s degree in physical chemistry in 1923. For his master’s degree he studied the thermodynamics of aqueous ammonia solutions, a subject introduced to him by T. A. White. Beckman decided to go to Caltech for his doctorate. He stayed there for a year before returning to New York to be near his fiancée, Mabel. He found a job with Western Electric’s engineering department, the precursor to the Bell Telephone Laboratories. Working with Walter A. Shewhart, Beckman developed quality control programs for the manufacture of vacuum tubes and learned about circuit design. It was here that Beckman discovered his interest in electronics.

In 1926 the couple moved back to California and Beckman resumed his studies at Caltech. He became interested in ultraviolet photolysis and worked with his doctoral advisor, Roscoe G. Dickinson, on an instrument to find the energy of ultraviolet light. It worked by shining the ultraviolet light onto a thermocouple, converting the incident heat into electricity, which drove a galvanometer. After receiving a Ph.D. in photochemistry in 1928 for this application of quantum theory to chemical reactions, Beckman was asked to stay on at Caltech as an instructor and then as a professor. Linus Pauling, another of Roscoe G. Dickinson’s graduate students, was also asked to stay on at Caltech.

During his time at Caltech, Beckman was active in teaching at both the introductory and advanced graduate levels. Beckman shared his expertise in glass-blowing by teaching classes in the machine shop. He also taught classes in the design and use of research instruments. Beckman dealt first-hand with the chemists’ need for good instrumentation as manager of the chemistry department’s instrument shop. Beckman’s interest in electronics made him very popular within the chemistry department at Caltech, as he was very skilled in building measuring instruments.

Over the time that he was at Caltech, the focus of the department moved increasingly toward pure science and away from chemical engineering and applied chemistry. Arthur Amos Noyes, head of the chemistry division, encouraged both Beckman and chemical engineer William Lacey to be in contact with real-world engineers and chemists, and Robert Andrews Millikan, Caltech’s president, referred technical questions to Beckman from government and businesses.

Sunkist Growers was having problems with its manufacturing process. Lemons that were not saleable as produce were made into pectin or citric acid, with sulfur dioxide used as a preservative. Sunkist needed to know the acidity of the product at any given time. Chemist Glen Joseph at Sunkist was attempting to measure the hydrogen-ion concentration in lemon juice electrochemically, but sulfur dioxide damaged hydrogen electrodes, and non-reactive glass electrodes produced weak signals and were fragile.

Joseph approached Beckman, who proposed that instead of trying to increase the sensitivity of his measurements, he amplify his results. Beckman, familiar with glassblowing, electricity, and chemistry, suggested a design for a vacuum-tube amplifier and ended up building a working apparatus for Joseph. The glass electrode used to measure pH was placed in a grid circuit in the vacuum tube, producing an amplified signal which could then be read by an electronic meter. The prototype was so useful that Joseph requested a second unit.

Beckman saw an opportunity and, rethinking the project, decided to create a complete chemical instrument that could be easily transported and used by nonspecialists. By October 1934, he had filed the patent application for his “acidimeter”, later renamed the pH meter, which issued as U.S. Patent No. 2,058,761. Although priced at a steep $195, roughly the starting monthly wage for a chemistry professor at that time, it was significantly cheaper than the estimated cost of building a comparable instrument from individual components, about $500. The original pH meter weighed in at nearly 7 kg, but it was a substantial improvement over a benchful of delicate equipment. The earliest meter had a design glitch, in that the pH readings changed with the depth of immersion of the electrodes, but Beckman fixed the problem by sealing the glass bulb of the electrode. By 11 May 1939, sales were successful enough that Beckman left Caltech to become the full-time president of National Technical Laboratories. By 1940, Beckman was able to take out a loan to build his own 12,000-square-foot factory in South Pasadena.

In 1940, the equipment needed to analyze emission spectra in the visible spectrum could cost a laboratory as much as $3,000, a huge amount at that time. There was also growing interest in examining ultraviolet spectra beyond that range. In the same way that he had created a single easy-to-use instrument for measuring pH, Beckman made it a goal to create an easy-to-use instrument for spectrophotometry. Beckman’s research team, led by Howard Cary, developed several models.

The new spectrophotometers used a prism to spread light into its component wavelengths and a phototube to “read” the transmitted light and generate electrical signals, creating a standardized “fingerprint” for the material tested. With Beckman’s model D, later known as the DU spectrophotometer, National Technical Laboratories created the first easy-to-use single instrument containing both the optical and electronic components needed for ultraviolet-absorption spectrophotometry. The user could insert a sample, dial in the desired wavelength, and read the amount of absorption at that wavelength from a simple meter. It produced accurate absorption spectra in both the ultraviolet and the visible regions with relative ease and repeatable accuracy. The National Bureau of Standards ran tests to certify that the DU’s results were accurate and repeatable and recommended its use.
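The quantity such a meter reports is described by the Beer-Lambert law, A = εcl. A minimal sketch of the arithmetic (illustrative values, not Beckman specifications):

```python
import math

def absorbance(incident, transmitted):
    """Absorbance A = log10(I0 / I), computed from the light intensity
    entering the sample (I0) and the intensity reaching the phototube (I)."""
    return math.log10(incident / transmitted)

def concentration(a, molar_absorptivity, path_cm=1.0):
    """Beer-Lambert law A = epsilon * c * l, solved for concentration c."""
    return a / (molar_absorptivity * path_cm)
```

A sample transmitting 10% of the incident light has A = 1.0; dividing by the molar absorptivity and path length then converts that single meter reading directly into a concentration, which is what made the DU so much faster than bioassay.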

Beckman’s DU spectrophotometer has been referred to as the “Model T” of scientific instruments: “This device forever simplified and streamlined chemical analysis, by allowing researchers to perform a 99.9% accurate biological assessment of a substance within minutes, as opposed to the weeks required previously for results of only 25% accuracy.” Nobel laureate Bruce Merrifield is quoted as calling the DU spectrophotometer “probably the most important instrument ever developed towards the advancement of bioscience.”

Development of the spectrophotometer also had direct relevance to the war effort. The role of vitamins in health was being studied, and scientists wanted to identify Vitamin A-rich foods to keep soldiers healthy. Previous methods involved feeding rats for several weeks, then performing a biopsy to estimate Vitamin A levels. The DU spectrophotometer yielded better results in a matter of minutes. The DU spectrophotometer was also an important tool for scientists studying and producing the new wonder drug penicillin. By the end of the war, American pharmaceutical companies were producing 650 billion units of penicillin each month. Much of the work done in this area during World War II was kept secret until after the war.

Beckman also developed infrared spectrophotometers, first the IR-1 and then, in 1953, a redesigned instrument, the IR-4, which could be operated using either a single or a double beam of infrared light. This allowed a user to take the reference measurement and the sample measurement at the same time.

Beckman Coulter Inc. is an American company that makes biomedical laboratory instruments. Founded by Caltech professor Arnold O. Beckman in 1935 as National Technical Laboratories to commercialize a pH meter that he had invented, the company eventually grew to employ over 10,000 people, with $2.4 billion in annual sales by 2004. Its current headquarters are in Brea, California.

In the 1940s, Beckman changed the name to Arnold O. Beckman, Inc. to sell oxygen analyzers, the Helipot precision potentiometer, and spectrophotometers. In the 1950s, the company name changed to Beckman Instruments, Inc.

Beckman was contacted by Paul Rosenberg, who worked at MIT’s Radiation Laboratory. The lab was part of a secret network of research institutions in the United States and Britain that were working to develop radar, “radio detection and ranging”. The project was interested in Beckman because of the high quality of the tuning knobs, or “potentiometers”, used on his pH meters. Beckman had trademarked the design of the pH meter knobs under the name “helipot”, for “helical potentiometer”. Rosenberg had found that the helipot was more precise, by a factor of ten, than other knobs, and Beckman redesigned it with a continuous groove so that vibration could not jar the contact loose.

Beckman instruments were also used by the Manhattan Project to measure radiation in gas-filled, electrically charged ionization chambers in nuclear reactors. The pH meter was adapted to do the job with a relatively minor adjustment, substituting an input-load resistor for the glass electrode. As a result, Beckman Instruments developed a new product, the micro-ammeter.

After the war, Beckman developed oxygen analyzers that were used to monitor conditions in incubators for premature babies. Doctors at Johns Hopkins University used them to determine recommendations for healthy oxygen levels for incubators.

Beckman himself was approached by California governor Goodwin Knight to head a Special Committee on Air Pollution and propose ways to combat smog. At the end of 1953, the committee made its findings public in a report, nicknamed the “Beckman Bible”, that advised key steps to be taken immediately.

In 1955, Beckman established the seminal Shockley Semiconductor Laboratory as a division of Beckman Instruments to begin commercializing the semiconductor transistor technology invented by Caltech alumnus William Shockley. The Shockley Laboratory was established in nearby Mountain View, California, and thus, “Silicon Valley” was born.

Beckman also saw that computers and automation offered a myriad of opportunities for integration into instruments, and the development of new instruments.

The Arnold and Mabel Beckman Foundation was incorporated in September 1977.  At the time of Beckman’s death, the Foundation had given more than 400 million dollars to a variety of charities and organizations. In 1990, it was considered one of the top ten foundations in California, based on annual gifts. Donations chiefly went to scientists and scientific causes as well as Beckman’s alma maters. He is quoted as saying, “I accumulated my wealth by selling instruments to scientists,… so I thought it would be appropriate to make contributions to science, and that’s been my number one guideline for charity.”

Wallace H. Coulter

Engineer, Inventor, Entrepreneur, Visionary

Wallace Henry Coulter was an engineer, inventor, entrepreneur and visionary. He was co-founder and Chairman of Coulter® Corporation, a worldwide medical diagnostics company headquartered in Miami, Florida. The two great passions of his life were applying engineering principles to scientific research, and embracing the diversity of world cultures. The first passion led him to invent the Coulter Principle™, the reference method for counting and sizing microscopic particles suspended in a fluid.

This invention served as the cornerstone for automating the labor-intensive process of counting and testing blood. With his vision and tenacity, Wallace Coulter was a founding father in the field of laboratory hematology, the science and study of blood. His global viewpoint and passion for world cultures inspired him to establish over twenty international subsidiaries. He recognized that it was imperative to employ locally based staff to service his customers before this became standard business strategy.
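The Coulter Principle itself is simple: each particle passing through a small aperture displaces conducting fluid and produces a resistance pulse whose height scales with particle volume, so counting pulses counts cells and measuring pulse heights sizes them. A toy sketch of that counting-and-sizing step (hypothetical trace and threshold, not instrument firmware):

```python
def count_and_size(trace, baseline, threshold):
    """Count resistance pulses rising above baseline + threshold and record
    each pulse's peak height, which is proportional to particle volume."""
    peaks = []
    in_pulse = False
    peak = 0.0
    for v in trace:
        excess = v - baseline
        if excess > threshold:
            in_pulse = True
            peak = max(peak, excess)
        elif in_pulse:          # pulse just ended: record its height
            peaks.append(peak)
            in_pulse = False
            peak = 0.0
    return peaks
```

The cell count is then `len(peaks)`, and a histogram of the peak heights approximates the cell size distribution, the two outputs named in the title of Coulter's 1953 paper.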

Wallace’s first attempts to patent his invention were turned away by more than one attorney who believed “you cannot patent a hole”. Persistent as always, Wallace finally applied for his first patent in 1949, and it was issued on October 20, 1953. That same year, two prototypes were sent to the National Institutes of Health for evaluation; shortly after, the NIH published its findings in two key papers, citing the improved accuracy and convenience of the Coulter method of counting blood cells. Also in 1953, Wallace publicly disclosed his invention in his one and only technical paper, presented at the National Electronics Conference: “High Speed Automatic Blood Cell Counter and Cell Size Analyzer”.

Leonard Skeggs invented the first continuous flow analyzer in 1957. This groundbreaking event completely changed the way that chemistry was carried out. Many of the laborious tests that dominated lab work could be automated, increasing productivity and freeing personnel for other, more challenging tasks.

Continuous flow analysis and its offshoots and descendants are an integral part of modern chemistry. It might therefore be some consolation to Leonard Skeggs to know that he not only was the beneficiary of an appellation with a long and fascinating history, but also created a revolution in wet chemistry that is still with us today.


The AutoAnalyzer is an automated analyzer using a flow technique called continuous flow analysis (CFA), first made by the Technicon Corporation. The instrument was invented in 1957 by Leonard Skeggs, PhD, and commercialized by Jack Whitehead’s Technicon Corporation. The first applications were for clinical analysis, but methods for industrial analysis soon followed. The design is based on segmenting a continuously flowing stream with air bubbles.

In continuous flow analysis (CFA) a continuous stream of material is divided by air bubbles into discrete segments in which chemical reactions occur. The continuous stream of liquid samples and reagents is combined and transported in tubing and mixing coils. The tubing passes the samples from one apparatus to the next, with each apparatus performing a different function, such as distillation, dialysis, extraction, ion exchange, heating, incubation, and subsequent recording of a signal. An essential principle of the system is the introduction of air bubbles. The air bubbles segment each sample into discrete packets and act as barriers between packets to prevent cross-contamination as they travel down the length of the tubing. The air bubbles also assist mixing by creating turbulent (bolus) flow, and provide operators with a quick and easy check of the flow characteristics of the liquid. Samples and standards are treated in an exactly identical manner as they travel the length of the tubing, eliminating the necessity of a steady-state signal. However, since the bubbles create an almost square-wave profile, bringing the system to steady state does not significantly decrease throughput (third-generation CFA analyzers average 90 or more samples per hour), and it is desirable in that steady-state signals (chemical equilibrium) are more accurate and reproducible.

A continuous flow analyzer (CFA) consists of different modules, including a sampler, pump, mixing coils, optional sample treatments (dialysis, distillation, heating, etc.), a detector, and a data generator. Most continuous flow analyzers depend on color reactions read with a flow-through photometer; however, methods have also been developed that use ion-selective electrodes, flame photometry, ICAP, fluorometry, and so forth.

Flow injection analysis (FIA) was introduced in 1975 by Ruzicka and Hansen.

Jaromir (Jarda) Ruzicka is a Professor of Chemistry (Emeritus at the University of Washington and Affiliate at the University of Hawaii) and a member of the Danish Academy of Technical Sciences. Born in Prague in 1934, he graduated from the Department of Analytical Chemistry, Faculty of Sciences, Charles University. In 1968, when the Soviets occupied Czechoslovakia, he emigrated to Denmark. There he joined the Technical University of Denmark, where, ten years later, he received a newly created Chair in Analytical Chemistry. When Jarda met Elo Hansen, they invented flow injection.

The first generation of FIA technology, termed flow injection (FI), was inspired by the AutoAnalyzer technique invented by Skeggs in the early 1950s. While Skeggs’ AutoAnalyzer uses air segmentation to separate a flowing stream into numerous discrete segments, establishing a long train of individual samples moving through a flow channel, FIA systems separate each sample from the next with a carrier reagent. And while the AutoAnalyzer mixes sample homogeneously with reagents, in all FIA techniques sample and reagents are merged to form a concentration gradient that yields the analysis results.

Arthur Karmen

Dr. Karmen was born in New York City in 1930. He graduated from the Bronx High School of Science in 1946 and earned an A.B. and M.D. in 1950 and 1954, respectively, from New York University. In 1952, while a medical student working on a summer project at Memorial Sloan-Kettering, he used paper chromatography of amino acids to demonstrate the presence of glutamic-oxaloacetic and glutamic-pyruvic transaminases (aspartate and alanine aminotransferases) in serum and blood. In 1954, he devised the spectrophotometric method for measuring aspartate aminotransferase in serum, which, with minor modifications, is still used for diagnostic testing today. When developing this assay, he studied the reaction of NADH with serum and demonstrated the presence of lactate and malate dehydrogenases, both of which were also later used in diagnosis. Using the spectrophotometric method, he found that aspartate aminotransferase increased in the period immediately after an acute myocardial infarction and did the pilot studies that showed its diagnostic utility in heart and liver diseases. This became as important as the EKG. In cardiology it was later replaced by the MB isoenzyme of creatine kinase, a change driven by Burton Sobel’s work on infarct size, and later by the troponins.

History of Laboratory Medicine at Yale University

The roots of the Department of Laboratory Medicine at Yale can be traced back to John Peters, the head of what he called the “Chemical Division” of the Department of Internal Medicine, subsequently known as the Section of Metabolism, who co-authored with Donald Van Slyke the landmark 1931 textbook Quantitative Clinical Chemistry (2,3); and to Pauline Hald, research collaborator of Dr. Peters who subsequently served as Director of Clinical Chemistry at Yale-New Haven Hospital for many years. In 1947, Miss Hald reported the very first flame photometric measurements of sodium and potassium in serum (4). This study helped to lay the foundation for modern studies of metabolism and their application to clinical care.

The Laboratory Medicine program at Yale had its inception in 1958 as a section of Internal Medicine under the leadership of David Seligson. In 1965, Laboratory Medicine achieved autonomous section status and in 1971, became a full-fledged academic department. Dr. Seligson, who served as the first Chair, pioneered modern automation and computerized data processing in the clinical laboratory. In particular, he demonstrated the feasibility of discrete sample handling for automation that is now the basis of virtually all automated chemistry analyzers. In addition, Seligson and Zetner demonstrated the first clinical use of atomic absorption spectrophotometry. He was one of the founding members of the major Laboratory Medicine academic society, the Academy of Clinical Laboratory Physicians and Scientists.

Nathan Gochman.  Developer of Automated Chemistries.

Nathan Gochman, PhD, has over 40 years of experience in the clinical diagnostics industry. This includes academic teaching and research, and 30 years in the pharmaceutical and in vitro diagnostics industry. He has managed R & D, technical marketing and technical support departments. As a leader in the industry he was President of the American Association for Clinical Chemistry (AACC) and the National Committee for Clinical Laboratory Standards (NCCLS, now CLSI). He is currently a Consultant to investment firms and IVD companies.

William Sunderman

A doctor and scientist who lived a remarkable century and beyond — making medical advances, playing his Stradivarius violin at Carnegie Hall at 99 and being honored as the nation’s oldest worker at 100.

He developed a method for measuring glucose in the blood, the Sunderman Sugar Tube, and was one of the first doctors to use insulin to bring a patient out of a diabetic coma. He established quality-control techniques for medical laboratories that ended the wide variation in the results of laboratories doing the same tests.

He taught at several medical schools and founded and edited the journal Annals of Clinical and Laboratory Science. In World War II, he was a medical director for the Manhattan Project, which developed the atomic bomb.

Dr. Sunderman was president of the American Society of Clinical Pathologists and a founding governor of the College of American Pathologists. He also helped organize the Association of Clinical Scientists and was its first president.

Yale Department of Laboratory Medicine

The discipline of clinical chemistry and the broader field of laboratory medicine, as they are practiced today, are attributed in no small part to Seligson’s vision and creativity.

Born in Philadelphia in 1916, Seligson graduated from the University of Maryland and received a D.Sc. from Johns Hopkins University and an M.D. from the University of Utah. In 1953, he served as a captain in the U.S. Army and as chief of the Hepatic and Metabolic Disease Laboratory at Walter Reed Army Medical Center.

Recruited to Yale and Grace-New Haven Hospital in 1958 from the University of Pennsylvania as professor of internal medicine at the medical school and the first director of clinical laboratories at the hospital, Seligson subsequently established the infrastructure of the Department of Laboratory Medicine, creating divisions of clinical chemistry, microbiology, transfusion medicine (blood banking) and hematology – each with its own strong clinical, teaching and research programs.

Challenging the continuous flow approach, Seligson designed, built and validated “discrete sample handling” instruments wherein each sample was treated independently, which allowed better choice of methods and greater efficiency. Today continuous flow has essentially disappeared and virtually all modern automated clinical laboratory instruments are based upon discrete sample handling technology.

Seligson was one of the early visionaries who recognized the potential for computers in the clinical laboratory. One of the first applications of a digital computer in the clinical laboratory occurred in Seligson’s department at Yale, and shortly thereafter data were being transmitted directly from the laboratory computer to data stations on the patient wards. Now, such laboratory information systems represent the standard of care.

He was also among the first to highlight the clinical importance of test specificity and accuracy, as compared to simple reproducibility. One of his favorite slides was one that showed almost perfectly reproducible results for 10 successive measurements of blood sugar obtained with what was then the most widely used and popular analytical instrument. However, he would note, the answer was wrong; the assay was not accurate.
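The distinction Seligson stressed can be made concrete with a few lines of Python. The numbers below are invented for illustration (not his actual blood sugar data): ten repeated measurements that agree closely with one another (high reproducibility) while all missing an assumed true value of 100 mg/dL (poor accuracy).

```python
import statistics

# Hypothetical repeated glucose measurements (mg/dL) on one specimen.
measurements = [118.2, 118.5, 118.1, 118.4, 118.3,
                118.2, 118.5, 118.3, 118.4, 118.2]
true_value = 100.0  # assumed reference value for this illustration

precision_sd = statistics.stdev(measurements)       # reproducibility
bias = statistics.mean(measurements) - true_value   # accuracy (trueness)

print(f"SD (precision): {precision_sd:.2f} mg/dL")  # small: very reproducible
print(f"Bias (accuracy): {bias:+.2f} mg/dL")        # large: the answer is wrong
```

A laboratory monitoring only the standard deviation would call this instrument excellent; only comparison against a reference value exposes the bias, which is precisely the point of Seligson’s favorite slide.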

Seligson established one of the nation’s first residency programs focused on laboratory medicine or clinical pathology, and also developed a teaching curriculum in laboratory medicine for medical students. In so doing, he created a model for the modern practice of laboratory medicine in an academic environment, and his trainees spread throughout the country as leaders in the field.

Ernest Cotlove

Ernest Cotlove’s scientific and medical career started at NYU where, after completing his medical studies in 1943, he pursued studies in renal physiology and chemistry. His outstanding ability to acquire knowledge and conduct innovative investigations earned him an invitation from James Shannon, then Director of the National Heart Institute at NIH. He continued studies of renal physiology and chemistry until 1953, when he became Head of the Clinical Chemistry Laboratories in the new Department of Clinical Pathology being developed by George Z. Williams during the Clinical Center’s construction. Dr. Cotlove seized the opportunity to design and equip the most advanced and functional clinical chemistry facility in the country.

Dr. Cotlove’s career exemplified the progress seen in medical research and technology. He designed the electronic chloridometer that bears his name, in spite of published reports that such an approach was theoretically impossible. He used this innovative skill to develop new instruments and methods at the Clinical Center. Many recognized him as an expert in clinical chemistry, computer programming, systems design for laboratory operations, and automation of analytical instruments.

Effects of Automation on Laboratory Diagnosis

George Z. Williams

There are four primary effects of laboratory automation on the practice of medicine: The range of laboratory support is being greatly extended to both diagnosis and guidance of therapeutic management; the new feasibility of multiphasic periodic health evaluation promises effective health and manpower conservation in the future; and substantially lowered unit cost for laboratory analysis will permit more extensive use of comprehensive laboratory medicine in everyday practice. There is, however, a real and growing danger of naive acceptance of and overconfidence in the reliability and accuracy of automated analysis and computer processing without critical evaluation. Erroneous results can jeopardize the patient’s welfare. Every physician has the responsibility to obtain proof of accuracy and reliability from the laboratories which serve his patients.

Mario Werner

Dr. Werner received his medical degree from the University of Zurich, Switzerland in 1956. After specializing in internal medicine at the University Clinic in Basel, he came to the United States–as a fellow of the Swiss Academy of Medical Sciences–to work at NIH and at the Rockefeller University. From 1964 to 1966, he served as chief of the Central Laboratory at the Klinikum Essen, Ruhr-University, Germany. In 1967, he returned to the US, joining the Division of Clinical Pathology and Laboratory Medicine at the University of California, San Francisco, as an assistant professor. Three years later, he became Associate Professor of Pathology and Laboratory Medicine at Washington University in St. Louis, where he was instrumental in establishing the training program in laboratory medicine. In 1972, he was appointed Professor of Pathology at The George Washington University in Washington, DC.

Norbert Tietz

Professor Norbert W. Tietz received the degree of Doctor of Natural Sciences from the Technical University Stuttgart, Germany, in 1950. In 1954 he immigrated to the United States where he subsequently held positions or appointments at several Chicago area institutions including the Mount Sinai Hospital Medical Center, Chicago Medical School/University of Health Sciences and Rush Medical College.

Professor Tietz is best known as the editor of the Fundamentals of Clinical Chemistry. This book, now in its sixth edition, remains a primary information source for both students and educators in laboratory medicine. It was the first modern textbook that integrated clinical chemistry with the basic sciences and pathophysiology.

Throughout his career, Dr. Tietz taught a range of students from the undergraduate through post-graduate level including (1) medical technology students, (2) medical students, (3) clinical chemistry graduate students, (4) pathology residents, and (5) practicing chemists. For example, in the late 1960’s he began the first master’s of science degree program in clinical chemistry in the United States at the Chicago Medical School. This program subsequently evolved into one of the first Ph.D. programs in clinical chemistry.

Automation and other recent developments in clinical chemistry.

Griffiths J.


The decade 1980 to 1990 was the most progressive period in the short, but turbulent, history of clinical chemistry. New techniques and the instrumentation needed to perform assays have opened a chemical Pandora’s box. Multichannel analyzers, the spectrophotometric foundation of automated laboratories, have become almost perfect. The extended use of the antigen-monoclonal antibody reaction with increasingly sensitive labels has pushed analyte detection routinely into the picomole/liter range. Devices that aid the automation of serum processing and distribution of specimens are emerging. Laboratory computerization has significantly matured, permitting better integration of laboratory instruments, improving communication between laboratory personnel and the patient’s physician, and facilitating the use of expert systems and robotics in the chemistry laboratory.

Automation and Expert Systems in a Core Clinical Chemistry Laboratory
Streitberg, GT, et al.  JALA 2009;14:94–105

Clinical pathology or laboratory medicine has a great influence on clinical decisions: 60–70% of the most important decisions on admission, discharge, and medication are based on laboratory results.[1] As we learn more about clinical laboratory results and incorporate them in outcome optimization schemes, the laboratory will play a more pivotal role in the management of patients and their eventual outcomes.[2] It has been stated that the development of information technology and automation in laboratory medicine has allowed laboratory professionals to keep pace with the growth in workload.

The reasons to automate and the impacts of automation are similar, and include reduction in errors, increase in productivity, and improvement in safety. Advances in technology in clinical chemistry, including total laboratory automation, call for changes in job responsibilities to include skills in information technology, data management, instrumentation, patient preparation for diagnostic analysis, interpretation of pathology results, dissemination of knowledge and information to patients and other health staff, as well as skills in research.

The clinical laboratory has become so productive, particularly in chemistry and immunology, and the labor, instrument, and reagent costs so well determined, that today a physician’s medical decisions are 80% determined by the clinical laboratory.  Medical information systems have lagged far behind.  Why is that?  Because the decision for an MIS has historically been based on billing capture.  Moreover, the historical use of chemical profiles was quite good at validating healthy status in an outpatient population, but the profiles became restricted under Diagnostic Related Groups.  Thus, it came to be that diagnostics was considered a “commodity.”  In order to be competitive, a laboratory had to provide “high complexity” tests that were drawn in by a large volume of “moderate complexity” tests.

Read Full Post »

Summary and Perspectives: Impairments in Pathological States: Endocrine Disorders, Stress Hypermetabolism and Cancer

Summary and Perspectives: Impairments in Pathological States: Endocrine Disorders, Stress Hypermetabolism and Cancer

Author and Curator: Larry H. Bernstein, MD, FCAP

This summary is the last of a series on the impact of transcriptomics, proteomics, and metabolomics on disease investigation, and the sorting and integration of genomic signatures and metabolic signatures to explain phenotypic relationships in variability and individuality of response to disease expression and how this leads to  pharmaceutical discovery and personalized medicine.  We have unquestionably better tools at our disposal than has ever existed in the history of mankind, and an enormous knowledge-base that has to be accessed.  I shall conclude here these discussions with the powerful contribution to and current knowledge pertaining to biochemistry, metabolism, protein-interactions, signaling, and the application of the -OMICS to diseases and drug discovery at this time.

The Ever-Transcendent Cell

Deriving physiologic first principles By John S. Torday | The Scientist Nov 1, 2014

Both the developmental and phylogenetic histories of an organism describe the evolution of physiology—the complex of metabolic pathways that govern the function of an organism as a whole. The necessity of establishing and maintaining homeostatic mechanisms began at the cellular level, with the very first cells, and homeostasis provides the underlying selection pressure fueling evolution.

While the events leading to the formation of the first functioning cell are debatable, a critical one was certainly the formation of simple lipid-enclosed vesicles, which provided a protected space for the evolution of metabolic pathways. Protocells evolved from a common ancestor that experienced environmental stresses early in the history of cellular development, such as acidic ocean conditions and low atmospheric oxygen levels, which shaped the evolution of metabolism.

The reduction of evolution to cell biology may answer the perennially unresolved question of why organisms return to their unicellular origins during the life cycle.

As primitive protocells evolved to form prokaryotes and, much later, eukaryotes, changes to the cell membrane occurred that were critical to the maintenance of chemiosmosis, the generation of bioenergy through the partitioning of ions. The incorporation of cholesterol into the plasma membrane surrounding primitive eukaryotic cells marked the beginning of their differentiation from prokaryotes. Cholesterol imparted more fluidity to eukaryotic cell membranes, enhancing functionality by increasing motility and endocytosis. Membrane deformability also allowed for increased gas exchange.

Acidification of the oceans by atmospheric carbon dioxide generated high intracellular calcium ion concentrations in primitive aquatic eukaryotes, which had to be lowered to prevent toxic effects, namely the aggregation of nucleotides, proteins, and lipids. The early cells achieved this by the evolution of calcium channels composed of cholesterol embedded within the cell’s plasma membrane, and of internal membranes, such as that of the endoplasmic reticulum, peroxisomes, and other cytoplasmic organelles, which hosted intracellular chemiosmosis and helped regulate calcium.

As eukaryotes thrived, they experienced increasingly competitive pressure for metabolic efficiency. Engulfed bacteria, assimilated as mitochondria, provided more bioenergy. As the evolution of eukaryotic organisms progressed, metabolic cooperation evolved, perhaps to enable competition with biofilm-forming, quorum-sensing prokaryotes. The subsequent appearance of multicellular eukaryotes expressing cellular growth factors and their respective receptors facilitated cell-cell signaling, forming the basis for an explosion of multicellular eukaryote evolution, culminating in the metazoans.

Casting a cellular perspective on evolution highlights the integration of genotype and phenotype. Starting from the protocell membrane, the functional homolog for all complex metazoan organs, it offers a way of experimentally determining the role of genes that fostered evolution based on the ontogeny and phylogeny of cellular processes that can be traced back, in some cases, to our last universal common ancestor.  ….

Given that the unicellular toolkit is complete with all the traits necessary for forming multicellular organisms (Science, 301:361-63, 2003), it is distinctly possible that metazoans are merely permutations of the unicellular body plan. That scenario would clarify a lot of puzzling biology: molecular commonalities between the skin, lung, gut, and brain that affect physiology and pathophysiology exist because the cell membranes of unicellular organisms perform the equivalents of these tissue functions, and the existence of pleiotropy—one gene affecting many phenotypes—may be a consequence of the common unicellular source for all complex biologic traits.  …

The cell-molecular homeostatic model for evolution and stability addresses how the external environment generates homeostasis developmentally at the cellular level. It also determines homeostatic set points in adaptation to the environment through specific effectors, such as growth factors and their receptors, second messengers, inflammatory mediators, crossover mutations, and gene duplications. This is a highly mechanistic, heritable, plastic process that lends itself to understanding evolution at the cellular, tissue, organ, system, and population levels, mediated by physiologically linked mechanisms throughout, without having to invoke random, chance mechanisms to bridge different scales of evolutionary change. In other words, it is an integrated mechanism that can often be traced all the way back to its unicellular origins.

The switch from swim bladder to lung as vertebrates moved from water to land is proof of principle that stress-induced evolution in metazoans can be understood from changes at the cellular level.


A MECHANISTIC BASIS FOR LUNG DEVELOPMENT: Stress from periodic atmospheric hypoxia (1) during vertebrate adaptation to land enhances positive selection of the stretch-regulated parathyroid hormone-related protein (PTHrP) in the pituitary and adrenal glands. In the pituitary (2), PTHrP signaling upregulates the release of adrenocorticotropic hormone (ACTH) (3), which stimulates the release of glucocorticoids (GC) by the adrenal gland (4). In the adrenal gland, PTHrP signaling also stimulates glucocorticoid production of adrenaline (5), which in turn affects the secretion of lung surfactant, the distension of alveoli, and the perfusion of alveolar capillaries (6). PTHrP signaling integrates the inflation and deflation of the alveoli with surfactant production and capillary perfusion.  THE SCIENTIST STAFF

From a cell-cell signaling perspective, two critical duplications in genes coding for cell-surface receptors occurred during this period of water-to-land transition—in the stretch-regulated parathyroid hormone-related protein (PTHrP) receptor gene and the β adrenergic (βA) receptor gene. These gene duplications can be disassembled by following their effects on vertebrate physiology backwards over phylogeny. PTHrP signaling is necessary for traits specifically relevant to land adaptation: calcification of bone, skin barrier formation, and the inflation and distention of lung alveoli. Microvascular shear stress in PTHrP-expressing organs such as bone, skin, kidney, and lung would have favored duplication of the PTHrP receptor, since shear stress generates reactive oxygen species (ROS) known to have this effect and PTHrP is a potent vasodilator, acting as an epistatic balancing selection for this constraint.

Positive selection for PTHrP signaling also evolved in the pituitary and adrenal cortex (see figure on this page), stimulating the secretion of ACTH and corticoids, respectively, in response to the stress of land adaptation. This cascade amplified adrenaline production by the adrenal medulla, since corticoids passing through it enzymatically stimulate adrenaline synthesis. Positive selection for this functional trait may have resulted from hypoxic stress that arose during global episodes of atmospheric hypoxia over geologic time. Since hypoxia is the most potent physiologic stressor, such transient oxygen deficiencies would have been acutely alleviated by increasing adrenaline levels, which would have stimulated alveolar surfactant production, increasing gas exchange by facilitating the distension of the alveoli. Over time, increased alveolar distension would have generated more alveoli by stimulating PTHrP secretion, impelling evolution of the alveolar bed of the lung.

This scenario similarly explains βA receptor gene duplication, since increased density of the βA receptor within the alveolar walls was necessary for relieving another constraint during the evolution of the lung in adaptation to land: the bottleneck created by the existence of a common mechanism for blood pressure control in both the lung alveoli and the systemic blood pressure. The pulmonary vasculature was constrained by its ability to withstand the swings in pressure caused by the systemic perfusion necessary to sustain all the other vital organs. PTHrP is a potent vasodilator, subserving the blood pressure constraint, but eventually the βA receptors evolved to coordinate blood pressure in both the lung and the periphery.

Gut Microbiome Heritability

Analyzing data from a large twin study, researchers have homed in on how host genetics can shape the gut microbiome.
By Tracy Vence | The Scientist Nov 6, 2014

Previous research suggested host genetic variation can influence microbial phenotype, but an analysis of data from a large twin study published in Cell today (November 6) solidifies the connection between human genotype and the composition of the gut microbiome. Studying more than 1,000 fecal samples from 416 monozygotic and dizygotic twin pairs, Cornell University’s Ruth Ley and her colleagues have homed in on one bacterial taxon, the family Christensenellaceae, as the most highly heritable group of microbes in the human gut. The researchers also found that Christensenellaceae—which was first described just two years ago—is central to a network of co-occurring heritable microbes that is associated with lean body mass index (BMI).  …

Of particular interest was the family Christensenellaceae, which was the most heritable taxon among those identified in the team’s analysis of fecal samples obtained from the TwinsUK study population.

While microbiologists had previously detected 16S rRNA sequences belonging to Christensenellaceae in the human microbiome, the family wasn’t named until 2012. “People hadn’t looked into it, partly because it didn’t have a name . . . it sort of flew under the radar,” said Ley.

Ley and her colleagues discovered that Christensenellaceae appears to be the hub in a network of co-occurring heritable taxa, which—among TwinsUK participants—was associated with low BMI. The researchers also found that Christensenellaceae had been found at greater abundance in low-BMI twins in older studies.

To interrogate the effects of Christensenellaceae on host metabolic phenotype, Ley’s team introduced lean and obese human fecal samples into germ-free mice. They found that animals receiving lean fecal samples containing more Christensenellaceae showed reduced weight gain compared with their counterparts. And treatment of mice that had obesity-associated microbiomes with one member of the Christensenellaceae family, Christensenella minuta, led to reduced weight gain.   …

Ley and her colleagues are now focusing on the host alleles underlying the heritability of the gut microbiome. “We’re running a genome-wide association analysis to try to find genes—particular variants of genes—that might associate with higher levels of these highly heritable microbiota.  . . . Hopefully that will point us to possible reasons they’re heritable,” she said. “The genes will guide us toward understanding how these relationships are maintained between host genotype and microbiome composition.”

J.K. Goodrich et al., “Human genetics shape the gut microbiome,” Cell, http://dx.doi.org/10.1016/j.cell.2014.09.053, 2014.

Light-Operated Drugs

Scientists create a photosensitive pharmaceutical to target a glutamate receptor.
By Ruth Williams | The Scientist Nov 1, 2014

The desire for temporal and spatial control of medications to minimize side effects and maximize benefits has inspired the development of light-controllable drugs, or optopharmacology. Early versions of such drugs have manipulated ion channels or protein-protein interactions, “but never, to my knowledge, G protein–coupled receptors [GPCRs], which are one of the most important pharmacological targets,” says Pau Gorostiza of the Institute for Bioengineering of Catalonia, in Barcelona.

Gorostiza has taken the first step toward filling that gap, creating a photosensitive inhibitor of the metabotropic glutamate 5 (mGlu5) receptor—a GPCR expressed in neurons and implicated in a number of neurological and psychiatric disorders. The new mGlu5 inhibitor—called alloswitch-1—is based on a known mGlu receptor inhibitor, but the simple addition of a light-responsive appendage, as had been done for other photosensitive drugs, wasn’t an option. The binding site on mGlu5 is “extremely tight,” explains Gorostiza, and would not accommodate a differently shaped molecule. Instead, alloswitch-1 has an intrinsic light-responsive element.

In a human cell line, the drug was active under dim light conditions, switched off by exposure to violet light, and switched back on by green light. When Gorostiza’s team administered alloswitch-1 to tadpoles, switching between violet and green light made the animals stop and start swimming, respectively.

The fact that alloswitch-1 is constitutively active and switched off by light is not ideal, says Gorostiza. “If you are thinking of therapy, then in principle you would prefer the opposite,” an “on” switch. Indeed, tweaks are required before alloswitch-1 could be a useful drug or research tool, says Stefan Herlitze, who studies ion channels at Ruhr-Universität Bochum in Germany. But, he adds, “as a proof of principle it is great.” (Nat Chem Biol, http://dx.doi.org/10.1038/nchembio.1612, 2014)

Enhanced Enhancers

The recent discovery of super-enhancers may offer new drug targets for a range of diseases.
By Eric Olson | The Scientist Nov 1, 2014

To understand disease processes, scientists often focus on unraveling how gene expression in disease-associated cells is altered. Increases or decreases in transcription—as dictated by a regulatory stretch of DNA called an enhancer, which serves as a binding site for transcription factors and associated proteins—can produce an aberrant composition of proteins, metabolites, and signaling molecules that drives pathologic states. Identifying the root causes of these changes may lead to new therapeutic approaches for many different diseases.

Although few therapies for human diseases aim to alter gene expression, the outstanding examples—including antiestrogens for hormone-positive breast cancer, antiandrogens for prostate cancer, and PPAR-γ agonists for type 2 diabetes—demonstrate the benefits that can be achieved through targeting gene-control mechanisms.  Now, thanks to recent papers from laboratories at MIT, Harvard, and the National Institutes of Health, researchers have a new, much bigger transcriptional target: large DNA regions known as super-enhancers or stretch-enhancers. Already, work on super-enhancers is providing insights into how gene-expression programs are established and maintained, and how they may go awry in disease.  Such research promises to open new avenues for discovering medicines for diseases where novel approaches are sorely needed.

Super-enhancers cover stretches of DNA that are 10- to 100-fold longer and about 10-fold less abundant in the genome than typical enhancer regions (Cell, 153:307-19, 2013). They also appear to bind a large percentage of the transcriptional machinery compared to typical enhancers, allowing them to better establish and enforce cell-type specific transcriptional programs (Cell, 153:320-34, 2013).

Super-enhancers are closely associated with genes that dictate cell identity, including those for cell-type–specific master regulatory transcription factors. This observation led to the intriguing hypothesis that cells with a pathologic identity, such as cancer cells, have an altered gene expression program driven by the loss, gain, or altered function of super-enhancers.

Sure enough, by mapping the genome-wide location of super-enhancers in several cancer cell lines and from patients’ tumor cells, we and others have demonstrated that genes located near super-enhancers are involved in processes that underlie tumorigenesis, such as cell proliferation, signaling, and apoptosis.

Super-enhancers cover stretches of DNA that are 10- to 100-fold longer and about 10-fold less abundant in the genome than typical enhancer regions.

Genome-wide association studies (GWAS) have found that disease- and trait-associated genetic variants often occur in greater numbers in super-enhancers (compared to typical enhancers) in cell types involved in the disease or trait of interest (Cell, 155:934-47, 2013). For example, an enrichment of fasting glucose–associated single nucleotide polymorphisms (SNPs) was found in the stretch-enhancers of pancreatic islet cells (PNAS, 110:17921-26, 2013). Given that some 90 percent of reported disease-associated SNPs are located in noncoding regions, super-enhancer maps may be extremely valuable in assigning functional significance to GWAS variants and identifying target pathways.
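The enrichment statistic behind such GWAS observations can be sketched as a simple density comparison: SNPs per megabase inside super-enhancers versus inside typical enhancers. The counts below are made up for illustration; real analyses use matched variant sets and hypergeometric or permutation tests for significance.

```python
# Hypothetical counts of disease-associated SNPs in each enhancer class,
# together with the genomic footprint (in base pairs) of each class.
snps_in_super, snps_in_typical = 120, 300
bp_super, bp_typical = 10_000_000, 100_000_000

# SNP density per megabase in each class
density_super = snps_in_super / (bp_super / 1e6)
density_typical = snps_in_typical / (bp_typical / 1e6)
fold_enrichment = density_super / density_typical

print(f"{fold_enrichment:.1f}-fold enrichment in super-enhancers")
```

Super-enhancers cover far less of the genome than typical enhancers, so even a modest absolute SNP count inside them can translate into a large per-megabase enrichment.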

Because only 1 to 2 percent of active genes are physically linked to a super-enhancer, mapping the locations of super-enhancers can be used to pinpoint the small number of genes that may drive the biology of that cell. Differential super-enhancer maps that compare normal cells to diseased cells can be used to unravel the gene-control circuitry and identify new molecular targets, in much the same way that somatic mutations in tumor cells can point to oncogenic drivers in cancer. This approach is especially attractive in diseases for which an incomplete understanding of the pathogenic mechanisms has been a barrier to discovering effective new therapies.

Another therapeutic approach could be to disrupt the formation or function of super-enhancers by interfering with their associated protein components. This strategy could make it possible to downregulate multiple disease-associated genes through a single molecular intervention. A group of Boston-area researchers recently published support for this concept when they described inhibited expression of cancer-specific genes, leading to a decrease in cancer cell growth, by using a small molecule inhibitor to knock down a super-enhancer component called BRD4 (Cancer Cell, 24:777-90, 2013).  More recently, another group showed that expression of the RUNX1 transcription factor, involved in a form of T-cell leukemia, can be diminished by treating cells with an inhibitor of a transcriptional kinase that is present at the RUNX1 super-enhancer (Nature, 511:616-20, 2014).

Fungal effector Ecp6 outcompetes host immune receptor for chitin binding through intrachain LysM dimerization 
Andrea Sánchez-Vallet, et al.   eLife 2013;2:e00790 http://elifesciences.org/content/2/e00790

While host immune receptors

  • detect pathogen-associated molecular patterns to activate immunity,
  • pathogens attempt to deregulate host immunity through secreted effectors.

Fungi employ LysM effectors to prevent

  • recognition of cell wall-derived chitin by host immune receptors

Structural analysis of the LysM effector Ecp6 of

  • the fungal tomato pathogen Cladosporium fulvum reveals
  • a novel mechanism for chitin binding,
  • mediated by intrachain LysM dimerization,

leading to a chitin-binding groove that is deeply buried in the effector protein.

This composite binding site involves

  • two of the three LysMs of Ecp6 and
  • mediates chitin binding with ultra-high (pM) affinity.

The remaining singular LysM domain of Ecp6 binds chitin with

  • low micromolar affinity but can nevertheless still perturb chitin-triggered immunity.

Conceivably, the perturbation by this LysM domain is not established through chitin sequestration but possibly through interference with the host immune receptor complex.
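The functional gap between the pM composite site and the low-µM singular LysM domain can be illustrated with the standard single-site binding isotherm, θ = [L] / (Kd + [L]). The free-chitin concentration below is an assumed value chosen only to make the contrast visible.

```python
# Fractional occupancy from the single-site binding isotherm.
LIGAND = 1e-9  # 1 nM free chitin fragments (illustrative assumption)

def occupancy(kd: float, ligand: float = LIGAND) -> float:
    """theta = [L] / (Kd + [L]) for a single binding site."""
    return ligand / (kd + ligand)

theta_composite = occupancy(1e-12)  # composite LysM site, ~pM affinity
theta_singular = occupancy(1e-6)    # singular LysM domain, ~µM affinity

print(f"pM site bound fraction: {theta_composite:.3f}")   # nearly saturated
print(f"uM site bound fraction: {theta_singular:.6f}")    # barely occupied
```

At nanomolar ligand, the pM site is essentially saturated while the µM site is nearly empty, which is why the singular domain’s effect on immunity is better explained by receptor interference than by chitin sequestration.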

Mutated Genes in Schizophrenia Map to Brain Networks
From www.nih.gov –  Sep 3, 2013

Previous studies have shown that many people with schizophrenia have de novo, or new, genetic mutations. These misspellings in a gene’s DNA sequence

  • occur spontaneously and so aren’t shared by their close relatives.

Dr. Mary-Claire King of the University of Washington in Seattle and colleagues set out to

  • identify spontaneous genetic mutations in people with schizophrenia and
  • to assess where and when in the brain these misspelled genes are turned on, or expressed.

The study was funded in part by NIH’s National Institute of Mental Health (NIMH). The results were published in the August 1, 2013, issue of Cell.

The researchers sequenced the exomes (protein-coding DNA regions) of 399 people—105 with schizophrenia plus their unaffected parents and siblings. Gene variations
that were found in a person with schizophrenia but not in either parent were considered spontaneous.
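The trio-filtering logic described here can be sketched in a few lines: a variant is taken as spontaneous (de novo) when present in the affected individual but in neither parent. The variant tuples below are invented for illustration; real pipelines operate on VCF genotype calls and must also account for sequencing error.

```python
# Toy trio filter: variants as (chromosome, position, alt-allele) tuples.
child = {("chr1", 12345, "A"), ("chr7", 555, "T"), ("chrX", 999, "G")}
mother = {("chr1", 12345, "A")}
father = {("chrX", 999, "G")}

# Keep only variants absent from both parents.
de_novo = child - mother - father
print(de_novo)  # the single variant seen in neither parent
```

Set difference makes the inheritance logic explicit: anything shared with a parent is inherited, and what remains is the de novo candidate list that studies like this one then annotate for predicted damage.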

The likelihood of having a spontaneous mutation was associated with

  • the age of the father in both affected and unaffected siblings.

Significantly more mutations were found in people

  • whose fathers were 33-45 years at the time of conception compared to 19-28 years.

Among people with schizophrenia, the scientists identified

  • 54 genes with spontaneous mutations
  • predicted to cause damage to the function of the protein they encode.

The researchers used newly available database resources that show

  • where in the brain and when during development genes are expressed.

The genes form an interconnected expression network with many more connections than

  • that of the genes with spontaneous damaging mutations in unaffected siblings.

The spontaneously mutated genes in people with schizophrenia

  • were expressed in the prefrontal cortex, a region in the front of the brain.

The genes are known to be involved in important pathways in brain development. Fifty of these genes were active

  • mainly during the period of fetal development.

“Processes critical for the brain’s development can be revealed by the mutations that disrupt them,” King says. “Mutations can lead to loss of integrity of a whole pathway,
not just of a single gene.”

These findings support the concept that schizophrenia may result, in part, from

  • disruptions in development in the prefrontal cortex during fetal development.

James E. Darnell’s “Reflections”

A brief history of the discovery of RNA and its role in transcription — peppered with career advice
By Joseph P. Tiano

James Darnell begins his Journal of Biological Chemistry “Reflections” article by saying, “graduate students these days

  • have to swim in a sea virtually turgid with the daily avalanche of new information and
  • may be momentarily too overwhelmed to listen to the aging.

I firmly believe how we learned what we know can provide useful guidance for how and what a newcomer will learn.” Considering his remarkable discoveries in

  • RNA processing and eukaryotic transcriptional regulation

spanning 60 years of research, Darnell’s advice should be cherished. In his second year at medical school at Washington University School of Medicine in St. Louis, while
studying streptococcal disease in Robert J. Glaser’s laboratory, Darnell realized he “loved doing the experiments” and had his first “career advancement event.”
He and technician Barbara Pesch discovered that in vivo penicillin treatment killed streptococci only in the exponential growth phase and not in the stationary phase. These
results were published in the Journal of Clinical Investigation and earned Darnell an interview with Harry Eagle at the National Institutes of Health.

Darnell arrived at the NIH in 1956, shortly after Eagle had shifted his research interest to developing his minimal essential cell culture medium, which is still in use today. Eagle, then studying cell metabolism, suggested that Darnell take up a side project on poliovirus replication in mammalian cells in collaboration with Robert I. DeMars. DeMars’ Ph.D.
adviser was also James Watson’s mentor, so Darnell met Watson, who invited him to give a talk at Harvard University; that talk led to an assistant professor position
at MIT under Salvador Luria. A take-home message is to embrace side projects, because you never know where they may lead: this one helped to shape
his career.

Darnell arrived in Boston in 1961. Following the discovery of DNA’s structure in 1953, the world of molecular biology was turning to RNA in an effort to understand how
proteins are made. Darnell’s background in virology (it was discovered in 1960 that viruses used RNA to replicate) was ideal for the aim of his first independent lab:
exploring mRNA in animal cells grown in culture. While at MIT, he developed a new technique for purifying RNA along with making other observations

  • suggesting that nonribosomal cytoplasmic RNA may be involved in protein synthesis.

When Darnell moved to Albert Einstein College of Medicine for a full professorship in 1964, it was hypothesized that heterogeneous nuclear RNA was a precursor to mRNA.
At Einstein, Darnell discovered RNA processing of pre-tRNAs and demonstrated for the first time

  • that a specific nuclear RNA could represent a possible specific mRNA precursor.

In 1967 Darnell took a position at Columbia University, and it was there that he discovered (simultaneously with two other labs) that

  • mRNA contained a polyadenosine tail.

The three groups all published their results together in the Proceedings of the National Academy of Sciences in 1971. Shortly afterward, Darnell made his final career move
four short miles down the street to Rockefeller University in 1974.

Over the next 35-plus years at Rockefeller, Darnell never strayed from his original research question: How do mammalian cells make and control the making of different
mRNAs? His work was instrumental in the collaborative discovery of

  • splicing in the late 1970s and
  • in identifying and cloning many transcriptional activators.

Perhaps his greatest contribution during this time, with the help of Ernest Knight, was

  • the discovery and cloning of the signal transducers and activators of transcription (STAT) proteins.

And with George Stark, Andy Wilks and John Krowlewski, he described

  • cytokine signaling via the JAK-STAT pathway.

Darnell closes his “Reflections” with perhaps his best advice: Do not get too wrapped up in your own work, because “we are all needed and we are all in this together.”

Darnell Reflections – James_Darnell


Recent findings on presenilins and signal peptide peptidase

By Dinu-Valantin Bălănescu

Fig. 1 from the minireview shows a schematic depiction of γ-secretase and SPP.


GxGD proteases are a family of intramembranous enzymes capable of hydrolyzing

  • the transmembrane domain of some integral membrane proteins.

The GxGD family is one of the three families of

  • intramembrane-cleaving proteases discovered so far (along with the rhomboid and site-2 protease) and
  • includes the γ-secretase and the signal peptide peptidase.

Although these enzymes were only recently discovered, a number of functions in human pathology and in numerous other biological processes

  • have been attributed to γ-secretase and SPP.

Taisuke Tomita and Takeshi Iwatsubo of the University of Tokyo highlighted the latest findings on the structure and function of γ-secretase and SPP
in a recent minireview in The Journal of Biological Chemistry.

  • γ-secretase is involved in cleaving the amyloid-β precursor protein, thus producing amyloid-β peptide,

the main component of senile plaques in Alzheimer’s disease patients’ brains. The complete structure of mammalian γ-secretase is not yet known; however,
Tomita and Iwatsubo note that biochemical analyses have revealed it to be a multisubunit protein complex.

  • Its catalytic subunit is presenilin, an aspartyl protease.

In vitro and in vivo functional and chemical biology analyses have revealed that

  • presenilin is a modulator and mandatory component of the γ-secretase–mediated cleavage of APP.

Genetic studies have identified three other components required for γ-secretase activity:

  1. nicastrin,
  2. anterior pharynx defective 1 and
  3. presenilin enhancer 2.

By coexpression of presenilin with the other three components, the authors managed to

  • reconstitute γ-secretase activity.

Using the substituted cysteine accessibility method and topological analyses, Tomita and Iwatsubo determined that

  • the catalytic aspartates are located at the center of the nine transmembrane domains of presenilin,
  • thereby revealing the exact location of the enzyme’s catalytic site.

The minireview also describes in detail the formerly enigmatic mechanism of γ-secretase mediated cleavage.

SPP, an enzyme that cleaves remnant signal peptides in the membrane

  • during the biogenesis of membrane proteins and
  • signal peptides from major histocompatibility complex type I,
  • also is involved in the maturation of proteins of the hepatitis C virus and GB virus B.

Bioinformatics methods have revealed four SPP-like proteins in fruit flies and mammals,

  • two of which are involved in immunological processes.

Experiments using γ-secretase inhibitors and modulators have confirmed

  • that SPP shares a similar GxGD active site and proteolytic activity with γ-secretase.

Upon purification of the human SPP protein with the baculovirus/Sf9 cell system,

  • single-particle analysis revealed further structural and functional details.

HLA targeting efficiency correlates with human T-cell response magnitude and with mortality from influenza A infection

From www.pnas.org –  Sep 3, 2013 4:24 PM

Experimental and computational evidence suggests that

  • HLAs preferentially bind conserved regions of viral proteins, a concept we term “targeting efficiency,” and that
  • this preference may provide improved clearance of infection in several viral systems.

To test this hypothesis, T-cell responses to A/H1N1 (2009) were measured from peripheral blood mononuclear cells obtained from a household cohort study
performed during the 2009–2010 influenza season. We found that HLA targeting efficiency scores significantly correlated with

  • IFN-γ enzyme-linked immunosorbent spot responses (P = 0.042, multiple regression).

A further population-based analysis found that the carriage frequencies of the alleles with the lowest targeting efficiencies, A*24,

  • were associated with pH1N1 mortality (r = 0.37, P = 0.031) and
  • are common in certain indigenous populations in which increased pH1N1 morbidity has been reported.
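The population-level association reported above (r = 0.37, P = 0.031) is a correlation between allele carriage frequency and pH1N1 mortality across populations. As a minimal sketch of how such a Pearson coefficient is computed, here is a self-contained implementation run on invented numbers (the study's actual data are not reproduced):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-population values: carriage frequency of a
# low-targeting-efficiency allele vs. pH1N1 mortality (invented numbers).
carriage_freq = [0.10, 0.15, 0.22, 0.30, 0.35, 0.42, 0.50]
mortality     = [1.2, 1.0, 1.8, 2.1, 2.6, 2.4, 3.3]

r = pearson_r(carriage_freq, mortality)
print(round(r, 2))
```

A full analysis would also report a P value, typically via a t-distribution test on r, which is omitted here for brevity.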

HLA efficiency scores and HLA use are associated with CD8 T-cell magnitude in humans after influenza infection.
The computational tools used in this study may be useful predictors of potential morbidity and

  • identify immunologic differences of new variant influenza strains
  • more accurately than evolutionary sequence comparisons.

Population-based studies of the relative frequency of these alleles in severe vs. mild influenza cases

  • might advance clinical practices for severe H1N1 infections among genetically susceptible populations.

Metabolomics in drug target discovery

J D Rabinowitz et al.

Lewis-Sigler Institute for Integrative Genomics, Princeton University, Princeton, NJ.
Cold Spring Harbor Symposia on Quantitative Biology 11/2011; 76:235-46.

Most diseases result in metabolic changes. In many cases, these changes play a causative role in disease progression. By identifying pathological metabolic changes,

  • metabolomics can point to potential new sites for therapeutic intervention.

Particularly promising enzymatic targets are those that

  • carry increased flux in the disease state.

Definitive assessment of flux requires the use of isotope tracers. Here we present techniques for

  • finding new drug targets using metabolomics and isotope tracers.

The utility of these methods is exemplified in the study of three different viral pathogens. For influenza A and herpes simplex virus,

  • metabolomic analysis of infected versus mock-infected cells revealed
  • dramatic concentration changes around the current antiviral target enzymes.

Similar analysis of human-cytomegalovirus-infected cells, however, found the greatest changes

  • in a region of metabolism unrelated to the current antiviral target.

Instead, it pointed to the tricarboxylic acid (TCA) cycle and

  • its efflux to feed fatty acid biosynthesis as a potential preferred target.

Isotope tracer studies revealed that cytomegalovirus greatly increases flux through

  • the key fatty acid metabolic enzyme acetyl-coenzyme A carboxylase.
  • Inhibition of this enzyme blocks human cytomegalovirus replication.
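The infected-versus-mock comparison described above amounts, at its simplest, to ranking metabolites by fold change in concentration. A minimal sketch with invented concentrations (not the study's data):

```python
import math

# Invented metabolite concentrations (arbitrary units) in mock- vs
# virus-infected cells; names and numbers are illustrative only.
mock     = {"citrate": 10.0, "malate": 8.0, "acetyl-CoA": 2.0, "UMP": 5.0}
infected = {"citrate": 40.0, "malate": 24.0, "acetyl-CoA": 9.0, "UMP": 5.5}

# log2 fold change per metabolite, infected relative to mock
fold_changes = {m: math.log2(infected[m] / mock[m]) for m in mock}

# Rank by absolute fold change to flag candidate regions of metabolism
ranked = sorted(fold_changes, key=lambda m: abs(fold_changes[m]), reverse=True)
print(ranked[0])  # acetyl-CoA
```

Concentration changes alone do not establish flux, which is why the isotope tracer experiments described in the text are needed to confirm a target such as acetyl-CoA carboxylase.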

Examples where metabolomics has contributed to identification of anticancer drug targets are also discussed. Eventual proof of the value of

  • metabolomics as a drug target discovery strategy will be
  • successful clinical development of therapeutics hitting these new targets.

 Related References

Use of metabolic pathway flux information in targeted cancer drug design. Drug Discovery Today: Therapeutic Strategies 1:435-443, 2004.

Detection of resistance to imatinib by metabolic profiling: clinical and drug development implications. Am J Pharmacogenomics. 2005;5(5):293-302. Review. PMID: 16196499

Medicinal chemistry, metabolic profiling and drug target discovery: a role for metabolic profiling in reverse pharmacology and chemical genetics.
Mini Rev Med Chem.  2005 Jan;5(1):13-20. Review. PMID: 15638788 [PubMed – indexed for MEDLINE] Related citations

Development of Tracer-Based Metabolomics and its Implications for the Pharmaceutical Industry. Int J Pharm Med 2007; 21 (3): 217-224.

Use of metabolic pathway flux information in anticancer drug design. Ernst Schering Found Symp Proc. 2007;(4):189-203. Review. PMID: 18811058

Pharmacological targeting of glucagon and glucagon-like peptide 1 receptors has different effects on energy state and glucose homeostasis in diet-induced obese mice. J Pharmacol Exp Ther. 2011 Jul;338(1):70-81. http://dx.doi.org/10.1124/jpet.111.179986. PMID: 21471191

Single valproic acid treatment inhibits glycogen and RNA ribose turnover while disrupting glucose-derived cholesterol synthesis in liver as revealed by the
[U-C(6)]-d-glucose tracer in mice. Metabolomics. 2009 Sep;5(3):336-345. PMID: 19718458

Metabolic Pathways as Targets for Drug Screening, Metabolomics, Dr Ute Roessner (Ed.), ISBN: 978-953-51-0046-1, InTech, Available from: http://www.intechopen.com/books/metabolomics/metabolic-pathways-as-targets-for-drug-screening

Iron regulates glucose homeostasis in liver and muscle via AMP-activated protein kinase in mice. FASEB J. 2013 Jul;27(7):2845-54.
http://dx.doi.org/10.1096/fj.12-216929. PMID: 23515442

Metabolomics and systems pharmacology: why and how to model the human metabolic network for drug discovery

Drug Discov. Today 19 (2014), 171–182. http://dx.doi.org/10.1016/j.drudis.2013.07.014


  • We now have metabolic network models; their nodes represent the metabolome.
  • Metabolite levels are sensitive to changes in enzyme activities.
  • Drugs hitchhike on metabolite transporters to get into and out of cells.
  • The consensus network Recon2 represents the present state of the art, and has predictive power.
  • Constraint-based modelling relates network structure to metabolic fluxes.
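The last bullet, constraint-based modelling, treats metabolism as a stoichiometric matrix S and asks which flux vectors v satisfy the steady-state condition S·v = 0 within capacity bounds. The toy feasibility check below is purely illustrative; the three-reaction network and its bounds are invented, not taken from Recon2:

```python
# Toy network (invented): uptake -> A, A -> B, B -> secretion.
S = [  # rows: metabolites A, B; columns: reactions v_uptake, v_conv, v_secr
    [1, -1,  0],   # A: produced by uptake, consumed by conversion
    [0,  1, -1],   # B: produced by conversion, consumed by secretion
]
bounds = [(0, 10), (0, 10), (0, 10)]  # hypothetical flux capacities

def is_feasible(v, S=S, bounds=bounds, tol=1e-9):
    """True if flux vector v satisfies S @ v = 0 and all capacity bounds."""
    steady = all(abs(sum(S[i][j] * v[j] for j in range(len(v)))) < tol
                 for i in range(len(S)))
    in_bounds = all(lo <= x <= hi for x, (lo, hi) in zip(v, bounds))
    return steady and in_bounds

print(is_feasible([10, 10, 10]))  # True: flux runs straight through
print(is_feasible([10, 5, 10]))   # False: metabolite A would accumulate
```

Real flux balance analysis then optimises an objective (e.g. biomass flux) over this feasible set with a linear programming solver, which this sketch omits.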

Metabolism represents the ‘sharp end’ of systems biology, because changes in metabolite concentrations are

  • necessarily amplified relative to changes in the transcriptome, proteome and enzyme activities, which can be modulated by drugs.

To understand such behaviour, we therefore need (and increasingly have) reliable consensus (community) models of

  • the human metabolic network that include the important transporters.

Small molecule ‘drug’ transporters are in fact metabolite transporters, because

  • drugs bear structural similarities to metabolites known from the network reconstructions and
  • from measurements of the metabolome.

Recon2 represents the present state-of-the-art human metabolic network reconstruction; it can predict inter alia:

(i) the effects of inborn errors of metabolism;

(ii) which metabolites are exometabolites, and

(iii) how metabolism varies between tissues and cellular compartments.

However, even these qualitative network models are not yet complete. As our understanding improves

  • so do we recognise more clearly the need for a systems (poly)pharmacology.

Introduction – a systems biology approach to drug discovery

It is clearly not news that the productivity of the pharmaceutical industry has declined significantly in recent years,

  • following an ‘inverse Moore’s Law’ (Eroom’s Law), nor
  • that many commentators consider the main cause of this to be
  • an excessive focus on individual molecular target discovery rather than a more sensible strategy
  • based on a systems-level approach (Fig. 1).

Figure 1.

The change in drug discovery strategy from ‘classical’ function-first approaches (in which the assay of drug function was at the tissue or organism level),
with mechanistic studies potentially coming later, to more-recent target-based approaches where initial assays usually involve assessing the interactions
of drugs with specified (and often cloned, recombinant) proteins in vitro. In the latter cases, effects in vivo are assessed later, with concomitantly high levels of attrition.

Arguably the two chief hallmarks of the systems biology approach are:

(i) that we seek to make mathematical models of our systems iteratively or in parallel with well-designed ‘wet’ experiments, and
(ii) that we do not necessarily start with a hypothesis but measure as many things as possible (the ’omes) and

  • let the data tell us the hypothesis that best fits and describes them.

Although metabolism was once seen as something of a Cinderella subject,

  • there are fundamental reasons, to do with the organisation of biochemical networks,
  • why the metabol(om)ic level – now in fact seen as the ‘apogee’ of the ’omics trilogy –
  • is indeed likely to be far more discriminating than
  • changes in the transcriptome or proteome.

The next two subsections deal with these points and Fig. 2 summarises the paper in the form of a Mind Map.

Figure 2. Metabolomics and systems pharmacology (Mind Map).


Metabolic Disease Drug Discovery— “Hitting the Target” Is Easier Said Than Done

David E. Moller, et al. http://dx.doi.org/10.1016/j.cmet.2011.10.012

Despite the advent of new drug classes, the global epidemic of cardiometabolic disease has not abated. Continuing

  • unmet medical needs remain a major driver for new research.

Drug discovery approaches in this field have mirrored industry trends, leading to a recent

  • increase in the number of molecules entering development.

However, worrisome trends and newer hurdles are also apparent. The history of two newer drug classes—

  1. glucagon-like peptide-1 receptor agonists and
  2. dipeptidyl peptidase-4 inhibitors—

illustrates both progress and challenges. Future success requires that researchers learn from these experiences and

  • continue to explore and apply new technology platforms and research paradigms.

The global epidemic of obesity and diabetes continues to progress relentlessly. The International Diabetes Federation predicts an even greater diabetes burden (>430 million people afflicted) by 2030, which will disproportionately affect developing nations (International Diabetes Federation, 2011). Yet

  • existing drug classes for diabetes, obesity, and comorbid cardiovascular (CV) conditions have substantial limitations.

Currently available prescription drugs for treatment of hyperglycemia in patients with type 2 diabetes (Table 1) have notable shortcomings.

Clinicians must therefore often use combination therapy, adding additional agents over time. Ultimately many patients will need to use insulin—a therapeutic class first introduced in 1922. Most existing agents also have

  • issues around safety and tolerability as well as dosing convenience (which can impact patient compliance).

Pharmacometabolomics, also known as pharmacometabonomics, is a field which stems from metabolomics,

  • the quantification and analysis of metabolites produced by the body.

It refers to the direct measurement of metabolites in an individual’s bodily fluids, in order to

  • predict or evaluate the metabolism of pharmaceutical compounds, and
  • to better understand the pharmacokinetic profile of a drug.

Alternatively, pharmacometabolomics can be applied to measure metabolite levels

  • following the administration of a pharmaceutical compound, in order to
  • monitor the effects of the compound on certain metabolic pathways (pharmacodynamics).

This provides detailed mapping of drug effects on metabolism and

  • the pathways that are implicated in mechanism of variation of response to treatment.

In addition, the metabolic profile of an individual at baseline (metabotype) provides information about

  • how individuals respond to treatment and highlights heterogeneity within a disease state.

All three approaches require the quantification of metabolites found in bodily fluids.

relationship between -OMICS


Pharmacometabolomics is thought to provide information that complements the other omics levels.

Looking at the characteristics of an individual down through these different levels of detail yields an

  • increasingly accurate prediction of a person’s ability to respond to a pharmaceutical compound.
  1. the genome, made up of 25,000 genes, can indicate possible errors in drug metabolism;
  2. the transcriptome, made up of 85,000 transcripts, can provide information about which genes important in metabolism are being actively transcribed;
  3. and the proteome, >10,000,000 members, depicts which proteins are active in the body to carry out these functions.

Pharmacometabolomics complements the omics with

  • direct measurement of the products of all of these reactions, but with perhaps a relatively
  • smaller number of members: that was initially projected to be approximately 2200 metabolites,

but could be a larger number when gut derived metabolites and xenobiotics are added to the list. Overall, the goal of pharmacometabolomics is

  • to more closely predict or assess the response of an individual to a pharmaceutical compound,
  • permitting continued treatment with the right drug or dosage
  • depending on the variations in their metabolism and ability to respond to treatment.

Pharmacometabolomic analyses, through the use of a metabolomics approach,

  • can provide a comprehensive and detailed metabolic profile or “metabolic fingerprint” for an individual patient.

Such metabolic profiles can provide a complete overview of individual metabolite or pathway alterations.

This approach can then be applied to the prediction of response to a pharmaceutical compound

  • by patients with a particular metabolic profile.

Pharmacometabolomic analyses of drug response are often coupled with pharmacogenetic analyses.

Pharmacogenetics focuses on the identification of genetic variations (e.g. single-nucleotide polymorphisms)

  • within patients that may contribute to altered drug responses and overall outcome of a certain treatment.

The results of pharmacometabolomics analyses can act to “inform” or “direct”

  • pharmacogenetic analyses by correlating aberrant metabolite concentrations or metabolic pathways to potential alterations at the genetic level.

This concept has been established with two seminal publications from studies of antidepressant selective serotonin reuptake inhibitors,

  • where metabolic signatures were able to define a pathway implicated in response to the antidepressant and
  • led to the identification of genetic variants within a key gene
  • in the highlighted pathway as being implicated in variation in response.

These genetic variants were not identified through genetic analysis alone and hence

  • illustrated how metabolomics can guide and inform genetic data.
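The two-step workflow described above, using a baseline metabolic signature to flag a pathway and then testing genetic variants in that pathway against response, can be sketched as follows. All patient data, the metabolite, and the variant below are invented for illustration:

```python
from statistics import mean

patients = [
    # (baseline metabolite level, responder?, carries pathway-gene variant?)
    (1.2, True,  True),
    (1.4, True,  True),
    (1.1, True,  False),
    (0.5, False, False),
    (0.6, False, False),
    (0.7, False, True),
]

responders    = [lvl for lvl, resp, _ in patients if resp]
nonresponders = [lvl for lvl, resp, _ in patients if not resp]

# Step 1: the metabolite level separates responders from non-responders,
# flagging its pathway for follow-up genetic analysis.
print(mean(responders) > mean(nonresponders))  # True

def carrier_rate(rows):
    """Fraction of patients in `rows` carrying the candidate variant."""
    return sum(1 for *_, carrier in rows if carrier) / len(rows)

resp_rows  = [p for p in patients if p[1]]
nresp_rows = [p for p in patients if not p[1]]

# Step 2: variant carriage is enriched among responders in this toy data,
# mimicking how metabolomics can "inform" or "direct" pharmacogenetics.
print(carrier_rate(resp_rows) > carrier_rate(nresp_rows))  # True
```

A real analysis would use formal statistics (e.g. regression with covariates and multiple-testing correction) rather than simple mean and rate comparisons.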


Benznidazole Biotransformation and Multiple Targets in Trypanosoma cruzi Revealed by Metabolomics

Andrea Trochine, Darren J. Creek, Paula Faral-Tello, Michael P. Barrett, Carlos Robello
Published: May 22, 2014. http://dx.doi.org/10.1371/journal.pntd.0002844

The first line treatment for Chagas disease, a neglected tropical disease caused by the protozoan parasite Trypanosoma cruzi,

  • involves administration of benznidazole (Bzn).

Bzn is a 2-nitroimidazole pro-drug which requires nitroreduction to become active. We used a

  • non-targeted MS-based metabolomics approach to study the metabolic response of T. cruzi to Bzn.

Parasites treated with Bzn were minimally altered compared to untreated trypanosomes, although the redox active thiols

  1. trypanothione,
  2. homotrypanothione and
  3. cysteine

were significantly diminished in abundance post-treatment. In addition, multiple Bzn-derived metabolites were detected after treatment.

These metabolites included reduction products, fragments and covalent adducts of reduced Bzn

  • linked to each of the major low molecular weight thiols:
  1. trypanothione,
  2. glutathione,
  4. γ-glutamylcysteine,
  4. glutathionylspermidine,
  5. cysteine and
  6. ovothiol A.

Bzn products known to be generated in vitro by the unusual trypanosomal nitroreductase, TcNTRI,

  • were found within the parasites,
  • but low molecular weight adducts of glyoxal, a proposed toxic end-product of NTRI Bzn metabolism, were not detected.

Our data is indicative of a major role of the

  • thiol binding capacity of Bzn reduction products
  • in the mechanism of Bzn toxicity against T. cruzi.


